Systems administration and Unix – 09 Mar 2010 at 17:18 by Jean-Marc Liotier

Solid state drives provide incredible IOPS compared to hard disks, but cost rules them out as primary mass storage. Then again, for most applications you would not consider storing everything in RAM either – yet RAM cache is part of any storage system. Why wouldn’t we take advantage of solid state drives as an intermediary tier between RAM and hard disks ? This reasoning is what hierarchical storage management is about, but Sun took it one step further by integrating it into the file system as ZFS’s Hybrid Storage Pools.

You can read a quick overview of Hybrid Storage Pools in marketing terms, but you will surely find Sun’s Adam Leventhal’s presentation more substantial as a technical introduction. And most impressive are Sun’s Brendan Gregg’s benchmarks showing 5x to 40x IOPS improvement !

Adding SSDs to a ZFS storage pool is done at two locations : the ZFS intent log (ZIL) device and the Second Level Adaptive Replacement Cache (L2ARC). Usually they are set up on two separate devices, but Arnaud from Sun showed that they can share a single device just fine.

The ZIL, also known as Logzilla, accelerates small synchronous writes. It does not require a large capacity. The L2ARC, also known as Readzilla, accelerates reads. For the gory details of how Logzilla and Readzilla work, Sun’s Claudia Hildebrandt’s presentation is a great source.

Creating a ZFS storage pool with one or more separate ZIL devices is dead easy, but you may then want to tune your system for performance. It costs some DRAM to reference the L2ARC, at a rate that depends on record size – between 1/40th and 1/80th of the L2ARC size depending on the tuning (I have seen several different estimates) – so don’t set up an L2ARC larger than your DRAM affords you.
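For illustration, here is a minimal sketch of the commands involved – the pool and device names are hypothetical, so adapt them to your own hardware :

    # Create a pool with a separate ZIL and an L2ARC from the start
    # (pool and device names are made up for the example) :
    zpool create tank mirror c0t0d0 c0t1d0 log c2t0d0 cache c2t1d0

    # Or retrofit an existing pool :
    zpool add tank log c2t0d0
    zpool add tank cache c2t1d0

    # Back-of-the-envelope DRAM budget at the pessimistic 1/40 ratio :
    # a 100 GB L2ARC would consume about 100/40 = 2.5 GB of DRAM in headers.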

I hope that this sort of goodness will some day come to Linux through Btrfs, but ZFS provides it right now – and it is Free software too… So I guess that in spite of my religious fervor toward the GPL, my storage server’s next operating system will be a BSD licensed one… Who would have thought ?

Social networking and Technology and The Web – 10 Feb 2010 at 22:06 by Jean-Marc Liotier

Yesterday, while Google Buzz was still only a rumor, I felt that there was a slight likelihood that Google’s entry into the microblogging field would support decentralized interoperability using the OpenMicroBlogging protocol pioneered by the Status.net open source micro messaging platform. I was wrong about that, but it was quite a long shot… Speculation is a dirty job, but someone’s got to do it !

I am also surprised that there is no Twitter API, but there are plenty of other protocols on the menu that should keep us quite happy. There are already the Social Graph API, the PubSubHubbub push protocol and of course Atom Syndication and the RSS format – with the MediaRSS extension. But much more interesting is the Google Buzz documentation’s mention that “Over the next several months Google Buzz will introduce an API for developers, including full read/write support for posts with the Atom Publishing Protocol, rich activity notification with Activity Streams, delegated authorization with OAuth, federated comments and activities with Salmon, distributed profile and contact information with WebFinger, and much, much more”. So with all that available to third parties we may even be able to interact with Google’s content without having to deal with Gmail, whose rampant portalization makes me dislike it almost as much as Facebook and Yahoo.

I’m particularly excited about Salmon, a protocol for comments and annotations to swim upstream to the original update sources. For now I wonder about the relative merits of Google Buzz and FriendFeed, but once Salmon is widely implemented it won’t matter where the comments are contributed : they will percolate everywhere and the conversation shall be united again !
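As a sketch of the idea – with a hypothetical endpoint URL, and leaving aside the signed “magic envelope” that the real protocol wraps entries in – a downstream aggregator would POST a comment, as an Atom entry, back to the salmon endpoint advertised by the original source :

    # Hypothetical example : push a comment upstream to the source's
    # salmon endpoint (advertised in its feed as a link with rel="salmon")
    curl -X POST https://source.example.org/salmon \
         -H 'Content-Type: application/atom+xml' \
         --data-binary @comment-entry.xml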

Jabber and Rumors and Social networking and Technology and The Web – 09 Feb 2010 at 12:29 by Jean-Marc Liotier

According to a report from the Wall Street Journal mentioned by ReadWriteWeb, Google might be offering a microblogging service as soon as this week.

When Google launched Google Talk, they opened the service to XMPP/Jabber federation. As a new entrant in a saturated market, opening up is the logical move.

The collaborative messaging field as a whole cannot be considered saturated but, while it is still evolving very fast, the needs of the early adopter segment are now well served by entrenched offerings such as Twitter and Facebook. Challenging them will require an alternative strategy – and that may lead to opening up as a way to offer attractive value to users and service providers alike.

So maybe we can cling to a faint hope that Google’s entry into the microblogging field will support decentralized interoperability using the OpenMicroBlogging protocol pioneered by the Status.net open source micro messaging platform. Shall we take a bet ?

Don’t you love bar talk speculation based on anonymous rumors ?

Marketing and Politics – 15 Jan 2010 at 14:22 by Jean-Marc Liotier

Nothing new, but as Paul Currion remarks, the Haïti post-earthquake crisis shows once again that media and governments alike are still operating under the rule of sensationalism :

“Nobody can deny that Haiti needs assistance right now to save lives, but it also needed assistance yesterday when the infant mortality rate was the 37th lowest in the world. When it comes to natural disasters, we – our governments, our media, ourselves – are victims of the same biases that cause impulse buying at the supermarket. Thousands of people dying from buildings falling on them instantly mobilises a huge amount of resources, but thousands of children dying from easily preventable diseases is just background noise. This is the uncomfortable reality of the aid world, but it’s not one that our media or governments really wants to hear”.

But is it possible, in a noisy media environment, to find success in promoting the long view of human capability instead of a short term view of human suffering ? Some examples do exist, but forming, out of the background noise, a coherent signal that has political impact remains a rarely solved problem.

Mobile computing – 12 Jan 2010 at 14:43 by Jean-Marc Liotier

After days of being pestered by Corseman, here is the list of the Android applications I use after a few months of optimizing my selection. All of them are free, but many are free as in “free beer” rather than free as in “freedom”.

Communications :

  • K-9 – Excellent IMAP client. A huge improvement over the barely usable stock Android IMAP client. It makes mobile mail a very tolerable experience. With such a name suggesting Mutt legacy, it could only be a good piece of software !
  • May 2011 update : Xabber is the only XMPP/Jabber client worth using.
  • DaraIRC – a decent IRC client, except that I can’t get it to remain connected reliably while in the background.
  • PingDroid – Quick and simple client for posting to the Ping.fm multi-posting service, which I have now ceased to use in favor of Pixelpipe.
  • Pixelpipe – Like Ping.fm, but better.
  • Sipdroid – SIP client. For now I only use it for testing, but it seems to be the only serious game in town.
  • ConnectBot – a very good SSH client that even does tunneling.
  • AziLink – an application that allows USB tethering for Android-based phones, without requiring root access. Here is a nice AziLink Android tethering Debian/Ubuntu startup script – don’t forget to circumvent lame HTTP user agent blocks. A sketch of the host-side setup follows this list.
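For reference, the host side boils down to something like this sketch – adb and OpenVPN must be installed, azilink.ovpn is the configuration file shipped by the project, and the forwarded port is assumed to be AziLink’s default, so check the project page :

    # Forward AziLink's service port over the USB debug bridge,
    # then tunnel all traffic through it with OpenVPN
    adb forward tcp:41927 tcp:41927
    sudo openvpn --config azilink.ovpn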

Utilities :

  • Linda file manager – does the job of shuffling files around. Android’s user-facing design is far from being file-centric, but having file management capability comes in handy sometimes. You may also like Astro instead.
  • Phonalyzr – analyzes the call log and SMS log to display graphs and usage summaries that provide useful insight into your consumption. I like it a lot.
  • Barcode Scanner – In 1D or 2D, it does the job. Works for reading QR coded business cards too.
  • Taskiller – Kills those rogue background tasks that make the whole system sluggish. Why the Android base system doesn’t feature a task manager is beyond me. Having an ad-supported and unfree program for such a basic utility irks me.
  • StopWatch – Very usable chronometer and countdown.

Geography :

  • Maverick – GPS off-road navigation for Android devices. Uses OpenStreetMap, shows heading and provides offline map tiles storage.
  • Places Directory – Finds nearby points of interest from an offline database. Useful in unknown neighborhoods. The database could be better, but it does the job for finding the nearest ATM or fuel station.
  • Google Sky Map – Romantically show the stars to your girlfriend while discreetly checking this awesome augmented reality planetarium to compensate for your utter lack of astronomical knowledge.
  • MapDroyd – On top of displaying the sometimes excellent OpenStreetMap data, this application does it offline – which Google Maps will never do. My choice for travel where the Internet does not yet reach.
  • Vespucci OSM Editor – an OpenStreetMap editor capable of download and upload. But on such a cramped device, I wouldn’t use it for anything but the simplest edits.
  • Here I am – Simple application that sends an SMS or mail with your coordinates and a link to Google Maps.
  • Velibike – Paris Velib automated bike rental stations on Google Maps, with bike and slot availability.

Writing :

After experimenting with loads of notepad applications, I gave up and went with GDocs, which synchronizes with Google Docs. Writing on Android is a pain.

I love writing and the clumsiness of working with text on this platform is a significant part of what I dislike about it. The other part has something to do with Android’s insularity in the free software world.

The WordPress client is nice though.

Commerce :

  • Pocket Auctions for eBay – Good for sniping on the go.

Personal information management :

  • Astrid tasks/todo lists manager – Well balanced user interface, tags, and synchronization with Remember The Milk. Too bad that, as with anything on Android compared to Palm OS, the graphical user interface is horribly slow.
  • Facebook Sync – The lazy way to have a picture for most of your contacts. I like to see the caller’s face. Now superseded by SyncMyPix.
  • Gravatar Importer – Same as Facebook Sync, but uses your contacts’ email addresses to search for a Gravatar and set it as the contact’s picture.

Sound :

  • Frequency Generator – Combine signals of various shapes and frequencies. Nice for the sound curious.
  • Ringdroid – Easily cut and use any MP3 file as a ringtone. If you want to use a song as a ringtone, you absolutely need this.
  • Robotic Guitarist – Great accompaniment for your lyrical improvisations.
  • Sonorox and Loops – Easy tune composition.
  • gStrings – A chromatic tuner.

I have yet to find a decent RSS reader that doesn’t choke on what I want to feed it.

For social networking, I use mobile-optimized sites – or even the full ones – and I have found that no dedicated applications are needed.

You may have noticed that for some applications in that list, I did not provide a link. That is because it is often very difficult to find the developer’s site, drowned in a sea of spammy application review sites.

Next step for me : find Maemo equivalents for all this, as I’ve got my mind set on leaving the disappointing Android environment and migrating to the Maemo platform, which makes much more sense to me.

Free software and Geography and Marketing and Politics and Technology and The Web – 17 Dec 2009 at 13:27 by Jean-Marc Liotier

The quality of OpenStreetMap’s work speaks for itself, but it seems that we need to speak about it too – especially now that Google is attempting to appear as holding the moral high ground by using terms such as “citizen cartographer”, which they rob of its meaning by conveniently forgetting to mention the license under which the contributed data is held. But in the eye of the public, the $50000 UNICEF donation to the home country of the winner of the Map Maker Global Challenge lets them appear as charitable citizens.

We need to explain why it is a fraud, so that motivated aspiring cartographers are not tempted to give away their souls for free. I could understand them selling their data, but giving it to Google for free is a bit too much – we must tell them. I’m pretty sure that good geographic data available to anyone for free will do more for the least developed communities than a 50k USD grant.

Take Map Kibera for example :

“Kibera in Nairobi, Kenya, widely known as Africa’s largest slum, remains a blank spot on the map. Without basic knowledge of the geography and resources of Kibera it is impossible to have an informed discussion on how to improve the lives of residents. This November, young Kiberans create the first public digital map of their own community”.

And they did it with OpenStreetMap. To the million people living in this former terra incognita, with no chance of profiting a major mapping provider, how much do you think it is worth to at last have a platform for services that require geographical information, without having to pay Google or remain within the limits of the uses permitted by its license ?

I answered this piece at ReadWriteWeb and I suggest that you keep an eye out for opportunities to answer this sort of propaganda against libre mapping.

Marketing and Social networking and The media and The Web – 15 Dec 2009 at 0:24 by Jean-Marc Liotier

Today I mentioned that, 15 years late, I had finally put a name on a past adolescent problem : patellofemoral pain syndrome (PFPS). As far as I understood, it is a growth-related muscle imbalance that resolves itself when the body reaches maturity.

As usual with most of my microblogging, I dispatch the 140 chars to several sites using Ping.fm and then follow the conversation wherever it eventually happens. In that case, a conversation developed on Facebook. Friends asked questions and gave their two cents – business as usual.

And then an interloper cut in : “Jean-Marc we can help correct your patellfemoral pain syndrome. It is the mal-tracking of your patella. Check us out at mycommercialkneesite.com”. It is not entirely spam at first sight because it is actually on-topic and even slightly informative. But it is not really taking part in the conversation either because it is a blatant plug for an infomercial site. So spam it is, but cleverly targeted at a niche audience.

It does look like all the blatant plugs that we have been seeing for decades in forums and mailing lists – usually for a short time, after which the culprit mends his devious ways or ends up banned. But there is an innovative twist brought by the rise of the “real-time web” : the power of keyword filtering applied to the whole microblogging world is the enabler of large-scale conversational marketing. Obnoxious marketers attempting to pass as bona fide contributors to the conversation are no longer a merely local nuisance – they are now reaching us at a global scale and in near real-time.
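The mechanics are trivially simple – a sketch, with a placeholder URL standing in for whichever real-time search API the marketer actually polls :

    # Hypothetical niche-keyword monitoring loop : poll a microblogging
    # search feed and extract the titles of matching entries
    while true; do
        curl -s 'https://microblog.example.com/search.atom?q=patellofemoral' \
            | grep -i '<title>'
        sleep 60
    done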

Marketers barging in whenever someone utters a word that qualifies their niche are gatecrashers and will be treated as such. But I find it fascinating that we now have personalized advertising capable of targeting a niche audience in real-time as the qualifying keywords appear. Not that I like it, but you have to recognize it as a new step in the memetic arms race between advertisers and audience.

Imagine that coupled with voice recognition and some IVR scripting. Do you remember those telephone services where you get free airtime if you listen for advertising breaks ? Imagine the same concept where during the conversation someone – a human, or even a conversational automaton – comes in and says “Hey, you were telling your boyfriend about your headache ? Why don’t you try Schrufanol ? Mention SHMURZ and get the third one free !”.

Even better, add some more intelligent pattern recognition to go beyond keywords. The hopeless student who just told his pal on Schmoogle FreeVoice about his fear of failing his exams will immediately receive through Schmoogle AdVoice a special offer for cram school from a salesdrone who knows his name and just checked out his Facebook profile. You think this is the future ? This is probably already happening.

15 years late, I finally put a name on my past adolescent problem : patellofemoral pain syndrome (PFPS) – growth related muscle unbalance.

Military – 10 Dec 2009 at 11:18 by Jean-Marc Liotier

The October 2008 article “American troops in Afghanistan through the eyes of a French OMLT infantryman” gathered more than two hundred comments and will be past a hundred thousand visits by most reasonable accounts before the year ends (see 2008 traffic and 2009 traffic). I thought that translating this piece would surely raise some interest, but I never expected it to raise that much. More than one year later it is still sparking interest among citizens of the United States. During the Bush era, the image of France among the right wing in the United States seems to have suffered a lot, and as a result a lot of people have been genuinely surprised to read an article showing that in spite of the politics we actually manage to work together with cordial relationships.

One year later, I am still receiving mail asking about the source of the article, from readers who enquire about its authenticity. Considering the unofficial reactions from members of the French armed forces and from readers with interests in the defense community, I am rather certain of the authenticity of the original essay. But with the source blog defunct, and having lost touch with the original author – who was not seeking public exposure and only made a couple of fleeting comments before disappearing from the media landscape – I am unable to prove anything. The author went by the pseudonym of “Merlin” and his blog was called “Le Blog de Merlin” at http://omlt3-kdk3.over-blog.com. The disappearance of the original article’s page is also a pity because that is where I exchanged comments with the author.

I do not believe that the author ever thought that his blog would get noticed significantly, even in France. It was featured in a well known blog by Jean-Dominique Merchet, a military journalist at the French daily “Liberation” who has an excellent reputation for reliability – it is his post that got the ball rolling. The original author probably does not even realize now how many American blogs and forums have been discussing his article.

Since then I have lost track of him and I do not know his real name : though military blogging is rather common in the United States, it is still quite alien to the more conservative culture of the French defense community, so it seems that most military-related people in France prefer the pseudonymous discussions of forums to the more public exposure of blogs – and even there they won’t take many risks in expressing themselves. The French army is not nicknamed “la grande muette” (“the great silent one”) for nothing. This rarity may be one of the reasons why this humble first-hand French account of recent events won such attention. But no one here expected this – to us cheese-eating surrender monkeys, the United States of America are always full of surprises !

Of course, as it benefits our relationship and the image of France in the United States, the original article and even my translation could have been an elaborate psyop by the French government. Or it could be the work of shadowy pro-French non-governmental propaganda outfits. Or a fake by someone who wants everyone to believe in one of those two hypotheses in order to later appear as exposing the evil scheming French. At that point, we enter the realm of conspiracy theories and I’m sure that some will have a great time speculating about it. But from where I sit, there is a coherent case for this story to be just what it appears to be : a simple account of good working relationships.

Free software and Technology – 07 Dec 2009 at 12:51 by Jean-Marc Liotier

As Ars Technica announced three days ago, Intel’s 2009 launch of its ambitious Larrabee GPU has been canceled : “The project has suffered a final delay that proved fatal to its graphics ambitions, so Intel will put the hardware out as a development platform for graphics and high-performance computing. But Intel’s plans to make a GPU aren’t dead; they’ve just been reset, with more news to come next year”.

I can’t wait for more news about that radical new architecture from the only major graphics hardware vendor that has a long history of producing or commissioning open source drivers for its graphics chips.

But what are we excited about ? In a nutshell : automatic vectorization for parallel execution of any known code graph with no data dependencies between iterations is what Larrabee is about. That means that in many cases, the developer can take his existing code and get easy parallel execution for free.

Since I’m an utter layman in the field of processor architecture, I’ll let you read the word of Tim Sweeney of Epic Games, who provided a great deal of input into the design of LRBni. He sums up the big picture a little more eloquently and I found him cited in Michael Abrash’s April 2009 article in Dr. Dobb’s – “A First Look at the Larrabee New Instructions” :

Larrabee enables GPU-class performance on a fully general x86 CPU; most importantly, it does so in a way that is useful for a broad spectrum of applications and that is easy for developers to use. The key is that Larrabee instructions are “vector-complete.”

More precisely: Any loop written in a traditional programming language can be vectorized, to execute 16 iterations of the loop in parallel on Larrabee vector units, provided the loop body meets the following criteria:

  • Its call graph is statically known.
  • There are no data dependencies between iterations.

Shading languages like HLSL are constrained so developers can only write code meeting those criteria, guaranteeing a GPU can always shade multiple pixels in parallel. But vectorization is a much more general technology, applicable to any such loops written in any language.

This works on Larrabee because every traditional programming element — arithmetic, loops, function calls, memory reads, memory writes — has a corresponding translation to Larrabee vector instructions running it on 16 data elements simultaneously. You have: integer and floating point vector arithmetic; scatter/gather for vectorized memory operations; and comparison, masking, and merging instructions for conditionals.

This wasn’t the case with MMX, SSE and Altivec. They supported vector arithmetic, but could only read and write data from contiguous locations in memory, rather than random-access as Larrabee. So SSE was only useful for operations on data that was naturally vector-like: RGBA colors, XYZW coordinates in 3D graphics, and so on. The Larrabee instructions are suitable for vectorizing any code meeting the conditions above, even when the code was not written to operate on vector-like quantities. It can benefit every type of application!

A vital component of this is Intel’s vectorizing C++ compiler. Developers hate having to write assembly language code, and even dislike writing C++ code using SSE intrinsics, because the programming style is awkward and time-consuming. Few developers can dedicate resources to doing that, whereas Larrabee is easy; the vectorization process can be made automatic and compatible with existing code.
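Larrabee’s toolchain is not in our hands yet, but you can get a feel for those two criteria with any auto-vectorizing compiler – here is a sketch using GCC’s vectorization report (the flags are GCC’s, not Intel’s) :

    # saxpy.c – a loop whose call graph is statically known and whose
    # iterations are independent : exactly the vectorizable case above
    #
    #   void saxpy(float *restrict y, const float *restrict x, float a, int n)
    #   {
    #       for (int i = 0; i < n; i++)
    #           y[i] = a * x[i] + y[i];  /* iteration i never touches y[i-1] */
    #   }
    #
    # Ask GCC to report which loops it managed to vectorize :
    gcc -O3 -ftree-vectorize -fopt-info-vec -c saxpy.c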

With cores proliferating on CPUs every day and an embarrassing number of applications not taking advantage of them, bringing easy parallel execution to the masses means a lot. That’s why I’m eager to see what Intel has in store for the future of Larrabee.

Jabber and Social networking – 05 Nov 2009 at 15:02 by Jean-Marc Liotier

On 13th May 2008, Facebook announced : “Right now we’re building a Jabber/XMPP interface for Facebook Chat. In the near future, users will be able to use Jabber/XMPP-based chat applications to connect to Facebook Chat”. The news was greeted positively in various places.

A year later, absolutely nothing had happened, and that silence has not gone unnoticed. Facebook has not even issued the slightest announcement, except for a wishlist bug report comment by Charlie Cheever mentioning that “some people are working on this. It will probably be done in a few months. Sorry the timeline isn’t more clear”.

But today the people at ProcessOne noticed that preparations for an opening have reached an advanced stage that hints at the imminence of a public XMPP service :

It now seems the launch is close, as the XMPP software stack has been deployed on chat.facebook.com, as our bot at IMtrends has found out: chat.facebook.com on IMtrends.

The biggest question that remains is whether federation is on the menu. By federating with Google Talk and the rest of the XMPP world, Facebook has an opportunity to make a huge splash in instant messaging with 300 million users at once and deal a heavy blow to Yahoo and Microsoft. Will the partial ownership of Facebook by Microsoft keep them from interoperating ?
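Federation readiness would show up in the DNS before any announcement – a quick way to watch for it, using the standard XMPP service records :

    # Client-to-server connectivity is advertised here :
    dig +short SRV _xmpp-client._tcp.chat.facebook.com
    # Server-to-server federation would be advertised here :
    dig +short SRV _xmpp-server._tcp.chat.facebook.com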

I would love to be able to chat with all those Facebook friends who will never use a chat client that was not pushed by a mass market service provider. So far, Facebook has always chosen the closed way – opening its service to a federation would be a first. I’m eager to see if Facebook can take this golden opportunity to surprise us in a good way.

Economy and Geography and Mobile computing and Networking & telecommunications – 02 Nov 2009 at 20:31 by Jean-Marc Liotier

Valued Lessons wrote :

A lot has been written lately about Google Maps Navigation. Google is basically giving away an incredible mapping application with good mapping data for free. Why would they do such a thing? Most of the guesses I’ve seen basically say “they like to give stuff away for free to push more advertisements”. That’s close, but everyone seems to have missed a huge detail, perhaps the most important detail of all.

Google is an advertisement company, particularly skilled at targeted advertisements. Almost all of their revenue comes from being able to show you ads that you want to see when you want to see them. What does this have to do with maps and navigation? Well, this is going to seem really obvious once you read it, but no one seems to have mentioned it yet, so here it goes:

Google will know everywhere you drive, and when.

Valued Lessons goes on to detail ways Google could use that data to refine the targeted advertising that represents the lion’s share of Google’s revenue. But there is another reason for pushing Google Navigation…

Now that they have found a way to start harvesting that data at a truly massive scale, they are able to go head to head with all the navigation software vendors that provide traffic information. Here is a nice business model :

  • Get the free version of Google Navigation deployed to as many terminals as possible.
  • Harvest traffic data.
  • Sell traffic data as a premium service. Or just give it away and kill everyone else…

Mobile network operators I know are going to hate this. They strike partnerships with the likes of TomTom, only to be entirely bypassed by Google ! I love it.

Let’s take a look at what TomTom wanted to do :

TomTom will use two main sources of information, occasionally complemented by others.

First, travel times deduced from the movement patterns of mobile phones. TomTom has made an agreement with Vodafone NL, allowing us to use (anonymously) the country’s 4 million Vodafone customers as a potential source of information and developed the technology to transform this monitoring information from the mobile network into reliable travel time information.

Secondly, historical FCD (Floating Car Data) from our own customers. Every TomTom navigation system is equipped with a GPS sensor, from which one can determine the exact location of a car.

Yes, Google can do all that too.

The process of obtaining data TomTom has developed results in highly detailed traffic information. In the Netherlands, for example, it means up-to-date travel times per road segment for approximately 20,000 km of road (see figure) and historical information per road segment for all major roads in the country, approximately 120,000 km.

TomTom has developed the technology in-house to calculate travel times across the entire road network, by processing the monitoring data from the mobile telephone network through TomTom’s Mobility Framework software.

And that’s information from before 2007… Imagine what can be done today !

Letting Google know where you go and letting Google mine that data is the reason for Google Latitude too… Latitude does not have the same mainstream appeal as a turn-by-turn navigation application, but with so many Google Maps customers now using it inside their cars, we are now talking Google scale !

Geography and Mobile computing and Networking & telecommunications and Technology – 30 Oct 2009 at 12:42 by Jean-Marc Liotier

Last week-end I ventured outside of the big city, into the land where cells are measured in kilometres and where signal is not taken for granted. What surprised me was not having to deal with only a faint echo of the network’s signal. Cell of origin location, on the other hand, was quite a surprising feature in that environment : sometimes it worked with an error consistent with the cellular environment, but the next moment it would estimate my position to be way further from where I actually was – 34 kilometres west, in this case.

The explanation is obvious : my terminal chose to attach to the cell it received best. Being physically located on a coastal escarpment, it had line of sight to a cell on the opposite side of the bay – 34 kilometres away.

But being on the edge of a very well covered area, it was regularly handed over to a nearby cell. In spite of the handover damping algorithms, this resulted in a continuous flip-flop nicely illustrated by this extract of my Brightkite account’s checkin log :

Isn’t that ugly ? Of course I won’t comment on this network’s radio planning and cell neighbourhood settings – I promised my employer I would not mention them anymore. But there has to be a better way, and my device can definitely do something about it : it is already equipped with the necessary hardware.

Instant matter displacement being highly unlikely for the time being, we can posit that sudden movement across kilometre-scale distances requires a corresponding acceleration. And the HTC Magic sports a three-axis accelerometer. At that point, inertial navigation immediately springs to mind. Others have thought about it before, and it could be very useful right now for indoor navigation. But limitations seem to put that goal slightly out of reach for now.

But for our purposes the hardware at hand would be entirely sufficient : we just need rough dead reckoning to check that the cell ID change is congruent with recent acceleration. Given the low quality of the acceleration measurement, using it as a positioning source is out of the question, but it would be suitable for damping the flip-flopping as the terminal suffers the vagaries of handover to distant cells.
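A crude plausibility bound would do : between two fixes separated by a time interval, the displacement cannot exceed what the measured acceleration allows. In LaTeX notation, with assumed ballpark figures :

    d_{\max} = v_0\,\Delta t + \tfrac{1}{2}\,a_{\max}\,\Delta t^2

    % Example with assumed values : v_0 = 10 m/s (36 km/h),
    % a_max = 3 m/s^2, \Delta t = 10 s
    % => d_max = 100 m + 150 m = 250 m,
    % so a sudden 34 km jump in cell of origin fails the sanity check.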

So who will be the first to refine cell of origin positioning using inertial navigation as a sanity check ?

Economy and Networking & telecommunications – 19 Oct 2009 at 11:41 by Jean-Marc Liotier

Let’s go out on a limb and make a prediction. In five years, in dense urban areas, you will get your ADSL at cost, provided you subscribe to your telecommunications operator’s mobile offering.

Three major trends are at play :

  • Cells are getting smaller
  • Radio throughput is increasing
  • ADSL throughput is not going anywhere

Once cell throughput approaches ADSL throughput, the value of ADSL drops to zero. Why bother with ADSL when you have unlimited traffic at decent speeds with no geographical limitations ? In Paris, I seldom bother to even switch on my Android G2’s Wi-Fi networking – now it is all UMTS, all the time.

Why not ADSL for free ? As I can even get full motion video on demand on my mobile communicator, the availability of video services over ADSL remains an incentive only if I’m interested in high definition. But to some people high definition is important, so ADSL retains some perceived value. In addition, giving away free ADSL access bundled with a mobile subscription would be gross abuse of a dominant position by operators protected behind the barrier to entry that their license affords them – so the worst they can do is sell ADSL at cost. But it is nevertheless tempting to squeeze out the perceived value of ADSL from the consumer’s point of view in order to cut off the fixed access pure players’ oxygen supply. Isn’t life so much more comfortable among oligopolistic old pals ? Marginalization of ADSL pure players will be even worse if they are not playing along in the fiber optics arms race.

So the incentives combine :

  • Users want the convenience of permanent unlimited cell access
  • Operators are happy to squeeze out ADSL pure players

As a result, cell traffic increases and leads us to the next step of this self-reinforcing process : femtocells. Spectral efficiency nearing the Shannon limit, antenna diversity, spatial multiplexing and other 3G MIMO techniques can combine to provide the peak throughput that all the shiny marketing pie-in-the-sky presentations promise. But in the field those speeds are not achieved unless you camp under the antenna. For example, LTE 2×2 MIMO is advertised at a peak throughput of 173 Mb/s, but actual rates are somewhere between 4 and 24 Mb/s in 2×20 MHz. They drop sharply as distance increases and it gets worse as the cell gets crowded. So there will be strong user demand for small cells – demand theoretically exists until there is one cell per user.
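A back-of-the-envelope check against the Shannon bound shows why. With n spatial streams over bandwidth B :

    C = n \, B \log_2(1 + \mathrm{SNR})

    % Worked example : reaching 173 Mb/s with n = 2 streams in B = 20 MHz
    % means 173/40 \approx 4.3 bit/s/Hz per stream, hence
    % SNR \geq 2^{4.3} - 1 \approx 19, about 12.8 dB on each stream –
    % laboratory conditions, not a crowded cell edge.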

Approximately 60% of mobile usage already takes place indoors, yet providing in-building coverage is a technical problem at the gigahertz frequencies used for Wimax and LTE. This is only set to get worse as the mobile continues to replace the home phone. Research indicates that, as “all you can eat” data packages become commonplace, this number is likely to reach 75% by 2011.

Doug Puley – “The macrocell is dead, long live the network”, 2008

With the user spending more than 60% of his time indoors, there will be fixed line access nearby. Extension of the access network on top of ADSL and FTTH links is already underway to increase capacity and compress costs by getting the data off the mobile network as close to the user as possible. Femtocells work well on ADSL too. So ADSL will remain useful as a way for a mobile operator to shed load from the rest of the access network. And on top of that, ADSL lets the operator reach subscribers in areas not covered by the radio network.

So to mobile operators who offer fixed line access, ADSL could soon be considered a mere adjunct to their core offering : mobile access. That could add yet more pressure to the game of musical chairs that is mobile access frequency license allocation. Why not attempt to exclude the competition that does not own a mobile network ? That leads us to ADSL access at cost – or slightly below that, if the operator is willing to be naughty and deal with the market regulator. It will happen sooner than you think.

By the way, for a wealth of data about 3GPP evolution from UMTS-HSPA to LTE & 4G, you can take a look at this September 2009 report by Rysavy Research. It provides about all you need to know and it is nearly as good as what I get internally from SFR.

Consumption and Free software and Knowledge management and Mobile computing and Networking & telecommunications and Systems and Technology and Unix – 19 Oct 2009 at 1:18 by Jean-Marc Liotier

Five months have elapsed since that first week-end when my encounter with Android was a severe case of culture shock. With significant daily experience of the device, I can now form a more mature judgement of its capabilities and its potential – of course from my own highly subjective point of view.

I still hate having to use Google Calendar and Google Contacts for synchronization. I hope that SyncML synchronization will appear in the future, make Android a better desktop citizen and provide more choice of end points. Meanwhile I use Google. With that out of the way, let’s move on to my impressions of Android itself.

I am grateful for features such as a decent web browser on a mobile device, for a working albeit half-baked packaging and distribution system, and for Google Maps, which I consider both a superlative application in its own right and the current killer – albeit proprietary – infrastructure for location enabled applications. But the rigidly simple interface that forces behaviours upon its user feels like a straitjacket : the overbearing feeling when using Android is that its designers have decided that simplicity is to be preserved at all costs, regardless of what the user prefers.

Why can’t I select a smaller font for my list items ? Would a parameter somewhere in a customization menu add too much complication ? Why won’t you show me the raw configuration data ? Is it absolutely necessary to arbitrarily limit the number of virtual desktops to three ? From the point of view of a user who is just getting acquainted with such a powerful platform, those are puzzling questions.

I still don’t like Android’s logic, and moreover I still don’t quite understand it. Of course I manage to use the system, but after five months of daily use it still does not feel natural. Maybe it is just a skin-deep issue or maybe I am just not the target audience – but some features are definitely backwards – package management for example. For starters, the “My Downloads” list is not ordered alphabetically nor in any apparently meaningful order. Then for each upgradeable package, one must first browse to the package, then manually trigger the upgrade, then acknowledge the upgraded package’s system privileges and finally clear the download notification and the update notification. Is this a joke ? This almost matches the tediousness of upgrading Windows software – an impressive feat considering that the foundations of Android package management seem serious enough. Where is my APT ?
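By contrast, here is the entirety of the workflow I am used to for keeping every package on a Debian system up to date :

    # Refresh the package index and upgrade everything in one go,
    # dependencies included – no per-package clicking required
    apt-get update && apt-get upgrade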

Like any new user on a prosperous enough system, I am lost in choices – but that is an embarrassment of riches. Nevertheless, I wonder why basics such as a task manager are not installed by default. In classic Unix spirit, even the most basic system utilities are independent applications. But what is bearable and even satisfying on a system with a decent shell and package management with dependencies becomes torture when installing a package is so clumsy and upgrading it so tedious.

Tediousness in package management in particular and user interaction in general makes taming the beast an experience in frustration. Installing a bunch of competing applications and testing them takes time and effort, so experimenting is not the pleasure it normally is on a Linux system. The lack of decent text entry compounds the feeling. Clumsy text selection makes cut and paste a significant effort – something Palm made quick, easy and painless more than ten years ago. Not implementing pointer-driven selection – what were the developers thinking ?

PIM integration has not progressed much. For a given contact, there is no way to look at a communications log that spans mail, SMS and telephony : each of them is its own separate universe. There is no way to have a list of meetings with a given contact or at a given location.

But basic functionality has been omitted too. For example, when adding a phone number to an existing contact, search is disabled – you have to scroll all the way to the contact. There is no way to search the SMS archive, and SMS to multiple recipients is an exercise left to applications.

Palm OS may have been unstable, incapable of contemporary operating system features, offering only basic functionality and generally way past its shelf date. But in the minds of users, it remains the benchmark against which all PIM systems are judged. And to this day I still don’t see anything beating Palm OS on its home turf of PIM core features and basic usability.

Palm OS was a poster child for responsiveness, but on Android everything takes time – even after I have identified and killed the various errant applications that make it even slower. Actually, the system is very fast and capable of feats such as full-motion video that were far beyond the reach of Palm OS. But the interaction is spoilt by gratuitous use of animations for everything. Animations are useful for graphically hinting the novice user about what is going on – but beyond that they are only a drag. So please let me disable animations, as I do on every desktop I use !

The choice of a virtual keyboard was my own mistake and I am now aware that I need a physical keyboard. After five months, I can use the virtual keyboard with enough speed and precision for comfortable entry of a couple of sentences. But beyond that it is tiring and feels too clumsy for any meaningful work. This is a major problem for me – text entry is my daily bread and butter. I long for the Treo’s keyboard or even the one on the Nokia E71 – they offered a great compromise between typing speed and compactness. And no multitouch on the soft keyboard means no keyboard shortcuts, which renders many console applications unusable – sorry Emacs users.

The applications offering is still young and I cannot blame it for needing time to expand and mature. I also still need to familiarize myself with Android culture and develop the right habits to find my way instinctively and be more productive. After five months, we are getting there – one-handed navigation has been done right. But I still believe that a large part of the user interface conventions used on Android does not match the expectations of general computing.

It seems like everything has been meticulously designed to bury under a thick layer of Dalvik and Google plaster anything that could remind anyone of Unix. It is very frustrating to know that there is a Linux kernel under all that, and yet to suffer wading knee-deep in the marshes of toyland. The more I use Android and study it, the more I feel that Linux is a mere hardware abstraction layer and the POSIX world a distant memory. This is not the droid I’m looking for.

Books and Military – 18 Oct 2009 at 11:58 by Jean-Marc Liotier

Danger Close is a candid commander’s point of view of 3 Para’s deployment in Afghanistan in the early phases of the British commitment. This is an entirely subjective account, so don’t expect insight into the great game – but do expect a rare insight into the relationship between a commander and his men, in the dirt among the sangars under rocket and mortar fire. The loneliness at the top comes through very clearly.

I was stunned to discover how few means were available and how rapidly those means were stretched almost to breaking point. As a result, during most of its time in Afghanistan, 3 Para was reduced to holding hastily fortified besieged positions in politically important towns – which is not how one might expect this sort of unit to be employed.

This book is a quick read, but it provides a valuable vignette of the Afghanistan conflict. It is also a story of the fortitude of the British troops in the face of highly challenging odds. 479 000 rounds fired in six months is a level of sustained combat not seen by the British Army since the end of the Korean war.

Consumption and Cycling and Roller skating – 12 Oct 2009 at 0:22 by Jean-Marc Liotier

As a preamble, let me declare that I am in no way affiliated with Princeton Tec and that I stand to gain or lose nothing by expressing my opinions about their products.

A year ago, I needed a light for both cycling and skating, powerful enough for seeing and being seen in urban traffic, and with the capability to rapidly switch between the two modes, which means helmet mounting. So I went searching the web. Among the most interesting finds was Bike Magazine’s December 2007 test of eight LED trail lights – the Princeton Tec Switchback is HID, not LED, but the tests were otherwise informative. I ended up with a shortlist of two candidates : the Niterider TriNewt and the Princeton Tec Switchback 3.

As MTBR’s light shootout illustrates, the Switchback 3 is far from being as powerful as the Nite Rider Trinewt, but it has twice the autonomy (six hours at full power), which is important because on a skating raid I want reliable lighting that can last the whole night. Its list price is 40% lower and it features a blinking mode, which I find useful for surviving the daily commute among zombie drivers with mobile phones. Considering the startled looks on people’s faces when I ride across town, I’m guessing that the Switchback 3 is plenty powerful enough for that purpose – it even gets me well noticed in daytime, especially on blink mode.

But power is not everything. In addition to the power, the Switchback 3 has a well designed beam pattern with enough reach for moderately high speeds in the dark, and enough width close up for seeing what you are sticking your wheels into. The two outer beams provide the reach, and the diffused center beam provides the breadth. The regulated power supply ensures stable lighting power whatever the state of the lithium-ion battery (which charges in two hours). With battery, the Switchback 3 is 300 grams heavier than the Nite Rider Trinewt, but if I need to lose weight I’ll start with some body fat. In addition, the weight of the light itself is very low and the hefty remote battery can be stuck near the center of gravity, where its weight is not a concern.

The whole system is watertight – even the connectors are very well designed. Over the year it endured heavy rains with no problem, although one-handed connection and disconnection is a bit difficult with wet gloves.

The lamp not only looks solid – it most definitely is very solid. Skating in a traffic jam, as I passed a stationary dump truck I ducked under its rear and forgot about the light topping my helmet, thus underestimating my height. The light smashed hard into the truck. While my head, strapped to the helmet, stopped my upper body, the rest continued and I fell down on my ass. The light took the full brunt of the shock of my skating 93 kilograms – backpack not included. That was enough to make it pop out of its otherwise sturdy quick-release tool-less helmet mount, but I was able to slide it back in right away and it is still as secure as before. There is now a dent in the frame, but the frame played its role right, as the slightly recessed optics did not suffer in the slightest. The system has been performing nominally ever since.

I love this light and I think that my security on the road has markedly improved since I have been wearing it. Here is another review from Crankfire and one from Metro Sucks – both go along the same lines.

My only gripe was that the extension cord was too short. For applications such as roller skating raids and even for exploring the catacombs of Paris (hint – mount it on the side of the helmet to avoid bumping into the low ceiling all the time) – really, for any time a backpack is worn – the original extension cord is perfect for having the accumulator pack in the backpack and the lamp on the helmet. But for strapping the battery on my bicycle frame while using the light on the helmet mount, it is far too short. Of course, mounting the lamp on the handlebars would not require an additional extension, but on top of the additional flexibility, helmet mounting allows me to point the beam towards the direction from which I want to attract attention – for cycling and skating, that provides appreciable extra security in dense urban traffic. So I went looking for an extension cord longer than the one supplied with the package, but did not find anything like that.

I asked Princeton Tec support for help and the extremely helpful Rob confirmed that instead of a longer cord I could chain two of the original ones. But I only had the one shipped with the lamp, and searching for a Princeton Tec Switchback extension cord only yielded pages of shops displaying the description of the accessories kit sold with the lamp. None of them seems to sell the cord itself.

I asked Rob again and his reaction utterly surprised me – he simply offered to ship me an additional extension cord… free of any cost ! That is support above and beyond the call of duty. The Switchback’s price led me to expect good support, but I have often been disappointed by other companies pretending to care. That was not the case with Princeton Tec : those guys plainly turned a slight gripe into complete satisfaction – with a cord now long enough, I have nothing left to complain about. Considering the cost of an extension cord, one could see their reaction as just good commercial sense – a happy and probably returning customer for a few dollars – but it is not every day that I stumble upon a supplier with that sort of intelligence. It looks like I am not the only one to have had that sort of experience with them. Thank you Princeton Tec – next time I need a lamp for anything, you can be sure you’ll end up shortlisted at least !
