Networking & telecommunications archived articles

Subscribe to the RSS feed for this category only

Brain dump and Knowledge management and Networking & telecommunications and Technology – 16 Dec 2010 at 13:19 by Jean-Marc Liotier

Piled Higher & Deeper and Savage Chickens nailed it (thanks redditors for digging them up) : we spend most of our waking hours in front of a computer display – and they are not even mentioning all the screens of devices other than a desktop computer.

According to a disturbing number of my parents’ generation, sitting in front of a computer makes me a computer scientist and what I’m doing there is “computing”. They couldn’t be further from the truth: as Edsger Dijkstra stated, “computer science is no more about computers than astronomy is about telescopes”.

The optical metaphor doesn’t stop there – the computer is indeed transparent: it is only a window to the world. I wear my glasses all day, and that is barely worth mentioning – why would using a computer all day be any more newsworthy ?

I’m myopic – without my glasses I feel lost. Out of my bed, am I really myself if my glasses are not connected to my face ?

Nowadays, my interaction with the noosphere is essentially computer-mediated. Am I really myself without a network-attached computer display handy ? Mind uploading still belongs to fantasy realms, but we are already on the way toward it. We are already partly uploaded creatures, not quite whole when out of touch with the technosphere, like Manfred Macx without his augmented reality gear. I’m far from the only one to have been struck by that illustration – as this Accelerando writeup attests :

“At one point, Manfred Macx loses his glasses, which function as external computer support, and he can barely function. Doubtless this would happen if we became dependent on implants – but does anyone else, right now, find their mind functioning differently, perhaps even failing at certain tasks, because these cool things called “computers” can access so readily the answers to most factual questions ? How much of our brain function is affected by a palm pilot ? Or, for that matter, by the ability to write things down on a piece of paper ?”

This is not a new line of thought – this paper by Andy Clark and David Chalmers is a good example of reflections in that field. Here is the introduction :

“Where does the mind stop and the rest of the world begin? The question invites two standard replies. Some accept the demarcations of skin and skull, and say that what is outside the body is outside the mind. Others are impressed by arguments suggesting that the meaning of our words “just ain’t in the head”, and hold that this externalism about meaning carries over into an externalism about mind. We propose to pursue a third position. We advocate a very different sort of externalism: an active externalism, based on the active role of the environment in driving cognitive processes”.

There is certainly a “the medium is the message” angle on that – but it goes further with the author and the medium no longer being discrete entities but part of a continuum.

We are already uploading – but most of us have not noticed yet. As William Gibson puts it: the future is already here – it’s just not very evenly distributed.

Design and Mobile computing and Networking & telecommunications and Systems and Technology – 19 Nov 2010 at 16:32 by Jean-Marc Liotier

In France, at least two mobile network operators out of three (I won’t tell you which ones) have relied on the Cell ID alone to identify cells… A mistake, because contrary to what the “Cell ID” moniker suggests, it cannot identify a cell on its own.

A cell is only fully identified by combining the Cell ID with the Location Area Identity (LAI). The LAI is an aggregation of the Mobile Country Code (MCC), the Mobile Network Code (MNC – which identifies the PLMN within that country) and the Location Area Code (LAC – which identifies the Location Area within the PLMN). The whole aggregate is called the Cell Global Identification (CGI) – a rarely encountered term, but this GNU Radio GSM architecture document describes it in detail.
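For the curious, here is a rough sketch of how the aggregate could be modelled – the field widths match the GSM definitions above, but the class name and example values are invented for illustration:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class CGI:
    """Cell Global Identification: MCC + MNC + LAC + Cell ID."""
    mcc: int  # Mobile Country Code (e.g. 208 for France)
    mnc: int  # Mobile Network Code, identifies the PLMN
    lac: int  # Location Area Code, a 16-bit field
    ci: int   # Cell ID, 16 bits in GSM -- NOT unique on its own

    def __post_init__(self):
        if not 0 <= self.lac <= 0xFFFF:
            raise ValueError("LAC must fit in 16 bits")
        if not 0 <= self.ci <= 0xFFFF:
            raise ValueError("Cell ID must fit in 16 bits")

    def __str__(self):
        return f"{self.mcc:03d}-{self.mnc:02d}-{self.lac:05d}-{self.ci}"

# Two cells sharing the same Cell ID are still distinct globally,
# because their Location Areas differ:
a = CGI(mcc=208, mnc=1, lac=4660, ci=22136)
b = CGI(mcc=208, mnc=1, lac=4661, ci=22136)
assert a != b
```

The point of the sketch is the last assertion: drop the LAC (as the operators did) and `a` and `b` collapse into the same 16-bit identifier.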

Since operators run their networks in their own national context, they can consider the MCC and MNC superfluous. And since the GSM and 3G specifications define the Cell ID as a 16-bit identifier, operators believed they had plenty of room for all the cells they could imagine, even taking multiple sectors into account – but that was many years ago. Even nowadays there are not that many cells in a French GSM network, but the growth in the number of bearer channels was not foreseen, and each of them requires a distinct Cell ID – which multiplies the number of identifiers consumed per cell.

So all those who, in the early days of GSM and in the prehistory of 3GPP, decided that 65536 identifiers ought to be enough for everyone are now fixing their information systems in a hurry as they run out of available identifiers – not something anyone likes to do on a large, critical production infrastructure.

Manufacturers and operators are together responsible for that, but alas this is just one occurrence of common shortsightedness in information systems design. Choosing unique identifiers is a basic modeling task that happens early in the life of a design – but it is a critical one. Here is what Wikipedia says about unique identifiers :

“With reference to a given (possibly implicit) set of objects, a unique identifier (UID) is any identifier which is guaranteed to be unique among all identifiers used for those objects and for a specific purpose.”

The “specific purpose” clause could be interpreted as exonerating the culprits from responsibility : given their knowledge at the time, using the Cell ID alone was reasonable for their specific purpose. But they sinned by not making the unique identifier as unique as it possibly could be. And even worse, they sinned by not following the specification to its full extent.

But I won’t be the one casting the first stone – hindsight is 20/20 and I doubt that any of us would have done better.

But still… Remember kids : make unique identifiers as unique as possible and follow the specifications !

Mobile computing and Networking & telecommunications and Social networking and Technology – 30 Sep 2010 at 11:04 by Jean-Marc Liotier

Stumbling upon a months-old article on my friend George’s blog expressing his idea of local social networking, I started thinking about Bluetooth again – I’m glad he made the subject resurface.

Social networking has been in the air for about as long as Bluetooth has existed. The fact that it can be used for reaching out to local people has not escaped obnoxious marketers, nor did the frustrated Saudi youth take long to innovate their way to sex in the midst of the hypocritical Mutaween.

Barely slower than the horny Saudis, SmallPlanet CrowdSurfer attempted to use Bluetooth to discover the proximity of friends, but it apparently did not survive: nowadays none of the likes of Brightkite, Gowalla, Foursquare or Loopt takes advantage of this technology – they all rely on the user checking in manually. I automated the process for Brightkite – but it is still less efficient than local discovery, and Bluetooth is not hampered by being indoors.

People like George and me think about that from time to time, and researchers have put some thought into it too – so it is all the more surprising that there are no mass-scale deployments taking advantage of it. I found OlderSibling, but I doubt it has a large user base, and its assumed spying-oriented use cases are quite off-putting. George mentioned Bliptrack, a system for the passive measurement of traffic, but it is not a social networking application. I registered with Aki-Aki but then found that it is only available on the Apple iPhone – which I don’t use. I attempted registration with MobyLuck but I’m still waiting for their confirmation SMS… Neither MobyLuck nor Aki-Aki seems very insistent on increasing its user population.

Nevertheless I quite like the idea of MobyLuck and Aki-Aki and I wonder why they have not managed to produce any significant buzz – don’t people want local social networking ?

With indoor navigation looking like the next big thing already rising well above the horizon, I’m pretty sure there will be renewed interest in using Bluetooth for social networking – but why did it take so long ?

Networking & telecommunications and Systems and Technology – 25 Sep 2010 at 10:50 by Jean-Marc Liotier

If you can read French and are interested in networking technologies, then you must read Stéphane Bortzmeyer’s blog – interesting stuff in every single article. Needless to say, I’m a fan.

Stéphane commented on an article by Nokia people : « An Experimental Study of Home Gateway Characteristics » – it exposes the results of networking performance tests on 34 residential Internet access CPE. For a condensed and more clearly illustrated version, you’ll appreciate the slides of « An Experimental Study of Home Gateway Characteristics » presented at the IETF 78 meeting.

The study shows bad performance and murky non-compliance issues on every device tested. The whole thing was not really surprising, but it still sounded rather depressing to me.

But my knowledge of those devices is mostly from the point of view of a user and from the point of view of an information systems project manager within various ISPs. I don’t have the depth of knowledge required for a critical look at this Nokia study. So I turned to a friendly industry expert who shall remain anonymous – here is his opinion :

[The study] isn’t really scientific enough testing IMHO. Surely most routers aren’t high performance due to cost reasons, and most DSL users in telco environments don’t have more than 8 Mbit/s (24 Mbit/s is the max).

[Nokia] should check with real high-end/flagship routers such as the Linksys E3000. Other issues are common NAT issues or related settings, or use of the box’s DNS proxy. Also no real testing method explained here, so useless IMHO. Our test plan has more than 500 pages with full description and failure judgment… :)

So take « An Experimental Study of Home Gateway Characteristics » with a big grain of salt. Nevertheless, in spite of its faults I’m glad that such studies are conducted – anything that can prod the consumer market into raising its game is a good thing !



Marketing and Mobile computing and Networking & telecommunications – 17 Sep 2010 at 12:07 by Jean-Marc Liotier

This morning a French banker provided me with an explanation for the efforts of the banks at selling MVNO contracts to their customers. This explanation does not dismiss the influence of the contemporary urge to use any customer-facing operation as an excuse for selling services entirely unrelated to the core product – but it makes a little more sense.

Banks see the mobile payment wave rising high on their horizon, and they want to be part of the surfing – in a bank-centric model of course. As near field communications are coming soon to a handset near you, getting ready is a rather good idea.

To carve out their rôle in the mobile payments ecosystem, banks believe that building a customer base is a good way to make sure they will have a critical mass of users to deploy their products to when the time comes. In the context of a maturing market with decreasing churn rates, this makes sense – especially as the banking and insurance industry enjoys much lower churn rates than mobile operators, and the banker’s image could have a halo effect on the mobile products they distribute.

But on the other hand, considering the leonine conditions that French mobile license holders grant the MVNOs, this is a fragile position from which to seek leverage.

By the way, if you feel like working for a hot mobile payments company, take a look at the openings at Zong and say hello to Stéphane from me !

Networking & telecommunications and Politics – 14 Apr 2010 at 11:51 by Jean-Marc Liotier

Stéphane Richard, chief executive of France Telecom, recently argued : “There is something totally not normal and contrary to economic logic to let Google use our network without paying the price”. I could barely control my hilarity.

But wait, there’s more :

Telefonica chairman Cesar Alierta said Google should share some of its online advertising revenue with carriers to compensate them for the billions of euros they are investing in fixed-line and mobile infrastructure to increase download speeds and network capacity. Alierta said that regulators should step in to supervise a settlement if no revenue sharing deal was possible between search engines led by Google and network operators. France Telecom CEO Stephane Richard said, “Today, there is a winner, who is Google. There are victims that are content providers, and to a certain extent, network operators. We cannot accept this”. Deutsche Telekom CEO Rene Obermann stated, “There is not a single Google service that is not reliant on network service. We cannot offer our networks for free”.

Whiners ! France Telecom, Telefonica and Deutsche Telekom are all historical monopoly operators that suffered the full impact of the internetworking revolution. It took them a while to realize that the good old times were gone for good, but I thought that with the help of some new blood they had reluctantly adapted to the new reality. Apparently I was wrong : in spite of a decades-long track record of overwhelming evidence to the contrary, executives at the incumbents’ club keep fantasizing about the pre-eminence of intelligent networks and how they somehow own the user. Of course I would not tax them with sheer stupidity – they are anything but stupid. This is rather a case of gross hypocrisy serving a concerted lobbying effort. And maybe, after all, they end up believing their own propaganda.

Users pay Internet access providers for – guess what – Internet access. And most providers are very happy for their Internet access to do exactly what it says on the tin while they get well-earned money in exchange. Only a few of them have the political clout necessary for this blatant attempt at distorting competition – they are trying to leverage it, but they will fail, again, like they failed to stop local loop unbundling.

Ultimately, if large operators across Europe make a foolish coordinated move against Google, it will look suspiciously like a cartel. You can play that game with the national governments, but you definitely don’t want to do that in view of the European Commissioner for Competition.

Since Google is in the crosshair, I’ll let them have the last word :

“Network neutrality is the principle that Internet users should be in control of what content they view and what applications they use on the Internet. The Internet has operated according to this neutrality principle since its earliest days… Fundamentally, net neutrality is about equal access to the Internet. In our view, the broadband carriers should not be permitted to use their market power to discriminate against competing applications or content. Just as telephone companies are not permitted to tell consumers who they can call or what they can say, broadband carriers should not be allowed to use their market power to control activity online”.

Guide to Net Neutrality for Google Users, cited by the Wikipedia article on Network Neutrality.

Update : I am far from the only one to feel slack-jawed astonishment at that shocking display of hypocrisy. From the repeating-something-relentlessly-does-not-make-it-true dept, Karl Bode at Techdirt published “Telcos Still Pretending Google Gets Free Ride”. You’ll find comments and more context there.

Jabber and Networking & telecommunications and Social networking and Technology – 12 Apr 2010 at 23:21 by Jean-Marc Liotier

This week-end I noticed Juick, an XMPP-based microblogging system with some nice original features. But Juick is not free and its author does not seem interested in freedom. So who’s gonna save XMPP-based microblogging ?

Enter OneSocialWeb, a free, open and decentralized XMPP-based social networking platform with all the federated goodness one might expect from an XMPP-based system. Sounds good doesn’t it ?

Laurent Eschenauer is a software engineer at Vodafone Group R&D and he is the architect of OneSocialWeb – the team also has Alard Weisscher, Lorena Alvarez and Diana Cheng on board. Today he posted great news about OneSocialWeb at Vodafone’s RndBackyard :

“Two months ago, we introduced you to our onesocialweb project: an opensource project that aims at building a free, open, and decentralized social network. We explained the idea, we showed what it looked like, and we answered many questions. However, it was only a prototype running on our servers; there was no such federated social network… yet.

Today, we have released the source code and compiled versions of the core components of our architecture. With this, you are now in a position to install your own Openfire server, load our Onesocialweb plugin, and you will immediately be part of the Onesocialweb federation. We also provide you with a command line client to interact with other onesocialweb users.

As you see, we are not releasing the web and android client today. They will require a bit more work and you should expect them in the coming weeks. This means that this first release is mainly targeting developers, providing them with the required tools and documentation to start integrating onesocialweb features in their own clients, servers and applications.

This is a first release, not an end product. Our baby has just learned to walk and we’ll now see if it has some legs. We look forward to keep on growing it with the help of the community. Please have a look at our protocol, try to compile the code, and share your feedback with us on our mailing list. You can also have a look at our roadmap to get a feel for where we are going”.

Laurent only mentions Openfire, and the OneSocialWeb plugin for Openfire is the only one currently available for download on OneSocialWeb’s site, but despair not if, like me, you are rather an ejabberd fan : “Its protocol can be used to turn any XMPP server into a full fledged social network, participating in the onesocialweb federation“. So if everything goes well, you may bet on some ejabberd module development happening soon. And who knows which other XMPP servers will end up with OneSocialWeb extensions.

There was some news about OneSocialWeb about two months ago, but that was unlucky timing as the project’s message got lost in the Google Buzz media blitz. Anyway, as Daniel Bo mentions : “Many years of discussion have gone into determining what a federated social network would look like, and the OneSocialWeb doesn’t ignore that work“. Indeed, as the OneSocialWeb site mentions, it “has been built upon the shoulders of other initiatives aiming to open up the web and we have been inspired by the visionaries behind them:, portablecontacts, OAuth, OpenSocial, FOAF, XRDS, OpenID and more“. Only good stuff there – an open standard built on top of recognized open standards is an excellent sign.

All that just for microblogging ? Isn’t that overkill ? Did we say this was a microblogging protocol ? No – the purpose of OneSocialWeb is much more ambitious : it is to enable free, open, and decentralized social applications. OneSocialWeb is a platform :

“The suite of extensions covers all the usual social networking use cases such as user profiles, relationships, activity streams and third party applications. In addition, it provides support for fine grained access control, realtime notification and collaboration”.

Two weeks ago, Laurent attended DroidCon Belgium and he explained how OneSocialWeb will enable developers to create social & real-time mobile applications, without having to worry about the backend developments:

“In my view, this is one of the most exciting element of our project. Beyond the ‘open’ social network element, what we are building is truly the ‘web as a platform’. An open platform making it simple to create new social applications”.

Here are his slides from DroidCon Belgium :

Is it a threat to ? No : being an open protocol, it can be used by any system willing to interoperate with other OneSocialWeb systems. @evan has expressed interest in that, and I would trust him to hedge his bets. OneSocialWeb certainly competes with the ambitious OStatus distributed status updates protocol, but whichever wins will be a victory for all of us – and I would guess that their open nature and their similar use cases will let them interoperate well. Some will see fragmentation, but I see increased interest that validates the vision of an open, decentralized social web.

By the way, if you paid attention at the beginning of this article, you certainly noticed that Laurent’s article was posted at Vodafone’s RndBackyard. Yes, you read that right : OneSocialWeb is an initiative of Vodafone Group Research and Development to help take concrete steps towards an open social web. Now that’s interesting – are big telecommunications operators finally seeing the light and embracing openness instead of fighting it ? Are they trying to challenge web services operators on their own turf ? My take is that this is a direct attack on large social networking operators whose rising concentration of power is felt as a threat by traditional telecommunications operators, who have always lived in the fantasy that they somehow own the customer. Whatever it is, it is mightily interesting – and even more so when you consider Vodafone’s attitude :

“We by no means claim to have all the answers and are very much open to suggestions and feedback. Anyone is invited to join us in making the open social web a reality”.

“We consider it important to reality check our protocol with a reference implementation”.

They are humble, they are open and they are not grabbing power from anyone but walled-garden operators : this really seems to be about enabling an open, decentralized social web. I have such a negative bias about large oligopolistic telecommunications operators that I would have a hard time believing it if I did not understand the rationale behind one of them funding this effort against the likes of Facebook… But free software and open protocols are free software and open protocols – wherever they come from !

Networking & telecommunications and Politics and Rumors and The Web – 26 Mar 2010 at 15:01 by Jean-Marc Liotier

Stéphane Bortzmeyer has a very long track record of interesting commentary about the Internet – his blog goes back to 1996. It’s a pity that my compatriot doesn’t write in English more often: I believe he would find a big audience for his excellent articles. But as he told me : “Many people write in English already, English readers do not need one more writer”. I object – there is always room for good information to be brought to a greater audience. And since his writings are licensed under the GFDL, I’ll do the translation myself when I feel like it.

Maybe this will be the only one of his articles I translate – or maybe there will be others in the future… Meanwhile, here is this one. I chose it because DNS hijacking is a subject I am sensitive about – and maybe because of the exoticism of Chinese shenanigans…

Before reading this interesting article, please heed this forewarning : as soon as we talk about China, we should admit our ignorance. Most people who pontificate about the state of the Internet in China do not speak Chinese – their knowledge of the country stops at the doorstep of international hotels in Beijing and Shanghai. The prize for the most ludicrous pro-Chinese utterance goes to Jacques Myard, representative at the National Assembly and member of the UMP party, for his support of the Chinese dictatorship [translator’s note : he went on the record saying that “the Internet is utterly rotten” and that it “should be nationalized to give us better control – the Chinese did it”]. When it comes to DNS, one of the least understood Internet services, the bullshit production rate goes up considerably, and sentences containing both « DNS » and « China » are most likely to be false.

I am therefore going to try not to emulate Myard, and only talk about what I know – which will make this article quite short and full of conditionals. Unlike criminal investigations in US movies, this article will name no culprit, and you won’t even know whether there was really a crime.

DNS root server hijacking for the purpose of implementing the policy (notably censorship) of the Chinese dictatorship has been discussed several times – for example at the 2005 IETF meeting in Paris. It is very difficult to know exactly what happens in China because Chinese users, for cultural reasons but mostly for fear of repression, do not provide much information. Of course, plenty of people travel to China, but few of them are DNS experts, and it is difficult to get them to provide data from mtr or dig correctly executed with the right options. Reports on censorship in China are often poor in technical detail.

However, from time to time, DNS hijacking in China has visible consequences outside Chinese territory. On the 24th of March, the technical manager for the .cl domain noticed that the I root server, anycast and managed by Netnod, answered bizarrely when queried from Chile :

$ dig A

; <<>> DiG 9.6.1-P3 <<>> A
; (1 server found)
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 7448
;; flags: qr aa rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 0

;              IN      A

;; ANSWER SECTION:       86400   IN      A

;; Query time: 444 msec
;; WHEN: Wed Mar 24 14:21:54 2010
;; MSG SIZE  rcvd: 66

[translator’s note : sign of the times, the Chilean administrator chose to query – and, before that, used to be classic example material Mauricio used (or because it is hijacked by the chinese govt, unlike (or even]

The root servers are not authoritative for that domain. The queried server should therefore have answered with a referral to the .com name servers. Instead, we find an unknown IP address. Someone is tampering with the server’s data :

  • The I root server’s administrators as well as its hosts deny any modifications of the data obtained from VeriSign (who manages the DNS root master server).
  • Other root servers (except, oddly, D) are also affected.
  • Only UDP traffic is hijacked – TCP is unaffected. Traceroute sometimes ends up at legitimate instances of the I server (for example, in Japan), which seems to suggest that the manipulation only affects port 53 – the one used by the DNS.
  • Affected names are those of services censored in China, such as Facebook or Twitter. They are censored not just for political reasons, but also because they compete with Chinese interests.

If you want to check it yourself, a server hosted by China Unicom will let you resolve a name :

% dig A @ 

; <<>> DiG 9.5.1-P3 <<>> A @
;; global options:  printcmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 44684
;; flags: qr aa rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 0

;              IN      A

;; ANSWER SECTION:       86400   IN      A

;; Query time: 359 msec
;; WHEN: Fri Mar 26 10:46:52 2010
;; MSG SIZE  rcvd: 66

The address returned is currently unassigned and does not belong to Facebook. [translator’s note : I get which is also abnormal]
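The tell-tale sign is easy to state programmatically: a root server should answer a query for a name in .com with a referral (no answer records and no authoritative-answer flag), never with a final, authoritative A record. Here is a toy classifier working from response metadata – the function name and inputs are my own, not how the .cl administrator actually tested:

```python
def looks_hijacked(is_root_server: bool, flags: set, answer_count: int) -> bool:
    """Flag a DNS response as suspicious when a root server returns a
    direct, authoritative answer for a name it does not serve.

    flags: the header flags from the dig output (e.g. {"qr", "aa", "rd", "ra"})
    answer_count: the ANSWER count from the response header
    """
    if not is_root_server:
        return False  # heuristic only applies to root server responses
    # A referral carries no answer records and no 'aa' flag;
    # an authoritative answer with records is the red flag.
    return answer_count > 0 and "aa" in flags

# The Chilean dig output above: flags qr aa rd ra, ANSWER: 1 -> suspicious
assert looks_hijacked(True, {"qr", "aa", "rd", "ra"}, 1)
# A healthy root referral: no answer section, no aa flag -> fine
assert not looks_hijacked(True, {"qr", "rd"}, 0)
```

This is only a sketch of the reasoning, of course – a real probe would also compare UDP and TCP answers and check the returned addresses against the expected delegation.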

It is therefore very likely that rogue root servers exist in China and that Chinese ISPs have hacked their IGP (OSPF, for example) to hijack traffic bound for the root servers. This does not quite explain everything – for example, why the known good instances installed in China still see significant traffic. But it won’t be possible to know more without in-depth testing from various locations in China. A leak from this routing hack (similar to the one that affected YouTube in 2008) certainly explains how the announcement from the rogue server reached Chile.

« The Great DNS Wall of China » and « Report about national DNS spoofing in China » are among the reliable sources of information about manipulated DNS in China.

For more information about the problem described in this article, you may also read « China censorship leaks outside Great Firewall via root server » (a good technical article), « China’s Great Firewall spreads overseas » or « Web traffic redirected to China in mystery mix-up ».

This article is distributed under the terms of the GFDL. The original article was published on Stéphane Bortzmeyer’s blog on the 26 March 2010 and translated by Jean-Marc Liotier the same day.

Economy and Geography and Mobile computing and Networking & telecommunications – 02 Nov 2009 at 20:31 by Jean-Marc Liotier

Valued Lessons wrote :

A lot has been written lately about Google Maps Navigation. Google is basically giving away an incredible mapping application with good mapping data for free. Why would they do such a thing? Most of the guesses I’ve seen basically say “they like to give stuff away for free to push more advertisements”. That’s close, but everyone seems to have missed a huge detail, perhaps the most important detail of all.

Google is an advertisement company, particularly skilled at targeted advertisements. Almost all of their revenue comes from being able to show you ads that you want to see when you want to see them. What does this have to do with maps and navigation? Well, this is going to seem really obvious once you read it, but no one seems to have mentioned it yet, so here it goes:

Google will know everywhere you drive, and when.

Valued Lessons goes on to detail ways Google could use that data to refine the targeted advertisement that represents the lion’s share of Google’s revenue. But there is another reason for pushing Google Navigation…

Now that they have found a way to start harvesting the data at a really massive scale, they are able to go head to head with all the navigation software editors that provide traffic information. Here is a nice business model :

  • Get the free version of Google Navigation deployed to as many terminals as possible.
  • Harvest traffic data.
  • Sell traffic data as a premium service. Or just give it away and kill everyone else…

Mobile network operators I know are going to hate this. They make partnerships with the likes of TomTom, only to be entirely bypassed by Google ! I love it.

Let’s take a look at what TomTom wanted to do :

TomTom will use two main sources of information, occasionally complemented by others.

First, travel times deduced from the movement patterns of mobile phones. TomTom has made an agreement with Vodafone NL, allowing us to use (anonymously) the country’s 4 million Vodafone customers as a potential source of information and developed the technology to transform this monitoring information from the mobile network into reliable travel time information.

Secondly, historical FCD (Floating Car Data) from our own customers. Every TomTom navigation system is equipped with a GPS sensor, from which one can determine the exact location of a car.

Yes, Google can do all that too.

The process TomTom has developed for obtaining data results in highly detailed traffic information. In the Netherlands, for example, it means up-to-date travel times per road segment for approximately 20,000 km of road (see figure) and historical information per road segment for all major roads in the country, approximately 120,000 km.

TomTom has developed the technology in-house to calculate travel times across the entire road network, by processing the monitoring data from the mobile telephone network through TomTom’s Mobility Framework software.
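The floating-car-data idea quoted above boils down to something simple: average the probe speeds reported per road segment, then derive a travel time from the segment’s length. A toy sketch – the segment names and figures are invented for illustration:

```python
from collections import defaultdict

def segment_travel_times(probes, segment_length_m):
    """probes: iterable of (segment_id, speed_m_per_s) samples from
    anonymized GPS traces (floating car data).
    segment_length_m: mapping of segment_id -> length in metres.
    Returns the average travel time (seconds) per segment."""
    speeds = defaultdict(list)
    for seg, v in probes:
        if v > 0:  # discard stationary / bogus samples
            speeds[seg].append(v)
    return {
        seg: segment_length_m[seg] / (sum(vs) / len(vs))
        for seg, vs in speeds.items()
    }

# Two probes on one motorway segment, one on a ring road:
probes = [("A13-east-km4", 25.0), ("A13-east-km4", 15.0), ("ring-north", 10.0)]
lengths = {"A13-east-km4": 1000.0, "ring-north": 500.0}
times = segment_travel_times(probes, lengths)
assert times["A13-east-km4"] == 50.0  # 1000 m at 20 m/s average
assert times["ring-north"] == 50.0    # 500 m at 10 m/s
```

Multiply this by millions of handsets reporting continuously and you get the “Google scale” advantage over anyone buying probe data from a single mobile network.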

And that’s information from before 2007… Imagine what can be done today !

Letting Google know where you go and letting Google mine that data is the reason for Google Latitude too… Latitude does not have the same mainstream appeal as a turn-by-turn navigation application, but with so many Google Maps customers using it inside their car, we are now talking Google scale !

Geography and Mobile computing and Networking & telecommunications and Technology30 Oct 2009 at 12:42 by Jean-Marc Liotier

Last week-end I ventured outside of the big city, in the land where cells are measured in kilometres and where signal is not taken for granted. What surprised me was not the faint echo of the network’s signal I had to deal with. Cell of origin location, on the other hand, was quite a surprising feature in that environment : sometimes it worked with an error consistent with the cellular environment, but the next moment it estimated my position to be way further from where I actually was – 34 kilometres west in this case.

The explanation is obvious : my terminal chose to attach to the cell it received best. Being physically located on a coastal escarpment, it had line of sight to a cell on the opposite side of the bay – 34 kilometres away.
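The order of magnitude is easy to check with a great-circle distance computation – the cell-of-origin error is simply the distance between the serving cell and the terminal’s true position. A minimal sketch, with hypothetical coordinates chosen to reproduce a roughly 34 km westward offset :

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometres between two points on Earth."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 \
        + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))  # mean Earth radius ~6371 km

# Hypothetical positions : my actual location on the escarpment and the
# distant cell across the bay, about 34 km west at this latitude.
me = (49.40, -1.10)
distant_cell = (49.40, -1.57)
error_km = haversine_km(*me, *distant_cell)
```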

But being on the edge of a very well covered area, it was regularly handed over to a nearby cell. In spite of the handover damping algorithms, this resulted in a continuous flip-flop nicely illustrated by this extract of my Brightkite account’s checkin log :

Isn’t that ugly ? Of course I won’t comment on this network’s radio planning and cell neighbourhood settings – I promised my employer I would not mention them anymore. But there has to be a better way and my device can definitely do something about it : it is already equipped with the necessary hardware.

Instant matter displacement being highly unlikely for the time being, we can posit that sudden movement of kilometre-scale distances will result in the corresponding acceleration. And the HTC Magic sports a three axis accelerometer. At that point, inertial navigation immediately springs to mind. Others have thought about it before, and it could be very useful right now for indoor navigation. But limitations seem to put that goal slightly out of reach for now.

But for our purposes the hardware at hand would be entirely sufficient : we just need rough dead reckoning to check that the cell ID change is congruent with recent acceleration. Given the low quality of the acceleration measurement, using it as a positioning source is out of the question, but it would be suitable for dampening the flip-flopping as the terminal suffers the vagaries of handover to distant cells.
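A rough sketch of that sanity check – function names, thresholds and the motorway-speed prior are all illustrative assumptions, not a real platform API :

```python
# Hypothetical sketch : reject a cell-of-origin position jump that recent
# acceleration cannot plausibly account for.

def max_plausible_displacement_km(accel_samples, dt, v0_kmh=130.0):
    """Pessimistic upper bound on distance covered during the sampling
    window : start from motorway speed and let every measured acceleration
    (in m/s^2) increase it. dt is the sampling period in seconds."""
    v = v0_kmh / 3.6  # convert to m/s
    d = 0.0
    for a in accel_samples:
        v += abs(a) * dt  # pessimistic : assume all acceleration adds speed
        d += v * dt
    return d / 1000.0

def accept_cell_fix(jump_km, accel_samples, dt):
    """True if the new cell-of-origin fix is congruent with recent motion."""
    return jump_km <= max_plausible_displacement_km(accel_samples, dt)

# Ten seconds of mild acceleration cannot explain a 34 km jump :
plausible = accept_cell_fix(34.0, [0.5] * 100, dt=0.1)
```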

So who will be the first to refine cell of origin positioning using inertial navigation as a sanity check ?

Economy and Networking & telecommunications19 Oct 2009 at 11:41 by Jean-Marc Liotier

Let’s go out on a limb and make a prediction. In five years, in dense urban areas, you will get your ADSL at cost, provided you subscribe to your telecommunications operator’s mobile offering.

Three major trends are at play :

  • Cells are getting smaller
  • Radio throughput is increasing
  • ADSL throughput is not going anywhere

Once cell throughput approaches ADSL throughput, the value of ADSL drops to zero. Why bother with ADSL when you have unlimited traffic at decent speeds with no geographical limitations ? In Paris, I seldom bother to even switch on my Android G2’s Wi-Fi networking – now it is all UMTS, all the time.

Why not ADSL for free ? As I can even get full motion video on demand on my mobile communicator, the availability of video services on the ADSL remains an incentive only if I’m interested in high definition. But to some people, high definition is important so ADSL retains some perceived value. In addition, giving away free ADSL access bundled with a mobile subscription would be gross abuse of a dominant position by operators protected behind the barrier to entry that their license affords them – so the worst they can do is sell ADSL at cost. But it is nevertheless tempting to squeeze out the perceived value of ADSL from the consumer’s point of view in order to cut off the fixed access pure players’ oxygen supply. Isn’t life so much more comfortable among oligopolistic old pals ? Marginalization of ADSL pure players will be even worse if they are not playing along in the fiber optics arms race.

So the incentives combine :

  • Users want the convenience of permanent unlimited cell access
  • Operators are happy to squeeze out ADSL pure players

As a result, cell traffic increases and leads us to the next step of this self-reinforcing process : femtocells. Spectral efficiency nearing the Shannon limit, antenna diversity, spatial multiplexing and other MIMO techniques can combine to provide the peak throughput that all the shiny marketing pie-in-the-sky presentations promise. But in the field those speeds are not achieved unless you camp under the antenna. For example, LTE 2×2 MIMO is advertised at a peak throughput of 173 Mb/s but actual rates are somewhere between 4 and 24 Mb/s in 2×20 MHz. They drop sharply as distance increases, and it gets worse as the cell gets crowded. So there will be strong user demand for small cells – demand theoretically exists until there is one cell per user.
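The Shannon bound itself is a one-liner, and it shows the trend : with illustrative SNR values (my assumptions, not measurements of any particular network), the capacity of a 2×2 link in 20 MHz collapses from hundreds of Mb/s near the antenna to a few tens at the cell edge – and real systems stay well below the bound :

```python
from math import log2

def shannon_capacity_mbps(bandwidth_mhz, snr_db, streams=1):
    """Shannon bound C = B * log2(1 + SNR), summed over spatial streams."""
    snr = 10 ** (snr_db / 10)  # convert dB to linear ratio
    return streams * bandwidth_mhz * log2(1 + snr)

# 2x2 MIMO in 20 MHz : near the antenna (25 dB), mid-cell (10 dB), edge (0 dB)
rates = {snr_db: shannon_capacity_mbps(20, snr_db, streams=2)
         for snr_db in (25, 10, 0)}
# roughly 332, 138 and 40 Mb/s respectively
```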

Approximately 60% of mobile usage already takes place indoors, yet providing in-building coverage is a technical problem at the gigahertz frequencies used for Wimax and LTE. This is only set to get worse as the mobile continues to replace the home phone. Research indicates that, as “all you can eat” data packages become commonplace, this number is likely to reach 75% by 2011.

Doug Puley – “The macrocell is dead, long live the network”, 2008

With more than 60% of mobile usage taking place indoors, there will be a fixed line access nearby. Extension of the access network on top of ADSL and FTTH links is already underway to increase capacity and compress costs by getting the data off the mobile network as close to the user as possible. Femtocells work well on ADSL too. So ADSL will remain useful as a way for mobile operators to shed load from the rest of the access network. And on top of that, ADSL lets the operator reach subscribers in areas not covered by the radio network.

So to mobile operators who offer fixed line access, ADSL could soon be considered as a mere adjunct to their core offering : mobile access. That could add yet more pressure on the game of musical chairs of mobile access frequencies license allocation. Why not attempt to exclude the competition that does not own a mobile network ? That leads us to ADSL access at cost – or slightly below that if the operator is willing to be naughty and deal with the market regulator. It will happen sooner than you think.

By the way, for a wealth of data about 3GPP evolution from UMTS-HSPA to LTE & 4G, you can take a look at this September 2009 report by Rysavy Research. It provides about all you need to know and it is nearly as good as what I get internally from SFR.

Consumption and Free software and Knowledge management and Mobile computing and Networking & telecommunications and Systems and Technology and Unix19 Oct 2009 at 1:18 by Jean-Marc Liotier

Five months have elapsed since that first week-end when my encounter with Android was a severe case of culture shock. With significant daily experience of the device, I can now form a more mature judgement of its capabilities and its potential – of course from my own highly subjective point of view.

I still hate having to use Google Calendar and Google Contacts for synchronization.  I hope that SyncML synchronization will appear in the future, make Android a better desktop citizen and provide more choice of end points. Meanwhile I use Google. With that out of the way, let’s move on to my impressions of Android itself.

I am grateful for features such as a decent web browser on a mobile device, for a working albeit half baked packaging and distribution system, and for Google Maps which I consider both a superlative application in its own right and the current killer albeit proprietary infrastructure for location enabled applications. But the rigidly simple interface that forces behaviours upon its user feels like a straitjacket : the overbearing feeling when using Android is that its designers have decided that simplicity is to be preserved at all costs regardless of what the user prefers.

Why can’t I select a smaller font for my list items ? Would a parameter somewhere in a customization menu add too much complication ? Why won’t you show me the raw configuration data ? Is it absolutely necessary to arbitrarily limit the number of virtual desktops to three ? From the point of a user who is just getting acquainted with such a powerful platform, those are puzzling questions.

I still don’t like Android’s logic, and moreover I still don’t quite understand it. Of course I manage to use the system, but after five months of daily use it still does not feel natural. Maybe it is just a skin-deep issue or maybe I am just not the target audience – but some features are definitely backwards – package management for example. For starters, the “My Downloads” list is not ordered alphabetically nor in any apparently meaningful order. Then for each upgradeable package, one must first browse to the package, then manually trigger the upgrade, then acknowledge the upgraded package’s system privileges and finally clear the download notification and the update notification. Is this a joke ? This almost matches the tediousness of upgrading Windows software – an impressive feat considering that the foundations of Android package management seem serious enough. Where is my APT ?

Like any new user on a prosperous enough system, I am lost in choices – but that is an embarrassment of riches. Nevertheless, I wonder why basics such as a task manager are not installed by default. In classic Unix spirit, even the most basic system utilities are independent applications. But what is bearable and even satisfying on a system with a decent shell and package management with dependencies becomes torture when installing a package is so clumsy and upgrading it so tedious.

Tediousness in package management in particular and user interaction in general makes taming the beast an experience in frustration. Installing a bunch of competing applications and testing them takes time and effort. Experimenting is not the pleasure it normally is on a Linux system. The lack of decent text entry compounds the feeling. Clumsy text selection makes cut and paste a significant effort – something Palm made quick, easy and painless more than ten years ago. Not implementing pointer-driven selection – what were the developers thinking ?

PIM integration has not progressed much. For a given contact, there is no way to look at a communications log that spans mail, SMS and telephony : each of them is its own separate universe. There is no way to have a list of meetings with a given contact or at a given location.

But basic functionality has been omitted too. For example, when adding a phone number to an existing contact, search is disabled – you have to scroll all the way to the contact. There is no way to search the SMS archive, and SMS to multiple recipients is an exercise left to applications.

Palm OS may have been unstable, lacking contemporary operating system features, offering only basic functionality and generally way past its shelf date. But in the mind of users, it remains the benchmark against which all PIM systems are judged. And to this day I still don’t see anything beating Palm OS on its home turf of PIM core features and basic usability.

Palm OS was a poster child for responsiveness, but on the Android everything takes time – even after I have identified and killed the various errant applications that make it even slower. Actually, the system is very fast and capable of feats such as full-motion video that were far beyond the reach of the Palm OS. But the interaction is spoilt by gratuitous use of animations for everything. Animations are useful for graphically hinting the novice user about what is going on – but after that they are only a drag. So please let me disable animations as I do on every desktop I use !

The choice of a virtual keyboard was my own mistake and I am now aware that I need a physical keyboard. After five months, I can now use the virtual keyboard with enough speed and precision for comfortable entry of a couple of sentences. But beyond that it is tiring and feels too clumsy for any meaningful work. This is a major problem for me – text entry is my daily bread and butter. I long for the Treo‘s keyboard or even the one on the Nokia E71 – they offered a great compromise between typing speed and compactness. And no multitouch on the soft keyboard means no keyboard shortcuts, which renders many console applications unusable – sorry Emacs users.

The applications offering is still young and I cannot blame it for needing time to expand and mature. I also still need to familiarize myself with Android culture and develop the right habits to find my way instinctively and be more productive. After five months, we are getting there – one-handed navigation has been done right. But I still believe that a large part of the user interface conventions used on the Android does not match the expectations for general computing.

It seems like everything has been meticulously designed to bury under a thick layer of Dalvik and Google plaster anything that could remind anyone of Unix. It is very frustrating to know that there is a Linux kernel under all that, and yet to suffer wading knee-deep in the marshes of toyland. The more I use Android and study it, the more I feel that Linux is a mere hardware abstraction layer and the POSIX world a distant memory. This is not the droid I’m looking for.

Networking & telecommunications and Politics and The media08 Oct 2009 at 11:18 by Jean-Marc Liotier

The French satirical investigative journalism weekly “Le Canard Enchaîné” reveals that our holier-than-thou presidency is in fact a pirate’s lair. In a stunning display of hypocrisy, the presidential audiovisual services produced 400 unauthorized copies of the 52-minute documentary “A visage découvert : Nicolas Sarkozy“.

The editor, Galaxie Presse, had only shipped 50 copies, but the propaganda plan required more so the Elysee went to work, going as far as modifying the cover and replacing the Galaxie Presse name and logos with “Service audiovisuel de la présidence de la République”.

Isn’t it deliciously ironic that the same executive power is the main force behind the latest disgusting bungled piece of French legislation regulating and controlling the usage of the Internet in order to enforce compliance with copyright law ?

It is even more appalling that we are dealing with repeat offenders : last spring, while the Hadopi law was being discussed, U.S. music duo MGMT received €30,000 as a settlement for a copyright infringement by French President Nicolas Sarkozy’s party, which had used one of its songs at a political rally without permission. Those who led the charge against Internet users are not the most respectful of copyright.

Hadopi is also known as the “three strikes” law because after a certain number of warnings a copyright infringer’s Internet access would be cut off. Hadopi has just been adopted. Nicolas – one more of those antics and your Internet access is toast !

Africa and Networking & telecommunications04 Jun 2009 at 23:44 by Jean-Marc Liotier

Accra, 23 February 2009.

Mobile telephony prices varying with network load – yield management made transparent

The mobile telephony marketing idea of the day is the MTN price zoning concept I witnessed in Ghana. According to which cell you are attached to, a message on the handset displays a discount rate. On lightly loaded cells the discount can go up to 100% (although I never saw it at more than 50%) and on the heavily loaded ones there is none. Making network loads transparent to the users through a price signal is a great way to shape traffic, take advantage of available infrastructure to provide cheap traffic and charge premium prices to the most demanding users. I have never before seen yield management made so transparent – so refreshing compared to the elaborate pricing schemes designed to play that role in more subtle and more annoying ways.
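For illustration, the mechanics could be as simple as mapping cell load to a broadcast discount – the linear mapping below is my own assumption for the sake of the sketch, not MTN’s actual scheme :

```python
# Illustrative sketch of load-based discounting as described for MTN Zone.

def zone_discount_percent(cell_load):
    """cell_load in [0, 1] : fraction of the cell's capacity in use.
    An empty cell yields the full discount, a saturated cell none."""
    load = min(max(cell_load, 0.0), 1.0)  # clamp to the valid range
    return round(100 * (1.0 - load))

def discounted_rate(base_rate_per_second, cell_load):
    """Per-second billing rate after applying the broadcast discount."""
    return base_rate_per_second * (1 - zone_discount_percent(cell_load) / 100)
```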

Here is an extract of the press release from MTN Ghana on MTN Zone service launch in June 2008 :

MTN Ghana has announced the launch of a new Innovative service named MTN Zone. The service gives its Pay As You Go subscribers the opportunity to enjoy up to 100% discount day and night on calls they make to other MTN Ghana subscribers and runs on the per second billing plan. MTN Zone subscribers have a flat tariff on all MTN to MTN calls when they register and subsequently receive messages that display dynamic discounts they will enjoy at any point in time.

In order to enjoy the benefits of MTN Zone, existing and new MTN prepaid customers simply need to register for the service by entering *135*1# and pressing the send or ok button on their handset. Alternatively, they could send 1 to SMS short code 135. Registration onto MTN Zone service is currently free of charge. To activate the cell broadcast functionality on their handsets, customers must enter *135*4# and follow the instructions ; this process is unique for each handset model. The cell broadcast feature, when enabled, gives MTN customers the opportunity to see the dynamic percentage discount they will enjoy when they initiate a call at that time and the discount will be applicable throughout the duration of the call.

“We are excited at what the team here at MTN Ghana has been able to provide after thorough research and development. Discerning Ghanaians want the most cost effective and exciting means of communicating with their family and friends and we are proud that as a team we have been able to crack this motivation and demonstrate our leadership in innovation and superior customer understanding. This new service empowers our customers with more choice and control over their cost of making calls. The excitement this service has generated within one week is unparalleled in the industry in Ghana. As usual we will lead the market in innovation and others can follow”, says George Kojo Andah, Chief Marketing Officer.

I believe that they have good reason to be proud of this innovative service. It probably requires some custom billing system but I believe it is a great idea – maybe I should write a proposal for our marketing department. The billing people might be horrified though…

Mobile computing and Networking & telecommunications04 May 2009 at 13:47 by Jean-Marc Liotier

It is coming – but very slowly of course, thanks to the oligopolistic structure of most mobile telecommunications markets. Bombastic new entrants such as Proxad in France may pretend that their vision of future low cost flat fee mobile data offerings will be the second coming, cutting the household bill in half within three years, but once they have joined the spectrum license holders’ club there will be no incentive for them to be more aggressive than what is necessary to grab their share of the market. They pretend that their new hotness is a technological advantage that will support their claim of cost reductions, but they forget to mention that the only reason the old and busted competition has not pushed that technology forward is that they control the market with no need for such bother. The large incumbents have immense resources – financial, technical, human and organizational. They can be terribly powerful when they realize that they are under threat : the steamroller may take a good while to get started but you don’t want to get in the way once it begins to roll at its stately speed.

So is the new entrant the trigger ? Actually, no : the new entrant’s marketing department has just done its homework and read the signs correctly. Early adopters have since the dawn of time been clamoring for simple, low cost and preferably flat fee mobile data offerings, but as usual the visionaries don’t hold much weight on a mass market – changing the game takes a large mainstream actor with his own agenda. And as surprising as it may be, that interloper is Apple. As users we may spurn Steve Jobs’ reality distortion field and the technically banal Disneyland world of Apple, but the marketing magic is awe inspiring to say the least. On the basis of it, Apple managed to get the mobile operators to produce deals that were completely unheard of on that market, including the revenue sharing arrangements that lasted until last year and the still strong absolute control of the platform by Apple. As a result of all that hype, the Iphone led the charge in mass usage of mobile data access.

Of course, mobile data access had already been possible for ages and the competitors are catching up fast on Apple’s lead in mobile user experience. But credit goes to Apple for giving the masses the taste for mobile data. Last September, the Australian Mobile Internet Insight found that during the average iPhone browsing session, users consumed 2.07 MB compared to 0.30 MB for other mobile users – that is nearly seven times more ! “The report also found that the average page size for iPhone browsing is more than double the mobile average, which the report attributes to iPhone users browsing desktop versions of websites“. Last month, AT&T announced that its expectation of a tenfold increase in data traffic is driven by the Iphone. Net Applications’ February results show the iPhone generating two thirds of mobile web accesses. Meanwhile, the AdMob Mobile Metrics report credits the Iphone with a 52% share of the traffic. Google claims that it has seen 50 times more searches on Apple’s Iphone than on any other system on the market. I have heard a European mobile operator’s executive mention Japanese studies reporting that Iphone users generate ten times the data traffic of other users. Apple’s share of the handset market will certainly remain minor, but as with any catalyst, a small quantity changes everything.

So we have a mass market hungry for cheap data and new entrants hoping to build their market share on that. They may eventually disrupt the market somewhat, but the incumbents won’t be caught napping : IP RAN, Ethernet backhaul, IP core networks and the IMS architecture are all in the pipeline. The incumbents fully expect the new market pressure on price, and they expect to be ready to take it on. The cost of a transferred megabit can currently be comfortably counted in cents on the fingers of your two hands, and the operators’ ambition is to cut that to at least a third over the course of three years. Can you believe that mobile operators are actually shaping up to compete on the price of bulk data ? You had better believe it, but don’t hold your breath expecting mobile network operators to go at each other’s throats with generous offerings of abundant data transfer capacity while your bill plummets – the price war will play out in slow motion if the history of the mobile telecommunications market is anything to learn from.

Meanwhile, SMS still costs from four to 42 times more than fetching data from the Hubble space telescope.
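The back-of-the-envelope calculation behind such comparisons is easy to reproduce – assuming a retail price of €0.10 per message (an illustrative figure, not any particular operator’s tariff) and the 140-byte GSM payload limit :

```python
# 140 bytes is the GSM SMS payload limit (160 characters in the 7-bit
# default alphabet) ; the price per message is an assumption.
SMS_PAYLOAD_BYTES = 140
PRICE_PER_SMS_EUR = 0.10

messages_per_mb = (1024 * 1024) / SMS_PAYLOAD_BYTES   # ~7490 messages
cost_per_mb_eur = messages_per_mb * PRICE_PER_SMS_EUR  # ~749 EUR per megabyte
```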

Identity management and Jabber and Knowledge management and Military and Mobile computing and Networking & telecommunications and Social networking and Technology and The Web23 Oct 2008 at 14:42 by Jean-Marc Liotier

I have become a user of Brightkite, a service that provides situational awareness in the geographical context. Once its relationship to user location information sources such as Fire Eagle improves, it may become a very nice tool, especially in mobile use cases where location reporting may be partly automated.

But even if they add technical value in the growing world of geographically aware applications, these services are actually not innovative at the functional level. For example, in the ham radio universe, APRS is already a great system for real time tactical digital communication of information of immediate value in the local area – which includes among other things the position of the participating stations. And there is also TCAS, which interrogates surrounding aircraft about their positions, and AIS, which broadcasts ship positions and enables entertaining Vessel Traffic Services such as the one provided by MarineTraffic. All these radio based systems broadcast in the clear and do not satisfy the privacy requirements of a personal eventing service. But that problem has also been solved by the Blue Force Tracker which, even though it is still a work in progress, has already changed how a chaotic battlefield is perceived by its participants.

“Where am I, and where are my friends ?” is not only the soldier’s critical information – it is also an important component of our social lives, witness the thriving landscape of geosocial networking. Geographic location is a fundamental enabler : we are physically embodied and the perimeter of location based services actually encompasses anything concerning our physical presence. So we can’t let physical location services escape our control. Fire Eagle may be practical for now, but we need to make geographical information part of the basic infrastructure under our control and available on a standardized, open and decentralized basis. The good news is that much thought has already been invested into that problem.

Physical location is part of our presence, and as you may have guessed by now, this means XMPP comes to the rescue ! We have XEP-0080 – User Location, an XMPP extension which is currently an XMPP Foundation Draft Standard (implementations are encouraged and the protocol is appropriate for deployment in production systems, but some changes to the protocol are possible before it becomes a Final Standard – as good as a Draft Standard RFC and therefore good enough for early adopter use). It is meant to be communicated and transported by means of Publish-Subscribe or the subset thereof specified in Personal Eventing via Pubsub. It may also be provided as an extension of plain vanilla <presence/> but that is quite a crude way to do it compared to the Publish-Subscribe goodness.
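For the curious, the XEP-0080 payload is tiny – here is a sketch of the <geoloc/> element built with nothing but the Python standard library (publishing it over Personal Eventing would require an XMPP client library, which is omitted here) :

```python
# Build a minimal XEP-0080 user location payload. The namespace and the
# lat/lon/accuracy child elements are defined by the XEP.
import xml.etree.ElementTree as ET

GEOLOC_NS = "http://jabber.org/protocol/geoloc"

def geoloc_payload(lat, lon, accuracy_m=None):
    geoloc = ET.Element("geoloc", xmlns=GEOLOC_NS)
    ET.SubElement(geoloc, "lat").text = str(lat)
    ET.SubElement(geoloc, "lon").text = str(lon)
    if accuracy_m is not None:
        ET.SubElement(geoloc, "accuracy").text = str(accuracy_m)
    return ET.tostring(geoloc, encoding="unicode")

payload = geoloc_payload(48.8567, 2.3508, accuracy_m=20)
```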

The rest of the work is left to the XMPP client. Of course, the client can show contact locations on a map, just as Brightkite currently does. But I can also easily imagine an instant messaging contact list on my PDA where one of the contact groups is “contacts near me”. I would love to have Psi do that…
