Networking & telecommunications archived articles

Subscribe to the RSS feed for this category only

Networking & telecommunications and Politics and Security – 17 Jun 2013 at 0:37 by Jean-Marc Liotier

I took the EFF and Tor stickers as corroborating material in support of Snowden’s appearances of good character, but not everyone saw them that way… Interviewed by Time’s Andrew Katz, former security clearance investigator Nicole Smith explains that sympathy for online rights activists is a sign that a candidate may not be fit for Top Secret clearance:

In a photograph posted online after Snowden revealed himself, his laptop displays a sticker touting the Electronic Frontier Foundation, a longstanding advocate for online rights and staunch opponent of government surveillance. That would have been enough of a warning sign to make it into his file, Smith says, but investigators wouldn’t have come across it because clearance interviews aren’t performed at their homes : “You’re not around that person’s personal belongings to make any other additional observations about that person’s characters”.

Self doubt ? Ethical questioning ? Interest in social issues ? Affinities for dissenting viewpoints ? No – that is neither useful nor even compatible with secret work… Better fill the ranks with yes-men who will follow superior orders to the bitter end – that worked so well in the past.

Anyway, thanks to Smith, the authorities now know what to watch for – open display of affinities with the EFF is enough of a warning sign to make it into the file. Take this NSA agent for example, performing devious agitprop in official EFF attire :

Uh – hello General Alexander ! Doesn’t the Director of the National Security Agency look swell in that T-shirt ? Better in my opinion than in his stiff official portrait… But that warning sign shall certainly cost him an entry in his file – he’ll have some serious explaining to do when his clearances come up for review ! Maybe he should have just ordered an EFF sticker for his home laptop instead.

Marketing and Networking & telecommunications and Security and Social networking and The media and The Web – 12 Jun 2013 at 11:11 by Jean-Marc Liotier

A few reflections from my notes of public reaction to last weekend’s events.

Advertising is the main source of revenue for publishers on the Web, including the lords of sharecropping empires such as Facebook and Google. Revenue from advertising varies hugely with how well the message targets the audience. Targeting requires getting to know the target – which is the business that Facebook and Google are in : getting the user to find them useful and trust them so that he willingly provides them with their raw material.

I used to enjoy giving the publishers a lot of data in return for personalization and services – even considering the risks. Yes, we knew the risks – but they are the sort of risks that we are notoriously bad at evaluating. Most of us have probably read at least a dozen different tales of Orwellian dystopias – yet our productive relationship with service providers let us convince ourselves that betrayal won’t happen. We were so complacent that it might be argued that we asked for this.

So why are we surprised ? The surprise is in the scale of the abuse. Corruption always exists at the margins of any system that is sufficiently slack to let alternative ways thrive and supply the mainstream with fresh ideas. A society with no deviance at its margins is totalitarian – so we live with some antisocial behaviour as a cost of doing business in a society that values individual freedom.

But today we find that the extent of corruption is not restricted to the margins – we find that most of what goes on among the people we entrusted with extreme power at the core of the state entirely escapes oversight and drifts into mass surveillance, which is known to asphyxiate societies. That much corruption was a risk that we were warned against, but seeing it realized is still a nasty surprise.

Again, this is not about lawful surveillance under democratic oversight, which is as acceptable as ever – this is about the dangerous nature of massive untargeted surveillance outside of democratic control. But public opinion reeling from the shock will probably be blind to the difference – it is now likely to be wary of anything that even remotely smells of surveillance.

Of course, not everyone has yet realized the tradeoffs that modern communications entail and that they have always been making, even if unwittingly – public awareness of privacy issues is not going to arise without continued evangelism anytime soon. But a host of users has awoken to realize that they were sleepwalking naked on Main Street. What will they do now ?

Considering how mainstream audiences have long happily kept gobbling up toxic information from the mass media, I am not holding my breath for a violent phase transition – but a new generation of privacy militants might just have been born and I wonder how much they will nudge the information industry’s trajectory. In any case, they will not make the Internet more welcoming to that industry.

Networking & telecommunications and Politics – 10 Jun 2013 at 10:02 by Jean-Marc Liotier

Do you remember who said this ?

“This Administration also puts forward a false choice between the liberties we cherish and the security we demand. I will provide our intelligence and law enforcement agencies with the tools they need to track and take out the terrorists without undermining our Constitution and our freedom.

That means no more illegal wire-tapping of American citizens. No more national security letters to spy on citizens who are not suspected of a crime. No more tracking citizens who do nothing more than protest a misguided war. No more ignoring the law when it is inconvenient. That is not who we are. And it is not what is necessary to defeat the terrorists”.

Hint – it was in August 2007. Yes, he may have changed his mind since then…

Yes we (probably) can ! (your mileage may vary; this message does not reflect the thoughts or opinions of either myself, my company, my friends, or alter ego; terms are subject to change without notice; this message has not been safety tested for children under the age of 3; any resemblance to actual persons, living or dead, is unintentional and purely coincidental; do not remove this disclaimer under penalty of law; for a limited time only; this message is void where prohibited, taxed, or otherwise restricted; message is provided “as is” without any warranties; reader assumes full responsibility; if any defects are discovered, do not attempt to read them yourself, but return to an authorized service center; read at your own risk; text may contain explicit materials some readers may find objectionable, parental guidance is advised; keep away from pets and small children; some assembly required; not liable for damages arising from use or misuse; may cause random outbursts of extreme violence, or epileptic seizures; actual message may differ from illustration on box; other rules may apply; past performance does not predict future results; see store for details).

Networking & telecommunications and Politics and Social networking and The media and Uncategorized – 09 Jun 2013 at 22:49 by Jean-Marc Liotier

In the wake of the Prism debacle, Google CEO Larry Page and Facebook CEO Mark Zuckerberg, among others, published reactions full of outrage, strong denials of specific allegations (“direct access”, “back doors”) and technically correct truths… but ridiculously inadequate ones in the face of the awesome shitstorm that Edward Snowden kicked up, as they won’t admit willful cooperation, or even awareness of possible abuse, of privileges lightheartedly granted to the NSA.

Meanwhile, the Director of National Intelligence issued a fact sheet stating that PRISM was conducted “under court supervision, as authorized by Section 702 of the Foreign Intelligence Surveillance Act (FISA) (50 U.S.C. § 1881a)”. Among other things, that fact sheet states that :

Under Section 702 of FISA, the United States Government does not unilaterally obtain information from the servers of U.S. electronic communication service providers. All such information is obtained with FISA Court approval and with the knowledge of the provider based upon a written directive from the Attorney General and the Director of National Intelligence.

Above emphasis is mine – “not unilaterally” and “with knowledge of the provider”. Hello, Larry ? Zuck ? Feeling lonely there ? Have you just been hung out to dry by your friend the DNI ?

Military and Networking & telecommunications and Politics and Social networking – 06 Jun 2013 at 22:40 by Jean-Marc Liotier

By now you are probably already participating in the fireworks triggered by the leak of a secret court order requiring Verizon to hand over all call data to the NSA. Mass surveillance was a well known threat – but now we have proof that the USA does it… Will that be the wake-up call for increased political awareness ? I’m not holding my breath…

US Senators don’t seem to have realized the extent of public outrage – witness comments such as “This is nothing particularly new… Every member of the United States Senate has been advised of this”… Mass surveillance ? Yes we can ! All that would not have happened if Obama had been elected.

Anyway, a couple of months ago, Frank La Rue, the United Nations Special Rapporteur on Freedom of Expression and Opinion, reported to the UN Human Rights Council, making a connection between surveillance and free expression. It establishes the principle that countries that engage in bulk, warrantless Internet surveillance are violating their human rights obligations to ensure freedom of expression. Was that report prescient ? Is it part of a new trend at the UN ? Here are a few choice morsels from the conclusions of this extensive piece of research :

79. States cannot ensure that individuals are able to freely seek and receive information or express themselves without respecting, protecting and promoting their right to privacy. Privacy and freedom of expression are interlinked and mutually dependent; an infringement upon one can be both the cause and consequence of an infringement upon the other.

80. In order to meet their human rights obligations, States must ensure that the rights to freedom of expression and privacy are at the heart of their communications surveillance frameworks.

81. Communications surveillance should be regarded as a highly intrusive act that potentially interferes with the rights to freedom of expression and privacy and threatens the foundations of a democratic society.

Clear enough for y’all ? The report was in no way aiming at the US of A but today’s revelations make it difficult to read it without thinking about them…

Mass surveillance is like searching every single home in the whole country because some of them might hide something illegal. With such massive indiscriminate intrusion into private lives, secrecy isn’t kept to avoid “tipping off the target” – it is about avoiding legitimate public outrage at misguided actions, outside of any effective control, that undermine the very foundations of what we strive for.


Networking & telecommunications and Politics and Security – 30 Jan 2013 at 13:45 by Jean-Marc Liotier

[This post motivated by a strange lack of FISAA awareness around me]

You will certainly be relieved to learn that US government agencies do not spy clandestinely on the data you entrust to Google, Facebook & co.

So stop wondering about dark conspiracies : there are none.

The bad news is that they do it legally instead. Yes – US government agencies can legally access any data stored by non-American citizens at USA-based hosting companies. No warrant required – they can basically help themselves to your data anytime they please and that is entirely legal.

Brazen, isn’t it ? It is called FISAA – for more details, take a look at this European Parliament report. And by the way, I believe that some strong reaction from the European Union has been long overdue.

The silver lining is that European hosts are making good business with everyone who won’t host their data in the USA anymore !

Networking & telecommunications and Systems administration and Unix – 06 Jun 2012 at 11:48 by Jean-Marc Liotier

Today is IPv6 party time so let’s celebrate with a blog post !

Reliable IPv6 connectivity is no longer just nice to have – it is a necessity. If your Internet access provider still does not offer proper native IPv6 connectivity, your next best choice is to use an IPv4 tunnel to an IPv6 point of presence. It works and on the client side it only requires this sort of declaration in /etc/network/interfaces :

auto ipv6-tunnel-he
iface ipv6-tunnel-he inet6 v4tunnel
    address 2001:470:1f12:425::2
    netmask 64
    endpoint 216.66.84.42
    gateway 2001:470:1f12:425::1

Of course, the same sort of configuration is required at the other endpoint – which means that, among other parameters, you must inform the IPv6 tunnel server of the IPv4 address of the client endpoint. Hurricane Electric, my tunnel broker lets me do that manually through its web interface – which is fine for a static configuration done once, but inadequate if your Internet access provider won’t supply you with a static IPv4 address. By the way, even if, after a few weeks of use, you believe you have a static address, you might just have a dynamic address with a rather long DHCP lease…

But Hurricane Electric also provides a primitive HTTP API that lets you inform the tunnel broker of IPv4 address changes – that is all we need to do it automatically every time our Internet access goes up. Adding this wget command to the uplink configuration stanza in /etc/network/interfaces does the trick :

auto eth3
iface eth3 inet dhcp
  up wget -O /dev/null https://USERNAME:PASSWORD@ipv4.tunnelbroker.net/ipv4_end.php?tid=34764

That’s it – you can now count on IPv6 connectivity, even after a dynamic IPv4 address change.
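The same idea also works outside of ifupdown – for instance from a cron job. A minimal sketch, which only pokes the broker when the address actually changed ; the cache file path is a hypothetical placeholder and the actual broker call is left as a comment (it would be the same wget URL as above) :

```shell
#!/bin/sh
# Sketch : only contact the tunnel broker when the public IPv4
# address actually changed. The cache path is a placeholder.
CACHE=/var/cache/he-tunnel-ipv4

update_if_changed() { # $1 = current public IPv4, $2 = cache file
    last=$(cat "$2" 2>/dev/null)
    if [ "$1" != "$last" ]; then
        printf '%s\n' "$1" > "$2"
        echo "update"    # here, call the ipv4_end.php URL with wget
    else
        echo "no-change"
    fi
}

# Example invocation, obtaining the current address however you like :
# update_if_changed "$(wget -qO- https://ifconfig.me)" "$CACHE"
```

Run it every few minutes and the broker only ever hears from you when there is actually something new to tell it.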

And after you are done, go test your IPv6 configuration and your IPv6 throughput !

Debian and Networking & telecommunications and Systems administration and Unix – 17 Oct 2011 at 11:03 by Jean-Marc Liotier

I just wanted to create an Apache virtual host responding to queries only over IPv6. That should have been most trivial considering that I had already been running a dual-stacked server, with all services accessible over both IPv4 and IPv6.

Following the established IPv4 practice, I set upon configuring the virtual host to respond only to queries directed to a specific IPv6 address. That is done by inserting the address in the opening of the VirtualHost stanza : <VirtualHost [2001:470:1f13:a4a::1]:80> – same as an IPv4 configuration, but with brackets around the address. It is simple and after adding an AAAA record for the name of the virtual host, it works as expected.
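Spelled out, the whole stanza amounts to something like this minimal sketch – the ServerName and DocumentRoot are of course placeholders :

```apache
# Minimal sketch - ServerName and DocumentRoot are placeholders.
<VirtualHost [2001:470:1f13:a4a::1]:80>
    ServerName ipv6only.example.org
    DocumentRoot /var/www/ipv6only
</VirtualHost>
```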

I should rather say it works even better than expected : all sub-domains of the second-level domain I’m using for this virtual host are now serving the same content that the new IPv6-only virtual host is supposed to serve… Ungood – cue SMS and mail from pissed-off users and a speedy rollback of the changes; the joys of cowboy administration in a tiny community-run host with no testing environment. As usual, I am not the first user to fall into the trap. Why Apache behaves that way with an IPv6-only virtual host is beyond my comprehension for now.

Leaving aside the horrible name-based hack proposed by a participant in the Sixxs thread, the solution is to give each IPv6-only virtual host its own IPv6 address. Since this server has been allocated a /64 subnet yielding it 18,446,744,073,709,551,616 addresses, that’s quite doable, especially since I can trivially get a /48 in case I need 1,208,925,819,614,629,174,706,176 more addresses. Remember when you had to fill triplicate forms and fight a host of mounted trolls to justify the use of just one extra IPv4 address ? Yes – another good reason to love IPv6 !

So let’s add an extra IPv6 address to this host – another trivial task : just create an aliased interface, like :

auto eth0:0
iface eth0:0 inet6 static
    address 2001:470:1f13:a4a::1
    netmask 64
    gateway 2001:470:1f12:a4a::2

The result :

SIOCSIFFLAGS: Cannot assign requested address
Failed to bring up eth0:0.

This is not what we wanted… You may have done it dozens of times in IPv4, but in IPv6 your luck has run out.

Stop the hair pulling right now : this unexpected behaviour is a bug – this one is documented in Ubuntu, but I can confirm it is also valid on my mongrel Debian system. Thanks to Ronny Roethof for pointing me in the right direction !

The solution : declare the additional address in a post-up command of the main IPv6 interface (and don’t forget the matching pre-down command to keep things clean) :

auto he-ipv6
iface he-ipv6 inet6 v4tunnel
    address 2001:470:1f12:a4a::2
    netmask 64
    endpoint 216.66.84.42
    local 212.85.152.17
    gateway 2001:470:1f12:a4a::1
    ttl 64
    post-up ip -f inet6 addr add 2001:470:1f13:a4a::1 dev he-ipv6
    pre-down ip -f inet6 addr del 2001:470:1f13:a4a::1 dev he-ipv6

And now the IPv6-only virtual host serves as designed and the other virtual hosts are not disturbed. The world is peaceful and harmonious again – except maybe for that ugly post-up declaration in lieu of declaring an aliased interface the way the Unix gods intended.

All that just for creating an IPv6 virtual host… Systems administration or sleep ? Systems administration is more fun !

Mobile computing and Networking & telecommunications – 29 Jun 2011 at 15:13 by Jean-Marc Liotier

With UMTS now potentially available on all the frequency bands traditionally allocated to GSM, why are we still operating GSM there while UMTS offers nothing but improvements over it and all contemporary handsets support it ? The question is particularly pressing since data traffic has for quite a while accounted for more than 90% of network usage in volume and grows faster than backhaul can be deployed and cells made smaller, while spectral efficiency has come awfully close to theoretical optima. GSM data modes such as GPRS and its incremental improvements have served their purpose well, but they are hacks shoehorning data into a TDM voice world – nothing like the native capabilities of UMTS. Of course, modern marketing knows the value of nostalgia as an advertising vector, but I suspect that the market of users who insist on GSM for nostalgia’s sake may not be sufficient to justify its cost.

Some manufacturers nowadays offer unified RAN infrastructure that supports both UMTS and GSM on a single piece of equipment – and many antennas are now multiband, but there is still an awful lot of specific equipment with the associated duplicated costs… And then there is the effort of maintaining the software for two entirely independent systems, each with its own bugs, quirks and yearly upgrades attempting to squeeze more throughput out of a slice of spectrum that is not going to expand – a single large operator typically has dozens of people whose workload could be cut in half overnight. I for one would love to spend more time on GIS software for the fiber optics infrastructure and less dealing with the Jurassic park.

So what are we waiting for ? Don’t we understand that frequencies are too precious to be wasted on obsolete protocols ? Let’s recycle ! Let GSM retire ! Taiwan’s ministry of transportation and communications is already working on it.

Jabber and Mobile computing and Networking & telecommunications – 09 May 2011 at 14:02 by Jean-Marc Liotier

I have owned an HTC “G2” Magic for almost two years and one of my biggest disappointments with the Android operating system has been my inability to find a decent Jabber client. On the desktop, my love of Psi has been going on for half a decade but my encounters with mobile Jabber clients have been nothing but disappointments.

On Android in the past two years I have tried them all, including notables such as Jabbdroid, Beem, Jabiru, Yaxim, Emess and many others not even worth citing. Some of them are hampered by a slow graphical user interface, some deplete batteries in a hurry, some lack features I consider essential, some even crash on receiving a message and not a single one is capable of remaining connected while the radio segment hops from GPRS to UMTS to Wi-Fi and back again… They won’t even try to reconnect – leaving me slack-jawed at the lack of such a basic feature when there is even a standard Android class that notifies applications when network connectivity changes.

Enter Xabber – it does everything I expect from an Android Jabber client. Yes, it really does – you can drop that unbelieving face. I’ll spare you the whole features list… Let’s just focus on what I was looking for :

  • Permanent tray icon as link to contacts lists
  • vCard based avatars
  • XMPP priorities
  • Groups
  • Contacts list management
  • TLS/SSL support
  • Full Unicode support
  • Chat history
  • Parameters for just enough customization
  • Multi User Chat – you can even join multiple rooms
  • Does not deplete the batteries too quickly
  • Reconnects promptly after each disconnection while the radio segment hops from GPRS to UMTS to Wi-Fi and back again

As a bonus it publishes geographical location, but I have no idea where it gets it from, nor whether it is supposed to implement XEP-0080.

Don’t you love the feeling of discovering a new application and finding that it behaves the way you expect, as if the developers had been reading your mind and making helpful suggestions about the fuzzy parts of what they had read ? On Android, K-9 Mail is the only other example I can think of… Yes, Xabber is that good.

The only downside of Xabber is that the code is not free… The site does not even mention a license. So you don’t know what lies hidden inside, you can’t modify it and you are at the mercy of the developer changing his mind and starting to ask for money for further versions. But even as a Free software fanboy I’m willing to live with that for now – I’m so relieved to at last have something that works.

From now on, expect to find me online while I’m on the move !

Edit 20130130 – Xabber is now Free Software !

Code and Free software and Networking & telecommunications and Systems administration and Unix – 01 Mar 2011 at 20:06 by Jean-Marc Liotier

I loathe Facebook and its repressive user-hostile policy that provides no value to the rest of the Web. But like that old IRC channel known by some of you, I keep an account there because some people I like & love are only there. I seldom go to Facebook unless some event, such as a comment on one of the posts that I post there through Pixelpipe, triggers a notification by mail. I would like to treat IRC that way : keeping an IRC application open and connected is difficult when mobile or when using the stupid locked-down mandatory corporate Windows workstation, and I’m keen to eliminate that attention-hogging stream from my environment – especially when an average of two people post a dozen lines a day, most of which are greetings and mealtime notifications. But when a discussion flares up there, it is excellent discussion… And you never know when that will happen – so you need to keep an eye on the channel. Let’s delegate the watching to some automation !

So let me introduce you to my latest short script : bipIRCnickmailnotify.sh – it sends IRC log lines by mail when a specific string is mentioned by other users. Of course in the present use case I set it up to watch for occurrences of my nickname, but I could have set it to watch any other string. The IRC logging is done by the bip IRC proxy that among other things keeps me permanently present on my IRC channels of choice and provides me with the full backlog whenever I join with a regular IRC client.

This Unix shell script also uses ‘since’ – a Unix utility similar to ‘tail’, except that it only shows the lines appended since the last execution. I’m sure that ‘since’ will come in handy in the future !
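Stripped of the details, the whole mechanism fits in a few lines – a sketch with placeholder nick, log path and mail address, not the actual script :

```shell
#!/bin/sh
# Sketch of the idea behind bipIRCnickmailnotify.sh - the nick, log
# path and mail address below are placeholders, not my real values.
NICK="jml"
LOG="$HOME/.bip/logs/somenetwork/#somechannel.log"
MAILTO="me@example.invalid"

# Keep only the lines mentioning the watched string, case-insensitively.
filter_mentions() { # $1 = string to watch for, log lines on stdin
    grep -i -- "$1"
}

# The real work : lines appended since the last run, filtered, mailed.
# Commented out here since it needs 'since' and a working MTA :
# since "$LOG" | filter_mentions "$NICK" | mail -s "IRC mention" "$MAILTO"
```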

So there… I no longer have to monitor IRC – bipIRCnickmailnotify.sh does it for me.

With trivial modification and the right library it could soon do XMPP notifications too – send me an instant message if my presence is ‘available’ and mail otherwise. See you next version !

Networking & telecommunications and Security and Systems administration – 07 Feb 2011 at 13:04 by Jean-Marc Liotier

I work for a very large corporation. That sort of company is not inherently evil, but it is both powerful and soulless – a dangerous combination. Thus when dealing with it, better err on the side of caution. For that reason, all of my browsing from the obligatory corporate Microsoft Windows workstation is done through an SSH tunnel established using Putty to a trusted host and used by Mozilla Firefox as a SOCKS proxy. If you do that, don’t forget to set network.proxy.socks_remote_dns to true so that you don’t leak queries to the local DNS server.
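For those not stuck with the mandatory Windows workstation, the OpenSSH equivalent of that Putty setup is a one-option affair – a sketch with placeholder host alias, hostname and user :

```
# ~/.ssh/config fragment - alias, HostName and User are placeholders.
Host trusted
    HostName trusted-host.example.org
    User jean-marc
    DynamicForward 1080
```

After which ‘ssh -N trusted’ opens a SOCKS5 proxy on localhost port 1080, ready to be declared in Firefox.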

In addition to the privacy benefits, a tunnel also gets you around the immensely annoying arbitrary filtering or throttling of perfectly reasonable sites which mysterious bureaucracies add to opaquely managed exclusion lists used by censorship systems. The site hosting the article you are currently reading is filtered by the brain-damaged Websense filtering gateway as part of the “violence” category – go figure !

Anyway, back on topic – this morning my browsing took me to Internode’s IPv6 site and to my great surprise I read “Congratulations! You’re viewing this page using IPv6 (  2001:470:1f12:425::2 ) !!!!!”. A quick visit to the KAME turtle confirmed : the turtle was dancing. The surprising part is that our office LAN is IPv4 only and the obligatory corporate Microsoft Windows workstation has no clue about IPv6 – how could those sites believe I was connecting through IPv6 ? A quick ‘dig -x 2001:470:1f12:425::2’ cleared the mystery : the reverse DNS record reminded me that this address is the one my trusted host gets from Hurricane Electric’s IPv6 tunnel server.

So browsing through a SOCKS proxy backed by an SSH tunnel to a host with both IPv4 and IPv6 connectivity will use IPv6 by default and IPv4 if no AAAA record is available for the requested name. This behaviour has many implications – good or bad depending on how you look at it, and fun in any case. As we are all getting used to IPv6, we are going to encounter many more surprises such as this one. From a security point of view, surprises are of course not a good thing.

All that reminds me that I have not yet made this host available through IPv6… I’ll get that done before the World IPv6 Day which will come on 8th June 2011 – a good motivating milestone !

Brain dump and Knowledge management and Networking & telecommunications and Technology – 16 Dec 2010 at 13:19 by Jean-Marc Liotier

Piled Higher & Deeper and Savage Chickens nailed it (thanks redditors for digging them up) : we spend most of our waking hours in front of a computer display – and they are not even mentioning all the screens of devices other than a desktop computer.

According to a disturbing number of my parents’ generation, sitting in front of a computer makes me a computer scientist and what I’m doing there is “computing”. They couldn’t be further from the truth : as Edsger Dijkstra stated, “computer science is no more about computers than astronomy is about telescopes”.

The optical metaphor doesn’t stop there – the computer is indeed transparent : it is only a window to the world. I wear my glasses all day, and that is barely worth mentioning – why would using a computer all day be more newsworthy ?

I’m myopic – without my glasses I feel lost. Out of my bed, am I really myself if my glasses are not connected to my face ?

Nowadays, my interaction with the noosphere is essentially computer-mediated. Am I really myself without a network-attached computer display handy ? Mind uploading still belongs to fantasy realms, but we are already on the way toward it. We are already partly uploaded creatures, not quite whole when out of touch with the technosphere – like Manfred Macx without his augmented reality gear. I’m far from the only one to have been struck by that illustration – as this Accelerando writeup attests :

“At one point, Manfred Macx loses his glasses, which function as external computer support, and he can barely function. Doubtless this would happen if we became dependent on implants – but does anyone else, right now, find their mind functioning differently, perhaps even failing at certain tasks, because these cool things called “computers” can access so readily the answers to most factual questions ? How much of our brain function is affected by a palm pilot ? Or, for that matter, by the ability to write things down on a piece of paper ?”

This is not a new line of thought – this paper by Andy Clark and David Chalmers is a good example of reflections in that field. Here is the introduction :

“Where does the mind stop and the rest of the world begin? The question invites two standard replies. Some accept the demarcations of skin and skull, and say that what is outside the body is outside the mind. Others are impressed by arguments suggesting that the meaning of our words “just ain’t in the head”, and hold that this externalism about meaning carries over into an externalism about mind. We propose to pursue a third position. We advocate a very different sort of externalism: an active externalism, based on the active role of the environment in driving cognitive processes”.

There is certainly a “the medium is the message” angle on that – but it goes further with the author and the medium no longer being discrete entities but part of a continuum.

We are already uploading – but most of us have not noticed yet. As William Gibson puts it: the future is already here – it’s just not very evenly distributed.

Design and Mobile computing and Networking & telecommunications and Systems and Technology – 19 Nov 2010 at 16:32 by Jean-Marc Liotier

In France, at least two mobile network operators out of three (I won’t tell you which ones) have relied on the Cell ID alone to identify cells… A mistake because contrary to what the “Cell ID” moniker suggests, it can’t identify a cell on its own.

A cell is only fully identified by combining it with the Location Area Identity (LAI). The LAI is an aggregation of the Mobile Country Code (MCC), the Mobile Network Code (MNC – which identifies the PLMN in that country) and the Location Area Code (LAC – which identifies a Location Area within the PLMN). The whole aggregate is called the Cell Global Identification (CGI) – a rarely encountered term, but this GNU Radio GSM architecture document mentions it in detail.

Since operators run their networks in their own context, they can consider that the MCC and MNC are superfluous. And since the GSM and 3G specifications define the Cell ID as a 16 bit identifier, operators believed they had plenty for all the cells they could imagine, even taking multiple sectors into account – but that was many years ago. Even nowadays there are not that many cells in a French GSM network, but the growth in the number of bearer channels was not foreseen, and each of them requires its own Cell ID – which multiplies the number of identifiers needed accordingly.

So all those who in the beginnings of GSM and in the prehistory of 3GPP decided that 65536 identifiers ought to be enough for everyone are now fixing their information systems in a hurry as they run out of available identifiers – not something anyone likes to do on a large critical production infrastructure.
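To make the aggregation concrete, here is a sketch rendering a CGI from its four components – the dash-separated, hexadecimal rendering of the LAC and Cell ID is purely illustrative, not a standard wire format :

```shell
#!/bin/sh
# Render a Cell Global Identification from its parts. The output
# format is illustrative only - the point is what gets aggregated.
cgi() { # usage : cgi MCC MNC LAC CELLID (LAC and CELLID in decimal)
    printf '%s-%s-%04x-%04x\n' "$1" "$2" "$3" "$4"
}

# With a 16 bit Cell ID, values stop at 65535 - hence the shortage.
```

For instance ‘cgi 208 01 4660 22136’ yields 208-01-1234-5678 – MCC 208 being France.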

Manufacturers and operators are together responsible for that, but alas this is just one occurrence of common shortsightedness in information systems design. Choosing unique identifiers is a basic modeling task that happens early in the life of a design – but it is a critical one. Here is what Wikipedia says about unique identifiers :

“With reference to a given (possibly implicit) set of objects, a unique identifier (UID) is any identifier which is guaranteed to be unique among all identifiers used for those objects and for a specific purpose.”

The “specific purpose” clause could be interpreted as exonerating the culprits from responsibility : given their knowledge at the time, using the Cell ID alone was reasonable for their specific purpose. But they sinned by not making the unique identifier as unique as it possibly could be. And even worse, they sinned by not following the full extent of the specification.
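The failure mode is easy to demonstrate : keyed by the bare Cell ID, two cells in different Location Areas silently collide, whereas keyed by the full CGI tuple they remain distinct. The values here are made up for illustration :

```python
# Two cells in different Location Areas sharing the same 16-bit Cell ID.
# Keyed by bare CI they collide; keyed by the full CGI they stay distinct.
# All values are hypothetical.

cell_a = {"mcc": "208", "mnc": "01", "lac": 100, "ci": 4660}
cell_b = {"mcc": "208", "mnc": "01", "lac": 200, "ci": 4660}

by_ci = {c["ci"]: c for c in (cell_a, cell_b)}
by_cgi = {(c["mcc"], c["mnc"], c["lac"], c["ci"]): c
          for c in (cell_a, cell_b)}

print(len(by_ci))   # 1 - the second cell silently overwrote the first
print(len(by_cgi))  # 2 - both cells survive
```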

But I won’t be the one casting the first stone – hindsight is 20/20 and I doubt that any of us would have done better.

But still… Remember kids : make unique identifiers as unique as possible and follow the specifications !

Mobile computing and Networking & telecommunications and Social networking and Technology30 Sep 2010 at 11:04 by Jean-Marc Liotier

Stumbling upon a months-old article on my friend George’s blog expressing his idea of local social networking, I started thinking about Bluetooth again – I’m glad he made that resurface.

Social networking has been in the air for about as long as Bluetooth exists. The fact that it can be used for reaching out to local people has not escaped obnoxious marketers nor have the frustrated Saudi youth taken long to innovate their way to sex in the midst of the hypocritical Mutaween.

Barely slower than the horny Saudis, SmallPlanet’s CrowdSurfer attempted to use Bluetooth to discover the proximity of friends, but it apparently did not survive: nowadays none of the likes of Brightkite, Gowalla, Foursquare or Loopt take advantage of this technology – they all rely on the user checking in manually. I automated the process for Brightkite – but it is still less efficient than local discovery, and unlike GPS, Bluetooth is not hampered by being indoors.
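A minimal sketch of what such friend discovery amounts to : scan for nearby devices and match their addresses against a known friend list. The addresses and names below are made up; in real use the scan result would come from a library such as PyBluez (`bluetooth.discover_devices()`), which needs actual Bluetooth hardware :

```python
# Sketch of Bluetooth-based friend discovery: match discovered device
# addresses against a known friend list. Addresses here are fictitious.

FRIENDS = {
    "AA:BB:CC:DD:EE:01": "George",
    "AA:BB:CC:DD:EE:02": "Alice",
}

def friends_nearby(discovered_addresses):
    """Return the names of known friends among the discovered devices."""
    return sorted(FRIENDS[addr] for addr in discovered_addresses
                  if addr in FRIENDS)

# In real use: discovered = bluetooth.discover_devices()  # PyBluez
discovered = ["AA:BB:CC:DD:EE:02", "11:22:33:44:55:66"]
print(friends_nearby(discovered))  # ['Alice']
```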

People like George and me think about that from time to time, and researchers put some thought into it too – so it is all the more surprising that there are no mass-scale deployments taking advantage of it. I found OlderSibling, but I doubt that it has a large user base, and its assumed spying-oriented use-cases are quite off-putting. George mentioned Bliptrack, a system for the passive measurement of traffic, but it is not a social networking application. I registered with Aki-Aki but then found that it is only available on the Apple iPhone – which I don’t use. I attempted registration with MobyLuck but I’m still waiting for their confirmation SMS… Neither MobyLuck nor Aki-Aki seems very insistent on growing its user population.

Nevertheless I quite like the idea of MobyLuck and Aki-Aki and I wonder why they have not managed to produce any significant buzz – don’t people want local social networking ?

With indoor navigation looking like the next big thing already rising well above the horizon, I’m pretty sure that there will be a renewed interest in using Bluetooth for social networking – but why did it take so long ?

Networking & telecommunications and Systems and Technology25 Sep 2010 at 10:50 by Jean-Marc Liotier

If you can read French and if you are interested in networking technologies, then you must read Stéphane Bortzmeyer’s blog – interesting stuff in every single article. Needless to say I’m a fan.

Stéphane commented on a paper by Nokia people : « An Experimental Study of Home Gateway Characteristics » – it exposes the results of networking performance tests on 34 residential Internet access CPE. For a condensed and more clearly illustrated version, you’ll appreciate the slides of « An Experimental Study of Home Gateway Characteristics » presented at the 78th IETF meeting.

The study shows bad performance and murky non-compliance issues on every device tested. The whole thing was not really surprising, but it still sounded rather depressing to me.

But my knowledge of those devices comes mostly from the point of view of a user, and from the point of view of an information systems project manager within various ISPs. I don’t have the depth of knowledge required for a critical look at this Nokia study, so I turned to a friendly industry expert who shall remain anonymous – here is his opinion :

[The study] isn’t really scientific enough testing IMHO. Surely most routers aren’t high performance due to cost reasons, and most DSL users in Telco environments don’t have more than 8 Mbit/s (24 Mbit/s is the max).

[Nokia] should check with real high-end/flagship routers such as the Linksys E3000. Other issues are common NAT issues or related settings, or use of the box’s DNS proxy. Also no real testing method is explained here, so useless IMHO. Our test plan has more than 500 pages with full descriptions and failure judgments… :)

So take « An Experimental Study of Home Gateway Characteristics » with a big grain of salt. Nevertheless, in spite of its faults I’m glad that such studies are conducted – anything that can prod the consumer market into raising its game is a good thing !

Experimental study on 34 residential CPE by Nokia: http://j.mp/abqdf6 – Bad performance and murky non-compliance all over
