Technology archived articles

Subscribe to the RSS feed for this category only

Free software and Technology and Unix – 05 Oct 2010 at 10:58 by Jean-Marc Liotier

I stumbled upon Peter Hutterer’s “thoughts on Linux multitouch” which gives a good overview of the challenges facing X.org et al. in developing multitouch on Linux. Among other things he explains why, in spite of end-user expectations to the contrary shaped by competitive offerings, Linux multitouch is not yet available:

“Why is it taking us so long when there’s plenty of multitouch offerings out there already ? The simple answer is: we are not working on the same problem.

If we look at commercial products that provide multitouch, Apple’s iPhones and iPads are often the first ones that come to mind. These provide multitouch but in a very restrictive setting: one multi-touch aware application running in full-screen. Doing this is surprisingly easy from a technical point of view, all you need is a new API that you write all new applications against. It is of course still hard to make it a good API and design good user interfaces for the new applications, but that is not a purely technical problem anymore. Apple’s products also provide multitouch in a new setting, an environment that’s closer to an appliance than a traditional desktop. They have a defined set of features, different form factors, and many of the user expectations we have on the traditional desktop do not exist. For example, hardly anyone expects Word or OpenOffice to run as-is on an iPhone.

The main problem we face with integrating multitouch support into the X server is the need for the traditional desktop. Multitouch must work across multiple windowed applications, with some pointer emulation to be able to use legacy applications on a screen. I have yet to see a commercial solution that provides this, even the Microsoft Surface applications I’ve played with so far only emulate this within very restrictive settings”.

In summary, the reason why Linux multitouch lags behind some of its competitors is that it is a significantly more ambitious project with bigger challenges to overcome.

Among the links from that document, I particularly appreciated Bill Buxton’s “Multi-touch systems that I have known and loved”, which provides a great deal of material to frame the debate over multitouch functionality – I feel less clueless about multitouch now…

Mobile computing and Networking & telecommunications and Social networking and Technology – 30 Sep 2010 at 11:04 by Jean-Marc Liotier

Stumbling upon a months-old article on my friend George’s blog expressing his idea of local social networking, I started thinking about Bluetooth again – I’m glad that he made the topic resurface.

Social networking has been in the air for about as long as Bluetooth has existed. The fact that it can be used for reaching out to local people has not escaped obnoxious marketers, nor have the frustrated Saudi youth taken long to innovate their way to sex in the midst of the hypocritical Mutaween.

Barely slower than the horny Saudis, SmallPlanet’s CrowdSurfer attempted to use Bluetooth to discover the proximity of friends, but it apparently did not survive: nowadays none of the likes of Brightkite, Gowalla, Foursquare or Loopt takes advantage of this technology – they all rely on the user checking in manually. I automated the process for Brightkite – but that is still less efficient than local discovery, and Bluetooth is not hampered by being indoors.
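To make “local discovery” concrete, here is a minimal Python sketch of the idea using the PyBluez library – the library choice, the friends list and matching on Bluetooth addresses are my own illustrative assumptions, not how CrowdSurfer actually worked :

import bluetooth  # PyBluez

# Hypothetical mapping of friends' Bluetooth addresses to names - a real
# service would resolve discovered devices to accounts on its server side.
FRIENDS = {
    "00:11:22:33:44:55": "George",
    "66:77:88:99:AA:BB": "Alice",
}

def friends_nearby():
    """Scan for discoverable Bluetooth devices and return the friends in range."""
    nearby = bluetooth.discover_devices(duration=8, lookup_names=True)
    return [(FRIENDS[addr], name) for addr, name in nearby if addr in FRIENDS]

if __name__ == "__main__":
    for friend, device_name in friends_nearby():
        print(f"{friend} is nearby (device: {device_name})")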

People like George and me think about that from time to time, and researchers have put some thought into it too – so it is all the more surprising that there are no mass-scale deployments taking advantage of it. I found OlderSibling, but I doubt that it has a large user base and its assumed spying-oriented use-cases are quite off-putting. George mentioned Bliptrack, a system for the passive measurement of traffic, but it is not a social networking application. I registered with Aki-Aki but then found that it is only available on the Apple iPhone – which I don’t use. I attempted registration with MobyLuck but I’m still waiting for their confirmation SMS… Neither MobyLuck nor Aki-Aki seems very insistent on growing its user population.

Nevertheless I quite like the idea of MobyLuck and Aki-Aki and I wonder why they have not managed to produce any significant buzz – don’t people want local social networking ?

With indoor navigation looking like the next big thing, already rising well above the horizon, I’m pretty sure that there will be a renewed interest in using Bluetooth for social networking – but why did it take so long ?

Networking & telecommunications and Systems and Technology – 25 Sep 2010 at 10:50 by Jean-Marc Liotier

If you can read French and if you are interested in networking technologies, then you must read Stephane Bortzmeyer’s blog – interesting stuff in every single article. Needless to say I’m a fan.

Stéphane commented on an article by Nokia people : « An Experimental Study of Home Gateway Characteristics » – it presents the results of networking performance tests on 34 residential Internet access CPE. For a condensed and more clearly illustrated version, you’ll appreciate the slides of « An Experimental Study of Home Gateway Characteristics » presented at the 78th IETF meeting.

The study shows bad performance and murky non-compliance issues on every device tested. The whole thing was not really surprising, but it still sounded rather depressing to me.

But my knowledge of those devices is mostly from the point of view of a user and from the point of view of an information systems project manager within various ISPs. I don’t have the depth of knowledge required for a critical look at this Nokia study. So I turned to a friendly industry expert who shall remain anonymous – here is his opinion :

[The study] isn’t really scientific enough testing IMHO. Surely most routers aren’t high performance due to cost reasons, and most DSL users (telco environments) don’t have more than 8 Mbit/s (24 Mbit/s is max).

[Nokia] should check with real high-end/flagship routers such as the Linksys E3000. Other issues are common NAT issues or related settings, or use of the box’s DNS proxy. Also no real testing method explained here so useless IMHO. Our test plan has more than 500 pages with full description and failure judgment… :)

So take « An Experimental Study of Home Gateway Characteristics » with a big grain of salt. Nevertheless, in spite of its faults I’m glad that such studies are conducted – anything that can prod the consumer market into raising its game is a good thing !


Experimental study on 34 residential CPE by Nokia: http://j.mp/abqdf6 – Bad performance and murky non-compliance all over


Economy and Politics and Technology – 08 Jul 2010 at 0:36 by Jean-Marc Liotier

Guillaume Esquelisse tipped me off about an interesting discussion arising from Andy Grove’s article on the need for US job creation and industrial policy, which highlights the relationship between innovation, manufacturing and trade. Rajiv Sethi summarized its central point : “An economy that innovates prolifically but consistently exports its jobs to lower cost overseas locations will eventually lose not only its capacity for mass production, but eventually also its capacity for innovation“.

Unlike some of the commentators of Tim Duy’s article, I’m not one of those heretics who openly toy with protectionist ideas as a protection against the shameless exploiters of the international trade system. But as Tim Duy warns : “if you scream ‘protectionist fool’ in response, then you need to have a viable policy alternative that goes beyond the empty rhetoric“. So here is a proposal.

I believe that the fix does not lie in the selfish tweaking of trade barriers – that is merely treating a symptom. We need to act at a much deeper level by addressing a foolishly held belief about the fundamental nature of the knowledge economy : underlying the fabless follies of glamorous captains of industry no longer worthy of the title is the fallacious narrative that applies capitalist analogies to the knowledge economy.

Knowledge cannot be hoarded. Like electricity production, knowledge creation is an online process : there are marginal hacks for storing some of it, but to benefit you must be plugged into the grid. As John Hagel III, John Seely Brown and Lang Davison put it : “abandon stocks, embrace flows”. Read their article and let it sink in : knowledge flows trump knowledge stocks.

This is the same point that Mike Masnick found in Terence Kealey’s “Science is a Private Good–Or: Why Government Science is Wasteful” :

“How many people in this room can read the Journal of Molecular Biology? How many people in this room can read contemporary journals in physics? Or math? Physiology? Very, very few. Now the interesting thing — and we can show this very clearly — is that the only people who can read the papers, the only people who can talk to the scientists who generate the data, are fellow specialists in the same field. And what are they doing? They are publishing their own papers.

And if they try not to publish their own papers… If they say, ‘we’re not going to get engaged in the exchange of information; we’re going to keep out of it and just try to read other people’s papers, but not do any research of our own, not make any advances of our own, not have any conversations with anyone,’ within two or three years they are obsolescent and redundant, and they can no longer read the papers, because they’re not doing the science themselves, which gives them the tacit knowledge — all the subtle stuff that’s never actually published — that enables them actually to access the information of their competitors.”

This is a huge point that fits with similar points that we’ve made in the past when it comes to intellectual property and the idea that others can just come along and “copy” the idea. So many people believe it’s easy for anyone to just copy, but it’s that tacit knowledge that is so hard to get. It’s why so many attempts at just copying what other successful operations do turn into cargo cult copies, where you may get the outward aspects copied, but you miss all that important implicit and tacit information if you’re not out there in the market yourself.

Collecting ideas is easy, but acquiring tacit knowledge takes actual involvement. Tacit knowledge requires doing. This is quite far from being news to the practitioners of knowledge management or to anyone who has ever reflected on what internalizing knowledge actually means… But it is no longer just a self-improvement recipe nor an organizational issue : it has now acquired visibility at the national policy level.

The Chinese are not just mindlessly pillaging our intellectual property : they have also understood the systemic effects of a fluid knowledge economy – take open standards for example. We must now get on with the program and admit that the whole idea of capitalizing intellectual property is a lost cause. Political leaders will soon understand it… Patent trolls are already dead – but they just don’t know it yet.

But we have already advanced far into a de-industrialization process whose only redeeming strategic value is that the Chinese must be laughing hard enough for their gross national product to be slightly negatively affected. Is it too late ?

Design and Systems and Technology – 14 Apr 2010 at 11:04 by Jean-Marc Liotier

A colleague asked me about acceptable response times for the graphical user interface of a web application. I was surprised to find that both the Gnome Human Interface Guidelines and the Java Look and Feel Design Guidelines provide exactly the same values and even the same text for the most part… One of them must have borrowed the other’s guidelines. I suspect that the ultimate source of their agreement is Jakob Nielsen’s advice :

0.1 second is about the limit for having the user feel that the system is reacting instantaneously, meaning that no special feedback is necessary except to display the result.

1.0 second is about the limit for the user’s flow of thought to stay uninterrupted, even though the user will notice the delay. Normally, no special feedback is necessary during delays of more than 0.1 but less than 1.0 second, but the user does lose the feeling of operating directly on the data.

10 seconds is about the limit for keeping the user’s attention focused on the dialogue. For longer delays, users will want to perform other tasks while waiting for the computer to finish, so they should be given feedback indicating when the computer expects to be done. Feedback during the delay is especially important if the response time is likely to be highly variable, since users will then not know what to expect.
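As a back-of-the-envelope illustration of how those three limits could translate into code, here is a minimal Python sketch of a feedback policy – the threshold values are Nielsen’s, everything else (the names and the callback mechanism) is a hypothetical construction of mine, not any toolkit’s actual API :

import threading

# Thresholds from the guidelines quoted above, in seconds.
INSTANT = 0.1      # below this, no special feedback is needed
FLOW = 1.0         # below this, the user's flow of thought stays uninterrupted
ATTENTION = 10.0   # beyond this, the user's attention drifts to other tasks

def run_with_feedback(task, show_busy_cursor, show_progress_dialog):
    """Run task() and trigger feedback callbacks according to elapsed time.

    show_busy_cursor is called if the task outlives the 1 s flow limit,
    show_progress_dialog if it also outlives the 10 s attention limit.
    Both callbacks are placeholders for whatever the toolkit provides.
    """
    done = threading.Event()

    def watchdog():
        if not done.wait(FLOW):
            show_busy_cursor()        # flow of thought is now interrupted
        if not done.wait(ATTENTION - FLOW):
            show_progress_dialog()    # give an estimate of completion time

    threading.Thread(target=watchdog, daemon=True).start()
    try:
        return task()
    finally:
        done.set()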

Jakob cites Miller’s “Response time in man-computer conversational transactions” – a paper that dates back to 1968. It seems like in more than forty years the consensus about acceptable response times has not moved substantially – which could be explained by the numbers being determined by human nature, independently of technology.

But still, I am rattled by such unquestioned consensus – the absence of dissenting voices could be interpreted as a sign of methodological complacency.

Code and Design and Systems and Technology – 13 Apr 2010 at 16:27 by Jean-Marc Liotier

Following a link from @Bortzmeyer, I was leafing through Felix von Leitner’s “Source Code Optimization” – a presentation demonstrating how unreadable code is rarely worth the hassle, considering how good compilers have become at optimizing nowadays. I have never written a single line of C or assembler in my whole life – but I like to keep an understanding of what is going on at low level, so I sometimes indulge in code tourism.

I got the author’s point, though I must admit that the details of his demonstration flew over my head. But I found the memory access timings table particularly evocative :

Access                              Cost
Page fault, file on IDE disk        1,000,000,000 cycles
Page fault, file in buffer cache    10,000 cycles
Page fault, file on RAM disk        5,000 cycles
Page fault, zero page               3,000 cycles
Main memory access                  200 cycles (Intel says 159)
L3 cache hit                        52 cycles (Intel says 36)
L1 cache hit                        2 cycles

Of course you know that swapping causes a huge performance hit and you have seen the benchmarks where throughput is reduced to a trickle as soon as the disk is involved. But still I find that quantifying the number of cycles wasted illustrates the point even better. Now you know why programmers insist on keeping memory usage tight.
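The cycle counts are von Leitner’s ; if you want to feel the effect yourself, here is a crude and entirely unscientific Python sketch that compares a cache-friendly sequential walk with a cache-hostile random walk over the same array – the array size and the whole methodology are arbitrary, and interpreter overhead hides most of the gap, but the difference usually remains visible :

import array
import random
import time

N = 10_000_000
data = array.array('l', range(N))    # tens of megabytes of contiguous integers

indices_seq = list(range(N))         # sequential walk: prefetcher-friendly
indices_rnd = indices_seq[:]         # same indices, visited in random order
random.shuffle(indices_rnd)

def walk(indices):
    total = 0
    for i in indices:
        total += data[i]
    return total

for name, idx in (("sequential", indices_seq), ("random", indices_rnd)):
    t0 = time.perf_counter()
    walk(idx)
    print(f"{name:10s}: {time.perf_counter() - t0:.2f} s")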

Jabber and Networking & telecommunications and Social networking and Technology – 12 Apr 2010 at 23:21 by Jean-Marc Liotier

This week-end I noticed Juick, an XMPP-based microblogging system with some nice original features. But Juick is not free and its author does not seem interested in freedom. So who’s gonna save XMPP-based microblogging ?

Enter OneSocialWeb, a free, open and decentralized XMPP-based social networking platform with all the federated goodness one might expect from an XMPP-based system. Sounds good doesn’t it ?

Laurent Eschenauer is a software engineer at Vodafone Group R&D and he is the architect of OneSocialWeb – the team also has Alard Weisscher, Lorena Alvarez and Diana Cheng on board. Today he posted great news about OneSocialWeb at Vodafone’s RndBackyard :

“Two months ago, we introduced you to our onesocialweb project: an opensource project that aims at building a free, open, and decentralized social network. We explained the idea, we showed what it looked like, and we answered many questions. However it was only a prototype running on our servers; there was no such federated social network… yet.

Today, we have released the source code and compiled versions of the core components of our architecture. With this, you are now in a position to install your own Openfire server, load our Onesocialweb plugin, and you will immediately be part of the Onesocialweb federation. We also provide you with a command line client to interact with other onesocialweb users.

As you see, we are not releasing the web and android client today. They will require a bit more work and you should expect them in the coming weeks. This means that this first release is mainly targeting developers, providing them with the required tools and documentation to start integrating onesocialweb features in their own clients, servers and applications.

This is a first release, not an end product. Our baby has just learned to walk and we’ll now see if it has some legs. We look forward to keep on growing it with the help of the community. Please have a look at our protocol, try to compile the code, and share your feedback with us on our mailing list. You can also have a look at our roadmap to get a feel for where we are going”.

Laurent only mentions Openfire, and the OneSocialWeb plugin for Openfire is the only one currently available for download on OneSocialWeb’s site, but despair not if like me you are rather an ejabberd fan : “Its protocol can be used to turn any XMPP server into a full fledged social network, participating in the onesocialweb federation“. So if everything goes well, you may bet on some ejabberd module development happening soon. And who knows what other XMPP servers will end up with OneSocialWeb extensions.

There was some news about OneSocialWeb about two months ago, but that was unlucky timing as the project’s message got lost in the Google Buzz media blitz. Anyway, as Daniel Bo mentions : “Many years of discussion have gone into determining what a federated social network would look like, and the OneSocialWeb doesn’t ignore that work“. Indeed, as the OneSocialWeb team mentions, it “has been built upon the shoulders of other initiatives aiming to open up the web and we have been inspired by the visionaries behind them: activitystrea.ms, portablecontacts, OAuth, OpenSocial, FOAF, XRDS, OpenID and more“. Only good stuff there – an open standard built on top of recognized open standards is an excellent sign.

All that just for microblogging ? Isn’t that slight overkill ? Did we say this was a microblogging protocol ? No – the purpose of OneSocialWeb is much more ambitious : it is to enable free, open, and decentralized social applications. OneSocialWeb is a platform :

“The suite of extensions covers all the usual social networking use cases such as user profiles, relationships, activity streams and third party applications. In addition, it provides support for fine grained access control, realtime notification and collaboration”.

Two weeks ago, Laurent attended DroidCon Belgium and he explained how OneSocialWeb will enable developers to create social & real-time mobile applications, without having to worry about the backend developments:

“In my view, this is one of the most exciting elements of our project. Beyond the ‘open’ social network element, what we are building is truly the ‘web as a platform’. An open platform making it simple to create new social applications”.

Here are his slides from DroidCon Belgium :

Is it a threat to Status.net ? No : being an open protocol, it can be used by any system willing to interoperate with other OneSocialWeb systems. @evan has expressed interest about that and I would trust him to hedge his bets. OneSocialWeb certainly competes with Status.net’s ambitious Ostatus distributed status updates protocol, but whichever wins will be a victory for all of us – and I would guess that their open nature and their similar use-cases will let them interoperate well. Some will see fragmentation, but I see increased interest that validates the vision of an open decentralized social web.

By the way, if you have paid attention to the beginning of this article, you certainly have noticed that Laurent’s article was posted at Vodafone’s RndBackyard. Yes, you read that right : OneSocialWeb is an initiative of Vodafone Group Research and Development to help take concrete steps towards an open social web. Now that’s interesting – are big telecommunications operators finally seeing the light and embracing openness instead of fighting it ? Are they trying to challenge web services operators on their own turf ? My take is that this is a direct attack on large social networking operators whose rising concentration of power is felt as a threat by traditional telecommunications operators who have always lived in the fantasy that they somehow own the customer. Whatever it is, it is mightily interesting – and even more so when you consider Vodafone’s attitude :

“We by no means claim to have all the answers and are very much open to suggestions and feedback. Anyone is invited to join us in making the open social web a reality”.

“We consider it important to reality check our protocol with a reference implementation”.

They are humble, they are open and they are not grabbing power from anyone but walled garden operators : this really seems to be about enabling an open decentralized social web. I have such a negative bias about large oligopolistic telecommunications operators that I would have a hard time believing it if I did not understand the rationale behind one of them funding this effort against the likes of Facebook… But free software and open protocols are free software and open protocols – wherever they come from !

Jabber and Social networking and Technology and The Web – 09 Apr 2010 at 16:24 by Jean-Marc Liotier

I don’t quite remember how I stumbled upon this page on Nicolas Verite’s French-language blog about instant messaging and open standards, but this is how I found a microblogging system called Juick. Its claim to fame is that it is entirely XMPP-based. I had written about Identichat, a Jabber/XMPP interface to Laconi.ca/Status.net – but this is something different : not merely providing an interface to a generic microblogging service, it leverages XMPP by building the microblogging service around it.

As Joshua Price discovered Juick almost a year before me, I’m going to recycle his introduction to the service – he paraphrases Juick’s help page anyway :

Juick is a web service that takes XMPP messages and creates a microblog using those messages as entries [..] There’s no registration, no signup, no hassle. You simply send an XMPP message to “juick@juick.com” and it creates a blog based on the username you sent from and begins recording submissions.

  1. Add “juick@juick.com” to your contact list in your Jabber client or GMail.
  2. Prepare whatever message you want juick to record
  3. Send your message

That’s it. Juick will respond immediately telling you the message has been posted, and will provide you with a web address to view your new entry.
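For those who would rather script it than type it, here is a minimal sketch of steps 2 and 3 using the slixmpp library – the library choice, the JID and the password are my own assumptions, any XMPP client library would do just as well :

import slixmpp

class JuickPoster(slixmpp.ClientXMPP):
    """Log in, send one message to the Juick bot, then disconnect."""

    def __init__(self, jid, password, text):
        super().__init__(jid, password)
        self.text = text
        self.add_event_handler("session_start", self.start)

    async def start(self, event):
        self.send_presence()
        await self.get_roster()
        # Any plain message sent to juick@juick.com becomes a microblog entry.
        self.send_message(mto="juick@juick.com", mbody=self.text, mtype="chat")
        self.disconnect()

# Placeholder credentials - use your own Jabber account.
xmpp = JuickPoster("me@example.org", "secret", "Hello from my Jabber account")
xmpp.connect()
xmpp.process(forever=False)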

The simplicity of an account creation process that sniffs your Jabber vCard is something to behold – it makes any other sign-up process feel ponderous. This poor man’s OpenID Attribute Exchange does the job with several orders of magnitude less complexity.

Almost every interaction with Juick can be performed from the cozy comfort of your favorite XMPP client – including threaded replies, which are something that Status.net’s Jabber bot is not yet capable of handling (edit – thanks to Aaron for letting us know that Status.net’s Jabber bot has always been able to do that too). And unlike every microblogging service that I have known, the presence information is displayed on the web site – take a look at Nÿco’s subscribers for an example.

The drawbacks are that this is a small social network intended for Russophones, and that the software is not free. But still, it is an original project whose features may serve as inspiration for others.

For some technical information, see Stoyan Zhekov‘s presentation.


Technology and The Web – 21 Mar 2010 at 22:21 by Jean-Marc Liotier

Gnutella was the first decentralized file sharing network. It celebrated a decade of existence on March 14, 2010. Once Audiogalaxy went down in 2002, it became my favorite service for clandestine file sharing. In late 2007, it was the most popular file sharing network on the Internet with an estimated market share of more than 40%. But nowadays, BitTorrent steals the limelight. How did that happen ?

Gnutella has structural scalability limitations that even its creator acknowledged from the very start. Over the years, major improvements were introduced, but search horizon and network size remain intrinsic limitations due to search traffic. On the other hand, BitTorrent outsourced much of the search and indexing of files to torrent web sites, only handling the actual distribution of data within the client.

Providing search requires other parties to host the indexes, but that architectural constraint has paradoxically become a key driver of BitTorrent’s popularity by providing a simple business model. Ernesto at TorrentFreak notes that easy monetization explains the ubiquity of indexes : “BitTorrent sites can generate some serious revenue, enough to sustain the site and make a decent living. In general, ad rates per impression are very low, but thanks to the huge amounts of traffic it quickly adds up. This money aspect has made it possible for sites to thrive, and has also lured many gold diggers into starting a torrent site over the years“.

With commercial interests come spam and legal vulnerabilities – so I feel much more comfortable knowing that decentralized protocols exist to provide resilience against the censorship that lurks over us in the dark, waiting for us to become complacently reliant on centralized resources. Happy birthday Gnutella !

Social networking and Technology and The Web – 10 Feb 2010 at 22:06 by Jean-Marc Liotier

Yesterday, while Google Buzz was still only a rumor, I felt that there was a slight likelihood that Google’s entry into the microblogging field would support decentralized interoperability using the OpenMicroBlogging protocol pioneered by the Status.net open source micro messaging platform. I was wrong about that, but it was quite a long shot… Speculation is a dirty job, but someone’s got to do it !

I am also surprised that there is no Twitter API, but there are plenty of other protocols on the menu that should keep us quite happy. There is already the Social Graph API, the PubSubHubbub push protocol and of course Atom Syndication and the RSS format – with the MediaRSS extension. But much more interesting is the Google Buzz documentation’s mention that “Over the next several months Google Buzz will introduce an API for developers, including full read/write support for posts with the Atom Publishing Protocol, rich activity notification with Activity Streams, delegated authorization with OAuth, federated comments and activities with Salmon, distributed profile and contact information with WebFinger, and much, much more“. So with all that available to third parties we may even be able to interact with Google’s content without having to deal with Gmail, whose rampant portalization makes me dislike it almost as much as Facebook and Yahoo.

I’m particularly excited about Salmon, a protocol for comments and annotations to swim upstream to original update sources. For now I wonder about the compared utilities of Google Buzz and FriendFeed, but once Salmon is widely implemented it won’t matter where the comments are contributed : they will percolate everywhere and the conversation shall be united again !

Jabber and Rumors and Social networking and Technology and The Web – 09 Feb 2010 at 12:29 by Jean-Marc Liotier

According to a report from the Wall Street Journal mentioned by ReadWriteWeb, Google might be offering a microblogging service as soon as this week.

When Google opened Google Talk, they opened the service to XMPP/Jabber federation. As a new entrant in a saturated market, opening up is the logical move.

The collaborative messaging field as a whole cannot be considered saturated but, while it is still evolving very fast, the needs of the early adopter segment are now well served by entrenched offers such as Twitter and Facebook. Challenging them will require an alternative strategy – and that may lead to opening up as a way to offer attractive value to users and service providers alike.

So maybe we can cling to a faint hope that Google’s entry into the microblogging field will support decentralized interoperability using the OpenMicroBlogging protocol pioneered by the Status.net open source micro messaging platform. Shall we take a bet ?

Don’t you love bar talk speculation based on anonymous rumors ?

Free software and Geography and Marketing and Politics and Technology and The Web – 17 Dec 2009 at 13:27 by Jean-Marc Liotier

The quality of OpenStreetMap‘s work speaks for itself, but it seems that we need to speak about it too – especially now that Google is attempting to appear as holding the moral high ground by using terms such as “citizen cartographer”, which they rob of its meaning by conveniently forgetting to mention the license under which the contributed data is held. But in the eye of the public, the $50,000 UNICEF donation to the home country of the winner of the Map Maker Global Challenge lets them appear as charitable citizens.

We need to explain why it is a fraud, so that motivated aspiring cartographers are not tempted to give away their souls for free. I could understand them selling their work, but giving it to Google for free is a bit too much – we must tell them. I’m pretty sure that good geographic data available to anyone for free will do more for the least developed communities than a $50,000 grant.

Take Map Kibera for example :

“Kibera in Nairobi, Kenya, widely known as Africa’s largest slum, remains a blank spot on the map. Without basic knowledge of the geography and resources of Kibera it is impossible to have an informed discussion on how to improve the lives of residents. This November, young Kiberans create the first public digital map of their own community”.

And they did it with OpenStreetMap. To the million people living in this former terra incognita, of no interest to a profit-seeking mapping provider, how much do you think it is worth to at last have a platform for services that require geographical information, without having to pay Google or remain within the limits of the uses permitted by its license ?

I answered this piece at ReadWriteWeb and I suggest that you keep an eye out for opportunities to answer this sort of propaganda against libre mapping.

Free software and Technology – 07 Dec 2009 at 12:51 by Jean-Marc Liotier

As Ars Technica announced three days ago, Intel’s 2009 launch of its ambitious Larrabee GPU has been canceled : “The project has suffered a final delay that proved fatal to its graphics ambitions, so Intel will put the hardware out as a development platform for graphics and high-performance computing. But Intel’s plans to make a GPU aren’t dead; they’ve just been reset, with more news to come next year“.

I can’t wait for more news about that radical new architecture from the only major graphics hardware vendor that has a long history of producing or commissioning open source drivers for its graphics chips.

But what are we excited about ? In a nutshell : automatic vectorization for parallel execution of any known code graph with no data dependencies between iterations is what Larrabee is about. That means that in many cases, the developer can take his existing code and get easy parallel execution for free.

Since I’m an utter layman in the field of processor architecture, I’ll let you read the words of Tim Sweeney of Epic Games, who provided a great deal of input into the design of LRBni. He sums up the big picture more eloquently than I could, and I found him cited in Michael Abrash’s April 2009 article in Dr. Dobb’s – “A First Look at the Larrabee New Instructions” :

Larrabee enables GPU-class performance on a fully general x86 CPU; most importantly, it does so in a way that is useful for a broad spectrum of applications and that is easy for developers to use. The key is that Larrabee instructions are “vector-complete.”

More precisely: Any loop written in a traditional programming language can be vectorized, to execute 16 iterations of the loop in parallel on Larrabee vector units, provided the loop body meets the following criteria:

  • Its call graph is statically known.
  • There are no data dependencies between iterations.

Shading languages like HLSL are constrained so developers can only write code meeting those criteria, guaranteeing a GPU can always shade multiple pixels in parallel. But vectorization is a much more general technology, applicable to any such loops written in any language.

This works on Larrabee because every traditional programming element — arithmetic, loops, function calls, memory reads, memory writes — has a corresponding translation to Larrabee vector instructions running it on 16 data elements simultaneously. You have: integer and floating point vector arithmetic; scatter/gather for vectorized memory operations; and comparison, masking, and merging instructions for conditionals.

This wasn’t the case with MMX, SSE and Altivec. They supported vector arithmetic, but could only read and write data from contiguous locations in memory, rather than random-access as Larrabee. So SSE was only useful for operations on data that was naturally vector-like: RGBA colors, XYZW coordinates in 3D graphics, and so on. The Larrabee instructions are suitable for vectorizing any code meeting the conditions above, even when the code was not written to operate on vector-like quantities. It can benefit every type of application!

A vital component of this is Intel’s vectorizing C++ compiler. Developers hate having to write assembly language code, and even dislike writing C++ code using SSE intrinsics, because the programming style is awkward and time-consuming. Few developers can dedicate resources to doing that, whereas Larrabee is easy; the vectorization process can be made automatic and compatible with existing code.
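Sweeney’s two criteria are easy to illustrate even in a high-level language. The sketch below is obviously not Larrabee code – just a hypothetical pair of loops showing the distinction he describes : the first has no dependency between iterations and maps directly to a data-parallel operation, the second carries a dependency from one iteration to the next and therefore resists naive vectorization :

import numpy as np

x = np.random.rand(100_000)

# Independent iterations: each output element depends only on x[i] and the
# call graph is statically known, so the whole loop becomes one data-parallel
# expression (the kind of thing LRBni would run 16 lanes at a time).
result = 3.0 * x * x + 2.0 * x + 1.0

# Loop-carried dependency: y[i] needs y[i-1], so iterations cannot simply be
# executed in parallel - this recurrence defeats straightforward vectorization.
y = np.empty_like(x)
y[0] = x[0]
for i in range(1, len(x)):
    y[i] = 0.5 * y[i - 1] + x[i]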

With cores proliferating on more CPUs every day and an embarrassing number of applications not taking advantage of them, bringing easy parallel execution to the masses means a lot. That’s why I’m eager to see what Intel has in store for the future of Larrabee.

Geography and Mobile computing and Networking & telecommunications and Technology – 30 Oct 2009 at 12:42 by Jean-Marc Liotier

Last week-end I ventured outside of the big city, into the land where cells are measured in kilometres and where signal is not taken for granted. What surprised me was not having to deal with only the faint echo of the network’s signal – that much was expected. Cell-of-origin location, on the other hand, behaved quite surprisingly in that environment : sometimes it worked with an error consistent with the cell sizes, but the next moment it would estimate my position way further from where I actually was – 34 kilometres west in this case.

The explanation is obvious : my terminal chose to attach to the cell it received best. Being physically located on a coastal escarpment, it had line of sight to a cell on the opposite side of the bay – 34 kilometres away.

But being on the edge of a very well covered area, it was regularly handed over to a nearby cell. In spite of the handover damping algorithms, this resulted in a continuous flip-flop nicely illustrated by this extract of my Brightkite account’s checkin log :

Isn’t that ugly ? Of course I won’t comment on this network’s radio planning and cell neighbourhood settings – I promised my employer I would not mention them anymore. But there has to be a better way, and my device can definitely do something about it : it is already equipped with the necessary hardware.

Instant matter displacement being highly unlikely for the time being, we can posit that sudden movement over kilometre-scale distances requires a correspondingly large acceleration. And the HTC Magic sports a three-axis accelerometer. At that point, inertial navigation immediately springs to mind. Others have thought about it before, and it could be very useful right now for indoor navigation. But limitations seem to put that goal slightly out of reach for now.

But for our purposes the hardware at hand would be entirely sufficient : we just need rough dead reckoning to check that the cell ID change is congruent with recent acceleration. Given the low quality of the acceleration measurement, using it as a positioning source is out of the question, but it would be suitable for damping the flip-flopping as the terminal suffers the vagaries of handover to distant cells.

So who will be the first to refine cell of origin positioning using inertial navigation as a sanity check ?
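To make the idea a little more tangible, here is a rough Python sketch of the sanity check I have in mind – the crude double integration, the names and the 500-metre margin are all illustrative assumptions, not a working positioning algorithm :

from dataclasses import dataclass

@dataclass
class Sample:
    ax: float   # horizontal acceleration, m/s^2
    ay: float
    dt: float   # time step since the previous sample, s

def plausible_displacement(samples, margin=500.0):
    """Crude upper bound, in metres, on how far we may have moved: a naive
    double integration of the accelerometer samples plus a safety margin."""
    vx = vy = x = y = 0.0
    for s in samples:
        vx += s.ax * s.dt
        vy += s.ay * s.dt
        x += vx * s.dt
        y += vy * s.dt
    return (x * x + y * y) ** 0.5 + margin

def accept_cell_fix(old_pos, new_pos, samples):
    """Reject a cell-of-origin fix implying a jump that the recent
    acceleration cannot account for (say, a sudden 34 km hop across the bay)."""
    dx = new_pos[0] - old_pos[0]
    dy = new_pos[1] - old_pos[1]
    jump = (dx * dx + dy * dy) ** 0.5
    return jump <= plausible_displacement(samples)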

Consumption and Free software and Knowledge management and Mobile computing and Networking & telecommunications and Systems and Technology and Unix – 19 Oct 2009 at 1:18 by Jean-Marc Liotier

Five months have elapsed since that first week-end when my encounter with Android was a severe case of culture shock. With significant daily experience of the device, I can now form a more mature judgement of its capabilities and its potential – of course from my own highly subjective point of view.

I still hate having to use Google Calendar and Google Contacts for synchronization.  I hope that SyncML synchronization will appear in the future, make Android a better desktop citizen and provide more choice of end points. Meanwhile I use Google. With that out of the way, let’s move on to my impressions of Android itself.

I am grateful for features such as a decent web browser on a mobile device, for a working albeit half baked packaging and distribution system, and for Google Maps which I consider both a superlative application in its own right and the current killer albeit proprietary infrastructure for location enabled applications. But the rigidly simple interface that forces behaviours upon its user feels like a straitjacket : the overbearing feeling when using Android is that its designers have decided that simplicity is to be preserved at all costs regardless of what the user prefers.

Why can’t I select a smaller font for my list items ? Would a parameter somewhere in a customization menu add too much complication ? Why won’t you show me the raw configuration data ? Is it absolutely necessary to arbitrarily limit the number of virtual desktops to three ? From the point of view of a user who is just getting acquainted with such a powerful platform, those are puzzling questions.

I still don’t like Android’s logic, and moreover I still don’t quite understand it. Of course I manage to use the system, but after five months of daily use it still does not feel natural. Maybe it is just a skin-deep issue or maybe I am just not the target audience – but some features are definitely backwards – package management for example. For starters, the “My Downloads” list is not ordered alphabetically nor in any apparently meaningful order. Then for each upgradeable package, one must first browse to the package, then manually trigger the upgrade, then acknowledge the upgraded package’s system privileges and finally clear the download notification and the update notification. Is this a joke ? This almost matches the tediousness of upgrading Windows software – an impressive feat considering that the foundations of Android package management seem serious enough. Where is my APT ?

Like any new user on a prosperous enough system, I am lost in choices – but that is an embarrassment of riches. Nevertheless, I wonder why basics such as a task manager are not installed by default. In classic Unix spirit, even the most basic system utilities are independent applications. But what is bearable and even satisfying on a system with a decent shell and package management with dependencies becomes torture when installing a package is so clumsy and upgrading it so tedious.

Tediousness in package management in particular, and user interaction in general, makes taming the beast an exercise in frustration. Installing a bunch of competing applications and testing them takes time and effort, so experimenting is not the pleasure it normally is on a Linux system. The lack of decent text entry compounds the feeling. Clumsy text selection makes cut and paste a significant effort – something Palm made quick, easy and painless more than ten years ago. Not implementing pointer-driven selection – what were the developers thinking ?

PIM integration has not progressed much. For a given contact, there is no way to look at a communications log that spans mail, SMS and telephony: each of them is its own separate universe. There is no way to get a list of meetings with a given contact or at a given location.

But basic functionality has been omitted too. For example, when adding a phone number to an existing contact, search is disabled – you have to scroll all the way to the contact. There is no way to search the SMS archive, and sending an SMS to multiple recipients is an exercise left to applications.

Palm OS may have been unstable, incapable of contemporary operating system features, offering only basic functionality and generally way past its shelf date. But in the mind of users, it remains the benchmark against which all PIM systems are judged. And to this day I still don’t see anything beating Palm OS on its home turf of  PIM core features and basic usability.

Palm OS was a poster child for responsiveness, but on the Android everything takes time – even after I have identified and killed the various errant applications that make it even slower. Actually, the system is very fast and capable of feats such as full-motion video that were far beyond the reach of Palm OS. But the interaction is spoilt by gratuitous use of animations for everything. Animations are useful for graphically hinting the novice user about what is going on – but beyond that they are only a drag. Please let me disable animations as I do on every desktop I use !

The choice of a virtual keyboard was my own mistake and I am now aware that I need a physical keyboard. After five months, I can now use the virtual keyboard with enough speed and precision for comfortable entry of a couple of sentences. But beyond that it is tiring and feels too clumsy for any meaningful work. This is a major problem for me – text entry is my daily bread and butter. I long for the Treo‘s keyboard or even the one on the Nokia E71 – they offered a great compromise between typing speed and compactness. And no multitouch on the soft keyboard means no keyboard shortcuts, which renders many console applications unusable – sorry Emacs users.

The applications offering is still young and I cannot blame it for needing time to expand and mature. I also still need to familiarize myself with Android culture and develop the right habits to find my way instinctively and be more productive. After five months, we are getting there – one-handed navigation has been done right. But I still believe that a large part of the user interface conventions used on Android does not match the expectations for general computing.

It seems like everything has been meticulously designed to bury under a thick layer of Dalvik and Google plaster anything that could remind anyone of Unix. It is very frustrating to know that there is a Linux kernel under all that, and yet to suffer wading knee-deep in the marshes of toyland. The more I use Android and study it, the more I feel that Linux is a mere hardware abstraction layer and the POSIX world a distant memory. This is not the droid I’m looking for.

Geography and Knowledge management and Mobile computing and Technology – 30 Aug 2009 at 21:30 by Jean-Marc Liotier

As the Geohack template used by Wikipedia for geographical locations attests (see Paris for example) there are many map publishing services on the Web. But almost all of them rely on an oligopoly of geographical data suppliers among whom AND, Navteq and Teleatlas dominate and absorb a large proportion of the profit in the geographical information value chain :

“If you purchase a TomTom, approximately 20-30% of that cost goes to Tele Atlas who licenses the maps that TomTom and many other hardware manufacturers use. Part of that charge is because Tele Atlas itself, and the company’s main rival Navteq, have to buy the data from national mapping agencies in the first place, like the Ordnance Survey, and then stitch all the information together. Hence the consumer having to pay on a number of levels”.

And yet, geographical data is a fundamental pillar of our information infrastructure. Only a few years ago the realm of specialized geographic information systems, geography is nowadays a pervasive dimension of almost every sort of service. When something becomes an essential feature of our lives, nothing short of freedom is acceptable. What happens when that freedom requires collecting humongous amounts of data and when oligopolistic actors strive to keep control and profits to themselves ? Free software collaboration and distributed data collection of course !

Andrew Ross gives a nice summary of why free geographical data is the way of the future :

“The tremendous cost of producing the maps necessitates that these firms have very restrictive licenses to protect their business models selling the data. As a result, there are many things you can’t do with the data.

[..] The reason why OpenStreetMap will win in the end and likely obviate the need for commercial map data is that the costs and risks associated with mapping are shared. Conversely, for Navteq and TeleAtlas, the costs borne by these companies are passed on to their customers. Once their customers discover OpenStreetMap data is in some cases superior, or more importantly – they can contribute to it and the license allows them to use the data for nearly any purpose – map data then becomes commodity”.

The proprietary players are aware of that trend, and they try to profit from the users who wish to correct the many errors contained in the data they publish. But why would anyone contribute something, only to see it monopolized by the editor who won’t let you do what you want with it ? If I make the effort of contributing carefully collected data, I want it to benefit as many people as possible – not just someone who will keep it for his own profit.

Access to satellite imagery will remain an insurmountable barrier in the long term, but soon the map layers will be ours to play with – and that is enough to open the whole world of mapping. Like a downhill snowball, the OpenStreetMap data set growth is accelerating fast and attracting a thriving community that now includes professional and institutional users and contributors. Over its first five years, the Wikipedia-like online map project has delivered great results – and developed even greater ambitions.

I have started to contribute to OpenStreetMap – I feel great satisfaction at mapping the world for fun and for our common good. Owning the map feels good ! You can do it too – it is easy, especially if you are the sort of person who often logs tracks with a GPS receiver. OpenStreetMap’s infrastructure is quite impressive – everything you need is already out there waiting for your contribution, including very nice editors – and there is one for Android too.
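If you log your outings as GPX files, getting a feel for what you are about to contribute is nearly trivial. Here is a minimal sketch using the gpxpy library to inspect a track before uploading it as a GPS trace – the file name is a placeholder and the upload itself is left to the web site or to your editor of choice :

import gpxpy  # pip install gpxpy

# "track.gpx" stands for whatever file your GPS receiver produced.
with open("track.gpx") as f:
    gpx = gpxpy.parse(f)

for track in gpx.tracks:
    for segment in track.segments:
        first, last = segment.points[0], segment.points[-1]
        print(f"Segment: {len(segment.points)} points, "
              f"{segment.length_2d() / 1000:.1f} km")
        print(f"  from ({first.latitude:.5f}, {first.longitude:.5f}) at {first.time}")
        print(f"  to   ({last.latitude:.5f}, {last.longitude:.5f}) at {last.time}")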

If you just want to add your grain of sand to the heap, reporting bugs and naming the places in your favourite neighbourhood are great ways to help build maps that benefit all of us.  Contributing to the map is like giving directions to strangers lost in your neighbourhood – except that you are giving directions to many strangers at once.

If you are not yet convinced, take a look at the map – isn’t it beautiful ? And it is only one of the many ways to render OpenStreetMap data. Wanna make a cycling map with it ? Yes we can ! That is the whole point of the project – we can do whatever we want with the data, free in every way.

And anyone can decide he wants his neighbourhood to be part of the world map, even if no self-respecting for-profit enterprise will ever consider losing money on such an endeavour :

“OpenStreetMap has better coverage in some niche spaces than other mapping tools, making it a very attractive resource for international development organizations. Want proof ? [..] we looked at capital cities in several countries that have been in the news lately for ongoing humanitarian situations – Zimbabwe, Somalia, and the Democratic Republic of the Congo. For two out of the three, Mogadishu and Kinshasa, there is simply no contest – OpenStreetMap is way ahead of the others in both coverage and in the level of detail. OpenStreetMap and Google Maps are comparable in Harare. The data available through Microsoft’s Virtual Earth lagged way behind in all three”.

Among other places, I was amazed at the level of detail provided to the map of Ouagadougou. Aren’t these exciting times for cartography ?

