April 2010


Roller skating15 Apr 2010 at 18:41 by Jean-Marc Liotier

Tuesday night, as I was lacing my skates before leaving the office, I was chatting with a colleague about roller-skate ball bearing maintenance and joking about riding my ball bearings to death. Actually, this is not a joke – I do ride them to death, as this photograph taken twenty minutes later attests.

When your ball bearings start looking like these – you may have waited too long before replacing them…

The good news is that when spilling their guts all over the pavement, ball bearings don’t seize too brutally – the inertia of my 94 kilograms helps a lot. But they do brake hard enough that pushing on is not a realistic option… Consider yourself lucky if you don’t find yourself catapulted forward – your reflexes count less than the sheer luck of the failure not affecting the front wheel.

My ball bearings’ lifecycle management process begins with the precious set of new ones reserved for racing. After a while, they are shifted to the long-distance raid pool – where some performance must be counted on, but where time is not critical. Mediocre bearings from newly purchased skates enter at that stage. Later, they end up in my urban skating pool, where constant shocks, humidity and an utter lack of maintenance get the better of them after a few months of commuting – though some of them last much longer.

Sometimes the street soot accumulated after wet rides slows or even blocks rotation – but that is usually nothing that a good downhill ride won’t fix. At the end of their lives, the ball bearings are slack – I suspect that once they become slack they degrade exponentially.

I used to meticulously clean my ball bearings using this tedious method – but I no longer take that time, as I don’t believe it is worth it. I recently discovered the Bont method of just shaking the ball bearings in petrol and two-stroke oil at an approximately 50:1 ratio and leaving them to dry on a towel… Now that is fast enough for me – I’ll try this method and see if I can extend my ball bearings’ life a bit.

Networking & telecommunications and Politics14 Apr 2010 at 11:51 by Jean-Marc Liotier

Stéphane Richard, chief executive at France Telecom, argued recently : “There is something totally not normal and contrary to economic logic to let Google use our network without paying the price”. I could barely control my hilarity.

But wait, there’s more :

Telefonica chairman Cesar Alierta said Google should share some of its online advertising revenue with carriers to compensate them for the billions of euros they are investing in fixed-line and mobile infrastructure to increase download speeds and network capacity. Alierta said that regulators should step in to supervise a settlement if no revenue sharing deal was possible between search engines led by Google and network operators. France Telecom CEO Stephane Richard said, “Today, there is a winner, who is Google. There are victims that are content providers, and to a certain extent, network operators. We cannot accept this”. Deutsche Telekom CEO Rene Obermann stated, “There is not a single Google service that is not reliant on network service. We cannot offer our networks for free”.

Whiners ! France Telecom, Telefonica and Deutsche Telekom are all historical monopoly operators that suffered the full impact of the internetworking revolution. It took them a while to realize that the good old times were gone for good, but I thought that with a good helping of new blood they had reluctantly adapted to the new reality. Apparently I was wrong : in spite of a decades-long track record of overwhelming evidence to the contrary, executives at the incumbent club keep fantasizing about the pre-eminence of intelligent networks and how they somehow own the user. Of course I would not tax them with sheer stupidity – they are anything but stupid. This is rather a case of gross hypocrisy serving a concerted lobbying effort. And maybe, after all, they end up believing their own propaganda.

Users pay Internet access providers for – guess what – Internet access. And most providers are very happy for their Internet access to do exactly what it says on the tin while they get well-earned money in exchange. Only a few of them have the political clout necessary for this blatant attempt at distorting competition – they are trying to leverage it, but they will fail, again, like they failed to stop local loop unbundling.

Ultimately, if large operators across Europe make a foolish coordinated move against Google, it will look suspiciously like a cartel. You can play that game with national governments, but you definitely don’t want to do that in view of the European Commissioner for Competition.

Since Google is in the crosshairs, I’ll let them have the last word :

“Network neutrality is the principle that Internet users should be in control of what content they view and what applications they use on the Internet. The Internet has operated according to this neutrality principle since its earliest days… Fundamentally, net neutrality is about equal access to the Internet. In our view, the broadband carriers should not be permitted to use their market power to discriminate against competing applications or content. Just as telephone companies are not permitted to tell consumers who they can call or what they can say, broadband carriers should not be allowed to use their market power to control activity online”.

Guide to Net Neutrality for Google Users, cited by the Wikipedia article on Network Neutrality.

Update : I am far from the only one to feel slack-jawed astonishment at that shocking display of hypocrisy. From the repeating-something-relentlessly-does-not-make-it-true dept, Karl Bode at Techdirt published “Telcos Still Pretending Google Gets Free Ride”. You’ll find comments and more context there.

Design and Systems and Technology14 Apr 2010 at 11:04 by Jean-Marc Liotier

A colleague asked me about acceptable response times for the graphical user interface of a web application. I was surprised to find that both the Gnome Human Interface Guidelines and the Java Look and Feel Design Guidelines provide exactly the same values and even the same text for the most part… One of them must have borrowed the other’s guidelines. I suspect that the ultimate source of their agreement is Jakob Nielsen’s advice :

0.1 second is about the limit for having the user feel that the system is reacting instantaneously, meaning that no special feedback is necessary except to display the result.

1.0 second is about the limit for the user’s flow of thought to stay uninterrupted, even though the user will notice the delay. Normally, no special feedback is necessary during delays of more than 0.1 but less than 1.0 second, but the user does lose the feeling of operating directly on the data.

10 seconds is about the limit for keeping the user’s attention focused on the dialogue. For longer delays, users will want to perform other tasks while waiting for the computer to finish, so they should be given feedback indicating when the computer expects to be done. Feedback during the delay is especially important if the response time is likely to be highly variable, since users will then not know what to expect.

Jakob cites Miller’s “Response time in man-computer conversational transactions” – a paper that dates back to 1968. It seems like in more than forty years the consensus about acceptable response times has not moved substantially – which could be explained by the numbers being determined by human nature, independently of technology.
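The three limits above translate directly into a feedback policy for a user interface. Here is a minimal sketch of that mapping – the function name and the return labels are my own illustration, not anything from the guidelines themselves :

```python
def feedback_policy(expected_seconds):
    """Map an expected response time to a UI feedback strategy,
    following the 0.1 s / 1 s / 10 s limits cited above."""
    if expected_seconds <= 0.1:
        return "none"            # feels instantaneous; just display the result
    if expected_seconds <= 1.0:
        return "subtle"          # delay is noticed, but flow of thought survives
    if expected_seconds <= 10.0:
        return "progress"        # show a progress indicator to hold attention
    return "progress+estimate"   # user will task-switch; show completion estimate
```

Used this way, a web application can decide at design time which operations need a spinner and which need a full progress bar with a time estimate.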

But still, I am rattled by such unquestioned consensus – the absence of dissenting voices could be interpreted as a sign of methodological complacency.

Code and Design and Systems and Technology13 Apr 2010 at 16:27 by Jean-Marc Liotier

Following a link from @Bortzmeyer, I was leafing through Felix von Leitner’s “Source Code Optimization” – a presentation demonstrating that unreadable code is rarely worth the hassle, considering how good compilers have become at optimizing nowadays. I have never written a single line of C or Assembler in my whole life – but I like to keep an understanding of what is going on at low level, so I sometimes indulge in code tourism.

I got the author’s point, though I must admit that the details of his demonstration flew over my head. But I found the memory access timings table particularly evocative :

Access                            Cost
Page fault, file on IDE disk      1 000 000 000 cycles
Page fault, file in buffer cache         10 000 cycles
Page fault, file on RAM disk              5 000 cycles
Page fault, zero page                     3 000 cycles
Main memory access                          200 cycles (Intel says 159)
L3 cache hit                                 52 cycles (Intel says 36)
L1 cache hit                                  2 cycles

Of course you know that swapping causes a huge performance hit and you have seen the benchmarks where throughput is reduced to a trickle as soon as the disk is involved. But still I find that quantifying the number of cycles wasted illustrates the point even better. Now you know why programmers insist on keeping memory usage tight.
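To make the cycle counts concrete, here is a back-of-envelope conversion to wall-clock time. The 2 GHz clock rate is my own assumption for illustration – the table itself says nothing about clock speed :

```python
CLOCK_HZ = 2_000_000_000  # assumed 2 GHz core, for the back-of-envelope math

COST_CYCLES = {  # figures from the table above
    "page fault, file on IDE disk": 1_000_000_000,
    "page fault, file in buffer cache": 10_000,
    "main memory access": 200,
    "L1 cache hit": 2,
}

def cycles_to_seconds(cycles, clock_hz=CLOCK_HZ):
    """Convert a cycle count into wall-clock time at the assumed clock rate."""
    return cycles / clock_hz

# At 2 GHz, a disk-backed page fault stalls the core for half a second -
# the same budget as half a billion L1 cache hits.
```

Half a second per disk-backed fault is why a machine that starts swapping feels like it fell off a cliff.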

Jabber and Networking & telecommunications and Social networking and Technology12 Apr 2010 at 23:21 by Jean-Marc Liotier

This week-end I noticed Juick, an XMPP-based microblogging system with some nice original features. But Juick is not free and its author does not seem interested in freedom. So who’s gonna save XMPP-based microblogging ?

Enter OneSocialWeb, a free, open and decentralized XMPP-based social networking platform with all the federated goodness one might expect from an XMPP-based system. Sounds good doesn’t it ?

Laurent Eschenauer is a software engineer at Vodafone Group R&D and he is the architect of OneSocialWeb – the team also has Alard Weisscher, Lorena Alvarez and Diana Cheng on board. Today he posted great news about OneSocialWeb at Vodafone’s RndBackyard :

“Two months ago, we introduced you to our onesocialweb project: an opensource project that aims at building a free, open, and decentralized social networks. We explained the idea, we showed what it looked like, and we answered many questions. However it was only a prototype running on our servers, there was no such federated social network.. yet.

Today, we have released the source code and compiled versions of the core components of our architecture. With this, you are now in a position to install your own Openfire server, load our Onesocialweb plugin, and you will immediately be part of the Onesocialweb federation. We also provide you with a command line client to interact with other onesocialweb users.

As you see, we are not releasing the web and android client today. They will require a bit more work and you should expect them in the coming weeks. This means that this first release is mainly targeting developers, providing them with the required tools and documentation to start integrating onesocialweb features in their own clients, servers and applications.

This is a first release, not an end product. Our baby has just learned to walk and we’ll now see if it has some legs. We look forward to keep on growing it with the help of the community. Please have a look at our protocol, try to compile the code, and share your feedback with us on our mailing list. You can also have a look at our roadmap to get a feel for where we are going”.

Laurent only mentions Openfire and the OneSocialWeb plugin for Openfire is the only one currently available for download on OneSocialWeb’s site, but despair not if like me you are rather an ejabberd fan : “Its protocol can be used to turn any XMPP server into a full fledged social network, participating in the onesocialweb federation“. So if everything goes well, you may bet on some ejabberd module development happening soon. And who knows what other XMPP servers will end up with OneSocialWeb extensions.

There was some news about OneSocialWeb about two months ago, but that was unlucky timing as the project’s message got lost in the Google Buzz media blitz. Anyway, as Daniel Bo mentions : “Many years of discussion have gone into determining what a federated social network would look like, and the OneSocialWeb doesn’t ignore that work“. Indeed, as the OneSocialWeb site mentions, it “has been built upon the shoulders of other initiatives aiming to open up the web and we have been inspired by the visionaries behind them: activitystrea.ms, portablecontacts, OAuth, OpenSocial, FOAF, XRDS, OpenID and more“. Only good stuff there – an open standard built on top of recognized open standards is an excellent sign.

All that just for microblogging ? Isn’t that slightly overkill ? Did we say this was a microblogging protocol ? No – the purpose of OneSocialWeb is much more ambitious : it is to enable free, open, and decentralized social applications. OneSocialWeb is a platform :

“The suite of extensions covers all the usual social networking use cases such as user profiles, relationships, activity streams and third party applications. In addition, it provides support for fine grained access control, realtime notification and collaboration”.

Two weeks ago, Laurent attended DroidCon Belgium and he explained how OneSocialWeb will enable developers to create social & real-time mobile applications, without having to worry about the backend developments:

“In my view, this is one of the most exciting element of our project. Beyond the ‘open’ social network element, what we are building is truly the ‘web as a platform’. An open platform making it simple to create new social applications”.

Here are his slides from DroidCon Belgium :

Is it a threat to Status.net ? No : being an open protocol, it can be used by any system willing to interoperate with other OneSocialWeb systems. @evan has expressed interest in that and I would trust him to hedge his bets. OneSocialWeb certainly competes with Status.net’s ambitious OStatus distributed status updates protocol, but whichever wins will be a victory for all of us – and I would guess that their open nature and their similar use-cases will let them interoperate well. Some will see fragmentation, but I see increased interest that validates the vision of an open decentralized social web.

By the way, if you have paid attention at the beginning of this article, you certainly have noticed that Laurent’s article was posted at Vodafone’s RndBackyard. Yes, you read it right : OneSocialWeb is an initiative of Vodafone Group Research and Development to help take concrete steps towards an open social web. Now that’s interesting – are big telecommunications operators finally seeing the light and embracing the open instead of fighting it ? Are they trying to challenge web services operators on their own turf ? My take is that this is a direct attack on large social networking operators, whose rising concentration of power is felt as a threat by traditional telecommunications operators who have always lived in the fantasy that they somehow own the customer. Whatever it is, it is mightily interesting – and even more so when you consider Vodafone’s attitude :

“We by no means claim to have all the answers and are very much open to suggestions and feedback. Anyone is invited to join us in making the open social web a reality”.

“We consider it important to reality check our protocol with a reference implementation”.

They are humble, they are open and they are not grabbing power from anyone but walled garden operators : this really seems to be about enabling an open decentralized social web. I have such a negative bias about large oligopolistic telecommunications operators that I would have a hard time believing it if I did not understand the rationale behind one of them funding this effort against the likes of Facebook… But free software and open protocols are free software and open protocols – wherever they come from !

Brain dump and Economy and Free software and Marketing11 Apr 2010 at 10:45 by Jean-Marc Liotier

In the wake of the Ordnance Survey’s liberation of the UK’s geographical information, I just had an interesting conversation with Glyn Moody about the relationship between free digital publishing and the sale of the same data on a physical substrate.

If computer reading is cheaper and more convenient, can free digital publishing lead to sales of the same data on a physical substrate ? Free data on a physical substrate has market value if the substrate has value on its own or if the data has sentimental value. That is a potential axis of development for the traditional publishing industry : when nostalgia and habits are involved, the perceived value of the scarce physical substrate of digitally abundant data may actually increase. Of course, free data has value on its own – but, as the reader of this blog certainly knows, it involves a business model entirely different from that of physical items.

Identification of content producers, quality control, aggregation, packaging… This is what a traditional editor does – and it is also what a Linux distribution does. Isn’t it ironic that the Free software world and the world of traditional publishing have had such a hard time understanding each other ?

Some actors did catch the wave early on. In the mid-nineties, I remember that my first exposure to Free software took the form of a Walnut Creek CD-ROM – at the time there was a small publishing industry based on producing and distributing physical media filled with freely available packages for those of us stuck behind tens-of-kilobytes-thin links in the Internet’s backwaters. And there were others before : since time immemorial, the Free software industry has understood that the market role of producing data on a physical substrate is distinct and independent from managing the data. As Glyn Moody remarked, it is only a matter of time before the media industry as a whole gets it.

Strangely, the media industry lags at least fifteen years behind – and probably twenty : even in mainstream publications, the writing has been on the wall for that long. To prove that, here is an excerpt of a 1994 New York Times article by Laurie Flynn, “In the On-Line Market, the Name of the Game Is Internet” :

“I think Compuserve as a business is going to change very radically,” said David Strom, a communications and networking consultant in Port Washington, N.Y. “It could be they’re going to become a pipe, an access provider to the Internet, rather than a content provider.”

But Compuserve, like other on-line services, says it will continue to find ways to differentiate its offerings from databases of similar information on the Internet, by providing better search tools, a more organized approach and better customer service.

Compuserve has just released a CD-ROM, to be updated bimonthly, that works with its consumer on-line service to add video clips and music to the service in a magazine-like format. In the first edition, for example, users can view a video clip from a Jimmy Buffett concert and then with a click of the mouse connect to the Compuserve on-line service where they can order the audio CD. All the on-line services are working to add multimedia.

“Compuserve has 15 years experience in organizing that data and making it easy for them to find it and grab it,” Mr. Hogan said. “It’s not just a user interface issue but how content is packaged.”

The history of Compuserve since then shows that they were never able to fully execute that vision. But it shows how long it took for the idea of free data as lifeblood of a multi-industry symbiotic organism to get from visionaries to a mainstream business model.

In the nineties, we had to endure the tired rear-guard debate of “content vs. pipes”. The coming of age of Free data confirms that the whole thing was moot from the very start. In 1984, Stewart Brand said “Information Wants To Be Free. Information also wants to be expensive… That tension will not go away”. I believe that said tension is most definitely in the process of going away, as free data will dominate and feed a system of economic actors who will add value to it and feed each other in the process.

Jabber and Social networking and Technology and The Web09 Apr 2010 at 16:24 by Jean-Marc Liotier

I don’t quite remember how I stumbled upon this page on Nicolas Verite’s French-language blog about instant messaging and open standards, but this is how I found a microblogging system called Juick. Its claim to fame is that it is entirely XMPP-based. I had written about Identichat, a Jabber/XMPP interface to Laconi.ca/Status.net – but this is something different : instead of merely providing an interface to a generic microblogging service, it leverages XMPP by building the microblogging service around it.

As Joshua Price discovered Juick almost a year before me, I’m going to recycle his introduction to the service – he paraphrases Juick’s help page anyway :

Juick is a web service that takes XMPP messages and creates a microblog using those messages as entries [..] There’s no registration, no signup, no hassle. You simply send a XMPP message to “juick@juick.com” and it creates a blog based on the username you sent from and begins recording submissions.

  1. Add “juick@juick.com” to your contact list in your Jabber client or GMail.
  2. Prepare whatever message you want juick to record
  3. Send your message

That’s it. Juick will respond immediately telling you the message has been posted, and will provide you with a web address to view your new entry.
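Under the hood, step 3 is nothing more than an ordinary XMPP message stanza addressed to the bot. Here is a sketch of the payload a client would emit – a real client of course sends this over an authenticated XMPP stream rather than building raw XML, and the sender address is a made-up example :

```python
import xml.etree.ElementTree as ET

def juick_post_stanza(sender, body):
    """Build the XMPP <message/> stanza a client would send to post to Juick.
    This only shows the shape of the payload, not the stream negotiation."""
    msg = ET.Element("message", {
        "from": sender,
        "to": "juick@juick.com",
        "type": "chat",
    })
    ET.SubElement(msg, "body").text = body
    return ET.tostring(msg, encoding="unicode")
```

Any XMPP library worth its salt wraps this one-liner for you, which is precisely why the barrier to entry is so low.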

The simplicity of an account creation process that sniffs your Jabber vCard is something to behold – it makes any other sign-up process feel ponderous. This poor man’s OpenID Attribute Exchange does the job with several orders of magnitude less complexity.

Almost every interaction with Juick can be performed from the cozy comfort of your favorite XMPP client – including threaded replies, which are something that Status.net’s Jabber bot is not yet capable of handling (edit – thanks to Aaron for letting us know that Status.net’s Jabber bot has always been able to do that too). And contrary to every microblogging service that I have known, the presence information is displayed on the web site – take a look at Nÿco’s subscribers for an example.

The drawbacks are that this is a small social network intended for Russophones and that the software is not free. But still, it is an original project whose features may serve as inspiration for others.

For some technical information, take a look at Stoyan Zhekov‘s presentation :


Email and Knowledge management and Systems09 Apr 2010 at 14:41 by Jean-Marc Liotier

In the digital world, the folder metaphor has perpetuated the single-dimensional limitations of the physical world : each message is present in one and only one folder. The problem of adding more dimensions to the classification was solved ages ago – whether you want to go hardcore with a full thesaurus or just use your little folksonomy, the required technical foundation is the same : tags, labels – or whatever you want to call your multiple index keys.
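The technical foundation really is that small. Here is a minimal sketch of a multi-key index – the class and method names are my own illustration – where a classic folder is just the degenerate case of exactly one tag per message :

```python
from collections import defaultdict

class MailIndex:
    """Index messages under any number of tags; a traditional folder is
    simply the special case of one tag per message."""

    def __init__(self):
        self._by_tag = defaultdict(set)   # tag -> message ids
        self._by_msg = defaultdict(set)   # message id -> tags

    def tag(self, msg_id, *tags):
        for t in tags:
            self._by_tag[t].add(msg_id)
            self._by_msg[msg_id].add(t)

    def messages(self, tag):
        return self._by_tag[tag]

    def tags(self, msg_id):
        return self._by_msg[msg_id]
```

With folders, filing a receipt from a trip under both “travel” and “receipts” means copying it; with multiple index keys, it is just two entries pointing at the same message.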

Noticing how Google has been successful with email tagging, I started exploring free implementations four years ago, aiming to complement my folder hierarchy with tags. But inertia affects us all, and I actually never went beyond happy experimentation. As Zoli Erdos puts it in his eloquent description of Google’s 2009 breakthrough in ending the tags vs. folders war :

Those who “just can’t live without folders”, mostly legacy users of Yahoo Mail, Hotmail and mostly Outlook. They are used to folders and won’t learn new concepts, don’t want to change, but are happy spending their life “organizing stuff” and even feel productive doing so.

Ouch – that hits a little too close to home. And even if I had gone forward with tags, that would have been pussyfooting : as Google illustrates, the distinction between tags and folders is a technical one – from a user point of view it should be abstracted. Of course the abstraction is somewhat leaky if you consider what folders mean for local storage, but in the clouds you can get away with that.

For cranky folder-addicted users like myself, born to the Internet with early versions of Eudora, later sucking at the Microsoft tit and nowadays a major fan of Mozilla Thunderbird, there is definitely a user interface habit involved – and one that is hard to break after all those years. It is not about the graphics – I use Mutt regularly; it is about the warm fuzzy feeling of believing that the folder is a direct and unabstracted representation of something tangible on a file system.

Software objects are all abstractions anyway but, with time, the familiar abstraction becomes something that feels tangible. This is why, while I acknowledge the need to get the tagging goodness, I still have desire toward the good old folder look and feel with features such as drag-n-drop and hierarchies. Google seems to know that audience : all those features are now part of Gmail. Now tell the difference between folders and labels !

To make a smooth transition, I want Mozilla Thunderbird to display tags as folders. It looks like it is possible using Gmail with Claws through IMAP. I have yet to learn if I can do that on my own systems using Courier or Dovecot.

Consumption and Security and Systems administration09 Apr 2010 at 1:33 by Jean-Marc Liotier

Lexmark stubbornly refuses to make any effort toward providing, or at least letting other people provide, printer drivers for their devices – don’t buy from them if you need support for anything other than their operating system of choice.

After repeatedly acquiring throwaway inkjet printers from Lexmark and repeatedly wondering why my mother’s Ubuntu laptop can’t use them, my father finally accepted my suggestion of studying compatibility beforehand instead of buying on impulse – years of pedagogy finally paid off !

My parents required a compact wireless device supporting printing and scanning from their operating systems – preferably fast and silent, if possible robust and not too unsightly. No need for color, black and white was fine – though I would have pushed them toward color if multifunction laser printing devices capable of putting out colors were not so bulky. Those requirements led us toward the Samsung SCX-4500W.

I connected the Samsung SCX-4500W to one of the Ethernet ports of my parents’ router and went through the HTTP administration interface. The printing controls are extremely basic – but the networking configuration surprised me with a wealth of supported protocols : raw TCP/IP printing, LPR/LPD, IPP, SLP, UPnP, SNMP including SNMPv3, Telnet, email alert on any event you want – including levels of consumables… Everything about printing I can think of off the top of my head is there. The funniest thing is that neither the product presentation, nor the specification sheet, nor the various reviews advertise that this device boasts such a rich set of networking features… Demure advertising – now that’s a novel concept !
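Of those protocols, raw TCP/IP printing is the simplest to picture : the printer listens on a TCP port (conventionally 9100) and prints whatever bytes arrive. A minimal sketch – assuming a plain-text-capable printer; real jobs would usually be PostScript or PCL :

```python
import socket

def raw_print(host, text, port=9100):
    """Send plain text to a printer's raw TCP/IP port.

    Port 9100 is the conventional raw-printing port: there is no job
    protocol at all, the printer just renders the byte stream it receives.
    """
    with socket.create_connection((host, port), timeout=10) as s:
        s.sendall(text.encode("utf-8"))
        s.sendall(b"\f")  # form feed: eject the page
```

The complete absence of negotiation is what makes raw printing both trivially easy to support and impossible to get job status from – which is where LPR/LPD and IPP earn their keep.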

I set up the printer’s 802.11 wireless networking features, unplugged the Ethernet cable, rebooted the device… And nothing happened. No wireless networking, no error and, when I reconnected the Ethernet cable and got back to the administration interface, the radio networking menu was not even available anymore. After careful verification, I could reliably reproduce that behaviour. At that stage, my parents were already lamenting the sorry state of ever-unreliable modern technology – and most users would have been equally lost.

I pressed on and found that I was not alone in my predicament. User experiences soon led me to the solution : I had configured my parents’ radio network to use WPA with TKIP+AES encryption (the best option available on their access point) but the Samsung SCX-4500W was unable to support that properly. The administration interface’s radio networking menu proposed TKIP+AES but silently failed to establish a connection and seemed to wedge the whole radio networking stack. Only setting my parents’ Freebox, and all other devices on the network, to use TKIP only instead of TKIP+AES yielded a working setup with a reachable printer – at the cost of using trivially circumventable security to protect the network’s traffic from intrusion.

Now that is seriously bad engineering : not supporting a desirable protocol is entirely forgivable – but advertising it in a menu, then failing to connect without generating the slightest hint of an error message, and as a bonus wedging the user into an irrecoverable configuration is a grievous sin. I managed to overcome the obstacle, but this is a device aimed at the mass market and I can perfectly understand its target audience’s desire to throw it out of the window.

Once that problem was solved, configuring the clients over the network was a breeze and pages of nice print were soon flying out quickly and silently. In summary, the Samsung SCX-4500W is a stylish printing and scanning device that lives up to its promises – apart from that nasty bug that makes me doubt Samsung’s quality control over its networking features.

Scanning with the Samsung SCX-4500W is another story entirely – it should work with the xerox_mfp SANE backend, but only through USB. For now I have found no hope of having it scan for a Linux host across the network.

Consumption and Cycling and Geography and Photography07 Apr 2010 at 12:07 by Jean-Marc Liotier

One fellow mapper on talk-fr@openstreetmap.org complained that there were very few comments about the Amod AGL 3080 GPS logger from other OpenStreetMap users… So here is one.

I liked my trusty Sony GPS-CS1 GPS logger, but an autonomy of barely more than a good riding day was too short for my taste and the one-Hertz sampling rate was too low for satisfactory OpenStreetMap surveying by bicycle or roller skate, though it was plenty for walking.

After sifting through various reviews and specification sheets, I declared the Amod AGL 3080 the true heir to the Sony GPS-CS1. And after a few months of use I am not disappointed.

AMOD AGL 3080 GPS logger

This solid little unit is simple to use : normal operation requires a single button. After mounting it as USB mass storage with a standard mini-USB cable, a pass through GPSBabel is all that is needed before the data is ready for consumption. There is also a handy second button for marking waypoints – I use it mostly to record points of interest. The AGL 3080’s SiRF Star III chipset provides satisfactory reception – subjectively much better than the GPS-CS1’s – and the storage capacity is more than you will need for anything up to a transcontinental ride. It uses three AAA batteries, which makes it practical for replenishment underway while making the use of rechargeables possible too. For a walkaround, PocketGPSWorld has a review with detailed pictures.

But what I appreciate most is the ability to configure the output NMEA sentences for the best compromise between autonomy and the richness of the logged data. Six logging modes can be cycled through by pressing the “MARK” button, letting you trade precision for battery life as you go :

Mode  LED status            Output format                         Interval (s)  Records    Duration (h)
1     “Memory Full” on      GGA/GSA/RMC/VTG (plus GSV every 5 s)  1             260 000    72
2     “Memory Full” flash   RMC only                              1             1 040 000  288
3     “GPS” on              GGA/GSA/RMC/VTG/GSV                   5             260 000    360
4     “Battery Low” on      RMC only                              5             1 040 000  1440
5     “Battery Low” on      GGA/GSA/RMC/VTG/GSV                   10            260 000    720
6     “Battery Low” flash   RMC only                              10            1 040 000  2880
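The RMC-only modes work because a GPRMC sentence already carries the minimum a track needs : time, fix validity, position, speed and date, protected by a simple XOR checksum. Here is a sketch of parsing one – the function names are my own, and it handles only the happy path that a logger track would contain :

```python
def nmea_checksum(body):
    """XOR of all characters between '$' and '*', as NMEA 0183 defines."""
    c = 0
    for ch in body:
        c ^= ord(ch)
    return f"{c:02X}"

def parse_rmc(sentence):
    """Extract (latitude, longitude) in decimal degrees from a GPRMC sentence."""
    body, _, given = sentence.lstrip("$").partition("*")
    if given and nmea_checksum(body) != given.strip():
        raise ValueError("checksum mismatch")
    f = body.split(",")
    if f[0] != "GPRMC" or f[2] != "A":  # 'A' marks a valid fix
        raise ValueError("not a valid RMC fix")
    def angle(value, hemi):
        # NMEA packs degrees and minutes together, e.g. 4807.038 = 48° 7.038'
        head, minutes = divmod(float(value), 100)
        deg = head + minutes / 60
        return -deg if hemi in ("S", "W") else deg
    return angle(f[3], f[4]), angle(f[5], f[6])
```

This is essentially what the pass through GPSBabel does for you when converting the raw log into GPX.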

The not-so-good part is that the absence of a rubber gasket on the battery compartment hints that this device is not waterproof. Like the Sony GPS-CS1, it has been through rain with no apparent problem, but pushing my luck too far will probably result in corrosion.

The ugly part is that I have yet to find a way to strap the Amod AGL 3080 securely. It features a strap slot on only one side, making any balanced setup impossible. The supplied Velcro strap can connect it to a carabiner, but the resulting contraption dangles around wherever you attach it – I hate having dangling things attached to my kit. The Sony GPS-CS1 has a pouch that features a convenient Velcro strap to attach it to any strap – I use it on top of my backpack’s shoulder straps or on top of my handlebar bag. The Amod AGL 3080 has nothing like that and I have yet to find a good way to mount it on my bicycle – for now, rubber bands are the least worst option.

But at 70 Euros, it is a bargain if you need a cheap, simple and flexible GPS logger for photography, sports or cartography. Buy it – and then tell me how you succeeded in mounting it on a bicycle or on a backpack !