The Web archived articles

Subscribe to the RSS feed for this category only

Marketing and Networking & telecommunications and Security and Social networking and The media and The Web – 12 Jun 2013 at 11:11 by Jean-Marc Liotier

A few reflections from my notes of public reaction to last weekend’s events.

Advertising is the main source of revenue for publishers on the Web, including the lords of sharecropping empires such as Facebook and Google. Revenue from advertising varies hugely with how well the message targets the audience. Targeting requires getting to know the target – which is the business that Facebook and Google are in : getting the user to find them useful and trust them so that he willingly provides them with their raw material.

I used to enjoy giving the publishers a lot of data in return for personalization and services – even considering the risks. Yes, we knew the risks – but they are the sort of risks that we are notoriously bad at evaluating. Most of us have probably read at least a dozen different tales of Orwellian dystopias – yet our productive relationship with service providers let us convince ourselves that betrayal won’t happen. We were so complacent that it might be argued that we asked for this.

So why are we surprised ? The surprise is in the scale of the abuse. Corruption always exists at the margins of any system that is sufficiently slack to let alternative ways thrive and supply the mainstream with fresh ideas. A society with no deviance at its margins is totalitarian – so we live with some antisocial behaviour as a cost of doing business in a society that values individual freedom.

But today we find that the extent of corruption is not restricted to the margins – we find that the people we entrusted with extreme power at the core of the state largely escape oversight and drift into mass surveillance, which is known to asphyxiate societies. That much corruption was a risk that we were warned against, but seeing it realized is still a nasty surprise.

Again, this is not about lawful surveillance under democratic oversight, which is as acceptable as ever – this is about the dangerous nature of massive untargeted surveillance outside of democratic control. But public opinion reeling from the shock will probably be blind to the difference – it is now likely to be wary of anything that even remotely smells of surveillance.

Of course, not everyone has yet realized the tradeoffs that modern communications entail and that they have always been making, even if unwittingly – public awareness of privacy issues is not going to arise without continued evangelism anytime soon. But a host of users has awoken to realize that they were sleepwalking naked on Main Street. What will they do now ?

Considering how mainstream audiences have long happily kept gobbling up toxic information from the mass media, I am not holding my breath for a violent phase transition – but a new generation of privacy militants might just have been born, and I wonder how much they will nudge the information industry’s trajectory. In any case, they will not make the Internet more welcoming to that industry.

Knowledge management and Politics and Security and The media and The Web – 28 Feb 2013 at 12:43 by Jean-Marc Liotier

Article 322-6-1 of the French Code Pénal punishes with one year in prison and a 15000€ fine “the diffusion by any means of manufacturing processes for destructive devices made from explosive, nuclear, biological or chemical substances or any product intended for domestic, industrial or agricultural use”.

So in France, Cryptome can’t publish this very common and very public US military field manual, a textfiles.com mirror in France is illegal because it contains this, the description of a chemical reaction on MIT’s site would be repressed, and Wikipedia’s legal team had better excise this section of the Nitroglycerin article from any HTTP response bound to France.

And someone once again forgot that censoring information locally does not work.

But wait – there is more stupidity… The punishment is tripled (three years in prison and a 45000€ fine) if the information has been published “to an undefined audience on a public electronic communication network”. Why isn’t there a specific punishment for posting on a billboard too ? Once again, in yet another country, the use of electronic tools is an aggravating circumstance. As electronics pervade our whole lives, isn’t that entirely anachronistic ?

Well – as long as Tor, I2P & al. keep working…

By the way, that law makes an exception for professional use – so if you are acting as an agent of a duly accredited terrorist enterprise, rest assured it does not apply to you !

Identity management and Knowledge management and Social networking and Technology and The Web – 09 Jul 2011 at 2:21 by Jean-Marc Liotier

I have not read any reviews of Google Plus, so you’ll get my raw impressions starting after fifteen minutes of use – I guess that whatever they are worth, they bring more value than risking paraphrasing other people’s impressions after having been influenced by their prose.

First, a minor annoyance : stop asking me to join the chat. I don’t join messaging silos – if it is not open, I’m not participating. You asked, I declined – now you insist after every login and I find that impolite.

First task I set upon : set up information streams in and out of Google Plus. A few moments later it appears that this one will remain on the todo list for a while : there is not even an RSS feed with the public items… Hello ? Is that nostalgia for the nineties ? What good is an information processing tool that won’t let me aggregate, curate, remix and share ? Is this AOL envy ?

Then I move on toward some contacts management. I find the Circles interface is pretty bad. For starters, selecting multiple contacts and editing their Circles memberships wholesale is not possible – the pattern of editing the properties of multiple items is simple enough to be present and appreciated in most decent file managers (for editing permissions)… Sure it can be added later as it is not a structural feature, but still : for now much tedium ensues. Likewise, much time would be saved by letting users copy and paste contacts between circles. But all that is minor ergonomic nitpicking compared to other problems…

No hashtags, no groups… How am I supposed to discover people ? Where is the serendipity ? Instead of “Google Circles” this should be named “Google Cliques”. In its haste to satisfy the privacy obsessed, it seems that Google has forgotten that the first function of social networking software is to enable social behaviour… It seems that the features are focused on the anti-social instead. I can understand the absence of hashtags – spam is a major unresolved issue… But groups ? See Friendfeed to understand how powerful they can be – and they are in no way incompatible with the Circles model. It seems that selective sharing is what Google Plus is mostly about – public interaction and collaboration feels like an afterthought. This will please the reclusive, but it does not fit my needs.

Worse, the Circles feature only segments the population – it does nothing to organize shared interests : I may carefully select cyclists to put into my ‘cyclists’ Circle, but when I read the stream for that circle I’ll see pictures of their pets too. This does not help knowledge management in any way – it is merely about people management.

Finally Google is still stuck with Facebook, Twitter & al. in the silo era – the spirits of well known dinosaurs still haunt those lands. Why don’t they get with the times and let users syndicate streams across service boundaries using open protocols such as OStatus, which an increasing number of social networking tools use to interoperate ? Google may be part of the technological vanguard of information services at massive scales, but cloning the worst features of competing services is the acme of backwardness.

Of course, this is a first release – not even fully open to subscription yet, so many features will be added and refined. But rough edges are not the reason for my dissatisfaction with Google Plus : what irks me most is the silo mentality and the very concept of Circles as the fundamental object for interaction management – no amount of polish will change the nature of a service built on those precepts.

I’ll keep an account on Google Plus for monitoring purposes, but for now and until major changes happen, that’s clearly not where I’ll be seeking intelligent life.

Brain dump and Knowledge management and The Web and Writing – 13 May 2011 at 0:23 by Jean-Marc Liotier

Using this blog for occasional casual experience capitalization means that an article captures and shares a fragment of knowledge I have managed to grasp at a given moment. While this frozen frame remains forever still, it may become stale as knowledge moves on. Comments contributed by the readers may help in keeping the article fresh, but that only lasts as long as the discussion. After a while, part of the article is obsolete – so it is with some unease that I see some old articles of dubious wisdom keep attracting traffic on my blog.

Maybe this unease is the guilt that comes with publishing in a blog – a form of writing whose subjective qualities can easily slide into asocial self-centered drivel. Maybe I should sometimes let those articles become wiki pages – a useful option given to contributors on some question & answer sites. But letting an article slide into the bland utilitarian style of a wiki would spoil some of my narcissistic writing fun. That shows that between the wiki’s utility and the blog’s subjectivity no choice need be made : they both have their role to play in the community media mix.

So what about the expiration date ? I won’t use one : let obsolete knowledge, false trails, failed attempts and disproved theories live forever with us for they are as useful to our research as the current knowledge, bright successes and established theories that are merely the end result of a process more haphazard than most recipients of scientific and technical glory will readily admit. To the scientific and technical world, what did not work and why it did not work is even more important than what did – awareness of failures is an essential raw material of the research process.

So I am left with the guilt of letting innocent bystanders hurt themselves with my stale drivel, which I won’t even point to for fear of increasing its indecently high page rank. But there is not much I can do for them besides serving the articles with their publication date and hoping that the intelligent reader will seek contemporary confirmation of a fact draped in the suspicious fog of a less informed past, from an author even less competent than he is nowadays…

Code and Mobile computing and Social networking and The Web – 01 Sep 2010 at 13:58 by Jean-Marc Liotier

Twenty two days ago, my periodically running script ceased to produce any check-ins on Brightkite. A quick look at the output showed that the format of the returned place object had changed. Had I used proper XML parsing, that would not have been a problem – but I’m using homely grep, sed and awk… Not robust code in any way, especially when dealing with XML. At least you get a nice illustration of why defensive programming with proper tools is good for you.
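For illustration, here is what “proper XML parsing” could look like in that script – a sketch using xmllint from libxml2 instead of grep/sed. The sample document only mimics the general shape of the Brightkite places search response ; the real element layout may differ :

```shell
# Extract the id and name fields with a real XML parser instead of grep/sed.
# The sample document below is a stand-in for the places search response.
xmlfile=$(mktemp)
cat > "$xmlfile" <<'EOF'
<places>
  <place>
    <id>abc123</id>
    <name>Café de la Gare</name>
  </place>
</places>
EOF

# XPath string() returns the text content regardless of indentation,
# attribute order or line breaks - none of which grep/sed tolerate.
id=$(xmllint --xpath 'string(//place[1]/id)' "$xmlfile")
place=$(xmllint --xpath 'string(//place[1]/name)' "$xmlfile")
echo "$id"
```

Unlike the grep/sed pipeline, this keeps working when the publisher reformats the response – which is exactly what broke the script.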

So here is a new update of latitude2brightkite.sh – a script that checks in your Google Latitude position to Brightkite using the Brightkite API and the Google Public Location Badge. A description of the whole contraption may be found in the initial announcement.

The changes are :

% diff latitude2brightkite_old.sh latitude2brightkite.sh
69,70c69,70
< id=`wget -qO- "http://brightkite.com/places/search.xml?q=$latitude%2C$longitude" | grep "<id>" | sed s/\ \ \<id\>// | sed s/\<\\\/id\>//`
< place=`wget -qO- "http://brightkite.com/places/search.xml?q=$latitude%2C$longitude" | grep "<name>" | sed s/\ \ \<name\>// | sed s/\<\\\/name\>//`
---
> id=`wget -qO- "http://brightkite.com/places/search.xml?q=$latitude%2C$longitude" | grep "<id>" | sed s/\ \ \<id\>// | sed s/\<\\\/id\>// | tail -n 1`
> place=`wget -qO- "http://brightkite.com/places/search.xml?q=$latitude%2C$longitude" | grep "<name>" | sed s/\ \ \<name\>// | sed s/\<\\\/name\>// | md5sum | awk '{print $1}'`

I know I should use a revision control system… Posting this diff that does not even fit this blog is yet another reminder that a revision control system is not just for “significant” projects – everything should use one, and considering how lightweight Git is compared to Subversion, there really is no excuse anymore.
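Indeed, putting even a one-file script under Git takes under a minute – paths and the script content below are illustrative :

```shell
# Create a repository around the script and record a first version.
repo=$(mktemp -d)
cd "$repo"
git init -q
printf '#!/bin/sh\n# latitude2brightkite.sh - stand-in content\n' > latitude2brightkite.sh
git add latitude2brightkite.sh
git -c user.email="jml@example.com" -c user.name="Jean-Marc" \
    commit -q -m "Import working version before further hacking"

# From now on, every change gets a reviewable diff and a message :
git log --oneline
```

After the next edit, `git diff` replaces hand-crafted diff postings, and `git commit -am "..."` records why the change was made.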

Back to the point… To get the place identifier, I now only take the last line of the field – which is all we need. I md5sum the place name – I only need to compare it to the place name at the time of the former invocation, so an md5sum does the job and keeps me from having to deal with accented characters and newlines… Did I mention how hackish this is ?
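The change-detection idea boils down to a few lines – the state file and variable names here are illustrative, not the script’s actual ones :

```shell
# Only act when the (hashed) place name differs from the previous run.
state=$(mktemp -u)          # in the real script this is a persistent file
place="Café de la Gare"     # illustrative value from the places lookup

# Hashing sidesteps accented characters and newlines in comparisons.
place_hash=$(printf '%s' "$place" | md5sum | awk '{print $1}')
if [ ! -f "$state" ] || [ "$(cat "$state")" != "$place_hash" ]; then
    result="check-in"       # place changed : post to Brightkite
    printf '%s' "$place_hash" > "$state"
else
    result="skip"           # same place : stay quiet
fi
echo "$result"
```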

Anyway… It works for me™ – get the code !

Jabber and Social networking and Technology and The Web – 09 Apr 2010 at 16:24 by Jean-Marc Liotier

I don’t quite remember how I stumbled upon this page on Nicolas Verite’s French-language blog about instant messaging and open standards, but this is how I found a microblogging system called Juick. Its claim to fame is that it is entirely XMPP based. I had written about Identichat, a Jabber/XMPP interface to Laconi.ca/Status.net – but this is something different : not merely providing an interface to a generic microblogging service, it leverages XMPP by building the microblogging service around it.

As Joshua Price discovered Juick almost a year before me, I’m going to recycle his introduction to the service – he paraphrases Juick’s help page anyway :

Juick is a web service that takes XMPP messages and creates a microblog using those messages as entries [..] There’s no registration, no signup, no hassle. You simply send a XMPP message to “juick@juick.com” and it creates a blog based on the username you sent from and begins recording submissions.

  1. Add “juick@juick.com” to your contact list in your Jabber client or GMail.
  2. Prepare whatever message you want juick to record
  3. Send your message

That’s it. Juick will respond immediately telling you the message has been posted, and will provide you with a web address to view your new entry.

The simplicity of an account creation process that sniffs your Jabber vCard is something to behold – it makes any other sign-up process feel ponderous. This poor man’s OpenID Attribute Exchange does the job with several orders of magnitude less complexity.

Almost every interaction with Juick can be performed from the cozy comfort of your favorite XMPP client – including threaded replies, something that Status.net’s Jabber bot is not yet capable of handling (edit – thanks to Aaron for letting us know that Status.net’s Jabber bot has always been able to do that too). And contrary to every microblogging service that I have known, the presence information is displayed on the web site – take a look at Nÿco’s subscribers for an example.

The drawbacks are that this is a small social network intended for Russophones, and that the software is not free. But still, it is an original project whose features may serve as inspiration for others.

For some technical information, see Stoyan Zhekov’s presentation :


Networking & telecommunications and Politics and Rumors and The Web – 26 Mar 2010 at 15:01 by Jean-Marc Liotier

Stéphane Bortzmeyer has a very long track record of interesting commentary about the Internet – his blog goes back to 1996. It’s a pity that my compatriot doesn’t write in English more often: I believe he would find a big audience for his excellent articles. But as he told me : “Many people write in English already, English readers do not need one more writer”. I object – there is always room for good information to be brought to a greater audience. And since his writings are licensed under the GFDL, I’ll do the translation myself when I feel like it.

Maybe this will be the only of his articles I translate – or maybe there will be others in the future… Meanwhile here is this one. I chose it because DNS hijacking is a subject I am sensitive about – and maybe because of the exoticism of Chinese shenanigans…


Before reading this interesting article, please heed this forewarning : as soon as we talk about China, we should admit our ignorance. Most people who pontificate about the state of the Internet in China do not speak Chinese – their knowledge of the country stops at the doorstep of international hotels in Beijing and Shanghai. The prize for the most ludicrous pro-Chinese utterance goes to Jacques Myard, representative at the National Assembly and member of the UMP party, for his support of the Chinese dictatorship [translator's note : he went on record saying that "the Internet is utterly rotten" and that it "should be nationalized to give us better control - the Chinese did it"]. When it comes to DNS, one of the least understood Internet services, the bullshit production rate goes up considerably and sentences where both « DNS » and « China » occur are most likely to be false.

I am therefore going to try not to emulate Myard, and only talk about what I know, which will make this article quite short and full of conditionals. Unlike criminal investigations in US movies, this article will name no culprit and you won’t even know whether there really was a crime.

DNS root servers hijacking for the purpose of implementing the policy (notably censorship) of the Chinese dictatorship has been discussed several times – for example at the 2005 IETF meeting in Paris. It is very difficult to know exactly what happens in China because Chinese users, for cultural reasons, but mostly for fear of repression, don’t provide much information. Of course, plenty of people travel to China, but few of them are DNS experts and it is difficult to get them to provide data from mtr or dig correctly executed with the right options. Reports on censorship in China are often poor in technical detail.

However, from time to time, DNS hijacking in China has visible consequences outside of Chinese territory. On the 24th of March, the technical manager for the .cl domain noted that the I root server, anycast and managed by Netnod, answered bizarrely when queried from Chile :

$ dig @i.root-servers.net www.facebook.com A

; <<>> DiG 9.6.1-P3 <<>> @i.root-servers.net www.facebook.com A
; (1 server found)
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 7448
;; flags: qr aa rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 0

;; QUESTION SECTION:
;www.facebook.com.              IN      A

;; ANSWER SECTION:
www.facebook.com.       86400   IN      A       8.7.198.45

;; Query time: 444 msec
;; SERVER: 192.36.148.17#53(192.36.148.17)
;; WHEN: Wed Mar 24 14:21:54 2010
;; MSG SIZE  rcvd: 66

[translator's note : a sign of the times – google.com and, before that, microsoft.com used to be the classic example material ; Mauricio used facebook.com (or twitter.com) because it is hijacked by the Chinese government, unlike microsoft.com (or even google.com)]

The root servers are not authoritative for facebook.com. The queried server should therefore have answered with a pointer to the .com domain. Instead, we find an unknown IP address. Someone is screwing with the server’s data :

  • The I root server’s administrators as well as its hosts deny any modifications of the data obtained from VeriSign (who manages the DNS root master server).
  • Other root servers (except, oddly, D) are also affected.
  • Only UDP traffic is hijacked – TCP is unaffected. Traceroute sometimes ends up at reliable instances of the I server (for example, in Japan), which seems to suggest that the manipulation only affects port 53 – the one used by the DNS.
  • Affected names are those of services censored in China, such as Facebook or Twitter. They are censored not just for political reasons, but also because they compete with Chinese interests.
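That UDP/TCP asymmetry is easy to probe yourself. The sketch below compares the answers a root server gives over the two transports – if only the UDP path is being tampered with, they will disagree. The server and name come from the article ; the comparison helper is my own illustrative addition :

```shell
# Warn when two transports yield different answers for the same query.
compare_answers() {
    if [ "$1" = "$2" ]; then
        echo "answers agree"
    else
        echo "answers differ - possible on-path hijacking"
    fi
}

# A clean root server returns no A record for www.facebook.com (only a
# referral to .com), so "dig +short" prints nothing on both transports.
udp=$(dig +short @i.root-servers.net www.facebook.com A 2>/dev/null)
tcp=$(dig +tcp +short @i.root-servers.net www.facebook.com A 2>/dev/null)
compare_answers "$udp" "$tcp"
```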

If you want to check it yourself, 123.123.123.123 is hosted by China Unicom and will let you resolve a name :

% dig A www.facebook.com @123.123.123.123 

; <<>> DiG 9.5.1-P3 <<>> A www.facebook.com @123.123.123.123
;; global options:  printcmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 44684
;; flags: qr aa rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 0

;; QUESTION SECTION:
;www.facebook.com.              IN      A

;; ANSWER SECTION:
www.facebook.com.       86400   IN      A       37.61.54.158

;; Query time: 359 msec
;; SERVER: 123.123.123.123#53(123.123.123.123)
;; WHEN: Fri Mar 26 10:46:52 2010
;; MSG SIZE  rcvd: 66

37.61.54.158 is a currently unassigned address and it does not belong to Facebook. [translator's note : I get 243.185.187.39 which is also abnormal]

It is therefore very likely that rogue root servers exist in China and that Chinese ISPs have hacked their IGP (OSPF for example) to hijack traffic bound toward the root servers. This does not quite explain everything – for example why the known good instances installed in China still see significant traffic. But it won’t be possible to know more without in-depth testing from various locations in China. A leak from this routing hack (similar to what affected YouTube in 2008) certainly explains how the announcement from the rogue server reached Chile.

« The Great DNS Wall of China » and « Report about national DNS spoofing in China » are among the reliable sources of information about manipulated DNS in China.

For more information about the problem described in this article, you may also read « China censorship leaks outside Great Firewall via root server » (a good technical  article), « China’s Great Firewall spreads overseas » or « Web traffic redirected to China in mystery mix-up ».

This article is distributed under the terms of the GFDL. The original article was published on Stéphane Bortzmeyer’s blog on the 26 March 2010 and translated by Jean-Marc Liotier the same day.

Technology and The Web – 21 Mar 2010 at 22:21 by Jean-Marc Liotier

Gnutella was the first decentralized file sharing network. It celebrated a decade of existence on March 14, 2010. Once Audiogalaxy went down in 2002, it became my favorite service for clandestine file sharing. In late 2007, it was the most popular file sharing network on the Internet with an estimated market share of more than 40%. But nowadays, BitTorrent steals the limelight. How did that happen ?

Gnutella has structural scalability limitations that even its creator acknowledged from the very start. Over the years, major improvements were introduced, but search horizon and network size remain intrinsic limitations due to search traffic. On the other hand, BitTorrent outsourced much of the search and indexing of files to torrent web sites, only handling the actual distribution of data within the client.

Providing search across the indexes requires other parties to provide them, but that architectural constraint has paradoxically become a key driver of BitTorrent’s popularity by providing a simple business model. Ernesto at TorrentFreak explains that easy monetization explains the ubiquity of indexes : “BitTorrent sites can generate some serious revenue, enough to sustain the site and make a decent living. In general, ad rates per impression are very low, but thanks to the huge amounts of traffic it quickly adds up. This money aspect has made it possible for sites to thrive, and has also lured many gold diggers into starting a torrent site over the years“.

With commercial interests comes spam and legal vulnerabilities – so I feel much more comfortable knowing that decentralized protocols exist to provide resilience towards the censorship that lurks over us in the dark, waiting for us to become complacently reliant on centralized resources. Happy birthday Gnutella !

Social networking and Technology and The Web – 10 Feb 2010 at 22:06 by Jean-Marc Liotier

Yesterday, while Google Buzz was still only a rumor, I felt that there was a slight likelihood that Google’s entry into the microblogging field would support decentralized interoperability using the OpenMicroBlogging protocol pioneered by the Status.net open source micro messaging platform. I was wrong about that, but it was quite a long shot… Speculation is a dirty job, but someone’s got to do it !

I am also surprised that there is no Twitter API, but there are plenty of other protocols on the menu that should keep us quite happy. There is already the Social Graph API, the PubSubHubbub push protocol and of course Atom Syndication and the RSS format – with the MediaRSS extension. But much more interesting is the Google Buzz documentation’s mention that “Over the next several months Google Buzz will introduce an API for developers, including full read/write support for posts with the Atom Publishing Protocol, rich activity notification with Activity Streams, delegated authorization with OAuth, federated comments and activities with Salmon, distributed profile and contact information with WebFinger, and much, much more”. So with all that available to third parties, we may even be able to interact with Google’s content without having to deal with Gmail, whose rampant portalization makes me dislike it almost as much as Facebook and Yahoo.

I’m particularly excited about Salmon, a protocol for comments and annotations to swim upstream to original update sources. For now I wonder about the compared utilities of Google Buzz and FriendFeed, but once Salmon is widely implemented it won’t matter where the comments are contributed : they will percolate everywhere and the conversation shall be united again !

Jabber and Rumors and Social networking and Technology and The Web – 09 Feb 2010 at 12:29 by Jean-Marc Liotier

According to a report from the Wall Street Journal mentioned by ReadWriteWeb, Google might be offering a microblogging service as soon as this week.

When Google opened Google Talk, they opened the service to XMPP/Jabber federation. As a new entrant in a saturated market, opening up is the logical move.

The collaborative messaging field as a whole cannot be considered saturated but, while it is still evolving very fast, the needs of the early adopter segment are now well served by entrenched offers such as Twitter and Facebook. Touching them will require an alternative strategy – and that may lead to opening as a way to offer attractive value to users and service providers alike.

So maybe we can cling on a faint hope that Google’s entry into the microblogging field will support decentralized interoperability using the OpenMicroBlogging protocol pioneered by the Status.net open source micro messaging platform. Shall we take a bet ?

Don’t you love bar talk speculation based on anonymous rumors ?

Free software and Geography and Marketing and Politics and Technology and The Web – 17 Dec 2009 at 13:27 by Jean-Marc Liotier

The quality of OpenStreetMap‘s work speaks for itself, but it seems that we need to speak about it too – especially now that Google is attempting to appear to hold the moral high ground by using terms such as “citizen cartographer”, terms they rob of their meaning by conveniently forgetting to mention the license under which the contributed data is held. But in the eye of the public, the $50000 UNICEF donation to the home country of the winner of the Map Maker Global Challenge lets them appear as charitable citizens.

We need to explain why it is a fraud, so that motivated aspiring cartographers are not tempted to give away their souls for free. I could understand them selling their data, but giving it to Google for free is a bit too much – we must tell them. I’m pretty sure that good geographic data available to anyone for free will do more for the least developed communities than a 50k USD grant.

Take Map Kibera for example :

“Kibera in Nairobi, Kenya, widely known as Africa’s largest slum, remains a blank spot on the map. Without basic knowledge of the geography and resources of Kibera it is impossible to have an informed discussion on how to improve the lives of residents. This November, young Kiberans create the first public digital map of their own community”.

And they did it with OpenStreetMap. To the million people living in this former terra incognita, of no interest to any for-profit mapping provider, how much do you think it is worth to at last have a platform for services that require geographical information, without having to pay Google or remain within the limits of the uses permitted by its license ?

I answered this piece at ReadWriteWeb and I suggest that you keep an eye out for opportunities to answer this sort of propaganda against libre mapping.

Marketing and Social networking and The media and The Web – 15 Dec 2009 at 0:24 by Jean-Marc Liotier

Today I mentioned that, 15 years late, I had finally put a name on a past adolescent problem : patellofemoral pain syndrome (PFPS). As far as I understand, it is a growth-related muscle imbalance that resolves itself when the body reaches maturity.

As usual with most of my microblogging, I dispatch the 140 chars to several sites using Ping.fm and then follow the conversation wherever it eventually happens. In that case, a conversation developed on Facebook. Friends asked questions and gave their two cents – business as usual.

And then an interloper cut in : “Jean-Marc we can help correct your patellfemoral pain syndrome. It is the mal-tracking of your patella. Check us out at mycommercialkneesite.com”. It is not entirely spam at first sight because it is actually on-topic and even slightly informative. But it is not really taking part in the conversation either because it is a blatant plug for an infomercial site. So spam it is, but cleverly targeted at a niche audience.

It looks like all the blatant plugs that we have been seeing for decades in forums and mailing lists – usually for a short time, after which the culprit mends his devious ways or ends up banned. But there is an innovative twist brought by the rise of the “real-time web” : the power of keyword filtering applied to the whole microblogging world is the enabler of large-scale conversational marketing. Obnoxious marketers attempting to pass as bona fide contributors to the conversation are no longer a merely local nuisance – they are now reaching us at a global scale and in near real-time.

Marketers barging in whenever someone utters a word that qualifies their niche are gatecrashers and will be treated as such. But I find it fascinating that we now have personalized advertising capable of targeting a niche audience in real-time as the qualifying keywords appear. Not that I like it, but you have to recognize it as a new step in the memetic arms race between advertisers and audience.

Imagine that coupled with voice recognition and some IVR scripting. Do you remember those telephone services where you get free airtime if you listen for advertising breaks ? Imagine the same concept where during the conversation someone – a human, or even a conversational automaton – comes in and says “Hey, you were telling your boyfriend about your headache ? Why don’t you try Schrufanol ? Mention SHMURZ and get the third one free !”.

Even better, add some more intelligent pattern recognition to go beyond keywords. The hopeless student who just told his pal on Schmoogle FreeVoice about his fear of failure at exams will immediately receive through Schmoogle AdVoice a special offer for cram school from a salesdrone who knows his name and just checked out his Facebook profile. You think this is the future ? This is probably already happening.


Code and Design and Knowledge management and Social networking and The Web21 Aug 2009 at 16:01 by Jean-Marc Liotier

LinkedIn’s profile PDF render is a useful service, but its output is lacking in aesthetics. I like the HTML render by Jobspice, especially the one using the Green & Simple template – but I prefer hosting my resume on my own site. This is why since 2003 I have been using the XML Résumé Library. It is an XML and XSL based system for marking up, adding metadata to, and formatting résumés and curricula vitae. Conceptually, it is a perfect tool – and some trivial shell scripting provided me with a fully automated toolchain. But the project has been completely quiet since 2004 – and meanwhile we have seen the rise of the hResume microformat, an interesting case of “less is more” – especially compared to the even heavier HR-XML.

Interestingly, both LinkedIn and Jobspice use hResume. A PHP LinkedIn hResume grabber, part of a WordPress plugin by Brad Touesnard, takes the hResume microformat block from a LinkedIn public profile page and weeds out all the LinkedIn-specific chaff. With pure hResume semantic XHTML, you just have to add CSS to obtain a presentable CV. So my plan is now to use LinkedIn as a resume writing aid and a social networking tool, and use the hResume-microformatted output extracted from it to host a nice CSS-styled CV on my own site.

Preparing to do that, I went through the “hResume examples in the wild” page of the microformats wiki and selected my favorite styles to use for inspiration :

Great excuse to play with CSS – and eventually publish an updated CV…

Code and Mobile computing and Social networking and The Web17 Jun 2009 at 11:11 by Jean-Marc Liotier

I just released a new update of latitude2brightkite.sh – a script that checks in your Google Latitude position to Brightkite using the Brightkite REST API and the Google Public Location Badge.

The changes are :

20090607 – 0.3 – The working directory is now a parameter
20090612 – 0.4 – Only post updates if the _name_ of the location changes, not if only the _internal BK id_ of the place does (contribution by Yves Le Jan <inliner@grabeuh.com>).
20090615 – 0.5 – Perl 5.8.8 compatibility of the JSON coordinate parsing (contribution by Jay Rishel <jay@rishel.org>).

Yves’ idea smooths location sampling noise and makes check-ins much more meaningful.

Thanks to Yves and Jay for their contributions ! Maybe it is time for revision control…

Code and Mobile computing and Social networking and The Web05 Jun 2009 at 21:43 by Jean-Marc Liotier

Tired of waiting for Google to release a proper Latitude API, I went ahead and scribbled latitude2brightkite.sh – a script that checks in your Google Latitude position to Brightkite using the Brightkite REST API and the Google Public Location Badge. See my seminal post from yesterday for more information about how I cobbled it together.

Since yesterday I cleaned it up a little, but most of all, as promised, I made it more intelligent by having it compare the current position with the last one, in order to check in with Brightkite only if the Google Latitude position has changed. Not checking in at each invocation will certainly reduce the number of check-ins by 99% – and I’m sure that Brightkite will be thankful for the lesser load on their HTTP servers…
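The change amounts to remembering the last posted position in a state file and bailing out early when nothing moved – a minimal sketch, with the state file path and the coordinate format chosen arbitrarily :

```shell
#!/bin/sh
# Remember the last position posted and only act when it changes.
STATE=/tmp/last_latitude_position

post_if_moved() {
    pos="$1"    # e.g. "48.8567,2.3508"
    if [ "$pos" = "$(cat "$STATE" 2>/dev/null)" ]; then
        return 0    # unchanged - spare Brightkite's HTTP servers
    fi
    echo "$pos" > "$STATE"
    echo "posting $pos"    # the real script calls the Brightkite REST API here
}
```

Dropped into a crontab, this makes each invocation a no-op unless Latitude actually reports a new position.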

So grab the code for latitude2brightkite.sh, put it in your crontab and have more fun with Brightkite and Google Latitude !
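For reference, a crontab entry along these lines does the job – the path and the five minute interval are just an example :

```
*/5 * * * * /home/user/bin/latitude2brightkite.sh >/dev/null 2>&1
```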

There is quite a bit of interest for this script – it seems that I have filled a widely felt need.

Code and Mobile computing and Social networking and The Web05 Jun 2009 at 0:51 by Jean-Marc Liotier

Tired of waiting for Google to release a proper Latitude API, I went ahead and scribbled latitude2brightkite.sh – a script that checks in your Google Latitude position to Brightkite using the Brightkite REST API and the Google Public Location Badge.

This script is an ugly mongrel hack, but that is what you get when an aged script kiddie writes something in a hurry. The right way to do it would be to parse Latitude’s JSON output cleanly using the Perl JSON library. But that dirty prototype took me all of ten minutes to set up while unwinding between meetings, and it now works fine in my crontab.

Apart from Bash, the requirements to run this script are the Perl JSON library (available in Debian as libjson-perl) and Curl.

The main limitation of this script is that your Google Public Location Badge has to be enabled and it has to show the best available location. This means that for this script to work, your location has to be public. The privacy-conscious among my readers will surely love it !

This script proves that automatic Google Latitude position check-in to Brightkite can be done – it works for me – and the official Google Latitude API will hopefully soon make it obsolete !

Meanwhile, grab the code for latitude2brightkite.sh, put it in your crontab and have more fun with Brightkite and Google Latitude… To me, this is what both services were missing to become truly usable.

Of course, doing it with “XEP-0080 – User Location” via publish-subscribe (“XEP-0060 – Publish-Subscribe”) would make much more sense than polling an HTTP server all the time, but we are not there yet. Meanwhile this script could be made more intelligent by only checking in with Brightkite if the Google Latitude position has changed. I’ll think about it for the next version…
