Knowledge management archived articles

Design and Knowledge management and Politics and Security and Technology – 26 May 2014 at 14:07 by Jean-Marc Liotier

Skimming an entirely unrelated article, I stumbled upon this gem:

Recently, a number of schools have started using a program called CourseSmart, which uses e-book analytics to alert teachers if their students are studying the night before tests, rather than taking a long-haul approach to learning. In addition to test scores, the CourseSmart algorithm assigns each student an “engagement index” which can determine not just if a student is studying, but also if they’re studying properly. In theory, a person could receive a “satisfactory” C grade in a particular class, only to fail on “engagement”.

This immediately reminded me of Neal Stephenson’s 1992 novel Snow Crash, where a government employee’s reading behavior has been thoroughly warped into simulacrum by a lifetime of overbearing surveillance:

Y.T.’s mom pulls up the new memo, checks the time, and starts reading it. The estimated reading time is 15.62 minutes. Later, when Marietta does her end-of-day statistical roundup, sitting in her private office at 9:00 P.M., she will see the name of each employee and next to it, the amount of time spent reading this memo, and her reaction, based on the time spent, will go something like this:
– Less than 10 min.: Time for an employee conference and possible attitude counseling.
– 10-14 min.: Keep an eye on this employee; may be developing slipshod attitude.
– 14-15.61 min.: Employee is an efficient worker, may sometimes miss important details.
– Exactly 15.62 min.: Smartass. Needs attitude counseling.
– 15.63-16 min.: Asswipe. Not to be trusted.
– 16-18 min.: Employee is a methodical worker, may sometimes get hung up on minor details.
– More than 18 min.: Check the security videotape, see just what this employee was up to (e.g., possible unauthorized restroom break).

Y.T.’s mom decides to spend between fourteen and fifteen minutes reading the memo. It’s better for younger workers to spend too long, to show that they’re careful, not cocky. It’s better for older workers to go a little fast, to show good management potential. She’s pushing forty. She scans through the memo, hitting the Page Down button at reasonably regular intervals, occasionally paging back up to pretend to reread some earlier section. The computer is going to notice all this. It approves of rereading. It’s a small thing, but over a decade or so this stuff really shows up on your work-habits summary.

Dystopian panoptical horrors were supposed to be cautionary tales – not specifications for new projects…

As one Hacker News commenter put it: in the future, you don’t read books; books read you!

Post-scriptum… Isn’t it funny that users don’t mind being spied upon by apps and pages but get outraged when e-books do? It may be because in their minds, e-books are still books… But shouldn’t all documents and all communicated information be as respectful of their reader as books are?

Knowledge management and Politics and Security and The media and The Web – 28 Feb 2013 at 12:43 by Jean-Marc Liotier

Article 322-6-1 of the French Code Pénal punishes with one year in prison and a 15000€ fine “the diffusion by any means of manufacturing processes for destructive devices made from explosive, nuclear, biological or chemical substances or any product intended for domestic, industrial or agricultural use”.

So in France, Cryptome can’t publish this very common and very public US military field manual, a textfiles.com mirror in France is illegal because it contains this, a description of a chemical reaction on MIT’s site would be repressed, and Wikipedia’s legal team had better excise this section of the Nitroglycerin article from any HTTP response bound to France.

And someone once again forgot that censoring information locally does not work.

But wait – there is more stupidity… The punishment is tripled (three years in prison and a 45000€ fine) if the information has been published “to an undefined audience on a public electronic communication network”. Why isn’t there a specific punishment for posting on a billboard too? Once again, in yet another country, the use of electronic tools is an aggravating circumstance. As electronics pervade our whole lives, isn’t that entirely anachronistic?

Well – as long as Tor, I2P et al. keep working…

By the way, that law makes an exception for professional use – so if you are acting as an agent of a duly accredited terrorist enterprise, rest assured it does not apply to you !

Knowledge management and Methodology and Politics – 19 Aug 2011 at 13:40 by Jean-Marc Liotier

Whether you like Alvin Toffler or not, he is a visionary with exceptional acuity, and this quote cited by John Perry Barlow was no exception to his outstanding output:

“Freedom of expression is no longer a political nicety, but a precondition for economic competitiveness” – Alvin Toffler

I had never encountered it, so I wondered where it first appeared. Not finding anything on the Web besides reproductions of the quote blindly attributing it to Alvin Toffler, I asked John Perry Barlow, who promptly solved the mystery: “He said this to me in an interview I did of them in 1997” – no wonder I couldn’t find it.

Thanks, John – I updated Alvin Toffler’s Wikiquote page.

And let’s hope someone tells my employer that freedom of expression is good for business!

Identity management and Knowledge management and Social networking and Technology and The Web – 09 Jul 2011 at 2:21 by Jean-Marc Liotier

I have not read any reviews of Google Plus, so you’ll get my raw impressions starting after fifteen minutes of use – I guess that, whatever they are worth, they bring more value than risking a paraphrase of other people’s impressions after having been influenced by their prose.

First, a minor annoyance: stop asking me to join the chat. I don’t join messaging silos – if it is not open, I’m not participating. You asked, I declined – now you insist after every login, and I find that impolite.

First task I set upon: setting up information streams in and out of Google Plus. A few moments later, it appears that this one will remain on the todo list for a while: there is not even an RSS feed of the public items… Hello? Is that nostalgia for the nineties? What good is an information processing tool that won’t let me aggregate, curate, remix and share? Is this AOL envy?

Then I move on to some contacts management. I find the Circles interface pretty bad. For starters, selecting multiple contacts and editing their Circles memberships wholesale is not possible – the pattern of editing the properties of multiple items at once is simple enough to be present and appreciated in most decent file managers (for editing permissions)… Sure, it can be added later as it is not a structural feature, but for now much tedium ensues. Likewise, much time would be saved by letting users copy and paste contacts between circles. But all that is minor ergonomic nitpicking compared to other problems…

No hashtags, no groups… How am I supposed to discover people? Where is the serendipity? Instead of “Google Circles” this should be named “Google Cliques”. In its haste to satisfy the privacy-obsessed, it seems that Google has forgotten that the first function of social networking software is to enable social behaviour… The features seem focused on the anti-social instead. I can understand the absence of hashtags – spam is a major unresolved issue… But groups? See FriendFeed to understand how powerful they can be – and they are in no way incompatible with the Circles model. It seems that selective sharing is what Google Plus is mostly about – public interaction and collaboration feel like an afterthought. This will please the reclusive, but it does not fit my needs.

Worse, the Circles feature only segments the population – it does nothing to organize shared interests: I may carefully select cyclists to put into my ‘cyclists’ Circle, but when I read the stream for that circle I’ll see pictures of their pets too. This does not help knowledge management in any way – it is merely about people management.

Finally, Google is still stuck with Facebook, Twitter et al. in the silo era – the spirits of well known dinosaurs still haunt those lands. Why don’t they get with the times and let users syndicate streams across service boundaries using open protocols such as OStatus, which an increasing number of social networking tools use to interoperate? Google may be part of the technological vanguard of information services at massive scale, but cloning the worst features of competing services is the acme of backwardness.

Of course, this is a first release – not even fully open to subscription yet, so many features will be added and refined. But rough edges are not the reason for my dissatisfaction with Google Plus: what irks me most is the silo mentality and the very concept of Circles as the fundamental object of interaction management – no amount of polish will change the nature of a service built on those precepts.

I’ll keep an account on Google Plus for monitoring purposes, but for now and until major changes happen, that’s clearly not where I’ll be seeking intelligent life.

Knowledge management and Politics and Technology – 28 Jun 2011 at 22:51 by Jean-Marc Liotier

Open data is the idea that certain data should be freely available to everyone to use and republish as they wish, without restrictions from copyright, patents or other mechanisms of control. Share, remix, reuse – just do it for fun, for profit and for the public good… Once the data is liberated, good things will follow! Alas, some Cassandras beg to differ.

Can the output of a process based entirely on publicly available data be considered unfit for public availability? As Marek Mahut explains in “The danger of transparency: A lesson from Slovakia”, the answer is ‘yes’ according to a court in Bratislava, which ordered immediate censorship of some information produced by an application whose input is entirely composed of publicly available data.

As a French citizen, I’m not surprised – for more than thirty years, our law has recognized how the merging of data sources is a danger to privacy.

I was prepared to translate the relevant section of the original French text of “Act N°78-17 of 6 January 1978 on data processing, data files and individual liberties” for you… But in its great benevolence, my government has kindly provided an official translation – so I’ll use that… Here is the relevant extract:

Chapter IV, Section 2 : Authorisation
Article 25
I. – The following may be carried out after authorisation by the “Commission nationale de l’informatique et des libertés”, with the exception of those mentioned in Articles 26 (State security and criminal offences processing) and 27 (public processing of the NIR, i.e. social security number – State biometrics – census – e-government online services):
[..]
5° automatic processing whose purpose is:
– the combination of files of one or several legal entities who manage a public service and whose purposes relate to different public interests;
– the combination of other entities’ files of which the main purposes are different.

Short version: if you want to join data from two isolated sources, you need to ask and receive authorization first, on a case-by-case basis.

That law only applies to personal data, which it defines (Chapter I, Article 2) as ‘any information relating to a natural person who is or can be identified, directly or indirectly’. That last word opens a big can of worms: data de-anonymization techniques have shown that with sufficient detail, anonymous data can be linked to individuals. With that knowledge, one may consider that the whole Open Data movement falls in the shadow of that law.
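The classic demonstration of that indirect identification is a linkage attack: join an “anonymized” release with a public register on quasi-identifiers such as postcode, birth date and sex. Here is a minimal sketch in Python – every record, name and value below is invented for illustration:

```python
# Toy linkage attack: neither data set contains both names and diagnoses,
# yet joining them on quasi-identifiers re-identifies an individual.

# An "anonymized" release: names removed, quasi-identifiers kept.
medical_release = [
    {"zip": "75011", "birth": "1972-03-14", "sex": "F", "diagnosis": "asthma"},
    {"zip": "75011", "birth": "1980-11-02", "sex": "M", "diagnosis": "diabetes"},
]

# A public register (an electoral roll, say) with the same quasi-identifiers.
public_register = [
    {"name": "A. Dupont", "zip": "75011", "birth": "1972-03-14", "sex": "F"},
    {"name": "B. Martin", "zip": "75013", "birth": "1965-07-21", "sex": "M"},
]

def link(release, register, keys=("zip", "birth", "sex")):
    """Join the two sources on the quasi-identifier tuple."""
    index = {tuple(r[k] for k in keys): r["name"] for r in register}
    return [
        {**record, "name": index[tuple(record[k] for k in keys)]}
        for record in release
        if tuple(record[k] for k in keys) in index
    ]

for hit in link(medical_release, public_register):
    print(hit["name"], "->", hit["diagnosis"])  # A. Dupont -> asthma
```

Neither source is sensitive on its own; it is the combination that identifies – which is exactly the operation Article 25 puts under prior authorization.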

To my knowledge this question has not yet been brought before a court, so there is no case law to guide us… But it is only a matter of time – watch this space!

Brain dump and Knowledge management and The Web and Writing – 13 May 2011 at 0:23 by Jean-Marc Liotier

Using this blog for occasional casual experience capitalization means that an article captures and shares a fragment of knowledge I have managed to grasp at a given moment. While this frozen frame remains forever still, it may become stale as knowledge moves on. Comments contributed by readers may help keep the article fresh, but that only lasts as long as the discussion. After a while, part of the article is obsolete – so it is with some unease that I see old articles of dubious wisdom keep attracting traffic on my blog.

Maybe this unease is the guilt that comes with publishing in a blog – a form of writing whose subjective qualities can easily slide into asocial, self-centered drivel. Maybe I should sometimes let those articles become wiki pages – a useful option given to contributors on some question & answer sites. But letting an article slide into the bland utilitarian style of a wiki would spoil some of my narcissistic writing fun. That shows that no choice must be made between wiki utility and blog subjectivity: they both have their role to play in the community media mix.

So what about an expiration date? I won’t use one: let obsolete knowledge, false trails, failed attempts and disproved theories live forever with us, for they are as useful to our research as the current knowledge, bright successes and established theories that are merely the end result of a process more haphazard than most recipients of scientific and technical glory will readily admit. To the scientific and technical world, what did not work and why it did not work is even more important than what did – awareness of failures is an essential raw material of the research process.

So I am left with the guilt of letting innocent bystanders hurt themselves with my stale drivel, which I won’t even point to for fear of increasing its indecently high page rank. But there is not much I can do for them besides serving the articles with their publication date and hoping that the intelligent reader will seek contemporary confirmation of a fact draped in the suspicious fog of a less informed past, with an author even less competent than he is nowadays…

Brain dump and Knowledge management and Networking & telecommunications and Technology – 16 Dec 2010 at 13:19 by Jean-Marc Liotier

Piled Higher & Deeper and Savage Chickens nailed it (thanks, redditors, for digging them up): we spend most of our waking hours in front of a computer display – and they are not even mentioning all the screens of devices other than a desktop computer.

According to a disturbing number of my parents’ generation, sitting in front of a computer makes me a computer scientist and what I’m doing there is “computing”. They couldn’t be further from the truth: as Edsger Dijkstra stated, “computer science is no more about computers than astronomy is about telescopes”.

The optical metaphor doesn’t stop there – the computer is indeed transparent: it is only a window to the world. I wear my glasses all day, and that is barely worth mentioning – why would using a computer all day be more newsworthy?

I’m myopic – without my glasses I feel lost. Out of my bed, am I really myself if my glasses are not connected to my face?

Nowadays, my interaction with the noosphere is essentially computer-mediated. Am I really myself without a network-attached computer display handy? Mind uploading still belongs to fantasy realms, but we are already on the way toward it. We are already partly uploaded creatures, not quite whole when out of touch with the technosphere, like Manfred Macx without his augmented reality gear. I’m far from the only one to have been struck by that illustration – as this Accelerando writeup attests:

“At one point, Manfred Macx loses his glasses, which function as external computer support, and he can barely function. Doubtless this would happen if we became dependent on implants – but does anyone else, right now, find their mind functioning differently, perhaps even failing at certain tasks, because these cool things called “computers” can access so readily the answers to most factual questions? How much of our brain function is affected by a palm pilot? Or, for that matter, by the ability to write things down on a piece of paper?”

This is not a new line of thought – this paper by Andy Clark and David Chalmers is a good example of reflections in that field. Here is the introduction:

“Where does the mind stop and the rest of the world begin? The question invites two standard replies. Some accept the demarcations of skin and skull, and say that what is outside the body is outside the mind. Others are impressed by arguments suggesting that the meaning of our words “just ain’t in the head”, and hold that this externalism about meaning carries over into an externalism about mind. We propose to pursue a third position. We advocate a very different sort of externalism: an active externalism, based on the active role of the environment in driving cognitive processes”.

There is certainly a “the medium is the message” angle on that – but it goes further with the author and the medium no longer being discrete entities but part of a continuum.

We are already uploading – but most of us have not noticed yet. As William Gibson puts it: the future is already here – it’s just not very evenly distributed.

Books and Games and Knowledge management – 02 Sep 2010 at 13:22 by Jean-Marc Liotier

I stumbled upon an article published last June by Knowledge@Wharton mentioning “The Power of Pull: How Small Moves, Smartly Made, Can Set Big Things in Motion” by John Hagel III, John Seely Brown and Lang Davison. Somehow I had missed this book that offers intriguing alternatives to organizations mired in their own structures. To learn about it you can read this critique by The Economist, which happens to be titled “In Search of Serendipity” – on a side note, I’m happy that this word, which I discovered in 1997, has been enjoying increasing popularity since the beginning of this millennium.

I can’t stand playing an MMORPG for even fifteen minutes (I prefer tactical, operational or strategic games – preferably with a pseudo-realistic environment), but I have watched my people play and I agree with Hagel et al. about the collaborative dynamics that happen there:

Guild leaders in World of Warcraft “require a high degree of influence,” noted Hagel [..]. “You have to be able to influence and persuade people — not order them to do things. Ordering people in most of these guilds doesn’t get you far.”

In addition to the leadership qualities involved with becoming the head of a guild and assembling a problem-solving team from previously independent players, World of Warcraft enthusiasts, as noted by Hagel, conduct extensive after-action reviews of their performances as well as that of the leader. In addition, he said that game players typically customize their own dashboards to offer statistics and rate performance in areas they consider critical to their strategy.

This parallel between gaming and management is interesting – but Hagel et al. are not the first to notice it. In 2008, in “Collective solitude and social networks in World of Warcraft”, my fellow ESSCA alumnus and friend Nicolas Ducheneaut remarked:

We show that these social networks are often sparse and that most players spend time in the game experiencing a form of “collective solitude”: they play surrounded by, but not necessarily with, other players. We also show that the most successful player groups are analogous to the organic, team-based forms of organization that are prevalent in today’s workplace. Based on these findings, we discuss the relationship between online social networks and “real world” behavior in organizations in more depth.

“Prevalent in today’s workplace”? From my big-company point of view, I find that statement more than slightly optimistic – though not surprising considering how enthusiastically Nicolas embraces the future. But that is definitely the direction we are going in. Expect even more of it as Generation Y enters the workforce. Until then, there is still a lot of evangelism to do…

Email and Knowledge management and Systems – 09 Apr 2010 at 14:41 by Jean-Marc Liotier

In the digital world, the folder metaphor has perpetuated the single-dimensional limitations of the physical world: each message is present in one and only one folder. The problem of adding more dimensions to the classification was solved ages ago – whether you want to go hardcore with a full thesaurus or just use your little folksonomy, the required technical foundation is the same: tags, labels – or whatever you want to call your multiple index keys.
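To make the point concrete, here is a minimal sketch (in Python, with invented message and tag names) of why tags subsume folders: a folder is just the degenerate case of exactly one tag per message, while a tag set lets the same message appear in several views at once:

```python
# A message store where each message carries a set of tags instead of
# living in a single folder.
messages = {
    "msg-1": {"subject": "Q3 invoice", "tags": {"work", "accounting"}},
    "msg-2": {"subject": "Bike trip photos", "tags": {"personal", "cycling"}},
    "msg-3": {"subject": "Expense report", "tags": {"work", "accounting", "travel"}},
}

def view(tag):
    """A 'folder' is just a filtered view over the tag index."""
    return sorted(mid for mid, msg in messages.items() if tag in msg["tags"])

# The same message shows up in several views - impossible with
# one-folder-per-message filing:
print(view("work"))        # ['msg-1', 'msg-3']
print(view("accounting"))  # ['msg-1', 'msg-3']
print(view("travel"))      # ['msg-3']
```

A folder hierarchy falls out for free (a message tagged only “work” lives in exactly one view), which is why a tag-based store can keep presenting the familiar folder interface.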

Noticing how successful Google has been with email tagging, I started exploring free implementations four years ago, aiming to complement my folder hierarchy with tags. But inertia affects us all, and I actually never went beyond happy experimentation. As Zoli Erdos puts it in his eloquent description of Google’s 2009 breakthrough in ending the tags vs. folders war:

Those who “just can’t live without folders”, mostly legacy users of Yahoo Mail, Hotmail and mostly Outlook. They are used to folders and won’t learn new concepts, don’t want to change, but are happy spending their life “organizing stuff” and even feel productive doing so.

Ouch – that hits a little too close to home. And even if I had gone forward with tags, that would have been pussyfooting: as Google illustrates, the distinction between tags and folders is a technical one – from a user’s point of view it should be abstracted. Of course the abstraction is somewhat leaky if you consider what folders mean for local storage, but in the clouds you can get away with that.

For cranky folder-addicted users like myself, born to the Internet with early versions of Eudora, later sucking at the Microsoft tit and nowadays a major fan of Mozilla Thunderbird, there is definitely a user interface habit involved – and one that is hard to break after all those years. It is not about the graphics – I use Mutt regularly; it is about the warm fuzzy feeling of believing that the folder is a direct and unabstracted representation of something tangible on a file system.

Software objects are all abstractions anyway but, with time, the familiar abstraction becomes something that feels tangible. This is why, while I acknowledge the need to get the tagging goodness, I still long for the good old folder look and feel, with features such as drag-and-drop and hierarchies. Google seems to know that audience: all those features are now part of Gmail. Now try telling the difference between folders and labels!

To make a smooth transition, I want Mozilla Thunderbird to display tags as folders. It looks like it is possible using Gmail with Claws through IMAP. I have yet to learn if I can do that on my own systems using Courier or Dovecot.

Consumption and Free software and Knowledge management and Mobile computing and Networking & telecommunications and Systems and Technology and Unix – 19 Oct 2009 at 1:18 by Jean-Marc Liotier

Five months have elapsed since that first week-end when my encounter with Android was a severe case of culture shock. With significant daily experience of the device, I can now form a more mature judgement of its capabilities and its potential – of course from my own highly subjective point of view.

I still hate having to use Google Calendar and Google Contacts for synchronization. I hope that SyncML synchronization will appear in the future, making Android a better desktop citizen and providing more choice of end points. Meanwhile I use Google. With that out of the way, let’s move on to my impressions of Android itself.

I am grateful for features such as a decent web browser on a mobile device, for a working albeit half-baked packaging and distribution system, and for Google Maps, which I consider both a superlative application in its own right and the current killer albeit proprietary infrastructure for location-enabled applications. But the rigidly simple interface that forces behaviours upon its user feels like a straitjacket: the overbearing feeling when using Android is that its designers have decided that simplicity is to be preserved at all costs, regardless of what the user prefers.

Why can’t I select a smaller font for my list items? Would a parameter somewhere in a customization menu add too much complication? Why won’t you show me the raw configuration data? Is it absolutely necessary to arbitrarily limit the number of virtual desktops to three? From the point of view of a user who is just getting acquainted with such a powerful platform, those are puzzling questions.

I still don’t like Android’s logic, and moreover I still don’t quite understand it. Of course I manage to use the system, but after five months of daily use it still does not feel natural. Maybe it is just a skin-deep issue or maybe I am just not the target audience – but some features are definitely backwards – package management for example. For starters, the “My Downloads” list is not ordered alphabetically nor in any apparently meaningful order. Then for each upgradeable package, one must first browse to the package, then manually trigger the upgrade, then acknowledge the upgraded package’s system privileges and finally clear the download notification and the update notification. Is this a joke? This almost matches the tediousness of upgrading Windows software – an impressive feat considering that the foundations of Android package management seem serious enough. Where is my APT?

Like any new user on a prosperous enough system, I am lost in choices – but that is an embarrassment of riches. Nevertheless, I wonder why basics such as a task manager are not installed by default. In classic Unix spirit, even the most basic system utilities are independent applications. But what is bearable and even satisfying on a system with a decent shell and package management with dependencies becomes torture when installing a package is so clumsy and upgrading it so tedious.

Tediousness in package management in particular and user interaction in general makes taming the beast an exercise in frustration. Installing a bunch of competing applications and testing them takes time and effort. Experimenting is not the pleasure it normally is on a Linux system. The lack of decent text entry compounds the feeling. Clumsy text selection makes cut and paste a significant effort – something Palm made quick, easy and painless more than ten years ago. Not implementing pointer-driven selection – what were the developers thinking?

PIM integration has not progressed much. For a given contact, there is no way to look at a communications log that spans mail, SMS and telephony: each of them is its own separate universe. There is no way to list meetings with a given contact or at a given location.

But basic functionality has been omitted too. For example, when adding a phone number to an existing contact, search is disabled – you have to scroll all the way to the contact. There is no way to search the SMS archive, and SMS to multiple recipients is an exercise left to applications.

Palm OS may have been unstable, incapable of contemporary operating system features, offering only basic functionality and generally way past its shelf date. But in the mind of users, it remains the benchmark against which all PIM systems are judged. And to this day I still don’t see anything beating Palm OS on its home turf of PIM core features and basic usability.

Palm OS was a poster child for responsiveness, but on Android everything takes time – even after I have identified and killed the various errant applications that make it even slower. Actually, the system is very fast and capable of feats such as full-motion video that were far beyond the reach of Palm OS. But the interaction is spoilt by the gratuitous use of animations for everything. Animations are useful for graphically hinting the novice user about what is going on – beyond that they are only a drag. So please let me disable animations, as I do on every desktop I use!

The choice of a virtual keyboard was my own mistake, and I am now aware that I need a physical keyboard. After five months, I can use the virtual keyboard with enough speed and precision for comfortable entry of a couple of sentences. But beyond that it is tiring and feels too clumsy for any meaningful work. This is a major problem for me – text entry is my daily bread and butter. I long for the Treo’s keyboard or even the one on the Nokia E71 – they offered a great compromise between typing speed and compactness. And no multitouch on the soft keyboard means no keyboard shortcuts, which renders many console applications unusable – sorry, Emacs users.

The applications offering is still young and I cannot blame it for needing time to expand and mature. I also still need to familiarize myself with Android culture and develop the right habits to find my way instinctively and be more productive. After five months, we are getting there – one-handed navigation has been done right. But I still believe that a large part of the user interface conventions used on Android do not match the expectations for general computing.

It seems like everything has been meticulously designed to bury under a thick layer of Dalvik and Google plaster anything that could remind anyone of Unix. It is very frustrating to know that there is a Linux kernel under all that, and yet to suffer wading knee-deep in the marshes of toyland. The more I use Android and study it, the more I feel that Linux is a mere hardware abstraction layer and the POSIX world a distant memory. This is not the droid I’m looking for.

Geography and Knowledge management and Mobile computing and Technology – 30 Aug 2009 at 21:30 by Jean-Marc Liotier

As the Geohack template used by Wikipedia for geographical locations attests (see Paris for example), there are many map publishing services on the Web. But almost all of them rely on an oligopoly of geographical data suppliers, among which AND, Navteq and Teleatlas dominate and absorb a large proportion of the profit in the geographical information value chain:

“If you purchase a TomTom, approximately 20-30% of that cost goes to Tele Atlas who licenses the maps that TomTom and many other hardware manufacturers use. Part of that charge is because Tele Atlas itself, and the company’s main rival Navteq, have to buy the data from national mapping agencies in the first place, like the Ordnance Survey, and then stitch all the information together. Hence the consumer having to pay on a number of levels”.

And yet, geographical data is a fundamental pillar of our information infrastructure. Only a few years ago the realm of specialized geographic information systems, geography is nowadays a pervasive dimension of just about every sort of service. When something becomes an essential feature of our lives, nothing short of freedom is acceptable. What happens when that freedom requires collecting humongous amounts of data, and when oligopolistic actors strive to keep control and profits to themselves? Free software collaboration and distributed data collection, of course!

Andrew Ross gives a nice summary of why free geographical data is the way of the future:

“The tremendous cost of producing the maps necessitates that these firms have very restrictive licenses to protect their business models selling the data. As a result, there are many things you can’t do with the data.

[..] The reason why OpenStreetMap will win in the end and likely obviate the need for commercial map data is that the costs and risks associated with mapping are shared. Conversely, for Navteq and TeleAtlas, the costs borne by these companies are passed on to their customers. Once their customers discover OpenStreetMap data is in some cases superior, or more importantly – they can contribute to it and the license allows them to use the data for nearly any purpose – map data then becomes commodity”.

The proprietary players are aware of that trend, and they try to profit from the users who wish to correct the many errors contained in the data they publish. But why would anyone contribute something, only to see it monopolized by the editor who won’t let you do what you want with it ? If I make the effort of contributing carefully collected data, I want it to benefit as many people as possible – not just someone who will keep it for his own profit.

Access to satellite imagery will remain an insurmountable barrier in the long term, but soon the map layers will be ours to play with – and that is enough to open up the whole world of mapping. Like a snowball rolling downhill, the OpenStreetMap data set is growing at an accelerating pace and attracting a thriving community that now includes professional and institutional users and contributors. Over its first five years, the Wikipedia-like online map project has delivered great results – and developed even greater ambitions.

I have started to contribute to OpenStreetMap – I feel great satisfaction at mapping the world for fun and for our common good. Owning the map feels good ! You can do it too – it is easy, especially if you are the sort of person who often logs tracks with a GPS receiver. OpenStreetMap’s infrastructure is quite impressive – everything you need is already out there waiting for your contribution, including very nice editors – and there is one for Android too.

If you just want to add your grain of sand to the heap, reporting bugs and naming the places in your favourite neighbourhood are great ways to help build maps that benefit all of us. Contributing to the map is like giving directions to strangers lost in your neighbourhood – except that you are giving directions to many strangers at once.

If you are not yet convinced, take a look at the map – isn’t it beautiful ? And it is only one of the many ways to render OpenStreetMap data. Wanna make a cycling map with it ? Yes we can ! That is the whole point of the project – we can do whatever we want with the data, free in every way.

And anyone can decide he wants his neighbourhood to be part of the world map, even if no self-respecting for-profit enterprise would ever consider losing money on such an endeavour :

“OpenStreetMap has better coverage in some niche spaces than other mapping tools, making it a very attractive resource for international development organizations. Want proof ? [..] we looked at capital cities in several countries that have been in the news lately for ongoing humanitarian situations – Zimbabwe, Somalia, and the Democratic Republic of the Congo. For two out of the three, Mogadishu and Kinshasa, there is simply no contest – OpenStreetMap is way ahead of the others in both coverage and in the level of detail. OpenStreetMap and Google Maps are comparable in Harare. The data available through Microsoft’s Virtual Earth lagged way behind in all three”.

Among other places, I was amazed at the level of detail of the map of Ouagadougou. Aren’t these exciting times for cartography ?

Code and Design and Knowledge management and Social networking and The Web21 Aug 2009 at 16:01 by Jean-Marc Liotier

LinkedIn’s profile PDF render is a useful service, but its output lacks aesthetic appeal. I like the HTML render by Jobspice, especially the one using the Green & Simple template – but I prefer hosting my resume on my own site. This is why since 2003 I have been using the XML Résumé Library, an XML and XSL based system for marking up, adding metadata to, and formatting résumés and curricula vitae. Conceptually, it is a perfect tool – and some trivial shell scripting provided me with a fully automated toolchain. But the project has been completely quiet since 2004 – and meanwhile we have seen the rise of the hResume microformat, an interesting case of “less is more” – especially compared to the even heavier HR-XML.

Interestingly, both LinkedIn and Jobspice use hResume. A PHP LinkedIn hResume grabber, part of a WordPress plugin by Brad Touesnard, takes the hResume microformat block from a LinkedIn public profile page and weeds out all the LinkedIn-specific chaff. With pure hResume semantic XHTML, you just have to add CSS to obtain a presentable CV. So my plan is now to use LinkedIn as a resume writing aid and a social networking tool, and use hResume microformatted output extracted from it to host a nice CSS-styled CV on my own site.
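To give a feel for what that grabbing step amounts to, here is a toy sketch in Python (the actual plugin is PHP; the class name is mine, and I am assuming the profile marks the résumé block with a class attribute containing hresume – real-world markup, void elements and edge cases would need more care) :

```python
from html.parser import HTMLParser

class HResumeExtractor(HTMLParser):
    """Capture the inner markup of the first element with class="hresume"."""

    VOID = {"br", "hr", "img", "input", "link", "meta"}  # tags with no closing tag

    def __init__(self):
        super().__init__()
        self.depth = 0    # nesting level inside the hresume element
        self.chunks = []  # captured markup fragments

    def handle_starttag(self, tag, attrs):
        if self.depth == 0:
            if "hresume" in dict(attrs).get("class", "").split():
                self.depth = 1  # start capturing below this element
            return
        self.depth += 1
        self.chunks.append(self.get_starttag_text())
        if tag in self.VOID:
            self.depth -= 1

    def handle_startendtag(self, tag, attrs):
        if self.depth:
            self.chunks.append(self.get_starttag_text())

    def handle_endtag(self, tag):
        if self.depth > 1:
            self.chunks.append(f"</{tag}>")
        if self.depth:
            self.depth -= 1

    def handle_data(self, data):
        if self.depth:
            self.chunks.append(data)

def extract_hresume(page_html):
    """Return the hResume block of a profile page, without the surrounding chaff."""
    parser = HResumeExtractor()
    parser.feed(page_html)
    return "".join(parser.chunks)
```

The output is the semantic XHTML fragment that a stylesheet can then dress up.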

Preparing to do that, I went through the “hResume examples in the wild” page of the microformats wiki and selected my favorite styles to use for inspiration :

Great excuse to play with CSS – and eventually publish an updated CV…

Code and Debian and Free software and Knowledge management and RSS and Social networking and Systems and Unix18 May 2009 at 12:15 by Jean-Marc Liotier

If you want to skip the making-of story, you can go straight to the laconica2IRC.pl script download. Or in case anyone is interested, here is the why and how…

Some of my best friends are die-hard IRC users that make a point of not touching anything remotely looking like a social networking web site, especially if anyone has ever hinted that it could be tagged as “Web 2.0” (whatever that means). As much as I enjoy hanging out with them in our favorite IRC channel, conversations there are sporadic. Most of the time, that club house increasingly looks like an asynchronous forum for short updates posted infrequently on a synchronous medium… Did I just describe microblogging ? Indeed it is a very similar use case, if not the same. And I don’t want to choose between talking to my close accomplices and opening up to the wider world. So I still want to hang out in IRC for a nice chat from time to time, but while I’m out broadcasting dents I want my paranoid autistic friends to get them too. To satisfy that need, I need to have my IRC voice say my dents on the old boys channel.

The data source could be an OpenMicroblogging endpoint, but being lazy I found a far easier solution : use Laconi.ca‘s Web feeds. Such a solution looked easier because there are already heaps of code out there for consuming Web feeds, and it was highly likely that I would find some I could bend into doing my bidding.

To talk on IRC, I had previously had the opportunity to use the Net::IRC library with great satisfaction – so it was an obvious choice. In addition, in spite of being quite incompetent with it, I appreciate Perl and I was looking for an excuse to hack something with it.

With knowledge of the input, the output and the technology I wanted to use, I could start implementing. Being lazy and incompetent, I of course turned to Google to provide me with reusable code that would spare me building the script from the ground up. My laziness was quickly rewarded as I found rssbot.pl by Peter Baudis in the public domain. That script fetches an RSS feed and says the new items in an IRC channel. It was very close to what I wanted to do, and it had no exotic dependencies – only the Net::IRC library (alias libnet-irc-perl in Debian) and XML::RSS (alias libxml-rss-perl in Debian).
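The core of such a bot is a fetch-and-diff loop : parse the feed, remember which items were already seen, announce only the new ones. A hypothetical Python rendition of that step (the original is Perl with XML::RSS; the function name is mine) :

```python
import xml.etree.ElementTree as ET

def new_items(rss_text, seen_guids):
    """Parse an RSS 2.0 document and return (title, link) pairs not seen before.

    seen_guids is a mutable set updated in place, so successive calls
    across polling rounds only yield items that appeared since last time.
    """
    fresh = []
    root = ET.fromstring(rss_text)
    for item in root.iter("item"):
        # Use the guid as identity, falling back to the link when absent.
        guid = item.findtext("guid") or item.findtext("link")
        if guid and guid not in seen_guids:
            seen_guids.add(guid)
            fresh.append((item.findtext("title"), item.findtext("link")))
    return fresh
```

Each (title, link) pair returned would then be spoken into the channel through the IRC library.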

So I set upon hacking this script into the shape I wanted. I added IRC password authentication (courtesy of Net::IRC), I commented out a string sanitation loop which I did not understand and whose presence caused the script to malfunction, I pruned out the Laconi.ca user name and extraneous punctuation to have my IRC user “say” my own Identi.ca entries just as if I were typing them myself, and after a few months of testing I finally added an option for @replies filtering so that my IRC buddies are not annoyed by the noise of remote conversations.
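The pruning and filtering steps amount to two small transformations; sketched in hypothetical Python (names assumed, the original does this in Perl) :

```python
import re

def clean_dent(text, username, drop_replies=True):
    """Strip the feed's username prefix and optionally drop @replies.

    Returns the cleaned text, or None when the dent should not be
    relayed to the IRC channel at all.
    """
    # Feed items arrive as "username: actual message" — remove the prefix
    # so the IRC user appears to be typing the message itself.
    text = re.sub(rf"^{re.escape(username)}:\s*", "", text)
    # A dent opening with @somebody is one side of a remote conversation;
    # skip it so the channel is spared context-free noise.
    if drop_replies and text.startswith("@"):
        return None
    return text
```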

I wanted my own IRC user to “say” the output, and that part was very easy because I use Bip, an IRC proxy which supports multiple clients on one IRC server connection. This script was just going to be another client, and that is why I added password authentication. Bip is available in Debian and is very handy : I usually have an IRC client at home, one in the office, occasionally a CGI-IRC, rarely a mobile client and now this script – and to the dwellers of my favorite IRC channel there is no way to tell which one is talking. And whichever client I choose, I never miss anything thanks to logging and replay on login. Screen with a command-line IRC client provides part of this functionality, but the zero-maintenance Bip does so much more and is so reliable that one has to wonder if my friends cling to Irssi and Screen out of sheer traditionalism.

All that remained to do was to launch the script in a sane way. To control this sort of simple, permanently running piece of code and keep it from misbehaving, Daemon is a good tool. Available in Debian, Daemon proved its worth when the RSS file went missing during the Identi.ca upgrade and the script, lacking exception catching, crashed every time it tried to access it. Had I simply put it in an infinite loop, it would have hogged significant resources just by running in circles like a headless chicken. Daemon not only restarted it after each crash, but also killed it after a set number of retries within a set duration – thus preventing any interference with the rest of what runs on our server. Here is the Daemon launch command that I have used :

#!/bin/bash
# Supervise the bot : respawn it after a crash, but give up after too
# many rapid failures so a broken feed cannot hog the server.
path=/usr/local/bin/laconica2IRC
daemon -a 16 -L 16 -M 3 -D $path -N -n laconica2IRC_JML -r -O $path/laconica2IRC.log -o $path/laconica2IRC.log $path/laconica2IRC.pl

And that’s it… Less cut and paste from Identi.ca to my favorite IRC channel, and my IRC friends who have not yet adopted microblogging don’t feel left out of my updates anymore. And I can still jump into IRC from time to time for a real time chat. I have the best of both worlds – what more could I ask ?

Sounds good to you ? Grab the laconica2IRC.pl script !

Code and Knowledge management and Social networking and Technology29 Jan 2009 at 15:46 by Jean-Marc Liotier

I sometimes get requests for help. Often they are smart questions and I’m actually somewhat relevant to them – for example questions about a script or an article that I wrote, or an experience I had. But sometimes it is not the case. This message I received today is particularly bad, so I thought it might be a public service to share it as an example of what not to do. This one is especially appalling because it comes not from some wet-behind-the-ears teenager to whom I would gracefully have issued a few hints and a gentle reminder of online manners, but from the inside of the corporate network of Wipro – a company that has a reputation as a global IT services organization.

From: xxxxxx.kumar@wipro.com
Subject: Perl
To: jim@liotier.org
Date: Thu, 29 Jan 2009 16:22:32 +0530

Hi Jim,

Could you please help me in finding out the solution for my problem. Iam new to perl i have tried all the options whatever i learned but couldn’t solve. Please revert me if you know the solution.

Here is the problem follows:

Below is the XML in which you could see the lines with AssemblyVersion and Version in each record i need to modify these values depending on some values which i get from perforce. Assuming hardcode values as of now need to change those values upon user wish using Perl. Upon changing these lines it should effect in existing file .

<FileCopyInfo>
<Entries>
<Entry>
<DeviceName>sss</DeviceName>
<ModuleName>general1</ModuleName>
<AssemblyVersion>9</AssemblyVersion>
<Language>default</Language>
<Version>9</Version>
<DisplayName>Speech – eneral</DisplayName>
<UpdateOnlyExisting>false</UpdateOnlyExisting>
</Entry>
<Entry>
<DeviceName>sss</DeviceName>
<ModuleName>general2</ModuleName>
<AssemblyVersion>9</AssemblyVersion>
<Language>default</Language>
<Version>9</Version>
<DisplayName>Speech – recog de_DE</DisplayName>
<UpdateOnlyExisting>false</UpdateOnlyExisting>
</Entry>
</Entries>
</FileCopyInfo>

Thanks & Regards,
Xxxxxxx

From what I gather from the convoluted use of approximative English, the problem is about changing the value of two XML elements in a file. Can anyone believe that this guy has even tried to solve this simple problem on his own ? It is even sadder that he tries to obtain answers by spamming random strangers by mail, soliciting answers that will never be shared with the wider world. The least he could have done was to post his message on a Perl forum so that others with similar questions could benefit from the eventual answer.

Had he performed even a cursory Google search, he would have found that one of his compatriots has done exactly that and gotten three different answers to a similar question, letting him choose between XML::Twig, XML::Rules and XML::Simple. These are just three – but the Perl XML FAQ enumerates at least a dozen CPAN modules for manipulating XML data. The documentation for any of them or the examples in the FAQ would also have put him on the track to a solution.
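For the record, the task itself is a handful of lines in any of those modules. The asker wanted Perl, where XML::Twig would read much the same, but a comparable sketch with Python's standard xml.etree.ElementTree shows how little effort was actually required (function name and hardcoded values are mine) :

```python
import xml.etree.ElementTree as ET

def set_versions(xml_path, assembly_version, version):
    """Rewrite AssemblyVersion and Version in every <Entry>, in place."""
    tree = ET.parse(xml_path)
    for entry in tree.getroot().iter("Entry"):
        entry.find("AssemblyVersion").text = str(assembly_version)
        entry.find("Version").text = str(version)
    tree.write(xml_path)  # overwrite the existing file, as the mail requested
```

The new values would come from Perforce or from user input instead of being hardcoded, but that part is plumbing.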

Everyone can be clueless about something, and learning is a fundamental activity throughout our whole lives. But everyone can do some research, read the FAQ, ask smart questions and make sure that the whole community benefits from their learning process, especially as it does not cost any additional effort. Knowledge capitalization within a community of practice is such an easy process with benefits for everyone involved that I don’t understand why it is not a universally drilled reflex.

The funny part is that while I’m ranting about it and wielding the cluebat over the head of some random interloper, I realize that the same sort of behavior is standard internally in a very large company I know very well, because a repository of community knowledge has not even been made available for those willing to share. Is there any online community without a wiki and a forum ?

Ten years ago I was beginning to believe that consulting opportunities in knowledge management were drying up because knowledge management skills had entered the mainstream and percolated everywhere. I could not have been more wrong : ten years of awesome technological progress have proved beyond reasonable doubt that technology and tools are a peripheral issue : knowledge management is about people and their attitudes; it is about cooperation. This was the introduction of my graduation paper ten years ago, with the prisoner’s dilemma illustrating cooperation issues – and it is today still as valid as ever.

Arts and Brain dump and Knowledge management and Methodology and Social networking and The Web23 Jan 2009 at 14:43 by Jean-Marc Liotier

Amanda Mooney remarks that :

It’s hard to maintain the illusion that you’re particularly special, talented and original when, with a quick Google of whatever genius idea you’ve come up with, you see that 3 billion people have already thought that, done that, analyzed that, criticized that, indexed the history of that in Wikipedia and made a fortune on that… In 1995.

So now, to really live up to our parents’ and teachers’ praise, we have to work a lot harder, be a lot smarter and know that we’re competing with all of those other 3 billion people who think like us and have already started to act on the kind of ideas and “talent” we have.

Actually it was always like that, but slower and invisible. Original ideas are few because similar inputs through similar individuals generate similar outputs – the same problems with the same environment and the same tools handled by people who share backgrounds produce the same conclusions. So it is not surprising that concepts are invented simultaneously and reinvented all the time. I don’t feel belittled by finding out that I’m not unique – on the contrary : I feel empowered by finding that I’m not isolated anymore. I remember lounging in libraries in my youth, reading esoteric technical books chosen at random. I often resented not being able to share that with people who have similar interests. Now we can find each other easily and all be surfing together at the wavefront. Childhood dreams came true – life is good !

But if you anguish about being a unique snowflake just like all the other unique snowflakes, there is still hope for you. Our mental agility and cultural malleability suffer from rather heavy inertia, so the processing stage is not readily manipulable. That leaves only the input to be tinkered with in the short term – and you can play with inputs a lot ! This is why it is important to cultivate diversity in your social network, and it is also why adding some noise into your web feeds is good for you. Who is not addicted to new stimuli ?

Email and Knowledge management and RSS and Social networking and Systems administration and The Web and Unix27 Nov 2008 at 13:17 by Jean-Marc Liotier

Have you tried one more time to convince your parents to switch to web feeds to get updates from the family ? Do you cringe when you see your colleague clumsily wade through a collection of sites’ main pages instead of having them aggregated in a single feed ? Or did your technophobe girlfriend miss the latest photo album you posted ? With a wide variety of sources acknowledging that web feeds and web feed readers are perceived as too technical, many of us have scaled back this particular evangelization effort to focus it on users ripe for transitioning from basic to advanced tools.

Breaking through that resistance outright is beyond our power, but we can get around it. Electronic mail is a mature tool with well understood use cases, with which even the least competent users feel comfortable thanks to how easily it maps onto the deeply assimilated physical mail model. This is why Louis Gray has started mailing Google Reader items to promote the use of that web feed reader. But we can do better than that by building a fully automated bridge from web feed to email.

Our hope for plugging the late adopters into the information feeds is named rss2email. As its name suggests, Aaron Swartz’s GPL-licensed rss2email utility converts RSS subscriptions into email messages and sends them to whatever address you specify. Despite the name, it handles Atom feeds as well, so you should be able to use it with just about any feed you like. And of course rss2email is available from Debian.
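The bridge is conceptually tiny : take a feed item, wrap it in a mail message, send it. A hypothetical Python sketch of the item-to-message step (rss2email itself is far more complete – deduplication, HTML handling, Atom support – and the names here are mine) :

```python
import xml.etree.ElementTree as ET
from email.message import EmailMessage

def item_to_email(item, sender, recipient):
    """Turn one RSS <item> element into a ready-to-send email message."""
    msg = EmailMessage()
    msg["From"] = sender
    msg["To"] = recipient
    # The item title becomes the subject line the late adopter will see.
    msg["Subject"] = item.findtext("title", default="(untitled)")
    link = item.findtext("link", default="")
    body = item.findtext("description", default="")
    msg.set_content(f"{body}\n\n{link}".strip())
    return msg
```

Each message would then go out through an SMTP connection, e.g. smtplib.SMTP(...).send_message(msg), and land in a plain old inbox.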

The nice introduction to rss2email by Joe ‘Zonker’ Brockmeier is all the documentation you need – and rss2email is so simple that you probably don’t even need that. I now have some of my favorite late adopters each plugged into his custom subset of my regular information distribution feeds. The relevant news stories get mailed to them without me having to even think about it. And the best part is that they now read them !

Next Page »