Systems archived articles


Networking & telecommunications and Radio and Systems and Technology – 04 Feb 2015 at 23:24 by Jean-Marc Liotier

I was so happy with my pair of Baofeng UV-B6 that I decided to buy four more to entirely replace my fleet of even cheaper Lidl Silvercrest TwinTalker PMR transceivers, whose horrendous attrition rate hints at excessive cheapness.

Alas, as I was using the beloved CHIRP to load them with the family’s standard configuration, I encountered the dreaded ‘Radio did not Ack Programming Mode‘ error.

I was using the USB serial adapter with ID 067b:2303 “Prolific Technology, Inc. PL2303 Serial Port” with of course the Baofeng/Kenwood/etc. specific twin 2.5mm/3.5mm plug.
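Before blaming the radio, it is worth checking which chip a given cable actually carries. A minimal sketch on a Linux host, using the USB IDs I observed (your cable may of course differ) :

# List USB devices and spot the serial adapter's vendor:product ID
lsusb | grep -i -E '067b:2303|0403:6001|prolific|future technology'

# Or check which kernel driver claimed the adapter when it was plugged in
dmesg | grep -i -E 'pl2303|ftdi_sio'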

Some of those Baofeng UV-B6 worked fine with this cable – those are the UV-B6 with 29 menu entries (with serial numbers 10B6014828 and 10B6014839).

But others were entirely recalcitrant, with a consistent error pattern – those are UV-B6 with 27 menu entries (with serial numbers 10B6025976, 10B6025999, 10B6026018 and 10B6026047).

As suggested by Miklor, I slightly trimmed the plug with a cutter – no change.

I also used a couple of male/female extension cords (2.5 mm and 3.5 mm) in case removing the constraint of the twin plastic molding would allow unimpeded contact – no change either.

I bought two other cables – they both turned out to also be PL2303 serial adapters with the same USB ID (but with different plastic moldings – and of course different commercial names). Still no change – same frustrating results.

My last hope was to get this cable, which turned out to have USB ID 0403:6001 “Future Technology Devices International, Ltd FT232 USB-Serial (UART) IC”. The ‘Genuine’ qualifier in its name and the photocopied sheet that attempted to pass for documentation by merely pointing to Miklor were par for the course and did not inspire me to expect anything different… But actually – it worked ! This is the legendary ‘FTDI’ cable I was reading about, the real thing, the one that works with all Baofeng UV-B6 sub-models. Were I not a militant atheist, I would certainly consider this a proof of God’s greatness – الله أكبر and all that sort of thing !

TL;DR :
– Cables with an FTDI chip work with both the 27 and 29 menu entry UV-B6 submodels
– Cables with a PL2303 chip only work with the 29 menu entry UV-B6

There is still a non-zero probability that all the PL2303 chips I went through were counterfeit and that only the FTDI model was genuine – thus voiding my analysis. But with a sample of three PL2303-based cables from three different vendors, that probability is low enough for me to publish this article. A driver issue is not entirely impossible either – I have only tested with Linux, where both PL2303 and FTDI drivers are part of the standard kernel.

By the way, how does one manage a mixed fleet of 27 and 29 menu entry UV-B6 submodels with CHIRP ? Well – easy:

– If you upload an image originally downloaded from a 29 menu entry submodel to a 27 menu entry submodel, CHIRP will give you the following error message: “An error has occurred – Radio NAK’d block at address 0x0f10”. You can disregard this message as it only concerns the non-existent menu items – the rest of the configuration has been perfectly transmitted.

– If you upload an image originally downloaded from a 27 menu entry submodel to a 29 menu entry submodel, no error occurs – but companding will be disabled. No problem.

Now I would be grateful if someone could explain the interoperability of the companding feature – is it still useful if it is active in only one of the two transceivers involved in a given transmission ?

Uhhh… Anyone wants three PL2303-based cables ? I’ll sell them real cheap !

Edit: I also used the FTDI cable successfully with the Pofung UV-B5.

Design and Mobile computing and Networking & telecommunications and Systems and Technology – 19 Nov 2010 at 16:32 by Jean-Marc Liotier

In France, at least two mobile network operators out of three (I won’t tell you which ones) have relied on the Cell ID alone to identify cells… A mistake, because contrary to what the “Cell ID” moniker suggests, it cannot identify a cell on its own.

A cell is only fully identified by combining the Cell ID with the Location Area Identity (LAI). The LAI is an aggregation of the Mobile Country Code (MCC), the Mobile Network Code (MNC – which identifies the PLMN in that country) and the Location Area Code (LAC – which identifies the Location Area within the PLMN). The whole aggregate is called the Cell Global Identification (CGI) – a rarely encountered term, but this GNU Radio GSM architecture document mentions it with details.
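To make that structure concrete, here is an illustrative (and entirely hypothetical, apart from 208 being the French MCC) example of a fully qualified cell identity :

CGI = MCC | MNC | LAC    | CI
    = 208 | 01  | 0x1A2B | 0x03E8

Two cells may share the same 16 bit Cell ID (CI) as long as they live in different Location Areas – which is exactly the distinction that a Cell ID alone cannot make.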

Since operators run their networks in their own context, they can consider that the MCC and MNC are superfluous. And since the GSM and 3G specifications define the Cell ID as a 16 bit identifier, operators believed they had plenty of room for all the cells they could imagine, even taking multiple sectors into account – but that was many years ago. Even nowadays there are not that many cells in a French GSM network, but the growth in the number of bearer channels was not foreseen and each of them requires a different Cell ID – which multiplies the number of identifiers needed by the number of bearer channels.

So all those who, in the beginnings of GSM and in the prehistory of 3GPP, decided that 65536 identifiers ought to be enough for everyone are now fixing their information systems in a hurry as they run out of available identifiers – not something anyone likes to do on a large critical production infrastructure.

Manufacturers and operators are together responsible for that, but alas this is just one occurrence of common shortsightedness in information systems design. Choosing unique identifiers is a basic modeling task that happens early in the life of a design – but it is a critical one. Here is what Wikipedia says about unique identifiers :

“With reference to a given (possibly implicit) set of objects, a unique identifier (UID) is any identifier which is guaranteed to be unique among all identifiers used for those objects and for a specific purpose.”

The “specific purpose” clause could be interpreted as exonerating the culprits from responsibility : given their knowledge at the time, the use of the Cell ID alone was reasonable for their specific purpose. But they sinned by not making the unique identifier as unique as it possibly could be. And even worse, they sinned by not following the full extent of the specification.

But I won’t be the one casting the first stone – hindsight is 20/20 and I doubt that any of us would have done better.

But still… Remember kids : make unique identifiers as unique as possible and follow the specifications !

Networking & telecommunications and Systems and Technology – 25 Sep 2010 at 10:50 by Jean-Marc Liotier

If you can read French and if you are interested in networking technologies, then you must read Stephane Bortzmeyer’s blog – interesting stuff in every single article. Needless to say I’m a fan.

Stéphane commented on an article by Nokia people : « An Experimental Study of Home Gateway Characteristics » – it presents the results of networking performance tests on 34 residential Internet access CPE. For a condensed and more clearly illustrated version, you’ll appreciate the slides of « An Experimental Study of Home Gateway Characteristics » presented at the IETF 78th meeting.

The study shows bad performance and murky non-compliance issues on every device tested. The whole thing was not really surprising, but it still sounded rather depressing to me.

But my knowledge of those devices is mostly from the point of view of a user and from the point of view of an information systems project manager within various ISPs. I don’t have the depth of knowledge required for a critical look at this Nokia study. So I turned to a friendly industry expert who shall remain anonymous – here is his opinion :

[The study] isn’t really scientific enough testing IMHO. Surely most routers aren’t high performance due to cost reasons, and most DSL users (Telco environments) don’t have more than 8 Mbit/s (24 Mbit/s is max).

[Nokia] should check with real high-end/flagship routers such as the Linksys E3000. Other issues are common NAT issues or related settings, or use of the box’s DNS proxy. Also, no real testing method is explained here, so useless IMHO. Our test plan has more than 500 pages with full descriptions and failure judgment… :)

So take « An Experimental Study of Home Gateway Characteristics » with a big grain of salt. Nevertheless, in spite of its faults I’m glad that such studies are conducted – anything that can prod the consumer market into raising its game is a good thing !

Experimental study on 34 residential CPE by Nokia: http://j.mp/abqdf6 – Bad performance and murky non-compliance all over

Design and Systems and Technology – 14 Apr 2010 at 11:04 by Jean-Marc Liotier

A colleague asked me about acceptable response times for the graphical user interface of a web application. I was surprised to find that both the Gnome Human Interface Guidelines and the Java Look and Feel Design Guidelines provide exactly the same values and even the same text for the most part… One of them must have borrowed the other’s guidelines. I suspect that the ultimate source of their agreement is Jakob Nielsen’s advice :

0.1 second is about the limit for having the user feel that the system is reacting instantaneously, meaning that no special feedback is necessary except to display the result.

1.0 second is about the limit for the user’s flow of thought to stay uninterrupted, even though the user will notice the delay. Normally, no special feedback is necessary during delays of more than 0.1 but less than 1.0 second, but the user does lose the feeling of operating directly on the data.

10 seconds is about the limit for keeping the user’s attention focused on the dialogue. For longer delays, users will want to perform other tasks while waiting for the computer to finish, so they should be given feedback indicating when the computer expects to be done. Feedback during the delay is especially important if the response time is likely to be highly variable, since users will then not know what to expect.

Jakob cites Miller’s “Response time in man-computer conversational transactions” – a paper that dates back to 1968. It seems like in more than forty years the consensus about acceptable response times has not moved substantially – which could be explained by the numbers being determined by human nature, independently of technology.

But still, I am rattled by such unquestioned consensus – the absence of dissenting voices could be interpreted as a sign of methodological complacency.

Code and Design and Systems and Technology – 13 Apr 2010 at 16:27 by Jean-Marc Liotier

Following a link from @Bortzmeyer, I was leafing through Felix von Leitner’s “Source Code Optimization” – a presentation demonstrating how unreadable code is rarely worth the hassle considering how good compilers have become at optimizing nowadays. I have never written a single line of C or Assembler in my whole life – but I like to keep an understanding of what is going on at low level so I sometimes indulge in code tourism.

I got the author’s point, though I must admit that the details of his demonstration flew over my head. But I found the memory access timings table particularly evocative :

Access                              Cost
Page fault, file on IDE disk        1,000,000,000 cycles
Page fault, file in buffer cache    10,000 cycles
Page fault, file on RAM disk        5,000 cycles
Page fault, zero page               3,000 cycles
Main memory access                  200 cycles (Intel says 159)
L3 cache hit                        52 cycles (Intel says 36)
L1 cache hit                        2 cycles

Of course you know that swapping causes a huge performance hit and you have seen the benchmarks where throughput is reduced to a trickle as soon as the disk is involved. But still I find that quantifying the number of cycles wasted illustrates the point even better. Now you know why programmers insist on keeping memory usage tight.
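If you want to feel the buffer cache line of that table for yourself, here is a crude illustration you can run on any Linux box – the file name is hypothetical and dropping the caches requires root :

# Flush the page cache so the next read really hits the disk (run as root)
sync && echo 3 > /proc/sys/vm/drop_caches

# Cold read : the file comes from the disk
time cat /var/tmp/bigfile > /dev/null

# Warm read : the same file now comes from the buffer cache
time cat /var/tmp/bigfile > /dev/null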

Email and Knowledge management and Systems – 09 Apr 2010 at 14:41 by Jean-Marc Liotier

In the digital world, the folder metaphor has perpetuated the single-dimensional limitations of the physical world : each message is present in one and only one folder. The problem of adding more dimensions to the classification was solved ages ago – whether you want to go hardcore with a full thesaurus or just use your little folksonomy, the required technical foundation is the same : tags, labels – or whatever you want to call your multiple index keys.

Noticing how Google has been successful with email tagging, I started exploring free implementations four years ago, aiming to complement my folder hierarchy with tags. But inertia affects us all, and I never actually went beyond happy experimentation. As Zoli Erdos puts it in his eloquent description of Google’s 2009 breakthrough in ending the tags vs. folders war :

Those who “just can’t live without folders”, mostly legacy users of Yahoo Mail, Hotmail and mostly Outlook. They are used to folders and won’t learn new concepts, don’t want to change, but are happy spending their life “organizing stuff” and even feel productive doing so.

Ouch – that hits a little too close to home. And even if I had gone forward with tags, that would have been pussyfooting : as Google illustrates, the distinction between tags and folders is a technical one – from a user point of view it should be abstracted. Of course the abstraction is somewhat leaky if you consider what folders mean for local storage, but in the clouds you can get away with that.

For cranky folder-addicted users like myself, born to the Internet with early versions of Eudora, later sucking at the Microsoft tit and nowadays a major fan of Mozilla Thunderbird, there is definitely a user interface habit involved – and one that is hard to break after all those years. It is not about the graphics – I use Mutt regularly; it is about the warm fuzzy feeling of believing that the folder is a direct and unabstracted representation of something tangible on a file system.

Software objects are all abstractions anyway but, with time, the familiar abstraction becomes something that feels tangible. This is why, while I acknowledge the need to get the tagging goodness, I still have a desire for the good old folder look and feel, with features such as drag-and-drop and hierarchies. Google seems to know that audience : all those features are now part of Gmail. Now try telling the difference between folders and labels !

To make a smooth transition, I want Mozilla Thunderbird to display tags as folders. It looks like it is possible using Gmail with Claws through IMAP. I have yet to learn if I can do that on my own systems using Courier or Dovecot.

Consumption and Free software and Knowledge management and Mobile computing and Networking & telecommunications and Systems and Technology and Unix – 19 Oct 2009 at 1:18 by Jean-Marc Liotier

Five months have elapsed since that first week-end when my encounter with Android was a severe case of culture shock. With significant daily experience of the device, I can now form a more mature judgement of its capabilities and its potential – of course from my own highly subjective point of view.

I still hate having to use Google Calendar and Google Contacts for synchronization.  I hope that SyncML synchronization will appear in the future, make Android a better desktop citizen and provide more choice of end points. Meanwhile I use Google. With that out of the way, let’s move on to my impressions of Android itself.

I am grateful for features such as a decent web browser on a mobile device, for a working albeit half baked packaging and distribution system, and for Google Maps which I consider both a superlative application in its own right and the current killer albeit proprietary infrastructure for location enabled applications. But the rigidly simple interface that forces behaviours upon its user feels like a straitjacket : the overbearing feeling when using Android is that its designers have decided that simplicity is to be preserved at all costs regardless of what the user prefers.

Why can’t I select a smaller font for my list items ? Would a parameter somewhere in a customization menu add too much complication ? Why won’t you show me the raw configuration data ? Is it absolutely necessary to arbitrarily limit the number of virtual desktops to three ? From the point of view of a user who is just getting acquainted with such a powerful platform, those are puzzling questions.

I still don’t like Android’s logic, and moreover I still don’t quite understand it. Of course I manage to use that system, but after five months of daily use it still does not feel natural. Maybe it is just a skin-deep issue or maybe I am just not the target audience – but some features are definitely backwards – package management for example. For starters, the “My Downloads” list is not ordered alphabetically nor in any apparently meaningful order. Then for each upgradeable package, one must first browse to the package, then manually trigger the upgrade, then acknowledge the upgraded package’s system privileges and finally clear the download notification and the update notification. Is this a joke ? This almost matches the tediousness of upgrading Windows software – an impressive feat considering that the foundations of Android package management seem serious enough. Where is my APT ?

Like any new user on a prosperous enough system, I am lost in choices – but that is an embarrassment of riches. Nevertheless, I wonder why basics such as a task manager are not installed by default. In classic Unix spirit, even the most basic system utilities are independent applications. But what is bearable and even satisfying on a system with a decent shell and package management with dependencies becomes torture when installing a package is so clumsy and upgrading it so tedious.

Tediousness in package management in particular and user interaction in general makes taming the beast an experience in frustration. When installing a bunch of competing applications and testing them takes so much time and effort, experimenting is not the pleasure it normally is on a Linux system. The lack of decent text entry compounds the feeling. Clumsy text selection makes cut and paste a significant effort – something Palm made quick, easy and painless more than ten years ago. Not implementing pointer-driven selection – what were the developers thinking ?

PIM integration has not progressed much. For a given contact, there is no way to look at a communications log that spans mail, SMS and telephony: each of them is its own separate universe. There is no way to have a list of meetings with a given contact or at given location.

But basic functionality has been omitted too. For example, when adding a phone number to an existing contact, search is disabled – you have to scroll all the way to the contact. There is no way to search the SMS archive, and SMS to multiple recipients is an exercise left to applications.

Palm OS may have been unstable, incapable of contemporary operating system features, offering only basic functionality and generally way past its shelf date. But in the mind of users, it remains the benchmark against which all PIM systems are judged. And to this day I still don’t see anything beating Palm OS on its home turf of  PIM core features and basic usability.

Palm OS was a poster child for responsiveness, but on the Android everything takes time – even after I have identified and killed the various errant applications that make it even slower. Actually, the system is very fast and capable of feats such as full-motion video that were far beyond the reach of the Palm OS. But the interaction is spoilt by gratuitous use of animations for everything. Animations are useful for graphically hinting to the novice user about what is going on – but beyond that they are only a drag. So please let me disable animations as I do on every desktop I use !

The choice of a virtual keyboard is my own mistake and I am now aware that I need a physical keyboard. After five months, I can now use the virtual keyboard with enough speed and precision for comfortable entry of a couple of sentences. But beyond that it is tiring and feels too clumsy for any meaningful work. This is a major problem for me – text entry is my daily bread and butter. I long for the Treo‘s keyboard or even the one on the Nokia E71 – they offered a great compromise between typing speed and compactness. And no multitouch on the soft keyboard means no keyboard shortcuts, which renders many console applications unusable – sorry Emacs users.

The application offering is still young and I cannot blame it for needing time to expand and mature. I also still need to familiarize myself with Android culture and develop the right habits to find my way instinctively and be more productive. After five months, we are getting there – one-handed navigation has been done right. But I still believe that a large part of the user interface conventions used on Android do not match the expectations of general computing.

It seems like everything has been meticulously designed to bury under a thick layer of Dalvik and Google plaster anything that could remind anyone of Unix. It is very frustrating to know that there is a Linux kernel under all that, and yet to suffer wading knee-deep in the marshes of toyland. The more I use Android and study it, the more I feel that Linux is a mere hardware abstraction layer and the POSIX world a distant memory. This is not the droid I’m looking for.

Code and Debian and Free software and Knowledge management and RSS and Social networking and Systems and Unix – 18 May 2009 at 12:15 by Jean-Marc Liotier

If you want to skip the making-of story, you can go straight to the laconica2IRC.pl script download. Or in case anyone is interested, here is the why and how…

Some of my best friends are die-hard IRC users that make a point of not touching anything remotely looking like a social networking web site, especially if anyone has ever hinted that it could be tagged as “Web 2.0” (whatever that means). As much as I enjoy hanging out with them in our favorite IRC channel, conversations there are sporadic. Most of the time, that club house increasingly looks like an asynchronous forum for short updates posted infrequently on a synchronous medium… Did I just describe microblogging ? Indeed it is a very similar use case, if not the same. And I don’t want to choose between talking to my close accomplices and opening up to the wider world. So I still want to hang out in IRC for a nice chat from time to time, but while I’m out broadcasting dents I want my paranoid autistic friends to get them too. To satisfy that need, I need to have my IRC voice say my dents on the old boys channel.

The data source could be an OpenMicroblogging endpoint, but being lazy I found a far easier solution : use Laconi.ca‘s Web feeds. Such a solution looked easier because there are already heaps of code out there for consuming Web feeds, and it was highly likely that I would find some I could bend into doing my bidding.

To talk on IRC, I had previously had the opportunity to peruse the Net::IRC library with great satisfaction – so it was an obvious choice. In addition, in spite of being quite incompetent with it, I appreciate Perl and I was looking for an excuse to hack something with it.

With knowledge of the input, the output and the technology I wanted to use, I could start implementing. Being lazy and incompetent, I of course turned to Google to provide me with reusable code that would spare me building the script from the ground up. My laziness was of course quick to be rewarded as I found rssbot.pl by Peter Baudis in the public domain. That script fetches a RSS feed and says the new items in an IRC channel. It was very close to what I wanted to do, and it had no exotic dependancies – only Net::IRC library (alias libnet-irc-perl in Debian) and XML::RSS (alias libxml-rss-perl in Debian).

So I set upon hacking this script into the shape I wanted. I added IRC password authentication (courtesy of Net::IRC), I commented out a string sanitation loop which I did not understand and whose presence caused the script to malfunction, I pruned out the Laconi.ca user name and extraneous punctuation to have my IRC user “say” my own Identi.ca entries just as if I was typing them myself, and after a few months of testing I finally added an option for @replies filtering so that my IRC buddies are not annoyed by the noise of remote conversations.

I wanted my own IRC user to “say” the output, and that part was very easy because I use Bip, an IRC proxy which supports multiple clients on one IRC server connection. This script was just going to be another client, and that is why I added password authentication. Bip is available in Debian and is very handy : I usually have an IRC client at home, one in the office, occasionally a CGI-IRC, rarely a mobile client and now this script – and to the dwellers of my favorite IRC channel there is no way to tell which one is talking. And whichever client I choose, I never miss anything thanks to logging and replay on login. Screen with a command-line IRC client provides part of this functionality, but the zero-maintenance Bip does so much more and is so reliable that one has to wonder if my friends cling to Irssi and Screen out of sheer traditionalism.

All that remained was to launch the script in a sane way. To control this sort of simple, permanently executed piece of code and keep it from misbehaving, Daemon is a good tool. Available in Debian, Daemon proved its worth when the RSS file went missing during the Identi.ca upgrade and the script crashed every time it tried to access it, for lack of exception catching. Had I simply put it in an infinite loop, it would have hogged significant resources just by running in circles like a headless chicken. Daemon not only restarted it after each crash, but also killed it after a set number of retries in a set duration – thus preventing any interference with the rest of what runs on our server. Here is the Daemon launch command that I have used :

#!/bin/bash
path=/usr/local/bin/laconica2IRC
daemon -a 16 -L 16 -M 3 -D $path -N -n laconica2IRC_JML -r -O $path/laconica2IRC.log -o $path/laconica2IRC.log $path/laconica2IRC.pl

And that’s it… Less cut and paste from Identi.ca to my favorite IRC channel, and my IRC friends who have not yet adopted microblogging don’t feel left out of my updates anymore. And I can still jump into IRC from time to time for a real time chat. I have the best of both worlds – what more could I ask ?

Sounds good to you ? Grab the laconica2IRC.pl script !

Design and Security and Systems and Technology – 09 Jun 2008 at 13:35 by Jean-Marc Liotier

Who these days has not witnessed the embarrassing failure modes of Microsoft Windows ? Blue screens of all hues and an assortment of badged dialog boxes make each crash into a very public display of incompetence.

I will not argue that Windows is more prone to failure than other operating systems – that potential war of religion is best left alone. What I am arguing is that failure modes should be graceful, or at least more discreet.

A black screen is neutral : the service is not delivered, but at least the most trafficked billboard in town is not hammering everyone with a random pseudo-technical message that actually means “my owners are clueless morons”.

Even better than a black screen : a low level routine that in case of system failure may display something harmless. Anything but an error message.

With so many information screens in the transportation industry, automated teller machines of all sorts and a growing number of advertising screens on roadsides, a properly and specifically configured system is necessary. What about “Microsoft Windows – Public Display Edition” ? Of course, users of Free Software don’t have to wait for a stubborn vendor to understand the problems its customers are facing.

When the stakes are high enough, the costs of not managing risk through graceful degradation cannot be ignored. But let’s not underestimate the power of user inertia…

Brain dump and Systems – 04 May 2008 at 15:23 by Jean-Marc Liotier

The openMosix Project has officially closed as of March 1st 2008. This brings back nostalgia for the toy OpenMosix cluster I once had running for a few years, assembled from the ailing collection of dusty hardware heating my apartment and infrequently put to productive use for large batch jobs. Soon I found that a single less ancient machine could perform about as fast if not faster for less electricity, and batch jobs being what they are, I could just as well let them run during my sleep. But in an age when I had more time than money (I now have neither…) and when compression jobs were measured in hours, OpenMosix was a fun and useful patch for which I foresaw a bright future.

A few years later, the efficient scheduler in recent Linux releases lets me load my workstation to high values with barely any consequence for interactive tasks, so I don’t really feel like I’m starved for processing power. But I still spend too much time staring at progress bars when editing photos, so more available CPU could definitely speed up my workflow. This is why I look longingly at the servers in the corridor that spend most of their lives at fractional loads while the workstation is struggling. Manual load balancing by executing heavy tasks on remote hosts is a bit of a chore, so I go browsing for single-system image clustering news, wondering why easily pooling local system resources is not yet a standard feature of modern operating systems.

One of the major obstacles to the generalization of SSI clustering outside of dedicated systems is that software such as OpenMosix or Kerrighed requires a homogeneous environment : you can’t just mix whatever hosts happen to live on your LAN. For most users, homogenizing their systems on one Linux kernel version, let alone one type of operating system, is not an option.

But nowadays, virtualization systems such as Xen are common enough that they may represent a viable path to homogenization. So I envision using Xen to extend my workstation to the neighboring hosts. I would run the workstation as a normal “on the metal” host, but on each of the hosts I want to assist the workstation I would run a Xen guest domain running a bare-bones operating system compatible with taking part in a single system image with the workstation. Adding capacity to the cluster would be as simple as copying the Xen guest domain image to an additional host and running it as nice as desired, with no impact on the host apart from the CPU load and allocated memory.
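To make it concrete, here is a minimal sketch of the kind of Xen guest configuration I have in mind for such a helper node – names, paths and sizes are hypothetical, and the SSI layer itself (the patched kernel and its configuration) is left out :

# /etc/xen/ssi-node1.cfg - a bare bones guest whose only job is to lend CPU and RAM
name    = "ssi-node1"
memory  = 512
vcpus   = 1
kernel  = "/boot/vmlinuz-ssi"           # kernel built for the single system image
ramdisk = "/boot/initrd-ssi.img"
disk    = [ "file:/var/lib/xen/ssi-node1.img,xvda,w" ]
vif     = [ "bridge=xenbr0" ]

Starting such a node on any host would then be a matter of copying the image and running “xm create ssi-node1.cfg”.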

This approach looks sane to me on paper, but strangely I can’t find much about it on the web. Is there a hidden caveat ? Did I make an obviously wrong assumption ? Tell me if you have heard of other users extending their workstation with SSI using Xen guest domains on random local hosts. Meanwhile, since OpenMosix is now unsupported, I guess I’ll have to dive into Kerrighed.

Systems – 13 Apr 2008 at 14:30 by Jean-Marc Liotier

Fan failure is a common life-ending event for electronic hardware, and so I sent my three-year-old HIS Radeon 9800 Pro IceQ to the retirement drawer when overheating crashes helped me discover it was not pushing much air anymore, its fan motor having seized.

This was an excellent pretext to acquire a faster graphics adapter. I chose a Sapphire HD 2600 XT AGP 512 DDR3 (reference 100229L) because it currently offers excellent performance for the money at €85, and also because it is one of the few remaining choices for significantly upgrading my aging AGP system. It is such a rarity that I can’t even find a decent review to link to, and the picture shows the 256 MB version, which nevertheless looks exactly the same.

With Linux, all is mostly well : the RadeonHD driver provided me with the basic functionality I need, and I was hopeful it would make me forget the lacking Xorg 3D support with my former card. But alas, for now RadeonHD does not support 3D for graphics adapters with a PCIE to AGP bridge – and that includes the Sapphire HD 2600 XT AGP. Users are ranting about the lack of ATI HD 2600 AGP support, so at least I am not the only one. In that conversation, someone with apparent insider information noted that “Linux support for AGP HD2xxx cards has not yet been released, but is being worked on”. So maybe I’ll have Linux 3D some day…

I then executed the ATI Catalyst installer to upgrade my dusty Windows XP drivers, in case we manage to throw a LAN party for the first time in the months since we all let family and professional duties creep into our schedules. I was faced with this message : “setup did not recognize compatible drivers”. And the installation process would abort.

The Wikipedia entry for the Radeon R600 series mentions this issue :

Note that Catalyst drivers 7.10, 7.11 and 7.12 do not yet support the AGP versions of Radeon HD 2000 series cards with RIALTO bridge. Installing Catalyst drivers 7.10, 7.11 or 7.12 on those cards will yield the following error message: “setup did not find a driver compatible with your current hardware or operating system.” The cards, which are yet to be supported, with their PCI vendor ID are listed below:[46]

GPU core Product PCI device ID
RV610 Radeon HD 2400 Pro 94C4
RV630 Radeon HD 2600 Pro 9587
RV630 Radeon HD 2600 XT 9586

Niiice ! ATI lets manufacturers produce hardware it does not provide drivers for… At least this teaches me that they can even do worse than their proprietary binary drivers.

The solution is to head to Sapphire’s archive of old drivers, which contains the 10th March 2008 release of the “Hotfix Driver for AGP version of ATI RADEON HD 2400Pro/2600Pro/2600XT/HD3850 Windows XP(32-bit)”, which provides the AGP support I needed.

On installation, the system complains about that driver not being “Windows certified”. The lack of that fairy dust does not hinder normal operation the slightest bit, but it does hint that this driver was rushed as a stop-gap.

I was competent enough to sort it out, but this is the sort of problem I would expect from cutting edge hardware, not from a mass market product designed to appeal to the value-for-money segment which is less technically aware than the free spending enthusiast segment. I can imagine many better ways for ATI to show respect toward its users.

PHP and Systems – 19 Oct 2007 at 10:12 by Jean-Marc Liotier

PHP :

  • Sparkline is a PHP library that produces Edward Tufte inspired “intense, simple, wordlike graphics”. I like the way sparklines spruce up text without interrupting its flow.
  • Libchart is a simple PHP charting library that reminds me of the core functionality of Jpgraph. It is simple to deploy and does the basics well.
  • Jpgraph can be used to create numerous types of graphs either on-line or written to a file. The range of functionality is very impressive and new features get added all the time. But basic use remains simple. Jpgraph is used by many Free software projects such as Mantis.
  • PEAR::Image_Graph was formerly known as GraPHPite. It supports a good choice of graph types, five types of data sources and many output formats.
  • Artichow is yet another small PHP charting library. Functionality is limited but it does look clean. The downside is that everything about it is in French… But that may be an upside if you are a French speaker !

Command-line and CGI :

  • Ploticus provides a C and Python API, and a Perl command line that can be called from CGI. It is a mature solution that is no longer on the cutting edge but still satisfies many users.

DHTML and Javascript :

  • Timeplot is a DHTML-based AJAXy widget for plotting time series and overlaying time-based events over them (with the same data formats that Timeline supports). It has limited functionality, but what it does looks very good and is easy to integrate.
  • Plotkit is aimed at web applications that require plotting series of data in modern web browsers. It requires MochiKit and supports HTML Canvas and SVG, which makes it a cutting edge way to render graphics. It supports graphing from a dynamic table.
  • Plotr is a fork of Plotkit with no need for MochiKit. The result is an incredibly lightweight charting framework : only 12 KB !

Multiplatform :

Systems – 17 Oct 2007 at 11:49 by Jean-Marc Liotier

A new job generally means a new computer. In most big old companies, a computer is still synonymous with having to suffer Microsoft Windows. But despair not : a good selection of additional software will make Windows more functional and your workstation experience more bearable.

Here is a list of the ones I set up most of the time. It covers most of the indispensable everyday utilities :

Jxplorer LDAP client
Filezilla FTP client
Xchat IRC client
Notepad++ text editor
Psi Jabber client
Putty SSH client
WinSCP SCP client
Irfanview image viewer
PalmOne Palm Desktop
Virtual Dimension virtual desktop
Winmerge diff and merge utility
7zip archive manager
Mozilla Firefox Web browser
VMware player
Foxit PDF reader
Tortoise SVN client
Thunderbird mail client
Kompozer HTML editor
Unison file synchronization tool
AdAware system cleanser
Gimp image editor
Openoffice suite
GPG4Win
Tora Oracle SQL client

Of course that will not get you anywhere near as far as a half-decent setup of Ubuntu or Debian, and once you have hunted down, downloaded and installed each of those independent packages with no centralized package management, you will have a much better understanding of what super cow powers are all about. But at least it is a start and you can quite comfortably survive with that kit.

As a bonus, here are the few useful Thunderbird extensions that I use all the time :

Attachment Extractor
Headers Toggle
Rewrap Button
Remove Duplicate Messages
Enigmail

Music and Systems – 13 Oct 2007 at 17:24 by Jean-Marc Liotier

This took me a ridiculous chunk of afternoon to solve, and the solution was surprising to me. So I guess a full report will be useful to spare other users the same process…

Symptoms :

  • You mount a share with music files over SMB or CIFS. With a file browser you can navigate the tree, and you can play the files perfectly.
  • You add local music files to your Amarok collection, they appear and Amarok is fully functional.
  • You add the mount point of the network share to your collection. You then update or rescan your collection.
  • At some point during the scan, a notification pops up with the message : “The Collection Scanner was unable to process these files“. Once you acknowledge the notification, the scan halts and no files appear to have been added to the collection. As a bonus, KNotify may crash with signal 11 (SIGSEGV).
  • On the Samba file server, a ridiculously high number of files is opened. So many that on the client, if you try even an ‘ls’ anywhere on the mounted share, you will get a complaint about “too many files opened”. In normal operation, Amarok only opens one file at a time during a scan.
  • Desperate, you try exiting Amarok. It crashes hard on termination and brings down the whole X session along with it.
  • You are pretty pissed off.

In summary, both sides work perfectly fine individually, but trying to get them to work together fails and there are no useful pointers.

Failing to root out the bug and not finding anything obvious on the Web I headed to the Amarok forums. There I quickly found that about each and every thread mentioning Samba ended with a link to the Samba page of the Amarok wiki. I found the content to be basic and apparently completely unrelated with my problem, but reading between the lines I understood the key to the solution…

If you have read and write rights on a share, there are probably no problems any way you put it. But if you only have read rights on the share and mount it read and write, then Amarok is all confused ! That is what was happening to me.

A few days ago, before letting a novice user play music on my workstation, in order to protect the files from harm, I had quickly removed my username from the write list of the music share on the file server. And I had forgotten about that…

So I went back to the faulty /etc/smb.conf and added my username to the “write list” parameter. I reloaded the Samba configuration, launched Amarok, the collection was automatically rescanned and my world was back in harmony.
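For reference, the relevant part of the share definition now looks something like this – share name, path and username are of course specific to my setup and shown here only as an illustration :

[music]
   path = /srv/music
   read only = yes
   # The share stays read-only, but users on this list may write -
   # putting myself back here is what un-confused Amarok
   write list = jim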

Let the music play !

Systems – 17 Aug 2007 at 18:32 by Jean-Marc Liotier

We host applications on a couple dozen domain names with more subdomains than I can count offhand. We have a policy that anything over which passwords transit should be encrypted, so we have plenty of Apache mod_ssl virtual hosts along with TLS or SSL versions of POP, IMAP, SMTP and XMPP. To provide all that as cheaply as possible, we run our own certificate authority and issued our own root certificate. “Certificate authority” is a pretty big word for a bunch of OpenSSL commands, but they do the job fine until we deploy something else to help us. So far, so good.
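For the curious, that bunch of OpenSSL commands boils down to something like the following sketch – file names, key sizes and validity periods are only illustrative :

# Create the root key and a self-signed root certificate - our home-grown authority
openssl genrsa -out rootCA.key 2048
openssl req -new -x509 -key rootCA.key -days 3650 -out rootCA.pem

# For each host : generate a key and a certificate signing request, then sign it with the root
openssl genrsa -out www.example.org.key 2048
openssl req -new -key www.example.org.key -out www.example.org.csr
openssl x509 -req -in www.example.org.csr -CA rootCA.pem -CAkey rootCA.key \
        -CAcreateserial -days 365 -out www.example.org.crt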

Since our root certificate is of course not bundled with any browser or operating system, our users are constantly nagged by their browser and mail client until they store it locally. In addition, with no root authority for other servers to refer to, server to server communication is wide open to man in the middle attacks. So at the moment, our cryptography is about as good as snake oil.

The limitations of the current implementation of HTTPS make it difficult to deploy correctly on the cheap. When a client requests an HTTPS connection, it does not tell the server the name of the host it wants to connect to. So the server has no way to choose a certificate, and this is why there can be only one certificate per IP address. IP addresses being an expensive resource, having one for each virtual host can quickly become prohibitively expensive, at least until IPv6 becomes sufficiently widespread.

With multiple sub-domains, we could use wildcard certificates. They have more risks than benefits and they are not universally supported, but at least they provide a cheap solution. But we host multiple domains, so even that is not the way out for us, nor for the countless wretched sysadmins that share our predicament.

But despair not, wretched sysadmin : your savior has arrived, and its name is Server Name Indication ! SNI is a TLS extension that allows multiple certificates per IP address. Paul Querna has an excellent and easy explanation of what SNI is about – which I reproduce here :

When a client connects to a server using SSL, the server will send the Public Certificate to them. This enables them to actually decrypt the data sent from the server later. Here is a short simplified example:

1. C: (TLS Handshake) Hello, I support XYZ Encryption.
2. S: (TLS Handshake) Hi There, Here is my Public Certificate,
                      and lets use this encryption algorithm.
3. C: (TLS Handshake) Sounds good to me.
4. C: (Encrypted) HTTP Request
5. S: (Encrypted) HTTP Reply

The problem in HTTP is we don’t know which Public Certificate to send, until step 4. This is long after the public certificate has been sent. Protocols such as IMAP and SMTP, which use STARTTLS, have a different pattern:

1. C: (Cleartext) I am using server 'mail.example.com'
2. S: (Cleartext) By The Way, I also support TLS Encryption.
3. C: (Cleartext) Lets use Encryption, aka 'STARTTLS'.
4. C: (TLS Handshake) Hello, I support XYZ Encryption.
5. S: (TLS Handshake) Hi There, Here is my Public Certificate,
                      and lets use this encryption algorithm.
6. C: (TLS Handshake) Sounds good to me.
7. C & S: (Encrypted) Exchange Data

Since the client tells the server which host it is connecting to in step 1, the server can pick the correct certificate in step 5. It is possible to do this in HTTP, using TLS Upgrade. This is slightly more complicated, and presents other security issues. The Server Name Indication approach has a much simpler setup:

1. C: (TLS Handshake) Hello, I support XYZ Encryption, and
                      I am trying to connect to 'site.example.com'.
2. S: (TLS Handshake) Hi There, Here is my Public Certificate,
                      and lets use this encryption algorithm.
3. C: (TLS Handshake) Sounds good to me.
4. C: (Encrypted) HTTP Request
5. S: (Encrypted) HTTP Reply

The only difference is a few extra bytes sent in Step 1. The client passes along which hostname it wants, and the server now has a clue which public certificate to send.

The good people at CAcert are following closely how SNI is supported in major pieces of web infrastructure. To summarize, SNI has been supported in mod_gnutls since 2005, but the ominous warning on the mod_gnutls home page does not make mass deployment likely in the short term : “mod_gnutls is a very new module. If you truely care about making your server secure, do not use this module yet. With time and love, this module can be a viable alternative to mod_ssl, but it is not ready“. But fear not : Apache bug 34607 tracks the development of SNI support for mod_ssl, and it only has to wait for the 0.9.9 release of OpenSSL which is said to include support for SNI. So the future is bright ! Support on the client side is more patchy at the moment, but it will likely improve fast as soon as the servers are available.
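Once that lands, name-based TLS virtual hosting should boil down to configurations along these lines – hostnames and paths are hypothetical, and of course this will only work with an SNI-capable mod_ssl and client :

NameVirtualHost *:443

<VirtualHost *:443>
    ServerName www.example.org
    SSLEngine on
    SSLCertificateFile    /etc/ssl/certs/www.example.org.pem
    SSLCertificateKeyFile /etc/ssl/private/www.example.org.key
</VirtualHost>

<VirtualHost *:443>
    ServerName lists.example.net
    SSLEngine on
    SSLCertificateFile    /etc/ssl/certs/lists.example.net.pem
    SSLCertificateKeyFile /etc/ssl/private/lists.example.net.key
</VirtualHost>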

So when I say that the savior has arrived, I should rather say that it is still under way and taking its time. SNI is described in section 3.1 of RFC3546, which dates from June 2003 ! And Paul’s post is from April 2005 – although at that time SNI was already supported in mod_gnutls. I am surprised that the development of such a liberating feature, so critical to the providers of shared hosting, has been so slow in an essential pillar of infrastructure such as OpenSSL. I am even more surprised that I have not heard of it before – but now I am quite excited about it !

Since CAcert is tracking SNI support, I guess they will eventually offer name based certificates. Count me in !

Code and PHP and Systems – 14 Aug 2007 at 14:35 by Jean-Marc Liotier

Since I began playing with Net_SmartIRC, I found a new way to put that library to work : a Munin plugin script to monitor the number of users in an IRC channel.

Here is an example of the graphical output provided by Munin :

As you can see, the Debian IRC channel is a very crowded place ! You may also notice small gaps in the data : the script sometimes fails on a refused connection, and I have not elucidated the cause. But as the graph shows, I have coded the script so that those failure cases only result in a null output, which Munin handles well by showing a blank record.

Because my lacking skills and crass laziness prevented me from writing it all in a single language, I hacked that plugin by simply patching together the parts I could produce rapidly :

The PHP script uses Net_SmartIRC, which is available in Debian as php-net-smartirc. It must be configured by modifying the hardcoded server and channel – that may not be best practice in production use, but for the moment it works for me. Here is the full extent of the PHP code :

<?php
// Count the users of an IRC channel using Net_SmartIRC
include_once('/usr/share/php/Net/SmartIRC.php');
$irc = &new Net_SmartIRC();
//$irc->setDebug(SMARTIRC_DEBUG_ALL);
$irc->setUseSockets(TRUE);
$irc->setBenchmark(TRUE);
// Connect and identify to the IRC server
$irc->connect('irc.eu.freenode.net', 6667);
$irc->login('usercount', 'Users counting service for Munin monitoring',
'0', 'usercount');
// Request the LIST reply for the monitored channel and wait for it
$irc->getList('#test_channel');
$resultar = $irc->objListenFor(SMARTIRC_TYPE_LIST);
$irc->disconnect();
// The user count is the fifth field of the raw LIST reply
if (is_array($resultar)) {
    echo $resultar[0]->rawmessageex[4];
}
?>

The irc_channel_users Bash script is also quite simple. Apart from the barely modified boilerplate adapted from other simple Munin bash scripts, the specific meat of the script is as follows :

work_directory=/home/jim/applications/munin/irc_channel_users
php_interpreter=`which php`
user_population=`$php_interpreter $work_directory/irc_channel_users.php | awk -F"#" '{print($1)}' | grep -E '^[0-9]+$'`
echo -n "population.value "
echo $user_population

As you can see, the munin bash script is mostly about setting a few Munin variables, calling the php script and formatting the output.
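For completeness, the boilerplate I glossed over is essentially a case statement answering Munin’s autoconf and config calls – reconstructed here from the sample outputs below, so the actual script may differ slightly :

case "$1" in
    autoconf)
        echo yes
        exit 0
        ;;
    config)
        echo 'graph_title #b^2 IRC channel users'
        echo 'graph_args --base 1000 -l 0'
        echo 'graph_vlabel population'
        echo 'graph_scale no'
        echo 'population.label users'
        exit 0
        ;;
esac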

Here are sample outputs :

15:32 munin@kivu /etc/munin/plugins% ./irc_channel_users autoconf
yes

15:32 munin@kivu /etc/munin/plugins% ./irc_channel_users config
graph_title #b^2 IRC channel users
graph_args --base 1000 -l 0
graph_vlabel population
graph_scale no
population.label users

15:32 munin@kivu /etc/munin/plugins% ./irc_channel_users
population.value 6

No demonstration is available on a public site, but the above graph is about all there is to know about the output of this plugin.

The code resides on its own page, and updates, if they ever appear, shall be stored there.

This experience taught me that coding basic Munin plugins is fun and easy. I will certainly come back to it for future automated graphing needs.

And for those who wonder about the new syntax highlighting, it is produced using GeSHi by Ryan McGeary‘s very nice WP-Syntax WordPress plugin.
