Design archived articles


Design and Knowledge management and Politics and Security and Technology – 26 May 2014 at 14:07 by Jean-Marc Liotier

Skimming an entirely unrelated article, I stumbled upon this gem:

Recently, a number of schools have started using a program called CourseSmart, which uses e-book analytics to alert teachers if their students are studying the night before tests, rather than taking a long-haul approach to learning. In addition to test scores, the CourseSmart algorithm assigns each student an “engagement index” which can determine not just if a student is studying, but also if they’re studying properly. In theory, a person could receive a “satisfactory” C grade in a particular class, only to fail on “engagement.”

This immediately reminded me of Neal Stephenson’s 1992 novel Snow Crash, where a government employee’s reading behavior has been thoroughly warped into a simulacrum by a lifetime of overbearing surveillance:

Y.T.’s mom pulls up the new memo, checks the time, and starts reading it. The estimated reading time is 15.62 minutes. Later, when Marietta does her end-of-day statistical roundup, sitting in her private office at 9:00 P.M., she will see the name of each employee and next to it, the amount of time spent reading this memo, and her reaction, based on the time spent, will go something like this:
– Less than 10 min.: Time for an employee conference and possible attitude counseling.
– 10-14 min.: Keep an eye on this employee; may be developing slipshod attitude.
– 14-15.61 min.: Employee is an efficient worker, may sometimes miss important details.
– Exactly 15.62 min.: Smartass. Needs attitude counseling.
– 15.63-16 min.: Asswipe. Not to be trusted.
– 16-18 min.: Employee is a methodical worker, may sometimes get hung up on minor details.
– More than 18 min.: Check the security videotape, see just what this employee was up to (e.g., possible unauthorized restroom break).

Y.T.’s mom decides to spend between fourteen and fifteen minutes reading the memo. It’s better for younger workers to spend too long, to show that they’re careful, not cocky. It’s better for older workers to go a little fast, to show good management potential. She’s pushing forty. She scans through the memo, hitting the Page Down button at reasonably regular intervals, occasionally paging back up to pretend to reread some earlier section. The computer is going to notice all this. It approves of rereading. It’s a small thing, but over a decade or so this stuff really shows up on your work-habits summary.

Dystopian panoptical horrors were supposed to be cautionary tales – not specifications for new projects…

As one Hacker News commenter put it: in the future, you don’t read books; books read you!

Post scriptum… Isn’t it funny that users don’t mind being spied upon by apps and web pages, but get outraged when e-books do it? It may be because in their minds, e-books are still books… But shouldn’t all documents and all communicated information be as respectful of their reader as books are?

Design and Mobile computing and Networking & telecommunications and Systems and Technology – 19 Nov 2010 at 16:32 by Jean-Marc Liotier

In France, at least two mobile network operators out of three (I won’t tell you which ones) have relied on the Cell ID alone to identify cells… A mistake, because contrary to what the “Cell ID” moniker suggests, it cannot identify a cell on its own.

A cell is only fully identified by combining the Cell ID with the Location Area Identity (LAI). The LAI is an aggregation of the Mobile Country Code (MCC), the Mobile Network Code (MNC – which identifies the PLMN within that country) and the Location Area Code (LAC – which identifies the Location Area within the PLMN). The whole aggregate is called Cell Global Identification (CGI) – a rarely encountered term, but this GNU Radio GSM architecture document describes it in detail.
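To make the aggregation concrete, here is a minimal sketch – my own illustration in Python, not anything taken from the specifications – of the CGI as a composite key:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class CellGlobalIdentification:
    """CGI = MCC + MNC + LAC + CI, as described above."""
    mcc: int  # Mobile Country Code, e.g. 208 for France
    mnc: int  # Mobile Network Code: identifies the PLMN within the country
    lac: int  # Location Area Code, 16 bits: the Location Area within the PLMN
    ci: int   # Cell Identity, 16 bits: only unique within its Location Area

    def __post_init__(self):
        if not 0 <= self.lac <= 0xFFFF:
            raise ValueError("LAC is a 16-bit field")
        if not 0 <= self.ci <= 0xFFFF:
            raise ValueError("Cell ID is a 16-bit field")

# Two distinct cells may carry the same 16-bit Cell ID; only the full
# aggregate tells them apart:
a = CellGlobalIdentification(mcc=208, mnc=1, lac=4711, ci=42)
b = CellGlobalIdentification(mcc=208, mnc=1, lac=815, ci=42)
assert a != b  # same Cell ID, different cells
```

Key a database on `ci` alone and `a` and `b` collapse into a single record – which is exactly the mistake at hand.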

Since operators run their networks within their own context, they can consider the MCC and MNC superfluous. And since the GSM and 3G specifications define the Cell ID as a 16-bit identifier, operators believed they had plenty of identifiers for all the cells they could imagine, even taking multiple sectors into account – but that was many years ago. Even nowadays there are not that many cells in a French GSM network, but the growth in the number of bearer channels was not foreseen, and each of them requires a distinct Cell ID – which multiplies the number of identifiers needed by the number of bearer channels per cell.

So all those who, in the early days of GSM and in the prehistory of 3GPP, decided that 65536 identifiers ought to be enough for everyone are now fixing their information systems in a hurry as they run out of available identifiers – not something anyone likes to do on a large, critical production infrastructure.
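A back-of-the-envelope calculation shows how quickly that happens once every bearer channel needs its own identifier – the per-site figures below are my own illustrative assumptions, not operator data:

```python
# How fast does a 16-bit identifier space run out?
sites = 5_000          # assumed macro sites in a national network
sectors_per_site = 3   # typical tri-sector configuration
ids_per_sector = 4     # assumed distinct bearer channels, one Cell ID each

needed = sites * sectors_per_site * ids_per_sector
print(needed, "identifiers needed out of", 2**16)  # 60000 out of 65536
```

With those modest assumptions the network is already brushing the ceiling; one more bearer channel per sector blows straight through it.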

Manufacturers and operators share the responsibility for that, but alas this is just one occurrence of a common shortsightedness in information systems design. Choosing unique identifiers is a basic modeling task that happens early in the life of a design – but it is a critical one. Here is what Wikipedia says about unique identifiers:

“With reference to a given (possibly implicit) set of objects, a unique identifier (UID) is any identifier which is guaranteed to be unique among all identifiers used for those objects and for a specific purpose.”

The “specific purpose” clause could be interpreted as exonerating the culprits from responsibility: given their knowledge at the time, the use of the Cell ID alone was reasonable for their specific purpose. But they sinned by not making the unique identifier as unique as it possibly could be. And even worse, they sinned by not following the specification to its full extent.

But I won’t be the one casting the first stone – hindsight is 20/20 and I doubt that any of us would have done better.

But still… Remember, kids: make unique identifiers as unique as possible and follow the specifications!

Design and Systems and Technology – 14 Apr 2010 at 11:04 by Jean-Marc Liotier

A colleague asked me about acceptable response times for the graphical user interface of a web application. I was surprised to find that both the GNOME Human Interface Guidelines and the Java Look and Feel Design Guidelines provide exactly the same values, and even mostly the same text… One of them must have borrowed the other’s guidelines. I suspect that the ultimate source of their agreement is Jakob Nielsen’s advice:

0.1 second is about the limit for having the user feel that the system is reacting instantaneously, meaning that no special feedback is necessary except to display the result.

1.0 second is about the limit for the user’s flow of thought to stay uninterrupted, even though the user will notice the delay. Normally, no special feedback is necessary during delays of more than 0.1 but less than 1.0 second, but the user does lose the feeling of operating directly on the data.

10 seconds is about the limit for keeping the user’s attention focused on the dialogue. For longer delays, users will want to perform other tasks while waiting for the computer to finish, so they should be given feedback indicating when the computer expects to be done. Feedback during the delay is especially important if the response time is likely to be highly variable, since users will then not know what to expect.
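Translated into interface logic, those three limits amount to a simple feedback policy. A minimal sketch – the thresholds come from the quote above, the strategy names are my own invention:

```python
def feedback_for(expected_seconds: float) -> str:
    """Pick a feedback strategy from the three response-time limits.
    Illustrative sketch only: thresholds per Nielsen, names are mine."""
    if expected_seconds <= 0.1:
        return "none"          # feels instantaneous: just show the result
    if expected_seconds <= 1.0:
        return "subtle"        # flow of thought unbroken: e.g. a busy cursor
    if expected_seconds <= 10.0:
        return "spinner"       # attention still held: indeterminate progress
    return "progress-bar"      # user will task-switch: show a completion estimate

assert feedback_for(0.05) == "none"
assert feedback_for(30) == "progress-bar"
```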

Jakob cites Miller’s “Response time in man-computer conversational transactions” – a paper that dates back to 1968. It seems that in more than forty years the consensus about acceptable response times has not moved substantially – which could be explained by the numbers being determined by human nature, independently of technology.

But still, I am rattled by such unquestioned consensus – the absence of dissenting voices could be interpreted as a sign of methodological complacency.

Code and Design and Systems and Technology – 13 Apr 2010 at 16:27 by Jean-Marc Liotier

Following a link from @Bortzmeyer, I was leafing through Felix von Leitner’s “Source Code Optimization” – a presentation demonstrating that unreadable hand-optimized code is rarely worth the hassle, considering how good compilers have become at optimizing nowadays. I have never written a single line of C or assembler in my whole life – but I like to keep an understanding of what is going on at low level, so I sometimes indulge in code tourism.

I got the author’s point, though I must admit that the details of his demonstration went over my head. But I found the memory access timings table particularly evocative:

Access                               Cost
Page fault, file on IDE disk         1,000,000,000 cycles
Page fault, file in buffer cache     10,000 cycles
Page fault, file on RAM disk         5,000 cycles
Page fault, zero page                3,000 cycles
Main memory access                   200 cycles (Intel says 159)
L3 cache hit                         52 cycles (Intel says 36)
L1 cache hit                         2 cycles

Of course you know that swapping causes a huge performance hit, and you have seen the benchmarks where throughput is reduced to a trickle as soon as the disk gets involved. But I find that quantifying the wasted cycles illustrates the point even better. Now you know why programmers insist on keeping memory usage tight.
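Converting cycles into wall-clock time makes the table even more tangible. A quick sketch, assuming for illustration a 3 GHz clock (my assumption, not a figure from the presentation):

```python
CLOCK_HZ = 3e9  # assumed 3 GHz core clock - adjust for your CPU

costs_in_cycles = {
    "page fault, file on IDE disk": 1_000_000_000,
    "page fault, file in buffer cache": 10_000,
    "page fault, file on RAM disk": 5_000,
    "page fault, zero page": 3_000,
    "main memory access": 200,
    "L3 cache hit": 52,
    "L1 cache hit": 2,
}

for access, cycles in costs_in_cycles.items():
    print(f"{access:35s} {cycles / CLOCK_HZ * 1e9:>15,.1f} ns")

# At 3 GHz the disk-backed page fault costs ~333 ms - about 500 million
# times slower than the ~0.7 ns L1 cache hit.
```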

Design and Free software and Writing – 31 Aug 2009 at 22:46 by Jean-Marc Liotier

Good old Issue 3959 saw some minor activity lately, as drhatch offered an interesting insight:

I wish to call into question a fundamental assumption that has been made about this effort, the assumption that has held up development for years: that multiple layout capability must exist before outline view can be useful.

This is holding up outline view because multiple layout capability (issue 81480) is a big effort, and it, in turn, requires refactoring of writer’s usage of the drawing layer (issue 100875) and the latter has some significant technical difficulties. It seems unlikely that these issues will be finished soon.

The logic behind this assumption is that switching views will take too long if multiple layouts are not possible and/or most users will need simultaneous viewing for outline view to be useful. I disagree with both these assertions.

1. Simultaneous viewing is not necessary. I have been using Word’s outline view extensively for years without simultaneous viewing. Even though it’s possible with split screens, it takes up screen real estate that I want to use otherwise.

2. It won’t take that long to switch layouts [..]

I, for one, would much rather have an outline view soon, one that takes a couple of seconds to switch, and which is available only as a single view, than wait the extra time it is going to take for the multiple-layout refactoring to be finished. That would be enough for me for a long time.

This is a case of “perfect” being the enemy of “good enough”. Let’s just have “good enough” for a while first.

Is his experience anecdotal, or do people really seldom or never use Microsoft Word’s outline view simultaneously with another view? Other users have chimed in, but “me too” contributions will soon get boring… So here is my attempt at helping quantify user expectations: this poll!

Of course, self-selection by passionate users and links from OpenOffice forums will certainly bias the sampling beyond any semblance of representativeness, but we’ll take that as better than nothing…

Code and Design and Knowledge management and Social networking and The Web – 21 Aug 2009 at 16:01 by Jean-Marc Liotier

LinkedIn’s profile PDF render is a useful service, but its output is lacking in aesthetics. I like the HTML render by Jobspice, especially the one using the Green & Simple template – but I prefer hosting my résumé on my own site. This is why, since 2003, I have been using the XML Résumé Library – an XML- and XSL-based system for marking up, adding metadata to, and formatting résumés and curricula vitae. Conceptually, it is a perfect tool – and some trivial shell scripting provided me with a fully automated toolchain. But the project has been completely quiet since 2004 – and meanwhile we have seen the rise of the hResume microformat, an interesting case of “less is more” – especially compared to the even heavier HR-XML.

Interestingly, both LinkedIn and Jobspice use hResume. A PHP LinkedIn hResume grabber, part of a WordPress plugin by Brad Touesnard, takes the hResume microformat block from a LinkedIn public profile page and weeds out all the LinkedIn-specific chaff. With pure hResume semantic XHTML, you just have to add CSS to obtain a presentable CV. So my plan is now to use LinkedIn as a résumé writing aid and a social networking tool, and to host a nice CSS-styled CV on my own site from the hResume-microformatted output extracted from it.
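The extraction itself is simple in principle. Here is a minimal sketch of the idea in Python with BeautifulSoup – my own take, not Brad Touesnard’s PHP, and it assumes the profile page still marks the block with the standard “hresume” root class:

```python
# Sketch: pull the hResume block out of a LinkedIn public profile page.
# Assumes the markup carries the standard "hresume" root class, as the
# microformat specifies - verify against the live page before relying on it.
import urllib.request
from bs4 import BeautifulSoup  # pip install beautifulsoup4

def extract_hresume(profile_url: str) -> str:
    html = urllib.request.urlopen(profile_url).read()
    soup = BeautifulSoup(html, "html.parser")
    block = soup.find(class_="hresume")
    if block is None:
        raise ValueError("no hresume block found - the markup may have changed")
    # Drop scripts and inline styles: keep only the semantic XHTML,
    # ready to be restyled with one's own CSS.
    for tag in block.find_all(["script", "style"]):
        tag.decompose()
    return str(block)
```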

Preparing to do that, I went through the “hResume examples in the wild” page of the microformats wiki and selected a few favorite styles to use for inspiration.

Great excuse to play with CSS – and eventually publish an updated CV…

Design and Security and Systems and Technology – 09 Jun 2008 at 13:35 by Jean-Marc Liotier

Who these days has not witnessed the embarrassing failure modes of Microsoft Windows? Blue screens of all hues and an assortment of badged dialog boxes turn each crash into a very public display of incompetence.

I will not argue that Windows is more prone to failure than other operating systems – that potential war of religion is best left alone. What I am arguing is that failure modes should be graceful, or at least more discreet.

A black screen is neutral: the service is not delivered, but at least the most trafficked billboard in town is not hammering everyone with a random pseudo-technical message that actually means “my owners are clueless morons”.

Even better than a black screen: a low-level routine that, in case of system failure, displays something harmless – anything but an error message.
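On a kiosk or signage box, that idea boils down to a top-level guard around the display loop. A minimal sketch – the rendering calls are hypothetical placeholders of my own, not any real signage API:

```python
import time

def render_content():
    # Hypothetical placeholder for the real signage application.
    raise RuntimeError("simulated crash")

def blank_screen():
    # Hypothetical placeholder: paint the framebuffer black.
    pass

def run_public_display():
    """Whatever goes wrong, show something harmless - never an error box."""
    while True:
        try:
            render_content()
        except Exception:
            # Deliberately swallow the details: log them elsewhere if needed,
            # but the most trafficked billboard in town must not double as
            # an error console.
            blank_screen()
            time.sleep(5)  # give the fault a moment before retrying
```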

With so many information screens in the transportation industry, automated teller machines of all sorts and a growing number of advertising screens on roadsides, a properly and specifically configured system is necessary. What about a “Microsoft Windows – Public Display Edition”? Of course, users of Free Software don’t have to wait for a stubborn vendor to understand the problems its customers are facing.

When the stakes are high enough, the costs of not managing risk through graceful degradation cannot be ignored. But let’s not underestimate the power of user inertia…

Design and Identity management and Knowledge management and Social networking and The Web – 20 Nov 2007 at 6:47 by Jean-Marc Liotier

Open is everything – the rest is details. That is why we must take the best use cases of the closed social networking world and port them into the open. This is a lofty goal in every sense of the word, but a surprisingly large number of potential basic components are already available to shorten the road.

Friend of a Friend (FOAF) enables the creation of a machine-readable ontology describing persons, their activities and their relations to other people and objects. This concept is a child of the semantic web school of thought, whose origins go back about as far as the Web itself. In a narrower but deeper way, XFN (XHTML Friends Network) enables web authors to indicate their relationships to people simply by adding attributes to hyperlinks.

Microformats such as hCard, XFN, rel-tag, hCalendar, hReview, xFolk, hResume, hListing, citation, media-info and others provide a foundation for normalizing information sharing. Some major operators are starting to get it – for example, my LinkedIn profile contains hCard and hResume data. If you like hResume, take a look at DOAC while you are at it!

Some code is already available to process that information. For example, identity-matcher is a Rails plugin that matches identities and imports social network graphs across any site supporting the appropriate microformats. This code was extracted from the codebase of dopplr.com, and it is probably how Dopplr now supports imports from other social networks such as Twitter.

But part of the appeal of a social networking platform is how it empowers the user with control over what information he makes available, how, and to whom. So microformats are not sufficient: a permission management and access control system is necessary, and that requires an authentication mechanism. That naturally takes us to OpenID.

OpenID is a decentralized single sign-on system. On OpenID-enabled sites, web users do not need to remember traditional authentication tokens such as a username and password. Instead, they only need to be registered with an “identity provider”. OpenID solves the authentication problem without relying on any centralized website to confirm digital identity.
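For the curious, the relying-party side of the flow is short. A sketch based on the python-openid library of that era – I quote its API from memory, so treat the exact names and signatures as assumptions to verify against its documentation:

```python
# Sketch of the relying-party ("consumer") side of an OpenID login,
# using the python-openid library - API written from memory, verify it.
from openid.consumer import consumer
from openid.store.memstore import MemoryStore

STORE = MemoryStore()  # must be shared between the two steps
RETURN_TO = "https://relying-party.example.com/openid/return"  # hypothetical URL

def start_login(session: dict, user_supplied_id: str) -> str:
    """Discover the user's identity provider and build the redirect URL."""
    c = consumer.Consumer(session, STORE)
    auth_request = c.begin(user_supplied_id)  # discovery hits the network
    return auth_request.redirectURL(
        realm="https://relying-party.example.com/", return_to=RETURN_TO)

def finish_login(session: dict, query: dict) -> str:
    """Verify the provider's response; query holds the returned GET parameters."""
    c = consumer.Consumer(session, STORE)
    response = c.complete(query, RETURN_TO)
    if response.status != consumer.SUCCESS:
        raise PermissionError("OpenID verification failed")
    return response.getDisplayIdentifier()
```

No password ever transits through the relying party – it only learns that the identity provider vouches for the identifier.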

The OpenID project goes even further than authentication – authentication is just the surface. What OpenID is really about is digital identity management. OpenID Attribute Exchange is an OpenID service extension for exchanging identity information between endpoints. Although the list of attributes in the OpenID Attribute Exchange schema does not map neatly onto the existing collection of microformats, a process is defined to submit new attributes. And anyway, such a standard looks like a great fit for keeping the user in control of his own content.

Finally, the social graph is the substrate for applications that must interact with the user’s information wherever it is hosted. That is why Google’s OpenSocial specification proposes a common set of APIs for social applications across multiple websites.

So a few technologies for social networking do exist, and they seem able to provide the building blocks of an open distributed social network. The concept of open distributed social networking has been on people’s minds for a long time, but until now only large proprietary platforms have managed to seduce a critical mass of users. Thanks to them, there is now a large body of knowledge about best practices and use cases. What is needed now is to think about how those use cases can be ported into a decentralized open environment.

Porting a closed single-provider system into an open distributed environment while equaling or surpassing the quality of the user experience is a huge challenge. But social networking and digital identity management are such critical activities in people’s lives that the momentum behind opening them may soon be as large as the one that led Internet pioneers to break down the walls between networks.

Design and The Web – 10 Nov 2007 at 16:48 by Jean-Marc Liotier

Because I live and breathe computing and the Internet, I often forget what casual users experience. So sometimes I watch one at random over his shoulder, just to keep abreast of what the Myspace generation and the typewriter generation are doing. And every time, my battle-hardened sysadmin heart shudders at the sight.

These days, my favorite casual-user habit is searching for an obvious URL using a search engine. For example, I have seen “yahoo mail” typed as an argument into the Google search form – not once, but several times and by different users! I would have thought that http://mail.yahoo.com/ or http://www.yahoo.com/mail/ or http://yahoo.com/mail or whatever other variations Yahoo has set up provide enough obvious ways to reach the service. But apparently they don’t.

If even the most obvious URLs of all are not typed, we can infer that today’s casual user no longer memorizes any URL at all. Maybe the clean URLs that we strive to produce are intended for the sole consumption of search engines and power users. And that’s one more reason why portals are so important.