Gallimaufry of Whits

True source

True source is whatever a programmer actually used to create a program, and creating it must have involved a high degree of comprehension. Anything, such as a compiler, that transforms source into something the original programmer can no longer comprehend does not produce true source. The output is not the source, as in the place that the code comes from.

This definition comes with an important caveat. True source is not a binary characteristic, but a spectrum. Some sources can be truer than others. This is important because the clearer the code, the more freedom it gives.

Clear freedom

Code that a programmer understood when they wrote it may be incomprehensible to them several months later. People can even understand Befunge, an esoteric, stack-based programming language, during the course of writing it; such understanding simply evaporates far more rapidly for Befunge than for most other languages.

Befunge is a deliberate anti-pattern, making the source as incomprehensible as possible. But efforts in the opposite direction are quite rare. Language designers do sometimes consider these things: Guido van Rossum, for example, deprecated reduce(...) in Python because he believed it led to incomprehensible code. But usually, performance concerns and poor design trump comprehensibility.

This is unfortunate, because working on the clarity of code is essential to the continued survival of the general purpose computer.

The GNU project, for example, lists the freedom to "study how the program works, and change it so it does your computing as you wish" as software freedom number one. They explain this further by saying that access to source code is a precondition for this freedom, and that what is obfuscated "does not count as source code".

The freedom to "study how the program works" is not well phrased. One can study how a compiled program works by using a decompiler. Anything can be studied as long as it doesn't depend on remote execution, as in a DRM machine. What we need is not code which can be studied, but code which is as comprehensible as possible. We need code which is clear.

When I first learned about GNU/Linux, the promise of an operating system that anybody could learn from, change, and improve was enticing. But I soon learned that if I really wanted to do these things, there were many obstacles. The source was complex, and non-modular. It was difficult to learn, and even if you did learn it, it was difficult to change just a single component without messing up your system—or breaking things that depend on it, which is a separate important problem not discussed here.

Measuring clarity

The clarity of code can be difficult to measure, because the measurement must be manual, not automated. We can count cycles far more easily than we can rate comprehension. Moreover, when A/B testing code to see whether its performance has improved, we can run the test again on the same hardware. Not so with clarity. Because people, unlike computers, do not perform consistently, clarity has to be measured as an average over a group of people, and the backgrounds of those people have to be adjusted for. People who have read the code before also have an advantage in understanding the new version, because they can leverage the understanding they obtained from the previous one. So you either have to use fresh readers, or compensate for their increased understanding.

The most effective solution to this problem is one that makes such testing as cheap and routine as possible. A website where people can volunteer or be paid to review code would be a simple answer. A standard measure of clarity should also be introduced: a kind of readability test for code, like the Flesch, Dale–Chall, or Gunning Fog formulæ for prose.
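No such formula for code yet exists, so any concrete measure is necessarily speculative. As a sketch only, a Flesch-style score might combine cheap proxies such as line length, indentation depth, and identifier length; every proxy and weight below is invented for illustration, not empirically derived.

```python
# A naive, Flesch-style "clarity score" for code: higher is clearer.
# The proxies (line length, nesting depth via indentation, identifier
# length) and the weights are invented for illustration only; a real
# measure would need calibrating against human comprehension tests.
import re

def clarity_score(source):
    lines = [l for l in source.splitlines() if l.strip()]
    if not lines:
        return 100.0
    avg_line = sum(len(l) for l in lines) / len(lines)
    avg_indent = sum(len(l) - len(l.lstrip())
                     for l in lines) / len(lines)
    idents = re.findall(r"[A-Za-z_][A-Za-z0-9_]*", source)
    avg_ident = sum(map(len, idents)) / len(idents) if idents else 0
    return 100.0 - 0.5 * avg_line - 2.0 * avg_indent - avg_ident
```

On this scale, `x = 1` scores higher than a dense `reduce(...)` one-liner, which at least points in the right direction; whether any such formula tracks real comprehension is exactly what the manual testing above would have to establish.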

As with prose, sometimes sophisticated code will be necessary. It depends very much on the concepts being worked on. If the code is intended to compute involutions in octonion algebras, it cannot be at the same level of difficulty as code which performs simple text templating. But difficulty is not to be conflated with clarity. Programmers should strive to create the clearest code which still achieves the chosen task.

Clarity often means that code must be augmented with a lot of prose, and even diagrams and interactive animations. This means that properly designed literate programming is on the opposite end of the spectrum from compiled code, and that regular non-literate (or badly designed literate) programs sit in the middle of this spectrum. Because clarity is an important freedom, this means that properly designed literate programming is more free than any known alternative, and the alternatives should be avoided.

The executable word

When you open a dictionary to look up a word, it defines that word in terms of other words. Usually you don't have to look up all the words in the definition too.

How did you learn your first words? You didn't use a dictionary. Language acquisition is a difficult process, and humans are very adept at it. In the 1960s and 1970s we thought that we would be able to teach computers to do the same, but so far we have failed to instruct them all that well. It proved to be a much harder problem than anybody had anticipated.

So the computer seems no match for the written and spoken word. But there are some curious little niches where the computer excels.

Tribal computer

Imagine a soft cube, with four solar panels, one clockwinder, and a display, all mounted behind the toughest known glass. This cube is a computer, and it's about to be airdropped onto an uncontacted tribe in Papua New Guinea.

We don't know what the tribe is going to do with it. They might try to smash it, or throw it in the fire. But if they do, the cube will react. Impacts are sensed, and the cube will respond as though it's an animal in pain. Fire will send it into hysterics. Gentle spinning, on the other hand, will make the cube react with pleasure. The greatest pleasure noises will be reserved for new exposure to sunlight, and winding of the clockwork mechanism.

The screen is a touchscreen. You can interact with it. The language of the tribe is unknown, but the screen will have a simple user interface based on the best research into outsider reactions to technology. It will be possible to see the views of cameras mounted in the centre of each solar panel, projected onto the main screen. They'll have simple zoom controls. There will also be pre-recorded videos of people smiling and sending greetings, and of other scenes of the world. It will be possible to record video.

There will be an interface that links it with the latest weather results, so it will be able to predict the weather for the tribe. There will be cartoons that show simple stories. It will play music. It will be possible to set a countdown timer which goes off with a soft alarm, or a short jingle.

The cube will not be alone, to prevent the object from being fetishised. Several will be dropped, in groups at intervals. If the experiment is a success, further cubes will be dropped on further tribes.

Tribal library

Imagine a book with tough, close-stitched Tyvek pages. The book is going to be dropped on the tribe before the cube computers. It's a history of the world, written specially for the purpose by Bill Bryson, and translated into Icelandic. You're probably not Icelandic. (If you are, congratulations!) That means you can't read this history of the world, and since the tribe isn't Icelandic either, neither can they. Tyvek is pretty much indestructible, so they won't even be able to repurpose the book.

Several more books are dropped on the tribe, all by Bryson, in further languages. They still don't understand any of them.

Somebody comes up with the idea of dropping an illustrated picture book instead. But it turns out to be exceedingly difficult to make even a picture that conveys complex information in a culturally independent way. The best attempts at this have been the Arecibo message and the Pioneer plaque, but these were intended for advanced aliens who would understand, for example, that the Arecibo message could be rendered two-dimensionally by factoring its length into two primes—23 and 73, if my Esperanto is up to scratch.
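The arithmetic, at least, is easy to check: the Arecibo message is 1679 bits long, and 1679 factors into two primes in exactly one way, which is what fixes the grid. A small sketch, with hypothetical helper names:

```python
# The Arecibo message is 1679 bits long; recovering the unique pair of
# primes whose product is 1679 yields the intended 23 x 73 grid.
def is_prime(n):
    if n < 2:
        return False
    return all(n % d for d in range(2, int(n ** 0.5) + 1))

def prime_grid(length):
    # The unique (rows, cols) prime factor pair, smaller factor first.
    pairs = [(p, length // p)
             for p in range(2, int(length ** 0.5) + 1)
             if length % p == 0 and is_prime(p) and is_prime(length // p)]
    assert len(pairs) == 1, "length must factor uniquely into two primes"
    return pairs[0]

print(prime_grid(1679))  # → (23, 73)
```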

The tribal books are useless, but the tribal computer is easy to interact with. The tribe members may be scared of it and throw it out, but at least it's possible to interact with it. The words on the computer—the procedures, functions, programs, subroutines, or algorithms—are executable.

That's practically magic compared to the words in the book. The executable word is a truly astonishing thing. Yet the level of wonder that the tribe would display at the cubes compared to the books is almost entirely lost on us!

The word

Human languages are spoken and written relative to all human culture. They are embodied by human actions. Programming languages, in contrast, are executed on specific hardware, and embodied by their devices.

To define a new text to the tribe would require either a Champollion act, giving a key that maps from our language to theirs, or building up the associations again from scratch, as you would teach a language to a toddler.

To define a word on a computer, an executable word, requires only the resources of the computer, which is usually a powered von Neumann architecture. The computer must have the right hardware, both in terms of the instruction set of the processor and the devices intended to enable interaction with it. But the computer ships with its own culture. It is self-contained, and self-defining.

Robots of the future

Perhaps one day somebody will experiment with airdropping computers onto a Papua New Guinean tribe, but it's unlikely to pass an ethics board. In a way, though, science has airdropped these devices onto us from the very properties of the universe. We didn't know, before Turing's work, that such devices were universal, nor the extent of their capabilities. We're still not entirely sure of the extent of their capabilities: there are many unsolved problems in computer science.

So perhaps we should be thinking about what kind of computers we should drop onto ourselves, to restore our wonder. In the 1990s, a popular science magazine in the United Kingdom, Focus, ran a very small infobox, perhaps an eighth of a page, about a new engineering advance. Scientists at a large company, perhaps Sony, had decided that instead of being cold and metallic, the robots of the future should be cute and tactile, like the ones pictured.

One of the robots was a small cube, much like the ones intended for the tribes, except that it was pink at the bottom, with a wavy line separating it from an orange top. The bottom half was dimpled a bit like an antique golf ball, and the top side had a fin on it. One of the sides had a camera in it which looked like an eye, and possibly the hidden side had one too. The photo does not appear to be available on the web, and the volumes of the magazine were too copious to search for the original.

Perhaps we're so intoxicated with keyboards and monitors that we forget that we can attach a multitude of devices to a computer. We may even forget that without such devices, a computer is useless. What good is a book when it's not being read? What good is a computer when it's attached to no device at all? Touch screens are the first substantial change in how we interact with computers in half a century.

Performance is often cited as a primary concern of computer hardware and software developers alike. But performance can only be harnessed by imagination: faster computers only do new things when we imagine that they can.

Turing complete formats

When I designed HTML for the Web, I chose to avoid giving it more power than it absolutely needed - a 'principle of least power', which I have stuck to ever since. I could have used a language like Donald Knuth's 'TeX', which though it looks like a markup language is in fact a programming language. [...] It would allow you to express absolutely anything on the page, but would also have allowed Web pages that could crash, or loop forever. This is the tension.

Tim Berners-Lee (1999), Weaving the Web, p.197

Our web pages can now crash or loop forever with considerable ease. This is mostly thanks to JavaScript, but even CSS3 is Turing complete.

Some say this isn't a bad thing. Alan Kay said in 1997 that the principle of least power was a thing of the dark ages:

I was in the Air Force in 1961, and I saw it in 1961, and it probably goes back one year before. Back then, they really didn't have operating systems. Air training command had to send tapes of many kinds of records around from Air Force base to Air Force base. There was a question on how can you deal with all of these things that used to be card images, because tape had come in, [there] were starting to be more and more complicated formats, and somebody—almost certainly an enlisted man, because officers didn't program back then—came up with the following idea.

This person said, on the third part of the record on this tape we'll put all of the records of this particular type. On the second part—the middle part—we'll put all of the procedures that know how to deal with the formats on this third part of the tape. In the first part we'll put pointers into the procedures, and in fact, let's make the first ten or so pointers standard, like reading and writing fields, and trying to print; let's have a standard vocabulary for the first ten of these, and then we can have idiosyncratic ones later on. All you had to do [to] read a tape back in 1961, was to read the front part of a record—one of these big records—into core storage, and start jumping indirect through the pointers, and the procedures were there.

I really would like you to contrast that with what you have to do with HTML on the Internet. Think about it. HTML on the Internet has gone back to the dark ages because it presupposes that there should be a browser that should understand its formats. This has to be one of the worst ideas since MS-DOS. [Laughter] This is really a shame. It's maybe what happens when physicists decide to play with computers, I'm not sure. [Laughter]

Alan Kay (1997), The Computer Revolution Hasn't Happened Yet

This leads to the natural theory that all computer formats will tend to become Turing complete. Perhaps this is only true as a corollary of Zawinski's Law. But the stages of development are clear:

  1. Create a non-computational documentation or data format
  2. Develop tools for real-time, interactive manipulation of the format
  3. Shoehorn the tools for manipulating the format into the format itself

Congratulations, your format is now Turing complete.


If a format is designed with forward extensibility, with a standard vocabulary as mentioned by Kay, then stage three, shoehorning the tools into the format, can be done with ease. Dan Connolly used to trumpet this forward extensibility rule at frequent intervals. That it had already been invented in the US Air Force circa 1960 indicates that it's a natural concept.

Forward extensibility is a matter of authority. If you invent a method for injecting JavaScript into plain text files on the web, say by embedding some magic string next to a URL pointing to the script, then the design is open to various failures which have to be accounted for.

Imagine, for example, that you chose the magic string "Link: " to indicate a link that should be automatically loaded when a text file is browsed. This would be a very poor choice, given that Link: is also the name of an HTTP header, and there exist plain text dumps of HTTP headers which contain this header.
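To see the failure concretely, here is a sketch of such a naive scanner (the function and the convention are hypothetical); fed an innocent plain text dump of HTTP headers, it misreads the Link header as a script to load:

```python
# A hypothetical magic-string scanner: treat any line beginning
# "Link: " in a plain text file as a script reference to auto-load.
def find_magic_links(text):
    return [line[len("Link: "):] for line in text.splitlines()
            if line.startswith("Link: ")]

# An innocent plain text dump of HTTP headers...
header_dump = """HTTP/1.1 200 OK
Content-Type: text/html
Link: <https://example.org/style.css>; rel=stylesheet"""

# ...falsely triggers the mechanism.
print(find_magic_links(header_dump))
```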

Even if you used a UUID or another URI for the magic string, what happens when somebody wants to write about the mechanism itself, without invoking the mechanism, in a text file? There would have to be some escape mechanism, a concept invented by Bob Bemer, for this to be possible.

On the web, the most reasonable approach would be not to use a magic string, but to use a new media type, such as text/scriptable-plain, to authoritatively indicate the new type. In other words, the forward extensibility mechanism of the web is the media type. The magic string is a badly engineered alternative. In individual formats, the forward extensibility mechanism can vary. HTML had an @profile attribute on the head element, for example, which was eventually beaten by the script element; whereas JSON has no means of forward extensibility.
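The difference can be sketched as a dispatch table keyed on the declared media type, with text/scriptable-plain standing in for the hypothetical new type; clients that don't know the type fall back to plain rendering:

```python
# Dispatch on the authoritative media type instead of sniffing the
# body for magic strings. text/scriptable-plain is hypothetical.
HANDLERS = {
    "text/plain": lambda body: ("render", body),
    "text/scriptable-plain": lambda body: ("render-and-script", body),
}

def handle(media_type, body):
    # Unknown types degrade to plain rendering, so old clients safely
    # ignore semantics they don't understand: forward extensibility.
    return HANDLERS.get(media_type, HANDLERS["text/plain"])(body)

print(handle("text/scriptable-plain", "hello")[0])  # → render-and-script
print(handle("text/plain", "Link: not magic")[0])   # → render
```

Note that the body containing "Link: " is harmless here: only the authoritative type, not the content, decides whether scripting is invoked.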

The conclusion is that formats which have no means of forward extensibility are doomed to admit non-authoritative scraping facilitators, such as magic strings and hot comments, in order to evolve.


The problem with Turing complete formats is not, as Tim imagined, that they can crash or loop forever. Pages are hosted in software that encapsulates their state and can recover if certain resource limits are exceeded. When formats can compute, they must also be secure. The problem is that, as Alan Kay says, we don't know how to compute. We show no restraint. We make interfaces which are terrible:

I have discovered that there are two types of command interfaces in the world of computing: good interfaces and user interfaces.

Daniel J. Bernstein, The qmail security guarantee

And we, as a society, presently reduce our content to the size of a postage stamp amidst a cacophony of social media widgets, trackbacks, advertisements, navigation elements, and all manner of frightful and tortuous delenda. Many of these did not even require computation to become useless.

The tendency of formats towards Turing completeness is a kind of tragedy of the commons. The vast pastures of computation are our commons. Because we don't know how to compute, then we also don't know how to make the most out of our formats, because our formats naturally want to be computers.

Device access

Computers do not express themselves through monitors alone. They can play sounds, connect to networks, and save files to persistent storage. Some computers control motors, or regulate heating systems. Others are being used to water plants or fly unmanned helicopters.

Computers are not only defined by what kinds of computation they can do, which is a level playing field thanks to Turing completeness (though don't try emulating GTA V in Befunge on a PDP-8), but also by what devices they are connected to.

This has important ramifications for Turing complete formats, because the current trend is to segregate them from devices as much as possible, in the name of security. Gated access is slowly becoming available. Web pages can ask for the geolocation of the user, though they may not be granted it. But they can't ask for access to an arbitrary device, so there is still no sanctioned way to use CoffeeScript to power your visitor's irrigation system.

Of course, forward extensibility hooks can be used to enable device access. You can create a new HTTP header that gives data about how to access your device to the web page that you're accessing. Security would be difficult, but solvable. It's difficult for anything to get in the way, as long as you have a general purpose computer.
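As a sketch of what such a hook might look like: the header name Device-Access and its fields below are entirely invented, since no such header exists.

```python
# Parse a hypothetical "Device-Access" HTTP header advertising a
# device endpoint to a page. The header name and its fields are
# invented for this sketch; nothing like it is standardised.
def parse_device_access(header_value):
    fields = {}
    for part in header_value.split(";"):
        key, _, value = part.strip().partition("=")
        fields[key] = value
    return fields

example = "kind=irrigation; url=https://garden.example/api; auth=token"
print(parse_device_access(example)["kind"])  # → irrigation
```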

Circumscribing possibility

Some years ago, perhaps when the ideas for pdf.js were first being mooted, some bright spark suggested on their weblog that browsers would one day be implemented on the fly. In other words, browsers would be replaced by "meta-browsers" that simply dispatch to JavaScript or its successor, and then load a browser.js file to do the actual rendering work. (I've been unable to locate this post; please tell me if you find it!) This is just like what Alan Kay was describing in his story.

The story of computation and the story of formats start to merge. The HTML5 stack, which already enables Chrome OS and Firefox OS, can be thought of as a new operating system with a more secure, but more restricted, kernel. We can already emulate Windows 1.01 and Linux in JavaScript, and asm.js is approaching native performance.

If we keep making new operating systems in old operating systems, where will it end? Will it one day be easier to port old emulators, like JSMESS, than to port old software? When the playing field is level, the game will be long.

I'm writing these articles in Markdown in emacs. A simple text format, in a portable textual operating system. Yet if I wanted to embed a script element, I could do so. You could watch a 6502 chip get emulated, or play a Minecraft derivative. You can't limit the general purpose computer. But power entails responsibility, and taming the computer is no easy task.

DRM machines

One of my biggest fears for computing is that we'll be denied even our linear bounded automata. I fear that instead of the general-purpose computing that Doctorow eloquently describes, we'll be forced to use what I call DRM machines instead.

A DRM machine isn't a generalised computer with Digital Rights Management code on it, as this poses less threat: the machine code can be inspected, and somebody can break the shackles. Somebody will always be able to circumvent such DRM eventually, no matter how much complexity is woven into it. Security by obscurity is expensive and fragile, and can't ultimately stand up to scrutiny.

No, the ingredients for a DRM machine are secure boot, and cloud computing. Using secure boot, you can make a machine that can only boot into a single guaranteed configuration. This could even be burned into the hardware, so that the secure boot mechanism itself is unconfigurable. Then such a machine can be made to securely boot into a system that phones home to some panopticon master.

Secure boot by itself is not evil. It's morally neutral. You can have a secure boot machine that securely boots to Debian GNU/Hurd. If the boot configuration were burned into hardware, you wouldn't be able to change certain things in the base system, but Debian GNU/Hurd is a free OS, so you'd still be able to extend it, run an emulator on it, and so on.

A DRM machine, on the other hand, could be set up to check every action that is taken by the user. Every action could be delegated to a central authority point, that would check the "legality" of the action, according to some private specification of rules.

But why would people use such machines? People won't walk into a prison, will they? Sadly, it seems that time and time again, they will. There are already systems that are quite close to this scenario, in the games console world. There will be similar incentives, or pathological ideologies; there will, either way, be psychological tricks that coerce people into these systems. It will happen.

The best thing to do with a DRM machine is to smash it to thousands of tiny, unusable pieces. It should not be possible to recover a working system from a smashed DRM machine, otherwise it has not been smashed to its full potential. There may be laws enacted against smashing a DRM machine, with parallels drawn to how it's illegal to deface a banknote. The DRM machines may be considered a true servant of the state, and be given legal protections greater than those of humans. As true servants of our oppressors, they would need to be smashed.

General purpose computers would be the new samizdat. But they would be extreme contraband, far more feared than the printed word. They are the executable word. Possession of a general purpose computer would invoke instant, severe repercussions. Knowledge of how to create, maintain, and use general purpose computers would be amongst the things most strictly and stringently forbidden in any use of the DRM machine. The DRM machine would ultimately evolve to resemble a general purpose computer in as few characteristics as possible.

The most disturbing aspect for present computing is that the DRM machine can be created using off-the-shelf components which are, by themselves, morally neutral and relatively neutral ideologically. Secure boot and cloud computing are not facing the ire of those who, some decades ago, were involved in the CND. But we could also use off-the-shelf components to create a truly secure, distributed network, impervious to eavesdropping and tampering. Just as a DRM machine is within our reach today, so is a brighter future. Those who have it in their power should make use of the available components.

Automatic Markdown conversion in emacs

When I edit a Markdown file in emacs, it automatically gets converted on save to an HTML file. Since such conversions often entail customisation, I use a standard .convert-markdown script to signal to emacs that it is possible to convert Markdown within a particular directory.

The source of the present file, for example, is a Markdown file in the drafts/ directory. It sits alongside a drafts/.convert-markdown script, which uses hoedown to convert the Markdown to HTML in a separate articles/ directory.

When emacs saves a file under drafts/, it runs the functions registered on its after-save-hook, one of which is automatically-convert-markdown:

(defun string/ends-with (s ending)
  (let ((elength (length ending)))
    (string= (substring s (- 0 elength)) ending)))

(defun automatically-convert-markdown ()
  (let ((name (buffer-file-name)))
    (if (and name (string/ends-with name ".md"))
        (let ((convert (concat (file-name-directory name)
                               ".convert-markdown")))
          (if (file-exists-p convert)
              (shell-command (concat convert " " name)))))))

(add-hook 'after-save-hook 'automatically-convert-markdown)

Because the .convert-markdown path is general, and applicable to a wide range of setups, the following tiny shell loop updates all of the Markdown files within a signalled directory by hand:

for fn in *.md
do ./.convert-markdown "$fn"
done

Here's a simple example bash script that can be used as a converter:

#!/bin/sh
# $OUT is the output path; this assumes the articles/ directory sits
# alongside drafts/, as described above.
OUT=../articles/$(basename "$1" .md).html
echo "<title>$(sed -n '/^# /{s/^# //;p;q;}' "$1")</title>" > "$OUT"
hoedown "$1" | sed 's% *</p>$%%' >> "$OUT"
echo "Wrote to $OUT"

The .convert-markdown method makes it feel as though the Markdown is the actual source of the rendered page. It gives the impression of being able to use Markdown directly on the web.