So, the Plexnet is an Internet OS. But how do we build it? On 28th May 2007, we decided to go ahead with a plan ("Plan B"), part of which is to concentrate on use cases. In considering the use cases, however, it's possible to demonstrate some things that are very useful for us to know in general, and that contribute back to the whole of Plan B. This document, then, is a kind of insight into how to move forward with the Plexnet, with regard to a) abstract things, i.e. states of mind; and b) practical things, i.e. things that we actually do as a result! It's practical, but it's philosophy. Practical philosophy.
The best analogy to the Plexnet is the web. The web is a system built on top of the internet that lets us do a lot of things. So I asked myself, "does it make sense for us to be thinking about use cases?", and one of the best ways to answer that was to consider a question based on the analogy: "would it have made sense to think about use cases for the web?".
The original proposal by Tim Berners-Lee to his boss at CERN for the World Wide Web is a fascinating read; probably required reading for anyone building the Plexnet. The document opens, after an eye-candy diagram, with a use case called "Losing Information at CERN". It then considers the systems that existed at the time and explains why they were deficient. It then has a section called "A solution: Hypertext", which, obviously, leads to explaining what the WWW is. There's then a requirements section, setting goals for the nascent system to achieve as a measure of success, and that pretty much rounds it off.
This sets our minds at rest on the use cases issue, but it opens a fascinating new avenue for the philosophy of our project. The web is actually one very simple but very novel idea. The idea was that you take two interesting and fairly useful existing systems, the internet and hypertext, and you merge them. FTP and gopher already existed, so people could understand using the internet to exchange data. There were also people interested in hypertext, though separately from the internet people. What Tim did was to take the existing internet, modify the FTP protocol (drawing on his experience of working on RPC systems at CERN) into a thing called HTTP, and use it to transfer documents written in a new hypertext language, which he called HTML, based heavily on an SGML language used at CERN called SGMLguid. In fact, HTML is pretty much SGMLguid plus a single novel element: <a>! It wasn't the language that mattered all that much, it was the hypertext links, the <a>, and the internet, the HTTP protocol he'd made. He also had to invent URIs to address things on the internet so that the links would work. URIs + HTTP + HTML = World Wide Web.
Now let's consider the Plexnet. With the Plexnet, we have a thing called the heptarchy, which comprises seven different existing technologies that we're modifying and improving, much like the web did. (They are, for reference, "Entities, Events, Storage, Services, Interface, Identity, and Networking".) This doesn't seem like too much to bite off when you don't consider history, but when you consider that the web was three things, "Identity, Networking, and Hypertext" (where Identity wasn't actually all that important), you suddenly see the immense complexity of the Plexnet. Two of the Plexnet's things are actually the same as the web's. What matters on the web is merely the fact that "Networking" and "Hypertext" are merged—that's the entire concept of the thing, and it's revolutionary. Revolution doesn't come from complexity, it comes from a clear vector; something that's very different from what exists now. It's the difference between using long words to garrulously and ostentatiously show that you're clever when you're not, and being clever by knowing lots of things. Some people do both, of course, so it's not the best analogy!
In one way, you can look at the idea of the Plexnet as being that when you put these seven things together, you get something as revolutionary as the web. But because seven things are so much more complex than two, it's very, very difficult to see how. Indeed, when the web was first announced, it was exceedingly hard for people to get the idea of Hypertext + Internet. They just couldn't understand it at all. The Hypertext people didn't get it, and the Internet people didn't get it. Now imagine how much worse the problem is when you have seven components rather than two! It's more than 7/2 times larger, because it's the connections between the components that you have to understand; so it actually turns out to be dozens of times more complicated to understand than the web. And the web wasn't understood anyway.
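To put a very rough number on "dozens of times", here's a small sketch. It assumes, purely for illustration, that the difficulty of understanding a system grows with the number of pairwise connections between its components rather than with the components themselves:

    # A back-of-the-envelope comparison, assuming the difficulty lies in
    # the pairwise connections between components.
    def pairwise_connections(n):
        """Number of distinct pairs among n components."""
        return n * (n - 1) // 2

    print(pairwise_connections(2))  # the web: Networking + Hypertext -> 1
    print(pairwise_connections(7))  # the heptarchy -> 21

Twenty-one connections against one is only a crude figure, but it's in the right region: a couple of dozen times harder to hold in your head, not three and a half.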
So how was the web made, and what does it tell us about making similar large systems? Tim wrote the first browser in a few months, but it only ran on the NeXT cube, which was a relatively obscure system. Because of that, he got a graduate student to work on a very simple browser that would work on lots of systems. Both of the browsers were, by today's standards, exceedingly rudimentary, but they sorta worked. The first thing to be hypertextualised was the CERN phone book; but of course, since the phone book had worked fine before, it wasn't all that great an advance. It just meant that more people could access it, because of the browser that the graduate student had made.
The way the web first started taking off was by getting just a few people to run web servers, and a few people to write browsers. The idea is that the system is generic, quite simple (only two constituent technologies, Networking and Hypertext!), and fixed; so people can come along and write their own clients. Quite a few clients were made, and quite a few webpages. It grew bit by bit, quite slowly. Then came Mosaic, the first decent browser, and suddenly things really took off.
The principles here are that: a) the system was very simple; b) lots of people did the work, very scrappily, very independently, over a few years; and c) natural selection and the exponential network effect of the system took over and did the rest.
But that's a system that worked. What about gopher, and all the other systems that didn't work? Why didn't they work? And what do you do when they don't work? Well, let's take a look at a more contemporary system.
The Semantic Web is a good example of a failing system to learn from, firstly because it was designed by the same person who designed the web, so he has a proven track record, and secondly because it's not really a failure at all; it's just not catching on as rapidly as the web did. The idea of the Semantic Web is that the web is for documents but the Semantic Web is for data. So it's Data + Internet, all nicely linked together.
Even though the concept is about as simple as the web's, it hasn't taken off as fast. This appears (though it's hard to tell, since we don't have as much hindsight yet) to be because of at least two factors: first, there aren't as many use cases, because data is produced, and to some extent consumed, more scarcely than documents; and second, because the technology involved is really difficult to develop. Documents don't have much structure. Data, on the other hand, is nothing but structure! People could write web clients easily once they understood hypertext and networking, which was relatively easy for graduate students; but understanding predicate logic scaled up to the size of the web has been engaging professors the world over for about a decade now, without as much success. That's the difference, and it's a striking one.
If you're going to design a revolutionary large system, don't make it too complex, because even a little unexpected complexity can have enormous ramifications when you scale it up! And, especially, do not have redundancy.
When even systems that have only two components can fail when one of those components is a little too complicated, what chance does a system with seven components have? The answer, it would seem, is astonishingly little. So now what? What can we do about that?
One other difference between the web and the Semantic Web is that the web concentrated on use cases and was more obviously usable than the Semantic Web, which seems to be almost repellent to use cases. People love to debate technical details about the Semantic Web not only because the data aspect is so hard and brings up a lot to debate, but because the system as a whole is simply less conducive to decent use cases than the web. It's not necessarily the number of use cases that's the problem; it certainly has hundreds, and even if the web has thousands, you only really need a few to be able to start work on a system. It's that the use cases for the Semantic Web are harder to conceptualise, and the ones that are easy to conceptualise tend to have large barriers in place, such as corporations' concerns about their data.
What we need to do with the Plexnet is to focus very sharply on the revolutionary part or parts of the high level design, if there even are any, and completely ignore the rest. It's not enough to say "if we combine these seven things, we'll probably have lots of useful things resulting" because as we've seen, people will stubbornly refuse to understand the ramifications, and the system will be way too complex for anyone to actually create. Then we just need to do what we've said in Plan B already: we need to make use cases. This document is essentially in part a very long-winded but thorough justification of use cases as a way of moving forward.
But it's more than that too. Part of the reason for including the Semantic Web example is to show what you can do when you fail, and how you can maximise the benefits of failure. When you have a heptarchical system, you're dozens of times more likely to fail than the Semantic Web; and even if we can whittle it down to merely the most revolutionary parts, we're still likely to fail. But the Semantic Web isn't a failure, primarily because any complex but revolutionary system is a good thing to focus intelligent minds on; they will produce interesting ideas and solutions about the system, even if they don't manage to achieve all of the overt goals.
For those who are unaware of Plan B, the idea is that it's a three-pronged attack on the issues that we're facing, by 1) augmenting the culture and making sure that we persistently archive as much of our data as possible; 2) settling on use cases as the means of going forward with the project; and 3) sorting out all the other little related issues. This document has mainly discussed 2), but now we're seeing why 1) is so important. We can achieve things that go part way towards our ostensible goals and still be successful, as long as we have the right mindset towards that!
In other words, we mustn't think of the Plexnet as our goal. The Plexnet was actually introduced to fulfil another goal: to improve society. What we need to do is to reintroduce that goal, of improving society, on our way to creating the Plexnet. We mustn't lose sight of what we're doing, because it's not an all or nothing proposition; we must proceed by making small atomic things of value, whether they be ideas or bits of code or design patterns or whatever, and though we keep the idea of the large system in mind, we must realise that that is not our success criterion. The requirement is not to make technology for technology's sake; the Plexnet therefore isn't intrinsically important, but the belief that it is seems to be the main thing hindering us from doing anything useful at the moment.
This is, thus, the essence of Plan B more fully explicated. It is still a three-pronged idea, and in this document we've explained why 1) and 2) are very important, but the "other issues" are just as important too, and we'll encounter those on the way. The "other issues" that make up the third prong are basic engineering problems, so they can be solved using basic engineering techniques. In fact, 1) and 2) are basic engineering problems too; we just prioritised them because it was very clear that they are indeed the biggest things we need to concentrate on.
Let's get down to business. We've talked a lot about ideas and philosophy, so now what are the practical things?
1) We need to augment the culture, and improve the archiving. To augment the culture is difficult, but one way to do it is by improving communication. We need more people emailing, more people using IRC. Can we make the email archives public? Can we make the IRC logs from now on public? We need to make people more welcome. Can we have a CGI::IRC client connected to #esp on the 24weeks.com website again? Can we make sure everything is connected from a central site so that people can find all the things that we're doing? Can we all make an effort to point to where the centre of communication is, wherever it might go?
To improve the archiving takes technical solutions and social solutions. Can we have a MediaWiki installation with regular SQL dumps that can be downloaded and archived by any espian or member of the public? Can we have a single central espian domain, unrelated to the 24weeks project which is transient, where we upload as much of the data as we have from the past eight years or so? Can we make sure that as many people as possible have access to upload information to that site? Can we agree on policies for uploading information so that the URIs are as short, clear, and consistent as possible, and so that we don't tread on one another's toes? Can we all make an effort to popularise this central site? Can the MediaWiki installation be on this site? And those are just the technical things; the social things could engender an entirely new essay!
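As a rough illustration of the regular SQL dumps mentioned above, here's a minimal sketch of a nightly dump job. The database name, user, and output directory are placeholders rather than decisions, and it assumes MediaWiki is sitting on MySQL with credentials kept in ~/.my.cnf:

    # A minimal sketch of a nightly MediaWiki database dump; the database
    # name, user, and output directory below are placeholders.
    import datetime
    import subprocess

    def dump_wiki(db="wikidb", user="wikiuser", outdir="/srv/archive"):
        stamp = datetime.date.today().isoformat()
        outfile = "%s/wiki-%s.sql" % (outdir, stamp)
        with open(outfile, "w") as f:
            # mysqldump writes the schema and data as SQL to stdout; the
            # password is left to ~/.my.cnf rather than the command line.
            subprocess.check_call(["mysqldump", "--user=" + user, db], stdout=f)
        return outfile

    if __name__ == "__main__":
        print(dump_wiki())

Run something like this from cron every night, with the resulting .sql files exposed on the central site for anyone to download, and the archiving question above is at least technically answered.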
2) Use cases. Note that when we're discussing something on #esp, we tend to say things like "damn, if we had the Plexnet we'd be able to do that much better!" This seems to indicate that we all have an idea about what the Plexnet can do, but I think there is a fallacy here. The Plexnet is supposed to be revolutionary, so it's supposed to enable things, not (just) fix things. So in other words, it's very easy to apply this pattern: when something goes wrong, we simply say "the Plexnet will fix that!", and be pleased with ourselves. It can be applied to almost any problem; we want to fix everything! Does that mean that we'll have a system that fixes everything? No; just as the web didn't fix everything. It just made some hard things easier, and some previously impossible things merely hard.
Jeffarch suggested turning the 24concepts, one of the documents produced for the 24weeks project, into the first use cases. At first I thought this was an excellent suggestion, and I still think that it's an excellent thing to do eventually, but I don't think that it's the first thing we should do. First we need to fix the culture and the archiving, actually; otherwise all of this stuff isn't going to go anywhere, and there's a danger it'll be lost. Secondly, we need to work on two things in tandem: the revolutionary part of the system, the "hook" that will be equivalent to Hypertext + Networking for the web, and the things that that will enable, i.e. the use cases. This means that we're actually in a bit of a catch-22. I think that what happened with the web is that it grew almost organically in Tim's mind... On the one hand, he recognised the benefits of the interconnection of knowledge. On the other hand, he had this problem where all his information wasn't interconnected. It's only when the two grew to such a level that the situation was really unbearable, because they were concrete, that the system suddenly grew. We're not talking about system design entirely here, we're talking about how a system actually comes into being—the actual genius behind a system, before you even come to design it. With Tim, the benefit of the interconnection of information could be realised with hypertext and the internet; and the problem he had was that data at CERN was a pig to find. It's the sticking together of those two components that drove the web.
So we can come up with use cases, but without a system designed to fix those use cases, all we have is use cases. We can have a system that's designed, but without any use cases, all we have is a system with no real application. What we need is the genius spark that brings those together. What Tav, who actually came up with the Plexnet, is claiming is that he's got that genius spark sussed out already. But for the rest of us, we don't see any system (just this really ridiculously complex heptarchy thing) and we don't see any use cases. Thus, I propose that once we have a decent website and a decent community in place, we work on these two tandem things. We actually have a vague notion of a system, and a vague notion (the fallacy of "wow, we should fix that!") of the use cases. They might fall apart entirely, and that has been a very great concern of mine, and it's what's led to the TAD, the document in which we began to come up with Plan B. But now we have hope because even if the Plexnet turns out to be entirely hollow, a toothless beast, it has given us a plan for creating systems rather like it. Already, even before it exists, the Plexnet is giving us peripheral benefits. Let's work on those.
Sean B. Palmer, inamidst.com