2001: A Space Odyssey Predicted the Future 50 Years Ago

It was 1968. I was 8 years old. The space race was in full swing. For the first time, a space probe had recently landed on another planet (Venus). And I was eagerly studying everything I could to do with space. Then on April 2, 1968 (May 15 in the UK), the movie 2001: A Space Odyssey was released—and I was keen to see it.

So in the early summer of 1968 there I was, the first time I’d ever been in an actual cinema (yes, it was called that in the UK). I’d been dropped off for a matinee, and was pretty much the only person in the theater. And to this day, I remember sitting in a plush seat and eagerly waiting for the curtain to go up, and the movie to begin. It started with an impressive extraterrestrial sunrise. But then what was going on? Those weren’t space scenes. Those were landscapes, and animals. I was confused, and frankly a little bored. But just when I was getting concerned, there was a bone thrown in the air that morphed into a spacecraft, and pretty soon there was a rousing waltz—and a big space station turning majestically on the screen.

MGM/Everett Collection

The next two hours had a big effect on me. It wasn’t really the spacecraft (I’d seen plenty of them in books by then, and in fact made many of my own concept designs). And at the time I didn’t care much about the extraterrestrials. But what was new and exciting for me in the movie was the whole atmosphere of a world full of technology—and the notion of what might be possible there, with all those bright screens doing things, and, yes, computers driving it all.

It would be another year before I saw my first actual computer in real life. But those two hours in 1968 watching 2001 defined an image of what the computational future could be like, that I carried around for years.

I think it was during the intermission to the movie that some seller of refreshments—perhaps charmed by a solitary kid so earnestly pondering the movie—gave me a "cinema program" about the movie. Half a century later I still have that program, complete with a food stain, and faded writing from my 8-year-old self, recording (with some misspelling) where and when I saw the movie.

What Actually Happened

A lot has happened in the past 50 years, particularly in technology, and it’s an interesting experience for me to watch 2001 again—and compare what it predicted with what’s actually happened. Of course, some of what’s actually been built over the past 50 years has been done by people like me, who were influenced in larger or smaller ways by 2001.

When Wolfram|Alpha was launched in 2009—showing some distinctly HAL-like characteristics—we paid a little homage to 2001 in our failure message (needless to say, one piece of notable feedback we got at the beginning was someone asking: "How did you know my name was Dave?!").

Wolfram Research

One very obvious prediction of 2001 that hasn’t panned out, at least yet, is routine, luxurious space travel. But like many other things in the movie, it doesn’t feel like what was predicted was off track; it’s just that—50 years later—we still haven’t got there yet.

So what about the computers in the movie? Well, they have lots of flat-screen displays, just like real computers today. In the movie, though, one obvious difference is that there’s one physical display per functional area; the notion of windows, or dynamically changeable display areas, hadn’t arisen yet.

Another difference is in how the computers are controlled. Yes, you can talk to HAL. But otherwise, it’s lots and lots of mechanical buttons. To be fair, cockpits today still have plenty of buttons—but the centerpiece is now a display. And, yes, in the movie there weren’t any touchscreens—or mice. (Both had actually been invented a few years before the movie was made, but neither was widely known.)

There also aren’t any keyboards to be seen (and in the high-tech spacecraft full of computers going to Jupiter, the astronauts are writing with pens on clipboards; presciently, no slide rules and no tape are shown—though there is one moment when a printout that looks awfully like a punched card is produced). Of course, there were keyboards for computers back in the 1960s. But in those days, very few people could type, and there probably didn’t seem to be any reason to think that would change. (Being something of a committed tool user, I myself was routinely using a typewriter even in 1968, though I didn’t know any other kids who were—and my hands at the time weren’t big or strong enough to do much other than type fast with one finger, a skill whose utility returned decades later with the advent of smartphones.)

What about the content of the computer displays? That might have been my favorite thing in the whole movie. They were so graphical, and communicating so much information so quickly. I had seen plenty of diagrams in books, and had even painstakingly drawn quite a few myself. But back in 1968 it was amazing to imagine that a computer could generate information, and display it graphically, so quickly.

MGM/Everett

Of course there was television (though color only arrived in the UK in 1968, and I’d only seen black and white). But television wasn’t generating images; it was just showing what a camera saw. There were oscilloscopes too, but they just had a single dot tracing out a line on the screen. So the computer displays in 2001 were, at least for me, something completely new.

At the time it didn’t seem odd that in the movie there were lots of printed directions (how to use the "Picturephone," or the zero-gravity toilet, or the hibernation modules). Today, any such instructions (and they’d surely be much shorter, or at least broken up a lot, for today’s less patient readers) would be shown onscreen. But when 2001 was made, the idea of word processing, and of displaying text to read onscreen, was still several years in the future—probably not least because at the time people thought of computers as machines for calculation, and there didn’t seem to be anything calculational about text.

There are lots of different things shown on the displays in 2001.  Even though there isn’t the idea of dynamically movable windows, the individual displays, when they’re not showing anything, go into a kind of iconic state, just showing in large letters codes like NAV or ATM or FLX or VEH or GDE.

When the displays are active they sometimes show things like tables of numbers, and sometimes show lightly animated versions of a whole variety of textbook-like diagrams. A few of them show 1980s-style animated 3D line graphics ("what’s the alignment of the spacecraft?", etc.)—perhaps modeled after analog airplane controls. But very often there’s also something else—and occasionally it fills a whole display. There’s something that looks like code, or a mixture of code and math.

It’s usually in a fairly modern-looking sans serif font (well, actually, a font called Manifold for IBM Selectric electric typewriters). Everything’s uppercase. And with stars and parentheses and names like TRAJ04, it looks a bit like early Fortran code (except that given the profusion of semicolons, it was more likely modeled on IBM’s PL/I language). But then there are also superscripts, and built-up fractions—like math.

Looking at this now, it’s a bit like trying to decode an alien language. What did the makers of the movie intend this to be about? A few pieces make sense to me. But a lot of it looks random and nonsensical—meaningless formulas full of unreasonably high-precision numbers. Considering all the care put into the making of 2001, this seems like a rare lapse—though perhaps 2001 started the long and somewhat unfortunate tradition of showing meaningless code in movies. (A recent counterexample is my son Christopher’s alien-language-analysis code for Arrival, which is actual Wolfram Language code that genuinely makes the visualizations shown.)

But would it actually make sense to show any form of code on real displays like the ones in 2001? After all, the astronauts aren’t supposed to be building the spacecraft; they’re only operating it. But here’s a place where the future is only just now arriving. During most of the history of computing, code has been something that humans write, and computers read. But one of my goals with the Wolfram Language is to create a true computational communication language that is high-level enough that not only computers, but also humans, can usefully read it.

Yes, one might be able to describe in words some procedure that a spacecraft is executing. But one of the points of the Wolfram Language is to be able to state the procedure in a form that directly fits in with human computational thinking. So, yes, on the first real manned spacecraft going to Jupiter, it’ll make perfect sense to display code, though it won’t look quite like what’s in 2001.

Accidents of History

I’ve watched 2001 several times over the years, though not specifically in the year 2001 (that year for me was dominated by finishing my magnum opus A New Kind of Science). But there are several very obvious things in the movie 2001 that don’t ring true for the real year 2001—quite beyond the very different state of space travel.

One of the most obvious is that the haircuts and clothing styles and general formality look wrong. Of course these would have been very hard to predict. But perhaps one could at least have anticipated (given the hippie movement etc.) that clothing styles and so on would get less formal. But back in 1968, I certainly remember for example getting dressed up even to go on an airplane.

Another thing that today doesn’t look right in the movie is that nobody has a personal computer. Of course, back in 1968 there were still only a few thousand computers in the whole world—each weighing at least some significant fraction of a ton—and basically nobody imagined that one day individual people would have computers, and be able to carry them around.

As it happens, back in 1968 I’d recently been given a little plastic kit mechanical computer (called Digi-Comp I) that could (very laboriously) do 3-digit binary operations. But I think it’s fair to say that I had absolutely no grasp of how this could scale up to something like the computers in 2001. And indeed when I saw 2001 I imagined that to have access to technology like I saw in the movie, I’d have to be joining something like NASA when I was grown up.

What of course I didn’t foresee—and I’m not sure anyone did—is that consumer electronics would become so small and cheap. And that access to computers and computation would therefore become so ubiquitous.

In the movie, there’s a sequence where the astronauts are trying to troubleshoot a piece of electronics. Lots of nice computer-aided, engineering-style displays come up. But they’re all of printed circuit boards with discrete components. There are no integrated circuits or microprocessors—which isn’t surprising, because in 1968 these basically hadn’t been invented yet. (Correctly, there aren’t vacuum tubes, though. Apparently the actual prop used—at least for exterior views—was a gyroscope.)

It’s interesting to see all sorts of little features of technology that weren’t predicted in the movie. For example, when they’re taking commemorative pictures in front of the monolith on the Moon, the photographer keeps tipping the camera after each shot—presumably to advance the film inside. The idea of digital cameras that could electronically take pictures simply hadn’t been imagined then.

In the history of technology, there are certain things that just seem inevitable—even though sometimes they may take decades to finally arrive. An example is videophones. There were early ones even back in the 1930s. And there were attempts to consumerize them in the 1970s and 1980s. But even by the 1990s they were still exotic—though I remember that with some effort I successfully rented a pair of them in 1993—and they worked OK, even over regular phone lines.

On the space station in 2001, there’s a Picturephone shown, complete with an AT&T logo—though it’s the old Bell System logo that looks like an actual bell. And as it happens, when 2001 was being made, there was a real project at AT&T called the Picturephone.

Of course, in 2001 the Picturephone isn’t a cellphone or a mobile device. It’s a built-in object, in a kiosk—a pay Picturephone. In the actual course of history, though, the rise of cellphones occurred before the consumerization of videochat—so payphone and videochat technology basically never overlapped.

Also interesting in 2001 is that the Picturephone is a push-button phone, with exactly the same numeric button layout as today (though without the * and # ["octothorp"]). Push-button phones actually already existed in 1968, although they were not yet widely deployed. And, of course, because of the details of our technology today, when one actually does a videochat, I don’t know of any scenario in which one ends up pushing mechanical buttons.

There’s a long list of instructions printed on the Picturephone—but in actuality, just like today, its operation seems quite straightforward. Back in 1968, though, even direct long-distance dialing (without an operator) was fairly new—and wasn’t yet possible at all between different countries.

To use the Picturephone in 2001, one inserts a credit card. Credit cards had existed for a while even in 1968, though they were not terribly widely used. The idea of automatically reading credit cards (say, using a magnetic stripe) had actually been developed in 1960, but it didn’t become common until the 1980s. (I remember that in the mid-1970s in the UK, when I got my first ATM card, it consisted simply of a piece of plastic with holes like a punched card—not the most secure setup one can imagine.)

At the end of the Picturephone call in 2001, there’s a charge displayed: $1.70. Correcting for inflation, that would be about $12 today. By the standards of modern cellphones—or internet videochatting—that’s very expensive. But for a present-day satellite phone, it’s not so far off, even for an audio call. (Today’s handheld satphones can’t actually support the necessary data rates for videocalls, and networks on planes still struggle to handle videocalls.)

On the space shuttle (or, perhaps better, space plane) the cabin looks very much like a modern airplane—which probably isn’t surprising, because things like Boeing 737s already existed in 1968. But in a correct (at least for now) modern touch, the seat backs have TVs—controlled, of course, by a row of buttons. (And there’s also futuristic-for-the-1960s programming, like a televised women’s judo match.)

A curious film-school-like fact about 2001 is that essentially every major scene in the movie (except the ones centered on HAL) shows the consumption of food. But how would food be delivered in the year 2001? Well, like everything else, it was assumed that it would be more automated, with the result that in the movie a variety of elaborate food dispensers are shown. As it’s turned out, however, at least for now, food delivery is something that’s kept humans firmly in the loop (think McDonald’s, Starbucks, etc.).

In the part of the movie concerned with going to Jupiter, there are "hibernaculum pods" shown—with people inside in hibernation. And above these pods there are vital-sign displays that look very much like modern ICU displays. In a sense, that was not such a stretch of a prediction, because even in 1968, there had already been oscilloscope-style EKG displays for some time.

Of course, how to put people into hibernation isn’t something that’s yet been figured out in real life. That it—and cryonics—should be possible has been predicted for perhaps a century. And my guess is that—like cloning or gene editing—to do it will take inventing some clever tricks. But in the end I expect it will pretty much seem like a historical accident in which year it’s figured out. It just so happens not to have happened yet.

There’s a scene in 2001 where one of the characters arrives on the space station and goes through some kind of immigration control (called "Documentation")—perhaps imagined to be set up as some kind of extension to the Outer Space Treaty from 1967. But what’s particularly notable in the movie is that the clearance process is handled automatically, using biometrics, or specifically, voiceprint identification. (The US insignia displayed are identical to the ones on today’s US passports, but in typical pre-1980s form, there’s a request for "surname" and "Christian name.")

There had been primitive voice recognition systems even in the 1950s ("what digit is that?"), and the idea of identifying speakers by voice was certainly known. But what was surely not obvious is that serious voice systems would need the kind of computer processing power that only became available in the late 2000s.

And in just the last few years, automatic biometric immigration control systems have started to become common at airports—though using face and sometimes fingerprint recognition rather than voice. (Yes, it probably wouldn’t work well to have lots of people talking at different kiosks at the same time.)

In the movie, the kiosk has buttons for different languages: English, Dutch, Russian, French, Italian, Japanese. It would have been very hard to predict what a more appropriate list for 2001 might have been.

Even though 1968 was still in the middle of the Cold War, the movie correctly portrays international use of the space station—though, like in Antarctica today, it portrays separate moon bases for different countries. Of course, the movie talks about the Soviet Union. But the fact that the Berlin Wall would fall 21 years after 1968 isn’t the kind of thing that ever seems predictable in human history.

The movie shows logos from quite a few companies as well. The space shuttle is proudly branded Pan Am. And in at least one scene, its instrument panel has "IBM" in the middle. (There’s also an IBM logo on spacesuit controls during an EVA near Jupiter.)  On the space station there are two hotels shown: Hilton and Howard Johnson’s. There’s also a Whirlpool "TV dinner" dispenser in the galley of the spacecraft going to the Moon. And there’s the AT&T (Bell System) Picturephone, as well as an Aeroflot bag, and a BBC newscast. (The channel is "BBC 12," though in reality the expansion has only been from BBC 2 to BBC 4 in the past 50 years.) Companies have obviously risen and fallen over the course of 50 years, but it’s interesting how many of the ones featured in the movie still exist, at least in some form. Many of their logos are even almost the same—though AT&T and BBC are two exceptions, and the IBM logo got stripes added in 1972.

It’s also interesting to look at the fonts used in the movie. Some seem quite dated to us today, while others (like the title font) look absolutely modern. But what’s strange is that at times over the past 50 years some of those modern fonts would have seemed old and tired. But such, I suppose, is the nature of fashion. And it’s worth remembering that even those serifed fonts from stone inscriptions in ancient Rome are perfectly capable of looking sharp and modern.

Something else that’s changed since 1968 is how people talk, and the words they use. The change seems particularly notable in the technospeak. "We are running cross-checking routines to determine reliability of this conclusion" sounds fine for the 1960s, but not so much for today. There’s mention of the risk of "social disorientation" without "adequate preparation and conditioning," reflecting a kind of behaviorist view of psychology that at least wouldn’t be expressed the same way today.

It’s sort of charming when a character in  2001 says that whenever they "phone" a moon base, they get "a recording which repeats that the phone lines are temporarily out of order." One might not say something too different about landlines on Earth today, but it feels like with a moon base one should at least be talking about automatically finding out if their network is down, rather than about having a person call on the phone and listen to a recorded message.

Of course, had a character in 2001 talked about "not being able to ping their servers," or "getting 100% packet loss," it would have been completely incomprehensible to 1960s movie-goers—because those are concepts of a digital world which basically had just not been invented yet (even though the elements for it definitely existed).

What About HAL?

MGM/Everett Collection

The most notable and enduring character from 2001 is surely the HAL 9000 computer, described (with exactly the same words as might be used today) as "the latest in machine intelligence." HAL talks, lipreads, plays chess, recognizes faces from sketches, comments on artwork, does psychological evaluations, reads from sensors and cameras all over the spaceship, predicts when electronics will fail, and—notably to the plot—shows a variety of human-like emotional responses.

It might seem remarkable that all these AI-like capabilities would be predicted in the 1960s. But actually, back then, nobody yet thought that AI would be hard to create—and it was widely assumed that before too long computers would be able to do pretty much everything humans can, though probably better and faster and on a larger scale.

But already by the 1970s it was clear that things weren’t going to be so easy, and before long the whole field of AI basically fell into disrepute—with the idea of creating something like HAL beginning to seem as fictional as digging up extraterrestrial artifacts on the Moon.

In the movie, HAL’s birthday is January 12, 1992 (though in the book version of 2001, it was 1997). And in 1997, in Urbana, Illinois, fictional birthplace of HAL (and, also, as it happens, the headquarters location of my company), I went to a celebration of HAL’s fictional birthday. People talked about all sorts of technologies relevant to HAL. But to me the most striking thing was how low the expectations had become. Almost nobody even seemed to want to mention "general AI" (probably for fear of appearing kooky), and instead people were focusing on solving very specific problems, with specific pieces of hardware and software.

Having read plenty of popular science (and some science fiction) in the 1960s, I certainly started from the assumption that one day HAL-like AIs would exist. And in fact I remember that in 1972 I happened to end up delivering a speech to my whole school—and picked as my topic what amounts to AI ethics. I’m afraid that what I said I would now consider naive and misguided (and in fact I was perhaps partly misled by 2001). But, heck, I was only 12 at the time. And what I find interesting today is just that I thought AI was an important topic even back then.

For the remainder of the 1970s I was personally mostly very focused on physics (which, unlike AI, was thriving at the time). AI was still in the back of my mind, though, when for example I wanted to understand how brains might or might not relate to statistical physics and to things like the formation of complexity. But what made AI really important again for me was that in 1981 I had launched my first computer language (SMP) and had seen how successful it was at doing mathematical and scientific computations—and I got to wondering what it would take to do computations about (and know about) everything.

My immediate assumption was that it would require full brain-like capabilities, and therefore general AI. But having just lived through so many advances in physics, this didn’t immediately faze me. And in fact, I even had a fairly specific plan. You see, SMP—like the Wolfram Language today—was fundamentally based on the idea of defining transformations to apply when expressions match particular patterns. I always viewed this as a rough idealization of certain forms of human thinking. And what I thought was that general AI might effectively just require adding a way to match not just precise patterns, but also approximate ones (e.g. "that’s a picture of an elephant, even though its pixels aren’t exactly the same as in the sample").
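
To give a flavor of what that means in practice, here is a minimal sketch in the Wolfram Language (with made-up symbols f, g and h, purely for illustration) of defining transformations that apply when expressions match particular patterns:

    (* transformation rules: rewrite anything that matches the pattern on the left *)
    rules = {f[x_] :> x^2, g[x_, y_] :> x + y};

    (* expressions that match get transformed; anything else is left alone *)
    {f[3], g[2, 5], h[7]} /. rules
    (* -> {9, 7, h[7]} *)

Approximate matching, in this picture, would amount to letting a pattern like x_ match things that are only roughly of the required form, rather than exactly.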

I tried a variety of schemes for doing this, one of them being neural nets. But somehow I could never formulate experiments that were simple enough to even have a clear definition of success. But by making simplifications to neural nets and a couple of other kinds of systems, I ended up coming up with cellular automata—which quickly allowed me to make some discoveries that started me on my long journey of studying the computational universe of simple programs, and made me set aside approximate pattern matching and the problem of AI.

At the time of HAL’s fictional birthday in 1997, I was actually right in the middle of my intense 10-year process of exploring the computational universe and writing A New Kind of Science—and it was only out of my great respect for 2001 that I agreed to break out of being a hermit for a day and talk about HAL.

It so happened that just three weeks before there had been the news of the successful cloning of Dolly the sheep.

And, as I pointed out, just like general AI, people had discussed cloning mammals for ages. But it had been assumed to be impossible, and almost nobody had worked on it—until the success with Dolly. I wasn’t sure what kind of discovery or insight would lead to progress in AI. But I felt certain that eventually it would come.

Meanwhile, from my study of the computational universe, I’d formulated my Principle of Computational Equivalence—which had important things to say about artificial intelligence. And at some level, what it said is that there isn’t some magic bright line that separates the intelligent from the merely computational.

Emboldened by this—and with the Wolfram Language as a tool—I then started thinking again about my quest to solve the problem of computational knowledge. It certainly wasn’t an easy thing. But after quite a few years of work, in 2009, there it was: Wolfram|Alpha—a general computational knowledge engine with a lot of knowledge about the world. And particularly after Wolfram|Alpha was integrated with voice input and voice output in things like Siri, it started to seem in many ways quite HAL-like.

HAL in the movie had some more tricks, though. Of course he had specific knowledge about the spacecraft he was running—a bit like the custom Enterprise Wolfram|Alpha systems that now exist at various large corporations. But he had other capabilities too—like being able to do visual recognition tasks.

And as computer science developed, such tasks had hardened into tough nuts that, it seemed, computers basically just couldn’t crack. To be fair, there was lots of practical progress in things like OCR for text, and face recognition. But it didn’t feel general. And then in 2012, there was a surprise: a trained neural net was suddenly discovered to perform really well on standard image recognition tasks.

It was a strange situation. Neural nets had first been discussed in the 1940s, and had seen several rounds of waxing and waning enthusiasm over the decades. But suddenly just a few years ago they really started working. And a whole bunch of HAL-like tasks that had seemed out of range suddenly began to seem achievable.

In 2001, there’s the idea that HAL wasn’t just programmed, but somehow learned. And in fact HAL mentions at one point that HAL had a (human) teacher. And perhaps the gap between HAL’s creation in 1992 and deployment in 2001 was intended to correspond to HAL’s human-like period of education. (Arthur C. Clarke probably changed the birth year to 1997 for the book because he thought that a 9-year-old computer would be obsolete.)

But the most important thing that’s made modern machine learning systems actually start to work is precisely that they haven’t been trained at human-type rates. Instead, they’ve immediately been fed millions or billions of example inputs—and then they’ve been expected to burn huge amounts of CPU time systematically finding what amount to progressively better fits to those examples. (It’s conceivable that an active learning machine could be set up to basically find the examples it needs within a human-schoolroom-like environment, but this isn’t how the most important successes in current machine learning have been achieved.)

So can machines now do what HAL does in the movie? Unlike a lot of the tasks presumably needed to run an actual spaceship, most of the tasks the movie concentrates on HAL doing are ones that seem quintessentially human. And most of these turn out to be well-suited to modern machine learning—and month by month more and more of them have now been successfully tackled.

But what about knitting all these tasks together, to make a complete HAL? One could conceivably imagine having some giant neural net, and training it for all aspects of life. But this doesn’t seem like a good way to do things. After all, if we’re doing celestial mechanics to work out the trajectory of a spacecraft, we don’t have to do it by matching examples; we can do it by actual calculation, using the achievements of mathematical science.

We need our HAL to be able to know about a lot of kinds of things, and to be able to compute about a lot of kinds of things, including ones that involve human-like recognition and judgement.

In the book version of 2001, the name HAL was said to stand for Heuristically programmed ALgorithmic computer. And the way Arthur C. Clarke explained it is that this was supposed to mean "it can work on a program that’s already set up, or it can look around for better solutions and you get the best of both worlds."

And at least in some vague sense, this is actually a pretty good description of what I’ve built over the past 30 years as the Wolfram Language. The programs that are already set up happen to try to encompass a lot of the systematic knowledge about computation and about the world that our civilization has accumulated.

But there’s also the concept of searching for new programs. And actually the science that I’ve done has led me to do a lot of work searching for programs in the computational universe of all possible programs. We’ve had many successes in finding useful programs that way, although the process is not as systematic as one might like.

In recent years, the Wolfram Language has also incorporated modern machine learning—in which one is effectively also searching for programs, though in a restricted domain defined for example by weights in a neural network, and constructed so that incremental improvement is possible.
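
As a toy illustration of that kind of incremental search (a minimal sketch in the Wolfram Language, with made-up data and a single weight, not how any production training system works), one can repeatedly nudge a weight in whatever direction reduces the error on the examples:

    (* made-up examples of the target function y = 3 x *)
    examples = Table[{x, 3. x}, {x, -1, 1, 0.1}];

    (* start from an arbitrary weight and improve it step by step *)
    w = 0.;
    Do[
      grad = Sum[2 (w ex[[1]] - ex[[2]]) ex[[1]], {ex, examples}];  (* gradient of the squared error *)
      w = w - 0.01 grad,
      {200}];
    w
    (* ends up very close to 3 *)

Real neural networks involve vastly more weights and more elaborate update rules, but the underlying loop of incrementally adjusting weights to fit the examples is the same.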

Could we now build a HAL with the Wolfram Language? I think we could at least get close. It seems well within range to be able to talk to HAL in natural language about all sorts of relevant things, and to have HAL use knowledge-based computation to control and figure out things about the spaceship (including, for example, simulating components of it).

The "computer as everyday conversation companion" side of things is less well developed, not least because it’s not as clear what the objective might be there. But it’s certainly my hope that in the next few years—in part to support applications like computational smart contracts (and yes, it would have been good to have one of those set up for HAL)—things like my symbolic discourse language project will provide a general framework for doing this.

“Incapable of Error”

Do computers make mistakes? When the first electronic computers were made in the 1940s and 1950s, the big issue was whether the hardware in them was reliable. Did the electrical signals do what they were supposed to, or did they get disrupted, say because a moth ("bug") flew inside the computer?

By the time mainframe computers were developed in the early 1960s, such hardware issues were pretty well under control. And so in some sense one could say (and marketing material did) that computers were perfectly reliable.

HAL reflects this sentiment in 2001. "The 9000 series is the most reliable computer ever made. No 9000 computer has ever made a mistake or distorted information. We are all, by any practical definition of the words, foolproof and incapable of error."

From a modern point of view, saying this kind of thing seems absurd. After all, everyone knows that computer systems—or, more specifically, software systems—inevitably have bugs. But in 1968, bugs weren’t really understood.

After all, computers were supposed to be perfect, logical machines. And so, the thinking went, they must operate in a perfect way. And if anything went wrong, it must, as HAL says in the movie, "be attributable to human error." Or, in other words, if the human were smart and careful enough, the computer would always do the right thing.

When Alan Turing did his original theoretical work in 1936 to show that universal computers could exist, he did it by writing what amounts to a program for his proposed universal Turing machine. And even in this very first program (which is only a page long), it turns out that there were already bugs.

But, OK, one might say, with enough effort, surely one can get rid of any possible bug. Well, here’s the problem: to do so requires effectively foreseeing every aspect of what one’s program could ever do. But in a sense, if one were able to do that, one almost doesn’t need the program in the first place.

And actually, pretty much any program that’s doing nontrivial things is likely to show what I call computational irreducibility, which implies that there’s no way to systematically shortcut what the program does. To find out what it does, there’s basically no choice but to run it and watch what it does. Sometimes this might be seen as a desirable feature—for example if one’s setting up a cryptocurrency that one wants to take irreducible effort to mine.

And, actually, if there isn’t computational irreducibility in a computation, then it’s a sign that the computation isn’t being done as efficiently as it could be.

What is a bug? One might define it as a program doing something one doesn’t want. So maybe we want the pattern created by some very simple program (say, a cellular automaton) to never die out. But the point is that there may be no way, short of running the program forever, to answer the halting-problem-like question of whether it does in fact die out. So, in other words, figuring out if the program "has a bug" and does something one doesn’t want may be infinitely hard.
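
As a concrete illustration (a minimal sketch in the Wolfram Language, with an arbitrarily chosen rule and initial condition), one can evolve a simple cellular automaton and check whether its pattern has died out yet:

    (* evolve an elementary cellular automaton from a single black cell *)
    steps = CellularAutomaton[30, {{1}, 0}, 200];

    (* has the pattern died out (all cells white) at any step so far? *)
    FirstPosition[Total /@ steps, 0]
    (* Missing["NotFound"]: it has not died out within 200 steps *)

But running 200 steps, or 2,000, only answers the question for that many steps; computational irreducibility means there may be no way to shortcut to a once-and-for-all answer.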

And of course we know that bugs are not just a theoretical problem; they exist in all large-scale practical software. And unless HAL only does things that are so simple that we foresee every aspect of them, it’s basically inevitable that HAL will exhibit bugs.

But maybe, one might think, HAL could at least be given some overall directives—like be nice to humans, or other potential principles of AI ethics. But here’s the problem: given any precise specification, it’s inevitable that there will be unintended consequences. One might say these are bugs in the specification, but the problem is they’re inevitable. When computational irreducibility is present, there’s basically never any finite specification that can avoid every conceivable unintended consequence.

Or, said in terms of 2001, it’s inevitable that HAL will be capable of exhibiting unexpected behavior. It’s just a consequence of being a system that does sophisticated computation. It lets HAL show creativity and take initiative. But it also means HAL’s behavior can’t ever be completely predicted.

The basic theoretical underpinnings to know this already existed in the 1950s or even earlier. But it took experience with actual complex computer systems in the 1970s and 1980s for intuition about bugs to develop. And it took my explorations of the computational universe in the 1980s and 1990s to make it clear how ubiquitous the phenomenon of computational irreducibility actually is, and how much it affects basically any sufficiently broad specification.

How Did They Get It Right?

It’s interesting to see what the makers of 2001 got wrong about the future, but it’s impressive how much they got right. So how did they do it? Well, between Stanley Kubrick and Arthur C. Clarke (and their scientific consultant Fred Ordway III), they solicited input from a fair fraction of the top technology companies of the day—and (though there’s nothing in the movie credits about them) received a surprising amount of detailed information about the plans and aspirations of these companies, along with quite a few designs custom-made for the movie as a kind of product placement.

In the very first space scene in the movie, for example, one sees an assortment of differently shaped spacecraft that were based on concept designs from the likes of Boeing, Grumman and General Dynamics, as well as NASA. (In the movie, there are no aerospace manufacturer logos—and NASA also doesn’t get a mention; instead the assorted spacecraft carry the flags of various countries.)

But so where did the notion of having an intelligent computer come from? I don’t think it had an external source. I think it was just an idea that was very much in the air at the time. My late friend Marvin Minsky, who was one of the pioneers of AI in the 1960s, visited the set of 2001 during its filming. But Kubrick apparently didn’t ask him about AI; instead he asked about things like computer graphics, the naturalness of computer voices, and robotics. (Marvin claimed to have suggested the configuration of arms that was used for the pods on the Jupiter spacecraft.)

But what about the details of HAL? Where did those come from? The answer is that they came from IBM.

IBM was at the time by far the world’s largest computer company, and it also conveniently happened to be headquartered in New York City, which is where Kubrick and Clarke were doing their work. IBM—as now—was always working on advanced concepts that they could demo. They worked on voice recognition. They worked on image recognition. They worked on computer chess. In fact, they worked on pretty much all the specific technical features of HAL shown in 2001. Many of these features are even shown in the "Information Machine" movie IBM made for the 1964 World’s Fair in New York City (though, curiously, that movie has a dynamic multi-window form of presentation that wasn’t adopted for HAL).

In 1964, IBM had proudly introduced their System/360 mainframe computers. And the rhetoric about HAL having a flawless operational record could almost be out of IBM’s marketing material for the 360. And of course HAL was physically big—like a mainframe computer (actually even big enough that a person could go inside the computer). But there was one thing about HAL that was very non-IBM. Back then, IBM always strenuously avoided ever saying that computers could themselves be smart; they just emphasized that computers would do what people told them to. (Somewhat ironically, the internal slogan that IBM used for its employees was "Think." It took until the 1980s for IBM to start talking about computers as smart—and for example in 1980 when my friend Greg Chaitin was advising the then-head of research at IBM he was told it was deliberate policy not to pursue AI, because IBM didn’t want its human customers to fear they might be replaced by AIs.)

An interesting letter from 1966 surfaced recently. In it, Kubrick asks one of his producers (a certain Roger Caras, who later became well known as a wildlife TV personality): "Does I.B.M. know that one of the main themes of the story is a psychotic computer?" Kubrick is concerned that they will feel swindled. The producer writes back, talking about IBM as "the technical advisor for the computer," and saying that IBM will be OK so long as they are "not associated with the equipment failure by name."

But was HAL supposed to be an IBM computer? The IBM logo appears a couple of times in the movie, but not on HAL. Instead, HAL has a nameplate with "HAL" written on blue, followed by "9000" written on black.

It’s certainly interesting that the blue is quite like IBM’s characteristic "big blue" blue. It’s also very curious that if you go one step forward in the alphabet from the letters H A L, you get I B M. Arthur C. Clarke always claimed this was a coincidence, and it probably was. But my guess is that at some point, that blue part of HAL’s nameplate was going to say "IBM."

Like some other companies, IBM was fond of naming its products with numbers. And it’s interesting to look at what numbers they used. In the 1960s, there were a lot of 3- and 4-digit numbers starting with 3’s and 7’s, including a whole 7000 series, etc. But, rather curiously, there was not a single one starting with 9: there was no IBM 9000 series. In fact, IBM didn’t have a single product whose name started with 9 until the 1990s. And I suspect that was due to HAL.

By the way, the IBM liaison for the movie was their head of PR, C. C. Hollister, who was interviewed in 1964 by the New York Times about why IBM—unlike its competitors—ran general advertising (think Super Bowl), given that only a thin stratum of corporate executives actually made purchasing decisions about computers. He responded that their ads were "designed to reach… the articulators or the 8 million to 10 million people that influence opinion on all levels of the nation’s life" (today one would say "opinion makers," not "articulators"). He then added "It is important that important people understand what a computer is and what it can do." And in some sense, that’s what HAL did, though not in the way Hollister might have expected.

Predicting the Future

OK, so now we know—at least over the span of 50 years—what happened to the predictions from 2001, and in effect how science fiction did (or did not) turn into science fact. So what does this tell us about predictions we might make today?

In my observation things break into three basic categories. First, there are things people have been talking about for years, that will eventually happen—though it’s not clear when. Second, there are surprises that basically nobody expects, though sometimes in retrospect they may seem somewhat obvious. And third, there are things people talk about, but that potentially just won’t ever be possible in our universe, given how its physics works.

Something people have talked about for ages, that surely will eventually happen, is routine space travel. When 2001 was released, no humans had ever ventured beyond Earth orbit. But even by the very next year, they’d landed on the Moon. And 2001 made what might have seemed like a reasonable prediction that by the year 2001 people would routinely be traveling to the Moon, and would be able to get as far as Jupiter.

Now of course in reality this didn’t happen. But actually it probably could have, if it had been considered a sufficient priority. But there just wasn’t the motivation for it. Yes, space has always been more broadly popular than, say, ocean exploration. But it didn’t seem important enough to put the necessary resources into.

Will it ever happen? I think it’s basically a certainty. But will it take 5 years or 50? It’s very hard to tell—though based on recent developments I would guess about halfway between.

People have been talking about space travel for well over a hundred years. They’ve been talking about what’s now called AI for even longer. And, yes, at times there’ve been arguments about how some feature of human intelligence is so fundamentally special that AI will never capture it. But I think it’s pretty clear at this point that AI is on an inexorable path to reproduce any and all features of whatever we would call intelligence.

A more mundane example of what one might call inexorable technology development is videophones. Once one had phones and one had television, it was sort of inevitable that eventually one would have videophones. And, yes, there were prototypes in the 1960s. But for detailed reasons of computer and telecom capacity and cost, videophone technology didn’t really become broadly available for a few more decades. But it was basically inevitable that it eventually would.

In science fiction, basically ever since radio was invented, it was common to imagine that in the future everyone would be able to communicate through radio instantly. And, yes, it took the better part of a century. But eventually we got cellphones. And in time we got smartphones that could serve as magic maps, and magic mirrors, and much more.

An example that’s today still at an earlier stage in its development is virtual reality. I remember back in the 1980s trying out early VR systems. But back then, they never really caught on. But I think it’s basically inevitable that they eventually will. Perhaps it will require having video that’s at the same quality level as human vision (as audio has now been for a couple of decades). And whether it’s exactly VR, or instead augmented reality, that eventually becomes widespread is not clear. But something like that surely will. Though exactly when is not clear.

There are endless examples one can cite. People have been talking about self-driving cars since at least the 1960s. And eventually they will exist. People have talked about flying cars for even longer. Maybe helicopters could have gone in this direction, but for detailed reasons of control and reliability that didn’t work out. Maybe modern drones will solve the problem. But again, eventually there will be flying cars. It’s just not clear exactly when.

Similarly, there will eventually be robotics everywhere. I have to say that this is something I’ve been hearing will soon happen for more than 50 years, and progress has been remarkably slow. But my guess is that once it’s finally figured out how to really do general-purpose robotics—like we can do general-purpose computation—things will advance very quickly.

And actually there’s a theme that’s very clear over the past 50+ years: what once required the creation of special devices is eventually possible by programming something that is general purpose. In other words, instead of relying on the structure of physical devices, one builds up capabilities using computation.

What is the end point of this? Basically it’s that eventually everything will be programmable right down to atomic scales. In other words, instead of specifically constructing computers, we’ll basically build everything out of computers. To me, this seems like an inevitable outcome. Though it happens to be one that hasn’t yet been much discussed, or, say, explored in science fiction.

Returning to more mundane examples, there are other things that will surely be possible one day, like drilling into the Earth’s mantle, or having cities under the ocean (both subjects of science fiction in the past—and there’s even an ad for a Pan Am Underwater Hotel visible on the space station in 2001). But whether these kinds of things will be considered worth doing is not so clear. Bringing back dinosaurs? It’ll surely be possible to get a good approximation to their DNA. How long all the necessary bioscience developments will take I don’t know, but one day one will surely be able to have a live stegosaurus again.

Perhaps one of the oldest science fiction ideas ever is immortality. And, yes, human lifespans have been increasing. But will there come a point where humans can for practical purposes be immortal? I am quite certain that there will. Quite whether the path will be primarily biological, or primarily digital, or some combination involving molecular-scale technology, I do not know. And quite what it will all mean, given the inevitable presence of an infinite number of possible bugs (today’s medical conditions), I am not sure. But I consider it a certainty that eventually the old idea of human immortality will become a reality. (Curiously, Kubrick—who was something of an enthusiast for things like cryonics—said in an interview in 1968 that one of the things he thought might have happened by the year 2001 is the elimination of old age.)

So what’s an example of something that won’t happen? There’s a lot we can’t be sure about without knowing the fundamental theory of physics. (And even given such a theory, computational irreducibility means it can be arbitrarily hard to work out the consequence for some particular issue.)  But two decent candidates for things that won’t ever happen are Honey-I-Shrunk-the-Kids miniaturization and faster-than-light travel.

Well, at least these things don’t seem likely to happen the way they are typically portrayed in science fiction. But it’s still possible that things that are somehow functionally equivalent will happen. For example, it perfectly well could be possible to scan an object at an atomic scale, and then reinterpret it, and build up using molecular-scale construction at least a very good approximation to it that happens to be much smaller.

What about faster-than-light travel? Well, maybe one will be able to deform spacetime enough that it’ll effectively be possible. Or conceivably one will be able to use quantum mechanics to effectively achieve it. But these kinds of solutions assume that what one cares about are things happening directly in our physical universe.

But imagine that in the future everyone has effectively been uploaded into some digital system—so that the physics one’s experiencing is instead something virtualized. And, yes, at the level of the underlying hardware maybe there will be restrictions based on the speed of light. But for purposes of the virtualized experience, there’ll be no such constraint. And, yes, in a setup like this, one can also imagine another science fiction favorite: time travel (notwithstanding its many philosophical issues).

OK, so what about surprises? If we look at the world today, compared to 50 years ago, it’s easy to identify some surprises. Computers are far more ubiquitous than almost anyone expected. And there are things like the web, and social media, that weren’t really imagined (even though perhaps in retrospect they seem obvious).

There’s another surprise, whose consequences are so far much less well understood, but that I’ve personally been very involved with: the fact that there’s so much complexity and richness to be found in the computational universe.

Almost by definition, surprises tend to occur when understanding what’s possible, or what makes sense, requires a change of thinking, or some kind of paradigm shift. Often in retrospect one imagines that such changes of thinking just occur—say in the mind of one particular person—out of the blue. But in reality what’s almost always going on is that there’s a progressive stack of understanding developed—which, perhaps quite suddenly, allows one to see something new.

And in this regard it’s interesting to reflect on the storyline of 2001. The first part of the movie shows an alien artifact—a black monolith—that appears in the world of our ape ancestors, and starts the process that leads to modern civilization. Maybe the monolith is supposed to communicate critical ideas to the apes by some kind of telepathic transmission.

But I like to have another interpretation. No ape 4 million years ago had ever seen a perfect black monolith, with a precise geometrical shape.

Read more: https://www.wired.com/story/2001-a-space-odyssey-predicted-the-future50-years-ago/

Related Articles

Markets Are Eating The World

For the last hundred years, individuals have worked for firms, and, by historical standards, large ones.

That many of us live in suburbs and drive our cars into the city to go to work at a large office building is so normal that it seems like it has always been this way. Of course, it hasn’t. In 1870, almost 50 percent of the U.S. population was employed in agriculture.[1] As of 2008, less than 2 percent of the population is directly employed in agriculture, but many people work for these relatively new things called “corporations.”[2]

Many internet pioneers in the 90’s believed that the internet would start to break up corporations by letting people communicate and organize over a vast, open network. This reality has sort-of played out: the “gig economy” and rise in freelancing are persistent, if not explosive, trends. With the re-emergence of blockchain technology, talk of “the death of the firm” has returned. Is there reason to think this time will be different?

To understand why this time might (or might not) be different, let us first take a brief look back into Coasean economics and mechanical clocks.

In his 1937 paper, “The Nature of the Firm,” economist R.H. Coase asked “if markets were as efficient as economists believed at the time, why do firms exist at all? Why don’t entrepreneurs just go out and hire contractors for every task they need to get done?”[3]

If an entrepreneur hires employees, she has to pay them whether they are working or not. Contractors only get paid for the work they actually do. While the firm itself interacts with the market, buying supplies from suppliers and selling products or services to customers, the employees inside of it are insulated. Each employee does not renegotiate their compensation every time they are asked to do something new. But, why not?

Coase’s answer was transaction costs. Contracting out individual tasks can be more expensive than just keeping someone on the payroll because each task involves transaction costs.

Imagine if, instead of answering every email yourself, you hired a contractor who was better than you at dealing with the particular issue in that email. However, it would cost you something to find them. Once you found them, you would have to bargain and agree on a price for their services, then get them to sign a contract, and potentially take them to court if they didn’t answer the email as stipulated in the contract.

Duke economist Mike Munger calls these three types of transaction costs triangulation (how hard it is to find and measure the quality of a service), transfer (how hard it is to bargain and agree on a contract for the good or service), and trust (whether the counterparty is trustworthy, or whether you have recourse if they aren’t).

You might as well just answer the email yourself or, as some executives do, hire a full-time executive assistant. Even if the executive assistant isn’t busy all the time, it’s still better than hiring someone one-off for every email, or even every day.

Coase’s thesis was that in the presence of these transaction costs, firms will grow larger as long as they can benefit from doing tasks in-house rather than incurring the transaction costs of having to go out and search, bargain and enforce a contract in the market. They will expand or shrink until the cost of making it in the firm equals the cost of buying it on the market.

The lower the transaction costs are, the more efficient markets will be, and the smaller firms will be.
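
As a toy version of that comparison (a minimal sketch in the Wolfram Language, with entirely made-up numbers), the decision for any single task is just whether the market price plus the three kinds of transaction costs beats the in-house cost:

    (* hypothetical costs for getting one task done *)
    inHouseCost = 100;
    marketPrice = 70;
    transactionCosts = <|"triangulation" -> 15, "transfer" -> 10, "trust" -> 20|>;

    marketCost = marketPrice + Total[transactionCosts];
    If[marketCost < inHouseCost, "buy it on the market", "do it inside the firm"]
    (* with these numbers marketCost is 115, so the task stays inside the firm *)

Lower any of the three costs enough and the decision flips toward the market.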

In a world where markets were extremely efficient, it would be very easy to find and measure things (low triangulation costs), it would be very easy to bargain and pay (low transfer costs), and it would be easy to trust the counterparty to fulfill the contract (low trust costs).

In that world, the optimal size of the firm is one person (or a very few people). There’s no reason to have a firm because business owners can just buy anything they need on a one-off basis from the market.[4] Most people wouldn’t have full-time jobs; they would do contract work.

Consumers would need to own very few things. If you needed a fruit dehydrator to prepare for a camping trip twice a year, you could rent one quickly and cheaply. If you wanted to take your family to the beach twice a year, you could easily rent a place just for the days you were there.

On the other hand, in a world that was extremely inefficient, it would be hard to find and measure things (high triangulation costs), it would be difficult to bargain and pay (high transfer costs) and it would be difficult to trust the counterparty to fulfill the contract (high trust costs).

In that world, firms would tend to be large. It would be inefficient to buy things from the market and so entrepreneurs would tend to accumulate large payrolls. Most people would work full-time jobs for large firms. If you wanted to take your family to the beach twice a year, you would need to own the beach house because it would be too inefficient to rent, the reality before online marketplaces like AirBnB showed up.

Consumers would need to own nearly everything they might conceivably need. Even if they only used their fruit dehydrator twice a year, they’d need to own it because the transaction costs involved in renting it would be too high.

If the structure of the economy is based on transaction costs, then what determines them?

Technological Eras and Transaction Costs

The primary determinant of transaction costs is technology.

The development of the wheel and the domestication of horses and oxen decreased transfer costs by making it possible to move more goods further. Farmers who could bring their crops to market in an ox cart rather than carrying them by hand could charge less and still make the same profit.

The development of the modern legal system reduced the transaction cost of trust. It was possible to trust that your counterparty would fulfill their contract because they knew you had recourse if they didn’t.

The list goes on: standardized weights and measures, the sail, the compass, the printing press, the limited liability corporation, canals, phones, warranties, container ships and, more recently, smartphones and the internet.

It’s hard to appreciate how impactful many of these technologies have been, because most of them had become so common by the time most of us were born that we take them for granted.

As the author Douglas Adams said, “Anything that is in the world when you’re born is normal and ordinary and is just a natural part of the way the world works. Anything that’s invented between when you’re fifteen and thirty-five is new and exciting and revolutionary and you can probably get a career in it. Anything invented after you’re thirty-five is against the natural order of things.”

To see how technology affects transaction costs, and how that affects the way our society is organized, let’s consider something which we all think of as “normal and ordinary,” but which has had a huge impact on our lives: the mechanical clock.

The Unreasonable Effectiveness of the Mechanical Clock

In 1314, the city of Caen installed a mechanical clock with the following inscription: “I give the hours voice to make the common folk rejoice.” “Rejoice” is a pretty strong reaction to a clock, but it wasn’t overstated; everyone in Caen was pretty jazzed about the mechanical clock. Why?

The fact that we have jobs today, as opposed to working as slaves or serfs bonded to the land as was common under the feudal system, is in large part a direct result of the clock.

Time was important before the invention of the clock but was very hard to measure. Rome was full of sundials, and medieval Europe’s bell towers, where time was tolled, were the tallest structures in town.[5]

This was not cheap. In the larger and more important belfries, two bell-ringers lived full time, each serving as a check on the other. The bells themselves were usually financed by local guilds that relied on the time kept to tell their workers when they had to start working and when they could go home.

This system was problematic for a few reasons.

For one, it was expensive. Imagine if you had to pool funds together with your neighbors to hire two guys to sit in the tower down the street full time and ring the bell to wake you up in the morning.

For another, the bell could only signal a few events per day. If you wanted to organize a lunch meeting with a friend, you couldn’t ask the belltower to toll just for you. Medieval bell towers had not yet developed snooze functionality.

Finally, sundials suffered from accuracy problems. Something as common as clouds could make it difficult to tell precisely when dawn, dusk, and midday occurred.

In the 14th and 15th centuries, the expensive bell towers of Europe’s main cities got a snazzy upgrade that dramatically reduced transaction costs: the mechanical clock.

The key technological breakthrough that allowed this development was the escapement.

The escapement transfers energy to the clock’s pendulum to replace the energy lost to friction and keep it on time. Each swing of the pendulum releases a tooth of the escapement’s wheel gear, allowing the clock’s gear train to advance or “escape” by a set amount. This moves the clock’s hands forward at a steady rate.[6]

The accuracy of early mechanical clocks, plus or minus 10-15 minutes per day, was not notably better than late water clocks and less accurate than the sandglass, yet mechanical clocks became widespread. Why?

  1. Its automatic striking feature meant the clock could be struck every hour at lower cost, making it easier to schedule events than only striking at dawn, dusk and noon.
  2. It was more provably fair than the alternatives, which gave all parties greater confidence that the time being struck was accurate. (Workers were often suspicious that employers could bribe or coerce the bell-ringers to extend the workday, which was harder to do with a mechanical clock.)

Mechanical clocks broadcast by bell towers provided a fair (lower trust costs) and fungible [7] (lower transfer costs) measure of time. Each hour rung on the bell tower could be trusted to be the same length as another hour.

Most workers in the modern economy earn money based on a time-rate, whether the time period is an hour, a day, a week or a month. This is possible only because we have a measure of time which both employer and employee agree upon. If you hire someone to pressure-wash your garage for an hour, you may argue with them over the quality of the work, but you can both easily agree whether they spent an hour in the garage.

Prior to the advent of the mechanical clock, slavery and serfdom were the primary economic relationships, in part because the transaction cost of measuring time beyond just sunup and sundown was so high that workers were chained to their masters or lords.[8]

With a trusted measure of time in place, the employer is able to use promotions, raises, and firing to incentivize employees to produce quality services during the time they are being paid for.[9]

In a system based on time-rate wages rather than slavery or serfdom, workers have a choice. If the talented blacksmith can get a higher time-rate wage from a competitor, she’s able to go work for them because there is an objective, fungible measure of time she’s able to trade.

As history has shown, this was a major productivity and quality-of-life improvement for both parties.[10]

It gradually became clear that mechanical time opened up entirely new categories of economic organization and productivity that had hitherto been not just impossible, but unimaginable.

We could look at almost any technology listed above (standardized weights and measures, the sail, the compass, the printing press, etc.) and do a similar analysis of how it affected transaction costs and, eventually, how it affected society as a result.

The primary effect is an increase in what we will call coordination scalability.

Coordination Scalability

“It is a profoundly erroneous truism, repeated by all copy-books and by eminent people when they are making speeches, that we should cultivate the habit of thinking what we are doing. The precise opposite is the case. Civilization advances by extending the number of important operations which we can perform without thinking about them.” – Alfred North Whitehead

About 70,000 years ago, there were between six and ten species of the genus Homo. Now, of course, there is just one: Homo sapiens. Why did Homo sapiens prevail over the other species, like Homo neanderthalensis?

Homo sapiens prevailed because of their ability to coordinate. Coordination was made possible by increased neocortical size, which led to an ability to work together in large groups, not just as single individuals. Instead of single individuals hunting, groups could hunt and bring down larger prey more safely and efficiently.[11]

The brain of Homo sapiens has proven able to invent other, external structures which further increased coordination scalability by expanding the network of other people we could rely on.

Maybe the most important of these was language, but we have evolved many others since, including the mechanical clock.

Increased brain size, and the external structures it invented, have driven our species through four coordination revolutions: Neolithic, Industrial, Computing, and Blockchain.

Neolithic Era: The Emergence of Division of Labor

The first economic revolution was the shift from hunting and gathering to farming.

Coordination scalability among hunter-gatherers was limited to the size of the band, which tended to range from 15 to 150 individuals.[12] The abandonment of the nomadic way of life and the move to agriculture changed this by allowing specialization and the formation of cities.

Agriculture meant that people could, for the first time, accumulate wealth. Farmers could save excess crops to eat later or trade them for farming equipment, baskets or decorations. The problem was that this wealth was suddenly worth stealing and so farmers needed to defend their wealth.

Neolithic societies typically consisted of groups of farmers protected by what Mancur Olson called “stationary bandits,” basically warlords.[13] This allowed the emergence of much greater specialization. Farmers accumulated wealth and paid some to the warlords for protection, but even then there was still some left over, making it possible for individuals to specialize.

A city of 10,000 people requires, but also makes possible, specialists.

The limits of coordination scalability increased from 150 to thousands or, in some cases, tens of thousands. This was not necessarily a boon to human happiness. Anthropologist Jared Diamond called the move to agriculture “the worst mistake in the history of the human race.”[14] The quality of life for individuals declined: lifespans shortened, nutrition was worse leading to smaller stature, and disease was more prevalent.

But this shift was irresistible because specialization created so much more wealth and power that groups which adopted it came to dominate those that didn’t. The economies of scale in military specialization, in particular, were overwhelming. Hunter-gatherers couldn’t compete.

In the Neolithic era, the State was the limit of coordination scalability.

Industrial Era: Division of Labor Is Eating the World

Alongside the city-state, a new technology started to emerge that would further increase the limits of coordination scalability: money. To illustrate, let us take the European case, from ancient Greece to modernity, though the path in other parts of the world was broadly similar. Around 630 B.C., the Lydian kings recognized the need for small, easily transported coins worth no more than a few days’ labor. They made these ingots in a standard size (about the size of a thumbnail) and weight, and stamped an emblem of a lion’s head on them.

This eliminated one of the most time-consuming (and highest transaction cost) steps in commerce: weighing gold and silver ingots each time a transaction was made. Merchants could easily count the number of coins without worrying about cheating.

Prior to the invention of coins, trade had been limited to big commercial transactions, like buying a herd of cattle. With the reduced transfer cost facilitated by coins, Lydians began trading in the daily necessities of life: grain, olive oil, beer, wine, and wood.[15]

The variety and abundance of goods which could suddenly be traded led to another innovation: the retail market.

Previously, buyers had to go to the home of whoever sold what they needed. If you needed olive oil, you had to walk over to the olive oil lady’s house to get it. With the amount of trade that began happening after coinage, a central market emerged. Small stalls lined the market, where each merchant specialized in (and so could produce more efficiently) a particular good: meat, grain, jewelry, bread, cloth, etc. Instead of having to go to the olive oil lady’s house, you could go to her stall and pick up bread from the baker while you were there.

From this retail market in Lydia sprang the Greek agora, the medieval market squares of Europe, the suburban shopping mall and, eventually, the “online shopping malls” of Amazon and Google. Though markets were around as early as 7th-century BCE Lydia, they really hit their stride with the Industrial Revolution in the 18th century.[16]

Adam Smith was the first to describe in detail the effect of this marketization of the world. Markets made it possible to promote the division of labor across political units, not just within them. Instead of each city or country manufacturing all the goods they needed, different political entities could further divide labor. Coordination scalability started to stretch across political borders.

Coming back to Coase, firms will expand or shrink until the cost of “making” equals the cost of “buying.” In the Industrial era, transaction costs made administrative and managerial coordination (making) more efficient than market coordination (buying) for most industries, which led to the rise of large firms.

The major efficiency gain of Industrial-era companies over their more “artisanal” forebears was that, using the techniques of mass production, they could produce higher-quality products at a lower price. This was possible only if they could enforce standards throughout the supply chain. The triangulation transaction cost can be broken down into search and measurement: a company needed to find the vendor and to be able to measure the quality of the good or service.

In the early Industrial era, the supply chain was extremely fragmented. By bringing all the pieces into the firm, a large vertically integrated company could be more efficient.[17]

As an example, in the 1860s and 1870s, Carnegie purchased mines to ensure reliable access to the iron ore and coke needed to make steel. Upstream suppliers were unreliable and non-standardized, and Carnegie could lower the cost of production by simply owning the whole supply chain.

This was the case in nearly every industry. By bringing many discrete entities under one roof and one system of coordination, firms gained greater economic efficiencies. The multi-unit business corporation replaced the small, single-unit enterprise because administrative coordination enabled greater productivity through lower transaction costs per task than had been possible before. Economies of scale flourished.

This system of large firms connected by markets greatly increased coordination scalability. Large multinational firms could stretch across political boundaries and provide goods and services more efficiently.

In Henry Ford’s world, the point at which the cost of making equaled the cost of buying implied a very large firm. Ford built a giant plant at River Rouge, just outside Detroit, between 1917 and 1928 that took in iron ore and rubber at one end and sent cars out the other. At the factory’s peak, 100,000 people worked there. These economies of scale allowed Ford to dramatically drive down the cost of an automobile, making it possible for the middle class to own a car.[18]

As with Carnegie, Ford learned that supplier networks take a while to emerge and grow into something reliable. In 1917, doing everything himself was the only way to get the scale he needed to be able to make an affordable car.

One of the implications of this model was that industrial businesses required huge startup costs.

The only way for a new entrepreneur to compete was to start with a similarly massive amount of capital: enough to build a factory large and efficient enough to rival Ford’s.

For workers, this meant that someone in a specialized role, like an electrical engineer or an underwriter, did not freelance or work for small businesses. Because the most efficient way to produce products was in large organizations, specialized workers could earn the most by working inside large organizations, be they Ford, AT&T or Chase Bank.

At the peak of the Industrial era, there were two dominant institutions: firms and markets.

Work inside the firm allowed for greater organization and specialization which, in the presence of high transaction costs, was more economically efficient.

Markets were more chaotic and less organized, but also more motivating. Henry Ford engaged with the market and made out just a touch better than any of his workers; there just wasn’t room for many Henry Fords.

This started to dissolve in the second half of the 20th century. Ford no longer takes iron ore and rubber as the inputs to its factories; instead it has a vast network of upstream suppliers.[19] The design and manufacturing of car parts now happens over a long supply chain, which the car companies ultimately assemble and sell.

One reason is that supplier networks became more standardized and reliable. Ford can now buy ball bearings and brake pads more efficiently than it can make them, so it does. Each company in the supply chain focuses on what it knows best, and competition forces them all to constantly improve.

By the 1880s, it cost Carnegie more to operate coke ovens in-house than to buy coke from an independent source, so he sold off the coke ovens and bought on the open market. Reduced transaction costs, in the form of more standardized and reliable production technology, caused both Ford and Carnegie to shrink, as Coase’s theory would suggest.

The second reason is that if you want to make a car using a network of cooperating companies, you have to be able to coordinate their efforts, and you can do that much better with telecommunication technology broadly and computers specifically. Computers reduce the transaction costs that Coase argued are the raison d’etre of corporations. That is a fundamental change.[20]

The Computing Era: Software Is Eating the World

Computers, and the software and networks built on top of them, had a new economic logic driven by lower transaction costs.

Internet aggregators such as Amazon, Facebook, Google, Uber and Airbnb reduced the transaction costs for participants on their platforms. For the industries that these platforms affected, the line between “making” and “buying” shifted toward buying. The line between owning and renting shifted toward renting.

Primarily, this was done through a reduction in triangulation costs (how hard it is to find and measure the quality of a service), and transfer costs (how hard it is to bargain and agree on a contract for the good or service).

Triangulation costs came down for two reasons. One was the proliferation of smartphones, which made it possible for services like Uber and Airbnb to exist. The other was the increasing digitization of the economy. Digital goods are both easier to find (think Googling versus going to the library or opening the Yellow Pages) and easier to measure the quality of (I know exactly how many people read my website each day and how many seconds they spend there; the local newspaper does not).

The big improvement in transfer costs was the result of matchmaking: bringing together and facilitating the negotiation of mutually beneficial commercial or retail deals.  

Take Yelp, the popular restaurant review app. Yelp allows small businesses like restaurants, coffee shops, and bars to advertise to an extremely targeted group: individuals close enough to come to the restaurant who searched for a relevant term. A barbecue restaurant in Nashville can show ads only to people searching their zip code for terms like “bbq” and “barbecue.” This enables small businesses that couldn’t afford radio or television advertising to attract customers.

The existence of online customer reviews gives consumers a more trusted way to evaluate the restaurant.

All of the internet aggregators, including Amazon, Facebook, and Google, enabled new service providers by creating a market and standardizing the rules of that market to reduce transaction costs.[21]

The “sharing economy” is more accurately called the “renting economy” from the perspective of consumers, and the “gig economy” from the perspective of producers. Most of the benefits are the result of new markets enabled by lower transaction costs, which allow consumers to rent rather than own, including “renting” someone else’s time rather than employing them full time.

It’s easier to become an Uber driver than a cab driver, and an Airbnb host than a hotel owner. It’s easier to get your product into Amazon than Walmart. It’s easier to advertise your small business on Yelp, Google or Facebook than on a billboard, radio or TV.

Prior to the internet, a product designer was faced with the options of selling locally (which was often too small a market), trying to get into Walmart (which was impossible without significant funding and traction), or simply working for a company that already had distribution in Walmart.

On the internet, they could start distributing nationally or internationally on day one. The “shelf space” of Amazon or Google’s search engine results page was a lot more accessible than the shelf space of Walmart.

As a result, it became possible for people in certain highly specialized roles to work independently of firms entirely. Product designers and marketers could sell products through the internet and the platforms erected on top of it (mostly Amazon and Alibaba in the case of physical products) and have the potential to make as much as, or more than, they could inside a corporation.

This group is highly motivated because their pay is directly based on how many products they sell. The aggregators and the internet were able to reduce the transaction costs that had historically made it economically inefficient or impossible for small businesses and individual entrepreneurs to exist.

The result was that in industries touched by the internet, we saw an industry structure of large aggregators and a long tail [22] of small businesses that were able to use the aggregators to reach previously unreachable, niche segments of the market. Though there aren’t many cities where a high-end cat furniture retail store makes economic sense, on Google or Amazon, it does.

source: stratechery.com

Before vs. After (Platform-Enabled Markets):

| Before: Firms | After: Platform | After: Long Tail |
| --- | --- | --- |
| Walmart and big box retailers | Amazon | Niche product designers and manufacturers |
| Cab companies | Uber | Drivers with extra seats |
| Hotel chains | Airbnb | Homeowners with extra rooms |
| Traditional media outlets | Google and Facebook | Small offline and niche online businesses |

For these industries, coordination scalability was far greater and could be seen in the emergence of micro-multinational businesses. Businesses as small as a half dozen people could manufacture in China, distribute products in North America, and employ people from Europe and Asia. This sort of outsourcing and the economic efficiencies it created had previously been reserved for large corporations.

As a result, consumers received cheaper, but also more personalized products from the ecosystem of aggregators and small businesses.

However, the rental economy still represents a tiny fraction of the overall economy. At any given time, only a thin subset of industries are ready to be marketized. What’s been done so far is only a small fraction of what will be done in the next few decades.

Yet, we can already start to imagine a world which Munger calls “Tomorrow 3.0.” You need a drill to hang some shelves in your new apartment. You open an app on your smartphone and tap “rent drill.” An autonomous car picks up a drill and delivers it outside your apartment in a keypad-protected pod, and your phone vibrates: “drill delivered.” Once you’re done, you put it back in the pod, which sends a message to another autonomous car nearby to come pick it up. The rental costs $5, much less than buying a commercial-quality power drill. This is, of course, not limited to drills; it could have been a saw, fruit dehydrator, bread machine or deep fryer.

You own almost nothing, but have access to almost everything.

Neither you nor your neighbors have a job, at least in the traditional sense. You pick up shifts or client work as needed and maybe manage a few small side businesses. After you finish putting up the shelves, you might sit down at your computer, see what work requests are open, and spend a few hours designing a new graphic or finishing up the monthly financial statements for a client.

This is a world in which triangulation and transfer costs have come down dramatically, resulting in more renting than buying from consumers and more gig work than full-time jobs for producers.

This is a world we are on our way to already, and there aren’t any big, unexpected breakthroughs that need to happen first.

But what about the transaction cost of trust?

In the computer era, the areas that have been affected most are what could be called low-trust industries. If the sleeping mask you order from Amazon isn’t as high-quality as you thought, that’s not a life-or-death problem.

What about areas where trust is essential?

Enter stage right: blockchains.

The Blockchain Era: Blockchain Markets Are Eating the World

One area where trust matters a lot is money. Most of the developed world doesn’t think about the possibility of fiat money [23] not being trustworthy, because it hasn’t happened in our lifetimes. For those who have experienced it, including major currency devaluations, trusting that your money will be worth roughly the same tomorrow as it is today is a big deal.

Citizens of countries like Argentina and particularly Venezuela have been quicker to adopt bitcoin as a savings vehicle because their economic history made the value of censorship resistance more obvious.

Due to poor governance, the inflation rate in Venezuela averaged 32.42 percent from 1973 until 2017. Argentina was even worse; the inflation rate there averaged 200.80 percent between 1944 and 2017.

The story of North America and Europe is different. Over the second half of the 20th century, monetary policy there was comparatively stable.

The Bretton Woods Agreement, struck toward the end of the Second World War, concentrated control of most of the globe’s monetary policy in the hands of the United States. The European powers acceded to this in part because the U.S. dollar was backed by gold, meaning that the U.S. government was subject to the physics and geology of gold mining. It could not expand the money supply any faster than gold could be taken out of the ground.

With the end of dollar-gold convertibility under Nixon in 1971 and the final abandonment of the gold standard in 1973, control over money and monetary policy moved into the hands of a historically small group of central bankers and powerful political and financial leaders, no longer restricted by gold.

Fundamentally, the value of the U.S. dollar today is based on trust. There is no gold in a vault that backs the dollars in your pocket. The dollar, like most fiat currencies today, has value because the market trusts that the officials in charge of monetary policy will manage it responsibly.

It is at this point that the debate around monetary policy devolves. One group imagines this small group of elitist power brokers sitting in a dark room on large leather couches, surrounded by expensive art and mahogany bookshelves filled with copies of The Fountainhead, smoking cigars and plotting against humanity through obscure financial maneuvering.

Another group, quite reasonably, points to the economic prosperity of the last half-century under this system and insists on the quackery of the former group.

A better way to understand the tension between a monetary system based on gold and one based on fiat money has been offered by political science professor Bruce Bueno de Mesquita: “Democracy is a better form of government than dictatorships, not because presidents are intrinsically better people than dictators, but simply because presidents have less agency and power than dictators.”

Bueno de Mesquita calls this Selectorate Theory. The selectorate represents the number of people who have influence in a government, and thus the degree to which power is distributed. The selectorate of a dictatorship tends to be very small: the dictator and a few cronies. The selectorate in a democracy tends to be much larger, typically encompassing the executive, legislative, and judicial branches and the voters who elect them.

Historically, the size of the selectorate involves a tradeoff between the efficiency and the robustness of the governmental system. Let’s call this the “Selectorate Spectrum.”

Dictatorships can be more efficient than democracies because they don’t have to get many people on board to make a decision. Democracies, by contrast, are more robust, but at the cost of efficiency.

Conservatives and progressives alike bemoan how little their elected representatives get done but happily observe how little their opponents accomplish. A single individual with unilateral power can accomplish far more (good or bad) than a government of “checks and balances.” The long-run health of a government means balancing the tradeoff between robustness and efficiency. The number of stakeholders cannot be so large that nothing gets done and the country never adapts, nor so small that an individual or small group can hijack the government for personal gain.

This tension between centralized efficiency and decentralized robustness exists in many other areas. Firms try to balance the size of the selectorate: large enough that there is some accountability (e.g. a board and shareholder voting), but not so large as to make it impossible to compete in a market, which is why most decisions are centralized in the hands of a CEO.

We can view both the current monetary system and the internet aggregators through the lens of the selectorate. In both areas, the trend over the past few decades is that the robustness of a large selectorate has been traded away for the efficiency of a small one.[24]

A few individuals (heads of central banks, heads of state, corporate CEOs, and leaders of large financial entities like sovereign wealth funds and pension funds) can move markets and politics globally with even whispers of significant change. This sort of centralizing in the name of efficiency can sometimes lead to long feedback loops with potentially dramatic consequences.

Said another way, much of what appears efficient in the short term may not be efficient at all, but merely hiding risk somewhere, creating the potential for a blow-up. A large selectorate tends to appear to work less efficiently in the short term, but it can be more robust in the long term, which makes it more efficient in the long term as well. It is a story of the Tortoise and the Hare: slow and steady may lose the first leg, but win the race.

In the Beginning, There Was Bitcoin

In October 2008, an anonymous individual or group using the pseudonym Satoshi Nakamoto sent an email to a cryptography mailing list, explaining a new system called bitcoin. The opening line of the conclusion summed up the paper:

“We have proposed a system for electronic transactions without relying on trust.”

When the network went live a few months later, in January 2009, Satoshi embedded in the first block the headline of a story running that day in The Times of London:

“The Times 03/Jan/2009 Chancellor on brink of second bailout for banks”

Though we can’t know for sure what was going through Satoshi’s mind at the time, the most likely explanation is that Satoshi was reacting against the decisions being made by the small selectorate in charge of monetary policy in response to the 2008 Global Financial Crisis.

Instead of consequential decisions about the monetary system, like a bailout, resting with a single individual (the Chancellor), Satoshi envisioned bitcoin as a more robust monetary system with a larger selectorate, beyond the control of any single individual.

But why create a new form of money? Throughout history, the most common way for individuals to show their objections to their nation’s monetary policy was by trading their currency for some commodity like gold, silver, or livestock that they believed would hold its value better than the government-issued currency.

Gold, in particular, has been used as a form of money for nearly 6,000 years for one primary reason: its stock-to-flow ratio. Because of how gold is deposited in the Earth’s crust, it is very difficult to mine. Despite all the technological changes of the last few hundred years, the amount of new gold mined in a given year (the flow) has averaged between 1 and 2 percent of the total gold supply (the stock), with very little variation year to year.

As a result, the total gold supply has never increased by more than 1 to 2 percent per year. Compared to Venezuela’s 32.42 percent average inflation and Argentina’s 200.80 percent, gold’s supply growth is far lower and more predictable.
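As a rough sketch of that arithmetic (the tonnage figures below are ballpark estimates assumed for illustration, not numbers from this essay):

```python
# Rough stock-to-flow arithmetic for gold, using ballpark assumed figures:
# roughly 190,000 tonnes mined over all of history (the stock) and roughly
# 3,000 tonnes of new production per year (the flow).
stock_tonnes = 190_000
flow_tonnes_per_year = 3_000

stock_to_flow = stock_tonnes / flow_tonnes_per_year        # years of current production
annual_supply_growth = flow_tonnes_per_year / stock_tonnes

print(f"Stock-to-flow ratio: {stock_to_flow:.0f}")          # ~63
print(f"Annual supply growth: {annual_supply_growth:.1%}")  # ~1.6%
```

A high stock-to-flow ratio is just another way of saying that even a large jump in mining output barely moves the total supply.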

Viewed through the lens of Selectorate Theory, we can say that gold or other commodity forms of money have a larger selectorate and are more robust than government-issued fiat currency. In the same way a larger group of stakeholders in a democracy constrains the actions of any one politician, the geological properties of gold constrained governments and their monetary policy.

Whether these constraints were “good” or “bad” is still a matter of debate. The Keynesian school of economics, which has become the mainstream view, emerged out of John Maynard Keynes’s reaction to the Great Depression, which he thought was greatly exacerbated by the commitment to the gold standard. He argued that governments should manage monetary policy to soften the cyclical nature of markets.

The Austrian and monetarist schools believe that human behavior is too idiosyncratic to model accurately with mathematics and that minimal government intervention is best. Attempts to intervene can be destabilizing and lead to inflation, so a commitment to the gold standard is the lesser evil in the long run.

Taken in good faith, these schools represent different beliefs about the ideal point on the Selectorate Spectrum. Keynesians believe that greater efficiency could be gained by giving government officials greater control over monetary policy without sacrificing much robustness. Austrians and monetarists argue the opposite, that any short-term efficiency gains actually create huge risks to the long-term health of the system.

Viewed as money, bitcoin has many gold-like properties, embodying something closer to the Austrian and monetarist view of ideal money. For one, we know exactly how many bitcoin will ever be created (21 million) and the rate at which they will be created. As with gold, changing this is outside the control of any single individual or small group, giving bitcoin a predictable stock-to-flow ratio and making it extremely difficult to inflate.
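The 21 million cap is not an arbitrary number pasted on top of the system; it falls out of the issuance schedule itself. Here is a quick back-of-the-envelope check using the well-known parameters (an initial block subsidy of 50 BTC that halves every 210,000 blocks); the real protocol does its accounting in integer satoshis, so treat this as an approximation.

```python
# Back-of-the-envelope check of bitcoin's supply cap from its issuance schedule:
# blocks start with a 50 BTC subsidy, which halves every 210,000 blocks until
# it drops below the smallest unit (1 satoshi = 1e-8 BTC).
subsidy = 50.0
blocks_per_era = 210_000
total_supply = 0.0

while subsidy >= 1e-8:
    total_supply += subsidy * blocks_per_era
    subsidy /= 2

print(f"Approximate total supply: {total_supply:,.0f} BTC")  # ~21,000,000
```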

Similar to gold, the core bitcoin protocol also makes great trade-offs in terms of efficiency in the name of robustness.[25]

However, bitcoin has two key properties of fiat money which gold lacks: it is very easy to divide and transport. Someone in Singapore can send 1/100th of a bitcoin to someone in Canada in less than an hour. Sending 1/100th of a gold bar would be a bit trickier.

In his 1999 novel Cryptonomicon, science fiction author Neal Stephenson imagined a bitcoin-like money built by the grandchild of Holocaust survivors who wanted to create a way for individuals to escape totalitarian regimes without giving up all their wealth. It was difficult, if not impossible, for Jews to carry gold bars out of Germany, but what if all they had to do was remember a 12-word passphrase? How might history have been different?

Seen in this way, bitcoin offers a potentially better trade-off between robustness and efficiency. Its programmatically defined supply schedule means its inflation rate will be lower than gold’s (making it more robust), while its digital nature makes it as divisible and transportable as any fiat currency (making it more efficient).

Using a nifty combination of economic incentives for mining (the proof-of-work system) and cryptography (including the blockchain itself), bitcoin allowed individuals to engage in a network that was both open (like a market) and coordinated (like a firm) without needing a single power broker, or a small group of them, to facilitate the coordination.
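To give a feel for how proof-of-work turns raw computation into something other participants can cheaply verify, here is a minimal toy sketch of the idea. It is not bitcoin’s actual mining code or parameters; it only shows the asymmetry at the heart of the scheme: finding a valid nonce is expensive, while checking one takes a single hash.

```python
# Toy proof-of-work: find a nonce so that SHA-256(data + nonce) falls below a
# difficulty target. Mining is costly; verification is a single hash.
import hashlib

def mine(block_data: str, difficulty_bits: int = 20) -> int:
    """Search for a nonce whose hash clears the difficulty target."""
    target = 2 ** (256 - difficulty_bits)
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if int(digest, 16) < target:
            return nonce
        nonce += 1

def verify(block_data: str, nonce: int, difficulty_bits: int = 20) -> bool:
    """Anyone can check the work with one hash."""
    digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
    return int(digest, 16) < 2 ** (256 - difficulty_bits)

nonce = mine("example block: Alice pays Bob 1 coin")
print(nonce, verify("example block: Alice pays Bob 1 coin", nonce))
```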

Said another way, bitcoin was the first example of money going from being controlled by a small group of firm-like entities (central banks) to being market-driven. What cryptocurrency represents is the technology-enabled possibility that anyone can make their own form of money.

Whether or not bitcoin survives, that Pandora’s Box is now open. In the same way computing and the internet opened up new areas of the economy to being eaten by markets, blockchain and cryptocurrency technology have opened up a different area to be eaten by markets: money.

The Future of Public Blockchains

Bitcoin is unique among forms of electronic money because it is both trustworthy and maintained by a large selectorate rather than a small one.

There was a group that started to wonder whether the same underlying technology could be used to develop open networks in other areas by reducing the transaction cost of trust.[26]

One group, the monetary maximalists, thinks not. According to them, public blockchains like bitcoin will only ever be useful as money, because money is the area where trust is most important and so you can afford to trade everything else away. The refugee fleeing political chaos does not care that a transaction takes an hour to go through and costs $10 or even $100. They care about having the most difficult-to-seize, censorship-resistant form of wealth.

Bitcoin, as it exists today, enhances coordination scalability by allowing any two parties to transact without relying on a centralized intermediary and by allowing individuals in unstable political situations to store their wealth in the most difficult-to-seize form ever created.

The second school of thought is that bitcoin is the first example of a canonical, trustworthy ledger with a large selectorate and that there could be other types of ledgers which are able to emulate it.

At its core, money is just a ledger. The amount of money in your personal bank account is a list of all the transactions coming in (paychecks, deposits, etc.) and all the transactions going out (paying rent, groceries, etc.). When you add all those together, you get a balance for your account.

Historically, this ledger was maintained by a single entity, like your bank. In the case of U.S. dollars, the number in circulation can be figured out by adding up how much money the U.S. government has printed and released into the market and how much it has taken back out of the market.
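A toy sketch makes the point concrete; the entries below are, of course, made up.

```python
# "Money is just a ledger": an account balance is nothing more than the sum of
# signed entries recording money in and money out.
from dataclasses import dataclass

@dataclass
class Entry:
    description: str
    amount: float  # positive for money coming in, negative for money going out

ledger = [
    Entry("paycheck", 2_000.00),
    Entry("rent", -1_200.00),
    Entry("groceries", -150.00),
]

balance = sum(entry.amount for entry in ledger)
print(f"Balance: {balance:.2f}")  # 650.00
```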

What else could be seen as a ledger?

The answer is “nearly everything.” Governments and firms can be seen just as groups of ledgers. Governments maintain ledgers of citizenship, passports, tax obligations, social security entitlements and property ownership. Firms maintain ledgers of employment, assets, processes, customers and intellectual property.

Economists sometimes refer to firms as “a nexus of contracts.” The value of the firm comes from those contracts and how they are structured within the “ledger of the firm.” Google has a contract with users to provide search results, with advertisers to display ads to users looking for specific search terms, and with employees to maintain the quality of their search engine. That particular ledger of contracts is worth quite a lot.

Mechanical time opened up entirely new categories of economic organization. It allowed for trade to be synchronized at great distances; without mechanical time, there would have been no railroads (how would you know when to go?) and no Industrial Revolution. Mechanical time allowed for new modes of employment that lifted people out of serfdom and slavery.[27]

In the same way, it may be that public blockchains make it possible to have ledgers that are trustworthy without requiring a centralized firm to manage them. This would shift the line further in favor of “renting” over “buying” by reducing the transaction cost of trust.

Entrepreneurs may be able to write a valuable app and release it to anyone and everyone who needs that functionality, collecting micro-payments in their wallet. A product designer could release their design into the wild, and consumers could download it to be printed on their 3D printer almost immediately.[28]

For the first 10 years of bitcoin’s existence, this hasn’t been possible. Using a blockchain has meant minimizing the transaction cost of trust at all costs, but that may not always be the case. Different proposals are already being built out that allow for more transactions to happen without compromising the trust which bitcoin and other crypto-networks offer.

There are widely differing opinions on the best way to scale blockchains. One faction, usually identifying with Web 3/smart contracting platforms/Ethereum, believes that scaling quickly at the base layer is essential and can be done with minimal security risk, while other groups believe that scaling should be done slowly and only where it does not sacrifice the censorship-resistant nature of blockchains (bitcoin). Just like the debate between the Keynesian and Austrian/monetarist views of monetary policy, these views represent different beliefs about the optimal tradeoff point on the Selectorate Spectrum. But both groups believe that significant progress can be made on making blockchains more scalable without sacrificing too much trust.

Public blockchains may allow aggregation without the aggregators. For certain use cases, perhaps few, perhaps many, public blockchains like bitcoin will allow the organization and coordination benefits of firms and the motivation of markets while maintaining a large selectorate.

Ultimately, what we call society is a series of overlapping and interacting ledgers.

In order for ledgers to function, they must be organized according to rules. Historically, rules have required rulers to enforce them. Because of network effects, these rulers tend to become the most powerful people in society. In medieval Europe, the Pope enforced the rules of Christianity and so he was among the most powerful.

Today, Facebook controls the ledger of our social connections. Different groups of elites control the university ledgers and banking ledgers.

Public blockchains allow people to engage in a coordinated and meritocratic network without requiring a small selectorate.

Blockchains may introduce markets into corners of society that have never before been reached. In doing so, blockchains have the potential to replace ledgers previously run by kings, corporations, and aristocracies. They could extend the logic of the long tail to new industries and lengthen the tail for suppliers and producers by removing rent-seeking behavior and allowing for permissionless innovation.

Public blockchains allow for rules without a ruler. It began with money, but they may move on to corporate ledgers, social ledgers and perhaps eventually, the nation-state ledger.[29]

Acknowledgments: Credit for the phrase “Markets Are Eating the World” to Patri Friedman.


  1. https://www.bls.gov/opub/mlr/1981/11/art2full.pdf
  2. https://www.bls.gov/emp/tables/employment-by-major-industry-sector.htm
  3. http://www3.nccu.edu.tw/~jsfeng/CPEC11.pdf
  4. There are, of course, other types of transaction costs than the ones listed here. A frequent one brought up in response to Coase is company culture, which nearly all entrepreneurs and investors agree is an important factor in a firm’s productivity. This is certainly true, but the broader point about the relationship between firm size and transaction costs holds: culture is just another transaction cost.
  5. http://www.fon.hum.uva.nl/rob/Courses/InformationInSpeech/CDROM/Literature/LOTwinterschool2006/szabo.best.vwh.net/synch.html
  6. https://en.wikipedia.org/wiki/Escapement
  7. Fungibility is the property of a good or a commodity whose individual units are interchangeable. For example, one ounce of pure silver is fungible with any other ounce of pure silver. This is not the same for most goods: a dining table chair is not fungible with a fold-out chair.
  8. Piece rates, paying for some measurement of a finished output like bushels of apples or balls of yarn, seem fairer. But they suffer from their own issues: the output of the labor depends partially on the skill and effort of the laborer, but also on the vagaries of the work environment. This is particularly true in a society like that of medieval Europe, where nearly everyone worked in agriculture. The best farmer in the world can’t make it rain. The employee wants something like insurance that they will still be compensated for their effort in the case of events outside their control, and the employer, who has more wealth and knowledge of market conditions, takes on these risks in exchange for increased profit potential.
  9. For the worker, time doesn’t specify costs such as effort, skill or danger. A laborer would want to demand a higher time-rate wage for working in a dangerous mine than in a field. A skilled craftsman might demand a higher time-rate wage than an unskilled craftsman.
  10. The advent of the clock was necessary for the shift from farms to cities. Sunup to sundown worked effectively as a schedule for farmers because summer was typically when the most labor on farms was required, so longer days were useful. For craftsmen and others working in cities, work was not as driven by the seasons, so a trusted measure of time that didn’t vary with the seasons was necessary. The advent of a trusted measure of time led to an increase in the quantity, quality and variety of goods and services because urban, craftsman-type work was now more feasible.
  11. https://unenumerated.blogspot.com/2017/02/money-blockchains-and-social-scalability.html. I am using the phrase “coordination scalability” synonymously with how Nick uses “social scalability.” A few readers suggested that social scalability was a confusing term as it made them think of scaling social networks.
  12. 150 is often referred to as Dunbar’s number, referring to a number calculated by University of Oxford anthropologist and psychologist Robin Dunbar using a ratio of neocortical volume to total brain volume and mean group size. For more, see https://www.newyorker.com/science/maria-konnikova/social-media-affect-math-dunbar-number-friendships. The lower band of 15 was cited in Pankaj Ghemawat’s World 3.0.
  13. https://www.jstor.org/stable/2938736
  14. http://discovermagazine.com/1987/may/02-the-worst-mistake-in-the-history-of-the-human-race
  15. Because what else would you want to do besides eat bread dipped in fresh olive oil and drink fresh beer and wine?
  16. From The History of Money by Jack Weatherford.
  17. It also allowed them to squeeze out competitors at different places in the supply chain and put them out of business which Standard Oil did many times before finally being broken up by anti-trust legislation.
  18. http://www.paulgraham.com/re.html
  19. Tomorrow 3.0 by Michael Munger
  20. http://www.paulgraham.com/re.html
  21. There were quite a few things, even pre-internet, in the intersection between markets and firms, like approved vendor auction markets for government contracting and bidding, but they were primarily very high ticket items where higher transaction costs could be absorbed. The internet brought down the threshold for these dramatically to something as small as a $5 cab ride.
  22. The Long Tail was a concept WIRED editor Chris Anderson used to describe the proliferation of small, niche businesses that were possible after the end of the “tyranny of geography.” https://www.wired.com/2004/10/tail/
  23. From Wikipedia: “Fiat money is a currency without intrinsic value that has been established as money, often by government regulation. Fiat money does not have use value, and has value only because a government maintains its value, or because parties engaging in exchange agree on its value.” By contrast, “Commodity money is created from a good, often a precious metal such as gold or silver.” Almost all of what we call money today, from dollars to euros to yuan, is fiat.
  24. Small institutions can get both coordination and a larger selectorate by using social norms. This doesn’t enable coordination scalability though as it stops working somewhere around Dunbar’s number of 150.
  25. Visa processes thousands of transactions per second, while the bitcoin network’s decentralized structure processes a mere seven transactions per second. The key difference being that Visa transactions are easily reversed or censored whereas bitcoin’s are not.
  26. https://medium.com/@cdixon/crypto-tokens-a-breakthrough-in-open-network-design-e600975be2ef
  27. https://medium.com/cryptoeconomics-australia/the-blockchain-economy-a-beginners-guide-to-institutional-cryptoeconomics-64bf2f2beec4
  28. https://medium.com/cryptoeconomics-australia/the-blockchain-economy-a-beginners-guide-to-institutional-cryptoeconomics-64bf2f2beec4
  29. https://twitter.com/naval/status/877467629308395521