Cracking the Crypto War

On December 2, 2015, a man named Syed Rizwan Farook and his wife, Tashfeen Malik, opened fire on employees of the Department of Public Health in San Bernardino, California, killing 14 people and injuring 22 during what was supposed to be a staff meeting and holiday celebration. The shooters were tracked down and killed later in the day, and FBI agents wasted no time trying to understand the motivations of Farook and to get the fullest possible sense of his contacts and his network. But there was a problem: Farook’s iPhone 5c was protected by Apple’s default encryption system. Even when served with a warrant, Apple did not have the ability to extract the information from its own product.

The government obtained a court order demanding, essentially, that Apple create a new version of the operating system that would enable it to unlock that single iPhone. Apple resisted, with CEO Tim Cook framing the request as a threat to individual liberty.

“We have a responsibility to help you protect your data and protect your privacy,” he said in a press conference. Then-FBI chief James Comey reportedly warned that Cook’s attitude could cost lives. “I just don’t want to get to a day where people look at us with tears in their eyes and say, ‘My daughter is missing and you have her cell phone—what do you mean you can’t tell me who she was texting before she disappeared?’” The controversy over Farook’s iPhone reignited a debate that was known in the 1990s as the Crypto Wars, when the government feared the world was “going dark” and tried—and ultimately failed—to impede the adoption of technologies that could encode people’s information. Only this time, with supercomputers in everybody’s pockets and the endless war on terror, the stakes were higher than ever.

A few months after the San Bernardino shooting, President Obama sat for an interview at the South by Southwest conference and argued that government officials must be given some kind of shortcut—or what’s known as exceptional access—to encrypted content during criminal and antiterrorism investigations. “My conclusion so far is that you cannot take an absolutist view on this,” he said. “If the tech community says, ‘Either we have strong, perfect encryption or else it’s Big Brother and an Orwellian world’—what you’ll find is that after something really bad happens, the politics of this will swing and it will become sloppy and rushed, and it will go through Congress in ways that have not been thought through. And then you really will have dangers to our civil liberties.”

In typical Obama fashion, the president was leaning toward a compromise, a grand bargain between those who insist that the NSA and FBI need all the information they can get to monitor potential terrorists or zero in on child abusers and those who believe building any sort of exceptional access into our phones would be a fast track to a totalitarian surveillance state. And like so many of Obama’s proposed compromises, this one went nowhere. To many cryptographers, there was simply no way that companies like Apple and Google could provide the government with legal access to customer data without compromising personal privacy and even national security. Exceptional access was a form of technology, after all, and any of its inevitable glitches, flaws, or bugs could be exploited to catastrophic ends. To suggest otherwise, they argued, was flat wrong. Flat-Earth wrong. Which was, as any good engineer or designer knows, an open invitation for someone to prove them wrong.

This past January, Ray Ozzie took a train from his home in Massachusetts to New York City for a meeting in a conference room of the Data Science Institute at Columbia University. The 14th-floor aerie was ringed by wide windows and looked out on a clear but chilly day. About 15 people sat around the conference table, most of them middle-aged academics—people from the law school, scholars in government policy, and computer scientists, including cryptographers and security specialists—nibbling on a light lunch while waiting for Ozzie’s presentation to begin.

Jeannette Wing—the host of the meeting and a former corporate VP of Microsoft Research who now heads the Data Science Institute—introduced Ozzie to the group. In the invitation to this “private, informal session,” she’d referenced his background, albeit briefly. Ozzie was once chief technical officer at Microsoft as well as its chief software architect, posts he had assumed after leaving IBM, where he’d gone to work after the company had purchased a product he created, Lotus Notes. Packed in that sentence was the stuff of legend: Notes was a groundbreaking product that rocketed businesses into internet-style communications when the internet was barely a thing. The only other person who ever held the chief software architect post at Microsoft was Bill Gates, and Ozzie had also helped create the company’s cloud business.

He had come to Columbia with a proposal to address the impasse over exceptional access, and the host invited the group to “critique it in a constructive way.” Ozzie, trim and vigorous at 62, acknowledged off the bat that he was dealing with a polarizing issue. The cryptographic and civil liberties community argued that solving the problem was virtually impossible, which “kind of bothers me,” he said. “In engineering if you think hard enough, you can come up with a solution.” He believed he had one.

He started his presentation, outlining a scheme that would give law enforcement access to encrypted data without significantly increasing security risks for the billions of people who use encrypted devices. He’d named his idea Clear.

How Clear Works

Step 1: Obtain a warrant for the locked, encrypted phone, which is evidence in a criminal investigation.

Step 2: Access a special screen that generates a QR code containing an encrypted PIN.

Step 3: Send a picture of the QR code to the phone’s manufacturer, which confirms the warrant is legal.

Step 4: The manufacturer transmits the decrypted PIN to investigators, who use it to unlock the phone.

It works this way: The vendor—say it’s Apple in this case, but it could be Google or any other tech company—starts by generating a pair of complementary keys. One, called the vendor’s “public key,” is stored in every iPhone and iPad. The other vendor key is its “private key.” That one is stored with Apple, protected with the same maniacal care that Apple uses to protect the secret keys that certify its operating system updates. These safety measures typically involve a tamper-proof machine (known as an HSM or hardware security module) that lives in a vault in a specially protected building under biometric lock and smartcard key.

That public and private key pair can be used to encrypt and decrypt a secret PIN that each user’s device automatically generates upon activation. Think of it as an extra password to unlock the device. This secret PIN is stored on the device, and it’s protected by encrypting it with the vendor’s public key. Once this is done, no one can decode it and use the PIN to unlock the phone except the vendor, using that highly protected private key.

So, say the FBI needs the contents of an iPhone. First the Feds have to actually get the device and the proper court authorization to access the information it contains—Ozzie’s system does not allow the authorities to remotely snatch information. With the phone in its possession, they could then access, through the lock screen, the encrypted PIN and send it to Apple. Armed with that information, Apple would send highly trusted employees into the vault where they could use the private key to unlock the PIN. Apple could then send that no-longer-secret PIN back to the government, who can use it to unlock the device.
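The escrow flow described above can be sketched in a few lines. This is a toy illustration, not Ozzie's actual design: the textbook RSA parameters are far too small for real security, and the PIN value and function names are hypothetical.

```python
# Toy sketch of the Clear escrow flow. The RSA parameters below are the
# classic textbook example and are far too small for real security; the
# PIN and all names here are hypothetical.

def make_vendor_keypair():
    # Tiny textbook RSA: n = p*q; e is public, d is private.
    p, q, e = 61, 53, 17
    n = p * q                           # 3233
    d = pow(e, -1, (p - 1) * (q - 1))   # modular inverse (Python 3.8+)
    return (e, n), (d, n)               # (public key, private key)

def device_activation(vendor_public):
    # On activation, the device generates a secret PIN and keeps only the
    # ciphertext, encrypted to the vendor's baked-in public key.
    e, n = vendor_public
    pin = 1234                          # hypothetical device-generated PIN
    return pin, pow(pin, e, n)

def vendor_unlock(encrypted_pin, vendor_private):
    # With a validated warrant, the vendor decrypts the PIN in its vault
    # and hands it back to investigators.
    d, n = vendor_private
    return pow(encrypted_pin, d, n)

public, private = make_vendor_keypair()
pin, escrowed = device_activation(public)
recovered = vendor_unlock(escrowed, private)
assert recovered == pin                 # investigators can now unlock the phone
```

The point of the sketch is the division of labor: the phone only ever holds the public key and the ciphertext, so nothing on the device alone suffices to recover the PIN; only the vendor's vaulted private key can.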

Ozzie designed other features meant to reassure skeptics. Clear works on only one device at a time: Obtaining one phone’s PIN would not give the authorities the means to crack anyone else’s phone. Also, when a phone is unlocked with Clear, a special chip inside the phone blows itself up, freezing the contents of the phone thereafter and preventing any tampering with them. Clear can’t be used for ongoing surveillance, Ozzie told the Columbia group, because once it is employed, the phone is rendered permanently unusable.

He waited for the questions, and for the next two hours, there were plenty of them. The word risk came up. The most dramatic comment came from computer science professor and cryptographer Eran Tromer. With the flair of Hercule Poirot revealing the murderer, he announced that he’d discovered a weakness. He spun a wild scenario involving a stolen phone, a second hacked phone, and a bank robbery. Ozzie conceded that Tromer found a flaw, but not one that couldn’t be fixed.

At the end of the meeting, Ozzie felt he’d gotten some good feedback. He might not have changed anyone’s position, but he also knew that unlocking minds can be harder than unlocking an encrypted iPhone. Still, he’d taken another baby step in what is now a two-years-and-counting quest. By focusing on the engineering problem, he’d started to change the debate about how best to balance privacy and law enforcement access. “I do not want us to hide behind a technological smoke screen,” he said that day at Columbia. “Let’s debate it. Don’t hide the fact that it might be possible.”

In his home office outside Boston, Ray Ozzie works on a volunteer project designing and making safety-testing kits for people in nuclear radiation zones.

The first, and most famous, exceptional-access scheme was codenamed Nirvana. Its creator was an NSA assistant deputy director named Clinton Brooks, who realized in the late 1980s that newly discovered advances in cryptography could be a disaster for law enforcement and intelligence agencies. After initial despair, Brooks came up with an idea that he envisioned would protect people’s privacy while preserving government’s ability to get vital information. It involved generating a set of encryption keys, unique to each device, that would be held by government in heavily protected escrow. Only with legal warrants could the keys be retrieved and then used to decode encrypted data. Everyone would get what they wanted. Thus … Nirvana.

The plan was spectacularly botched. Brooks’ intent was to slowly cook up an impervious technical framework and carefully introduce it in the context of a broad and serious national discussion about encryption policy, where all stakeholders would hash out the relative trade-offs of law enforcement access to information and privacy. But in 1992, AT&T developed the Telephone Security Device 3600, which could scramble phone conversations. Its strong encryption and relatively low price unleashed a crypto panic in the NSA, the FBI, and even the tech-friendly officials in the new Clinton administration. Then the idea came up of using Brooks’ key escrow technology, which by that time was being implemented with a specialized component called the Clipper Chip, to combat these enhanced encryption systems. After a few weeks, the president himself agreed to the plan, announcing it on April 16, 1993.

All hell broke loose as technologists and civil libertarians warned of an Orwellian future in which the government possessed a backdoor to all our information. Suddenly the obscure field of cryptography became a hot button. (I still have a T-shirt with the rallying cry “Don’t Give Big Brother a Master Key.”) And very good questions were raised: How could tech companies sell their wares overseas if foreign customers knew the US could get into their stuff? Wouldn’t actual criminals use other alternatives to encrypt data? Would Clipper Chip technology, moving at government speed, hobble the fast-moving tech world?

Ultimately, Clipper’s death came not from policy, but science. A young Bell Labs cryptographer named Matt Blaze discovered a fatal vulnerability, undoubtedly an artifact of the system’s rushed implementation. Blaze’s hack led the front page of The New York Times. The fiasco tainted all subsequent attempts at installing government backdoors, and by 1999, most government efforts to regulate cryptography had been abandoned, with barely a murmur from the FBI or the NSA.

For the next dozen or so years, there seemed to be a Pax Cryptographa. You seldom heard the government complain about not having enough access to people’s personal information. But that was in large part because the government already had a frightening abundance of access, a fact made clear in 2013 by Edward Snowden. When the NSA contractor revealed the extent of his employer’s surveillance capabilities, people were shocked at the breadth of its activities. Massive snooping programs were sweeping up our “metadata”—who we talk to, where we go—while court orders allowed investigators to scour what we stored in the cloud. The revelations were also a visceral blow to the leaders of the big tech companies, who discovered that their customers’ data had essentially been plundered at the source. They vowed to protect that data more assiduously, this time regarding the US government as one of their attackers. Their solution: encryption that even the companies themselves could not decode. The best example was the iPhone, which encrypted users’ data by default with iOS 8 in 2014.

Law enforcement officials, most notably Comey of the FBI, grew alarmed that these heightened encryption schemes would create a safe haven for crooks and terrorists. He directed his staff to look at the potential dangers of increasing encryption and began giving speeches that called for that blast from the past, lingering like a nasty chord from ’90s grunge: exceptional access.

The response from the cryptographic community was swift and simple: Can’t. Be. Done. In a landmark 2015 paper called “Keys Under Doormats,” a group of 15 cryptographers and computer security experts argued that, while law enforcement has reasons to argue for access to encrypted data, “a careful scientific analysis of the likely impact of such demands must distinguish what might be desirable from what is technically possible.” Their analysis claimed that there was no foreseeable way to do this. If the government tried to implement exceptional access, they wrote, it would “open doors through which criminals and malicious nation-states can attack the very individuals law enforcement seeks to defend.”

The 1990s Crypto Wars were back on, and Ray Ozzie didn’t like what he was hearing. The debate was becoming increasingly politicized. Experts in cryptography, he says, “were starting to pat themselves on the back, taking extreme positions about truisms that weren’t so obvious to me.” He knew that great achievements of cryptography had come from brilliant scientists using encryption protocols to perform a kind of magic: sharing secrets between two people who had never met, or creating digital currency that can’t be duplicated for the purposes of fraud. Could a secure system of exceptional access be so much harder? So Ozzie set out to crack the problem. He had the time to do it. He’d recently sold a company he founded in 2012, Talko, to Microsoft. And he was, to quote a friend, “post-economic,” having made enough money to free him from financial concerns. Working out of his home north of Boston, he began to fool around with some ideas. About two weeks later, he came up with Clear.

The strength of Ozzie’s system lies in its simplicity. Unlike Clinton Brooks, who relied on the government to safeguard the Clipper Chip’s encrypted keys, Ozzie is putting his trust in corporations, a decision that came from his experience in working for big companies like Lotus, IBM, and Microsoft. He was intimately familiar with the way that tech giants managed their keys. (You could even argue that he helped invent that structure, since Lotus Notes was the first software product to get a license to export strong encryption overseas and thus was able to build it into its products.) He argues that the security of the entire mobile universe already relies on the protection of keys—those vital keys used to verify operating system updates, whose compromise could put billions of users at risk. (Every time you do an OS update, Apple certifies it by adding a unique ID and “signing” it to let your device know it’s really Apple that is rewriting your iPhone’s code.) Using that same system to provide exceptional access, he says, introduces no new security weaknesses that vendors don’t already deal with.
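The update-signing analogy above can be made concrete with a small sketch. This is purely illustrative: the payload and parameters are made up, and real code-signing uses vetted schemes such as RSA-PSS or ECDSA over full-length hashes, not textbook RSA over a truncated digest.

```python
# Toy illustration of OS-update signing: the vendor signs a payload with
# its private key, and every device checks the signature with the
# baked-in public key. Parameters and payloads are hypothetical.
import hashlib

P, Q, E = 61, 53, 17
N = P * Q
D = pow(E, -1, (P - 1) * (Q - 1))

def digest(update: bytes) -> int:
    # Truncate the hash so it fits under the toy modulus.
    return int.from_bytes(hashlib.sha256(update).digest(), "big") % N

def vendor_sign(update: bytes) -> int:
    return pow(digest(update), D, N)         # done inside the vendor's HSM

def device_verify(update: bytes, sig: int) -> bool:
    return pow(sig, E, N) == digest(update)  # runs on every phone

update = b"ios-update-12.0"
sig = vendor_sign(update)
assert device_verify(update, sig)
# A tampered payload, e.g. b"evil-update", would almost surely fail
# verification, since its digest would no longer match the signature.
```

Ozzie's argument is that the machinery protecting D here, the signing key every vendor already guards, is the same machinery that would protect an exceptional-access key.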

Ozzie knew that his proposal danced on the third rail of the crypto debate—many before him who had hinted at a technical solution to exceptional access have been greeted with social media pitchforks. So he decided to roll out his proposal quietly, showing Clear to small audiences under an informal nondisclosure agreement. The purpose was to get feedback on his system, and, if he was lucky, to jar some people out of the mindset that regarded exceptional access as a crime against science. His first stop, in September 2016, was in Seattle, where he met with his former colleagues at Microsoft. Bill Gates greeted the idea enthusiastically. Another former colleague, Butler Lampson—a winner of the Turing Award, the Nobel Prize of computer science—calls the approach “completely reasonable … The idea that there’s no way to engineer a secure way of access is ridiculous.” (Microsoft has no formal comment.)

Ozzie went on to show Clear to representatives from several of the biggest tech companies—Apple, Google, Facebook—none of whom had any interest whatsoever in voluntarily implementing any sort of exceptional access. Their focus was to serve their customers, and their customers want security. (Or, as Facebook put it in a statement to WIRED: “We have yet to hear of a technical solution to this challenge that would not risk weakening security for all users.”) At one company, Ozzie squared off against a technical person who found the proposal offensive. “I’ve seen this happen to engineers a million times when they get backed into a corner,” Ozzie says. “I told him ‘I’m not saying you should do this. I’m trying to refute the argument that it can’t be done.’ ”

Unsurprisingly, Ozzie got an enthusiastic reception from the law enforcement and intelligence communities. “It’s not just whether his scheme is workable,” says Rich Littlehale, a special agent in the Tennessee Bureau of Investigation. “It’s the fact that someone with his experience and understanding is presenting it.” In an informal meeting with NSA employees at its Maryland headquarters, Ozzie was startled to hear that the agency had come up with something almost identical at some point. They’d even given it a codename.

During the course of his meetings, Ozzie learned he was not alone in grappling with this issue. The names of three other scientists working on exceptional access popped up—Ernie Brickell, Stefan Savage, and Robert Thibadeau—and he thought it might be a good idea if they all met in private. Last August the four scientists gathered in Meg Whitman’s boardroom at Hewlett Packard Enterprise in Palo Alto. (Ozzie is a board member, and she let him borrow the space.) Though Thibadeau’s work pursued a different course, Ozzie found that the other two were pursuing solutions similar to his. What’s more, Savage has bona fides to rival Ozzie’s. He’s a world-renowned expert on security research, and he and Ozzie share the same motivations. “We say we are scientists, and we let the data take us where they will, but not on this issue,” Savage says. “People I very much respect are saying this can’t be done. That’s not why I got into this business.”

Ozzie’s efforts come as the government is getting increasingly desperate to gain access to encrypted information. In a speech earlier this year, FBI director Christopher Wray said the agency was locked out of 7,775 devices in 2017. He declared the situation intolerable. “I reject this notion that there could be such a place that no matter what kind of lawful authority you have, it’s utterly beyond reach to protect innocent citizens,” he said.

Deputy attorney general Rod Rosenstein, in a speech at the Naval Academy late last year, was even more strident. “Warrant-proof encryption defeats the constitutional balance by elevating privacy above public safety,” he said. What’s needed, he said, is “responsible encryption … secure encryption that allows access only with judicial authorization.”

A Brief History of the Crypto Wars

1976: Scientists introduce public key cryptography, in which complementary private and public keys are used to encrypt and unlock data.

1982: RSA becomes one of the first companies to market encryption to the business and consumer world.

1989: Lotus Notes becomes the first software to obtain a license to export strong encryption overseas.

1993: The Clinton administration announces a plan to use the so-called Clipper Chip.

1994: A computer scientist finds a critical vulnerability in the Clipper Chip. The US abandons the program within two years.

1999: The Clinton administration removes nearly all restrictions on the export of encryption products.

2013: Former NSA contractor Edward Snowden reveals classified information about government surveillance programs.

2014: Apple introduces default encryption in iOS 8.

2016: After a mass shooting in California, the Feds file a court order against Apple to access the contents of a shooter’s phone.

Since Apple, Google, Facebook, and the rest don’t see much upside in changing their systems, only a legislative demand could grant law enforcement exceptional access. But there doesn’t seem to be much appetite in Congress to require tech companies to tailor their software to serve the needs of law enforcement agencies. That might change in the wake of some major incident, especially if it were discovered that advance notice might have been gleaned from an encrypted mobile device.

As an alternative to exceptional access, cryptographers and civil libertarians have begun promoting an approach known as lawful hacking. It turns out that there is a growing industry of private contractors who are skilled in identifying flaws in the systems that lock up information. In the San Bernardino case, the FBI paid a reported $900,000 to an unnamed contractor to help them access the data on Farook’s iPhone. Many had suspected that the mysterious contractor was an Israeli company called Cellebrite, which has a thriving business in extracting data from iPhones for law enforcement agencies. (Cellebrite has refused to confirm or deny its involvement in the case, and its representatives declined to comment for this story.) A report by a think tank called the EastWest Institute concluded that other than exceptional access, lawful hacking is the only workable alternative.

But is it ethical? It seems odd to have security specialists promoting a system that depends on a reliable stream of vulnerabilities for hired hackers to exploit. Think about it: Apple can’t access its customers’ data—but some random company in Israel can fetch it for its paying customers? And with even the NSA unable to protect its own hacking tools, isn’t it inevitable that the break-in secrets of these private companies will eventually fall into the hands of criminals and other bad actors? There is also a danger that forces within the big tech companies could enrich themselves through lawful hacking. As one law enforcement official pointed out to me, lawful hacking creates a marketplace for so-called zero-day flaws—vulnerabilities discovered by outsiders that the manufacturers don’t know about—and thus can be exploited by legal and nonlegal attackers. So we shouldn’t be surprised if malefactors inside tech companies create and bury these trapdoors in products, with hopes of selling them later to the “lawful hackers.”

Lawful hacking is techno-capitalism at its shadiest, and, in terms of security alone, it makes the mechanisms underlying Clear (court orders, tamper-proof contents) look that much more appealing. No matter where you stand in the crypto debate, it makes sense that a carefully considered means of implementing exceptional access would be far superior to a scheme that’s hastily concocted in the aftermath of a disaster. (See Clipper.) But such an approach goes nowhere unless people believe that it doesn’t violate math, physics, and Tim Cook’s vows to his customers. That is the bar that Ozzie hopes he can clear.

The “Keys Under Doormats” gang has raised some good criticisms of Clear, and for the record, they resent Ozzie’s implication that their minds are closed. “The answer is always, show me a proposal that doesn’t harm security,” says Dan Boneh, a celebrated cryptographer who teaches at Stanford. “How do we balance that against the legitimate need of security to unlock phones? I wish I could tell you.”

One of the most salient objections goes to the heart of Ozzie’s claim that his system doesn’t really increase risk to a user’s privacy, because manufacturers like Apple already employ intricate protocols to protect the keys that verify their operating system updates. Ozzie’s detractors reject the equivalence. “The exceptional access key is different from the signing key,” says Susan Landau, a computer scientist who was also a coauthor of the “Doormat” paper. “A signing key is used rarely, but the exceptional access key will be used a lot.” The implication is that setting up a system to protect the PINs of billions of phones, and to process thousands of requests from law enforcement, will inevitably have huge gaps in security. Ozzie says this really isn’t a problem. Invoking his experience as a top executive at major tech firms, he says that they already have frameworks that can securely handle keys at scale. Apple, for example, uses a key system so that thousands of developers can be verified as genuine—the iOS ecosystem couldn’t work otherwise.

Ozzie has fewer answers to address criticisms about how his system—or any that uses exceptional access—would work internationally. Would every country, even those with authoritarian governments, be able to compel Apple or Google to cough up the key to unlock the contents of any device within its jurisdiction? Ozzie concedes that’s a legitimate concern, and it’s part of the larger ongoing debate about how we regulate the flow of information and intellectual property across borders. He is also the first to point out that he doesn’t have all the answers about exceptional access, and he isn’t trying to create a full legal and technological framework. He is merely trying to prove that something could work.

Maybe that’s where Ozzie’s plan plunges into the choppiest waters. Proving something is nigh impossible in the world of crypto and security. Time and again, supposedly impervious systems, created by the most brilliant cryptographers and security specialists, get undermined by clever attackers, and sometimes just idiots who stumble on unforeseen weaknesses. “Security is not perfect,” says Matthew Green, a cryptographer at Johns Hopkins. “We’re really bad at it.”

But as bad as security can be, we rely on it anyway. What’s the alternative? We trust it to protect our phone updates, our personal information, and now even cryptocurrencies. All too often, it fails. What Ozzie is saying is that exceptional access is no different. It isn’t a special case singled out by the math gods. If we agree that a relatively benign scheme is possible, then we can debate whether we should do it on the grounds of policy.

Maybe we’d even decide that we don’t want exceptional access, given all the other tools government has to snoop on us. Ozzie could return to his post-economic retirement, and law enforcement and civil libertarians would return to their respective corners, ready to slug it out another day. Let the Crypto Wars continue.


Steven Levy (@stevenlevy) wrote about the new Apple headquarters in issue 25.06.

This article appears in the May issue.




Markets Are Eating The World

For the last hundred years, individuals have worked for firms, and, by historical standards, large ones.

That many of us live in suburbs and drive our cars into the city to go to work at a large office building is so normal that it seems like it has always been this way. Of course, it hasn’t. In 1870, almost 50 percent of the U.S. population was employed in agriculture.[1] As of 2008, less than 2 percent of the population was directly employed in agriculture; most people instead worked for these relatively new things called “corporations.”[2]

Many internet pioneers in the ’90s believed that the internet would start to break up corporations by letting people communicate and organize over a vast, open network. That vision has only partly played out: the “gig economy” and the rise in freelancing are persistent, if not explosive, trends. With the re-emergence of blockchain technology, talk of “the death of the firm” has returned. Is there reason to think this time will be different?

To understand why this time might (or might not) be different, let us first take a brief look back into Coasean economics and mechanical clocks.

In his 1937 paper, “The Nature of the Firm,” economist R.H. Coase asked why, if markets were as efficient as economists believed at the time, firms exist at all. Why don’t entrepreneurs just go out and hire contractors for every task they need to get done?[3]

If an entrepreneur hires employees, she has to pay them whether they are working or not. Contractors only get paid for the work they actually do. While the firm itself interacts with the market, buying supplies from suppliers and selling products or services to customers, the employees inside of it are insulated. Each employee does not renegotiate their compensation every time they are asked to do something new. But, why not?

Coase’s answer was transaction costs. Contracting out individual tasks can be more expensive than just keeping someone on the payroll because each task involves transaction costs.

Imagine if, instead of answering every email yourself, you hired a contractor who was better than you at dealing with the particular issue in each email. First, it would cost you something to find them. Once you found them, you would have to bargain and agree on a price for their services, get them to sign a contract, and potentially take them to court if they didn’t answer the email as stipulated in the contract.

Duke economist Mike Munger calls these three types of transaction costs triangulation (how hard it is to find and measure the quality of a service); transfer (how hard it is to bargain and agree on a contract for the good or service); and trust (whether the counterparty is trustworthy, and whether you have recourse if they aren’t).

You might as well just answer the email yourself or, as some executives do, hire a full-time executive assistant. Even if the assistant isn’t busy all the time, that’s still better than hiring a one-off contractor for every email, or even every day.

Coase’s thesis was that in the presence of these transaction costs, firms will grow larger as long as they can benefit from doing tasks in-house rather than incurring the transaction costs of having to go out and search, bargain and enforce a contract in the market. They will expand or shrink until the cost of making it in the firm equals the cost of buying it on the market.
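Coase's make-or-buy margin can be expressed as simple arithmetic. The numbers below are entirely hypothetical; the sketch only shows how the decision flips as transaction costs fall.

```python
# Back-of-the-envelope sketch of Coase's make-or-buy margin. All numbers
# are hypothetical: the firm in-sources a task when the market price plus
# transaction costs exceeds its internal cost, and buys when it doesn't.

def market_cost(price, triangulation, transfer, trust):
    # Total cost of buying the task on the market, broken into Munger's
    # three transaction-cost categories.
    return price + triangulation + transfer + trust

def firm_should_insource(internal_cost, price, triangulation, transfer, trust):
    return internal_cost < market_cost(price, triangulation, transfer, trust)

# High-friction world: searching, bargaining, and enforcement are costly.
assert firm_should_insource(internal_cost=100, price=80,
                            triangulation=15, transfer=10, trust=10)   # 100 < 115: make

# Low-friction world: the same task is cheaper to buy than to make.
assert not firm_should_insource(internal_cost=100, price=80,
                                triangulation=2, transfer=2, trust=1)  # 100 > 85: buy
```

The same internal cost and the same market price give opposite answers; only the transaction costs changed, which is Coase's point.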

The lower the transaction costs are, the more efficient markets will be, and the smaller firms will be.

In a world where markets were extremely efficient, it would be very easy to find and measure things (low triangulation costs), it would be very easy to bargain and pay (low transfer costs), and it would be easy to trust the counterparty to fulfill the contract (low trust costs).

In that world, the optimal size of the firm is one person (or a very few people). There’s no reason to have a firm because business owners can just buy anything they need on a one-off basis from the market.[4] Most people wouldn’t have full-time jobs; they would do contract work.

Consumers would need to own very few things. If you needed a fruit dehydrator to prepare for a camping trip twice a year, you could rent one quickly and cheaply. If you wanted to take your family to the beach twice a year, you could easily rent a place just for the days you were there.

On the other hand, in a world that was extremely inefficient, it would be hard to find and measure things (high triangulation costs), it would be difficult to bargain and pay (high transfer costs) and it would be difficult to trust the counterparty to fulfill the contract (high trust costs).

In that world, firms would tend to be large. It would be inefficient to buy things from the market, so entrepreneurs would tend to accumulate large payrolls. Most people would work full-time jobs for large firms. If you wanted to take your family to the beach twice a year, you would need to own the beach house because it would be too inefficient to rent (the reality before online marketplaces like Airbnb showed up).

Consumers would need to own nearly everything they might conceivably need. Even if they only used their fruit dehydrator twice a year, they’d need to own it because the transaction costs involved in renting it would be too high.

If the structure of the economy is based on transaction costs, then what determines them?

Technological Eras and Transaction Costs

The primary determinant of transaction costs is technology.

The development of the wheel and the domestication of horses and oxen decreased transfer costs by making it possible to move more goods further. Farmers who could bring their crops to market in an ox cart rather than carrying them by hand could charge less and still make the same profit.

The development of the modern legal system reduced the transaction cost of trust. It was possible to trust that your counterparty would fulfill their contract because they knew you had recourse if they didn’t.

The list goes on: standardized weights and measures, the sail, the compass, the printing press, the limited liability corporation, canals, phones, warranties, container ships and, more recently, smartphones and the internet.

It’s hard to appreciate how impactful many of these technologies have been, because most of them had become so common by the time most of us were born that we take them for granted.

As the author Douglas Adams said, “Anything that is in the world when you’re born is normal and ordinary and is just a natural part of the way the world works. Anything that’s invented between when you’re fifteen and thirty-five is new and exciting and revolutionary and you can probably get a career in it. Anything invented after you’re thirty-five is against the natural order of things.”

To see how technology affects transaction costs, and how that affects the way our society is organized, let’s consider something which we all think of as “normal and ordinary,” but which has had a huge impact on our lives: the mechanical clock.

The Unreasonable Effectiveness of the Mechanical Clock

In 1314, the city of Caen installed a mechanical clock with the following inscription: “I give the hours voice to make the common folk rejoice.” “Rejoice” is a pretty strong reaction to a clock, but it wasn’t overstated; everyone in Caen was pretty jazzed about the mechanical clock. Why?

The fact that we have jobs today, rather than working as slaves or serfs bonded to the land as was common under the feudal system, is in part a direct result of the clock.

Time was important before the invention of the clock, but it was very hard to measure. Rome was full of sundials, and medieval Europe’s bell towers, where time was tolled, were the tallest structures in town.[5]

This was not cheap. In the larger and more important belfries, two bell-ringers lived full time, each serving as a check on the other. The bells themselves were usually financed by local guilds that relied on the time kept to tell their workers when they had to start working and when they could go home.

This system was problematic for a few reasons.

For one, it was expensive. Imagine if you had to pool funds together with your neighbors to hire two guys to sit in the tower down the street full time and ring the bell to wake you up in the morning.

For another, the bell could only signal a few events per day. If you wanted to organize a lunch meeting with a friend, you couldn’t ask the belltower to toll just for you. Medieval bell towers had not yet developed snooze functionality.

Finally, sundials suffered from accuracy problems. Something as common as clouds could make it difficult to tell precisely when dawn, dusk, and midday occurred.

In the 14th and 15th centuries, the expensive bell towers of Europe’s main cities got a snazzy upgrade that dramatically reduced transaction costs: the mechanical clock.

The key technological breakthrough that allowed this development was the escapement.

The escapement transfers energy to the clock’s pendulum to replace the energy lost to friction and keep it on time. Each swing of the pendulum releases a tooth of the escapement’s wheel gear, allowing the clock’s gear train to advance or “escape” by a set amount. This moves the clock’s hands forward at a steady rate.[6]
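The mechanism above can be mirrored in a few lines of code. This is only an illustrative sketch (the class and numbers are invented for this example): the essential property of the escapement is that time becomes a count of discrete, equal ticks.

```python
# A minimal sketch of what an escapement does: each pendulum swing
# releases exactly one tooth of the escape wheel, so elapsed time is
# simply (ticks * period) -- a steady, countable measure of time.

class Escapement:
    def __init__(self, seconds_per_swing: float = 1.0):
        self.seconds_per_swing = seconds_per_swing
        self.ticks = 0

    def swing(self) -> None:
        """One pendulum swing releases one tooth of the escape wheel."""
        self.ticks += 1

    def elapsed_seconds(self) -> float:
        """The gear train has advanced by a set amount per tick."""
        return self.ticks * self.seconds_per_swing

clock = Escapement()
for _ in range(3600):
    clock.swing()
print(clock.elapsed_seconds())  # 3600.0 -> one hour, ready to be tolled
```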

The accuracy of early mechanical clocks, plus or minus 10-15 minutes per day, was not notably better than late water clocks and less accurate than the sandglass, yet mechanical clocks became widespread. Why?

  1. Its automatic striking feature meant the clock could be struck every hour at lower cost, making it easier to schedule events than when the bell rang only at dawn, dusk and noon.
  2. It was more provably fair than the alternatives, which gave all parties greater confidence that the time being struck was accurate. (Workers were often suspicious that employers could bribe or coerce the bell-ringers to extend the workday, which was harder to do with a mechanical clock.)

Mechanical clocks broadcast by bell towers provided a fair (lower trust costs) and fungible [7] (lower transfer costs) measure of time. Each hour rung on the bell tower could be trusted to be the same length as another hour.

Most workers in the modern economy earn money based on a time-rate, whether the time period is an hour, a day, a week or a month. This is possible only because we have a measure of time which both employer and employee agree upon. If you hire someone to pressure-wash your garage for an hour, you may argue with them over the quality of the work, but you can both easily agree whether they spent an hour in the garage.

Prior to the advent of the mechanical clock, slavery and serfdom were the primary economic relationships, in part because the transaction cost of measuring time beyond just sunup and sundown was so high that workers were chained to their masters or lords.[8]

Under time-rate wages, the employer is able to use promotions, raises, and firing to incentivize employees to produce quality services during the time they are being paid for.[9]

In a system based on time-rate wages rather than slavery or serfdom, workers have a choice. If the talented blacksmith can get a higher time-rate wage from a competitor, she’s able to go work for them because there is an objective, fungible measure of time she’s able to trade.

As history has shown, this was a major productivity and quality-of-life improvement for both parties.[10]

It gradually became clear that mechanical time opened up entirely new categories of economic organization and productivity that had hitherto been not just impossible, but unimaginable.

We could look at almost any technology listed above (standardized weights and measures, the sail, the compass, the printing press, etc.) and do a similar analysis of how it affected transaction costs and, eventually, how it affected society as a result.

The primary effect is an increase in what we will call coordination scalability.

Coordination Scalability

“It is a profoundly erroneous truism, repeated by all copy-books and by eminent people when they are making speeches, that we should cultivate the habit of thinking what we are doing. The precise opposite is the case. Civilization advances by extending the number of important operations which we can perform without thinking about them.” (Alfred North Whitehead)

About 70,000 years ago, there were between six and ten species of the genus Homo. Now, of course, there is just one: Homo sapiens. Why did Homo sapiens prevail over the other species, like Homo neanderthalensis?

Homo sapiens prevailed because of their ability to coordinate. Coordination was made possible by increased neocortical size, which led to an ability to work together in large groups, not just as single individuals. Instead of single individuals hunting, groups could hunt and bring down larger prey more safely and efficiently.[11]

The brain of Homo sapiens has proven able to invent other, external structures which further increased coordination scalability by expanding the network of other people we could rely on.

Maybe the most important of these was language, but we have evolved many others since, including the mechanical clock.

Together, the larger brain and the coordination structures it invented have driven our species through four coordination revolutions: the Neolithic, Industrial, Computing, and Blockchain eras.

Neolithic Era: The Emergence of Division of Labor

The first economic revolution was the shift of humans from hunter-gatherers to farmers.

Coordination scalability among hunter-gatherers was limited to the size of the band, which tended to range from 15 to 150 individuals.[12] The abandonment of the nomadic way of life and the move to agriculture changed this by allowing specialization and the formation of cities.

Agriculture meant that people could, for the first time, accumulate wealth. Farmers could save excess crops to eat later or trade them for farming equipment, baskets or decorations. The problem was that this wealth was suddenly worth stealing and so farmers needed to defend their wealth.

Neolithic societies typically consisted of groups of farmers protected by what Mancur Olson called “stationary bandits,” basically warlords.[13] This allowed the emergence of much greater specialization. Farmers accumulated wealth and paid some to the warlords for protection, but even then there was still some left over, making it possible for individuals to specialize.

A city of 10,000 people requires, but also makes possible, specialists.

The limits of coordination scalability increased from 150 to thousands or, in some cases, tens of thousands. This was not necessarily a boon to human happiness. Anthropologist Jared Diamond called the move to agriculture “the worst mistake in the history of the human race.”[14] The quality of life for individuals declined: lifespans shortened, nutrition was worse leading to smaller stature, and disease was more prevalent.

But this shift was irresistible because specialization created so much more wealth and power that groups which adopted it came to dominate those that didn’t. The economies of scale in military specialization, in particular, were overwhelming. Hunter-gatherers couldn’t compete.

In the Neolithic era, the State was the limit of coordination scalability.

Industrial Era: Division of Labor Is Eating the World

Alongside the city-state, a new technology started to emerge that would further increase the limits of coordination scalability: money. To illustrate, let us take the European case, from ancient Greece to modernity, though the path in other parts of the world was broadly similar. Around 630 B.C., the Lydian kings recognized the need for small, easily transported coins worth no more than a few days’ labor. They made these ingots in a standard size (about the size of a thumbnail) and weight, and stamped an emblem of a lion’s head on them.

This eliminated one of the most time-consuming (and highest transaction cost) steps in commerce: weighing gold and silver ingots each time a transaction was made. Merchants could easily count the number of coins without worrying about cheating.

Prior to the invention of coins, trade had been limited to big commercial transactions, like buying a herd of cattle. With the reduced transfer cost facilitated by coins, Lydians began trading in the daily necessities of life: grain, olive oil, beer, wine, and wood.[15]

The variety and abundance of goods which could suddenly be traded led to another innovation: the retail market.

Previously, buyers had to go to the home of the seller of whatever they needed. If you needed olive oil, you had to walk over to the olive oil lady’s house to get it. With the amount of trade that began happening after coinage, a central market emerged. Small stalls lined the market, where each merchant specialized in (and so could produce more efficiently) a particular good: meat, grain, jewelry, bread, cloth, etc. Instead of having to go to the olive oil lady’s house, you could go to her stall and pick up bread from the baker while you were there.

From this retail market in Lydia sprang the Greek agora, the medieval market squares of Europe, the suburban shopping mall and, eventually, the “online shopping malls” Amazon and Google. Though markets were around as early as 7th-century BCE Lydia, they really hit their stride during the Industrial Revolution in the 18th century.[16]

Adam Smith was the first to describe in detail the effect of this marketization of the world. Markets made it possible to promote the division of labor across political units, not just within them. Instead of each city or country manufacturing all the goods they needed, different political entities could further divide labor. Coordination scalability started to stretch across political borders.

Coming back to Coase, firms will expand or shrink until the cost of “making” equals the cost of “buying.” In the Industrial era, transaction costs made administrative and managerial coordination (making) more efficient than market coordination (buying) for most industries, which led to the rise of large firms.

The major efficiency gain of Industrial-era companies over their more “artisanal” forebears was that, using the techniques of mass production, they could produce higher-quality products at a lower price. This was possible only if they could enforce standards throughout the supply chain. The triangulation transaction cost can be broken down into search and measurement: a company needed to find the vendor and to be able to measure the quality of the good or service.

In the early Industrial era, the supply chain was extremely fragmented. By bringing all the pieces into the firm, a large vertically integrated company could be more efficient.[17]

As an example, in the 1860s and 1870s, the Carnegie Corporation purchased mines to ensure it had reliable access to the iron ore and coke it needed to make steel. The upstream suppliers were unreliable and non-standardized, and Carnegie could lower the cost of production by simply owning the whole supply chain.

This was the case in nearly every industry. By bringing many discrete entities under one roof and one system of coordination, firms gained greater economic efficiencies, and the multi-unit business corporation replaced the small, single-unit enterprise because administrative coordination enabled greater productivity through lower transaction costs per task. Economies of scale flourished.

This system of large firms connected by markets greatly increased coordination scalability. Large multinational firms could stretch across political boundaries and provide goods and services more efficiently.

In Henry Ford’s world, the point where the cost of making equaled the cost of buying was pretty big. Ford built a giant plant at River Rouge, just outside Detroit, between 1917 and 1928 that took in iron ore and rubber at one end and sent cars out the other. At the factory’s peak, 100,000 people worked there. These economies of scale allowed Ford to dramatically drive down the cost of an automobile, making it possible for the middle class to own a car.[18]

As with Carnegie, Ford learned that supplier networks take a while to emerge and grow into something reliable. In 1917, doing everything himself was the only way to get the scale he needed to be able to make an affordable car.

One of the implications of this model was that industrial businesses required huge startup costs.

An entrepreneur’s only chance to compete was to start out with a similarly massive amount of capital and build a factory large and efficient enough to match Ford’s.

For workers, this meant that someone in a specialized role, like an electrical engineer or an underwriter, did not freelance or work for small businesses. Because the most efficient way to produce products was in large organizations, specialized workers could earn the most by working inside large organizations, be they Ford, AT&T or Chase Bank.

At the peak of the Industrial era, there were two dominant institutions: firms and markets.

Work inside the firm allowed for greater organization and specialization which, in the presence of high transaction costs, was more economically efficient.

Markets were more chaotic and less organized, but also more motivating. Henry Ford engaged with the market and made out just a touch better than any of his workers; there just wasn’t room for many Henry Fords.

This started to dissolve in the second half of the 20th century. Ford no longer takes iron ore and rubber as the inputs to its factories; it has a vast network of upstream suppliers.[19] The design and manufacturing of car parts now happens over a long supply chain, which the car companies ultimately assemble and sell.

One reason is that supplier networks became more standardized and reliable. Ford can now buy ball bearings and brake pads more efficiently than it can make them, so it does. Each company in the supply chain focuses on what it knows best, and competition forces them all to constantly improve.

By the 1880s, it cost Carnegie more to operate the coke ovens in-house than to buy coke from an independent source, so he sold off the coke ovens and bought coke on the open market. Reduced transaction costs, in the form of more standardized and reliable production technology, caused both Ford and the Carnegie Corporation to shrink, as Coase’s theory would suggest.

The second reason is that if you want to make a car using a network of cooperating companies, you have to be able to coordinate their efforts, and you can do that much better with telecommunication technology broadly and computers specifically. Computers reduce the transaction costs that Coase argued are the raison d’être of corporations. That is a fundamental change.[20]

The Computing Era: Software Is Eating the World

Computers, and the software and networks built on top of them, had a new economic logic driven by lower transaction costs.

Internet aggregators such as Amazon, Facebook, Google, Uber and Airbnb reduced the transaction costs for participants on their platforms. For the industries that these platforms affected, the line between “making” and “buying” shifted toward buying. The line between owning and renting shifted toward renting.

Primarily, this was done through a reduction in triangulation costs (how hard it is to find and measure the quality of a service), and transfer costs (how hard it is to bargain and agree on a contract for the good or service).

Triangulation costs came down for two reasons. One was the proliferation of smartphones, which made it possible for services like Uber and Airbnb to exist. The other was the increasing digitization of the economy. Digital goods are both easier to find (think Googling versus going to the library or opening the Yellow Pages) and easier to measure the quality of (I know exactly how many people read my website each day and how many seconds they are there, the local newspaper does not).

The big improvement in transfer costs was the result of matchmaking: bringing together and facilitating the negotiation of mutually beneficial commercial or retail deals.  

Take Yelp, the popular restaurant review app. Yelp allows small businesses like restaurants, coffee shops, and bars to advertise to an extremely targeted group: individuals close enough to come to the restaurant who searched for a relevant term. A barbecue restaurant in Nashville can show ads only to people searching their zip code for terms like “bbq” and “barbecue.” This enables small businesses that couldn’t afford radio or television advertising to attract customers.

The existence of online customer reviews gives consumers a more trusted way to evaluate the restaurant.

All of the internet aggregators, including Amazon, Facebook, and Google, enabled new service providers by creating a market and standardizing the rules of that market to reduce transaction costs.[21]

The “sharing economy” is more accurately called the “renting economy” from the perspective of consumers, and the “gig economy” from the perspective of producers. Most of the benefits are the result of new markets enabled by lower transaction costs, which allow consumers to rent rather than own, including “renting” someone else’s time rather than employing them full time.

It’s easier to become an Uber driver than a cab driver, and an Airbnb host than a hotel owner. It’s easier to get your product into Amazon than Walmart. It’s easier to advertise your small business on Yelp, Google or Facebook than on a billboard, radio or TV.

Prior to the internet, a product designer was faced with the options of selling locally (often too small a market), trying to get into Walmart (impossible without significant funding and traction), or simply working for a company that already had distribution in Walmart.

On the internet, they could start distributing nationally or internationally on day one. The “shelf space” of Amazon or Google’s search engine results page was a lot more accessible than the shelf space of Walmart.

As a result, it became possible for people in certain highly specialized roles to work independently of firms entirely. Product designers and marketers could sell products through the internet and the platforms erected on top of it (mostly Amazon and Alibaba in the case of physical products) and have the potential to make as much or more as they could inside a corporation.

This group is highly motivated because their pay is directly based on how many products they sell. The aggregators and the internet were able to reduce the transaction costs that had historically made it economically inefficient or impossible for small businesses and individual entrepreneurs to exist.

The result was that in industries touched by the internet, we saw an industry structure of large aggregators and a long tail [22] of small businesses that were able to use the aggregators to reach previously unreachable, niche segments of the market. Though there aren’t many cities where a high-end cat furniture retail store makes economic sense, on Google or Amazon, it does.

Platform-enabled markets (source: stratechery.com):

| Before: Firms | After: Platform | After: Long Tail |
| --- | --- | --- |
| Walmart and big box retailers | Amazon | Niche product designers and manufacturers |
| Cab companies | Uber | Drivers with extra seats |
| Hotel chains | Airbnb | Homeowners with extra rooms |
| Traditional media outlets | Google and Facebook | Small offline and niche online businesses |

For these industries, coordination scalability was far greater and could be seen in the emergence of micro-multinational businesses. Businesses as small as a half dozen people could manufacture in China, distribute products in North America, and employ people from Europe and Asia. This sort of outsourcing and the economic efficiencies it created had previously been reserved for large corporations.

As a result, consumers received cheaper, but also more personalized products from the ecosystem of aggregators and small businesses.

However, the rental economy still represents a tiny fraction of the overall economy. At any given time, only a thin subset of industries are ready to be marketized. What’s been done so far is only a small fraction of what will be done in the next few decades.

Yet, we can already start to imagine a world which Munger calls “Tomorrow 3.0.” You need a drill to hang some shelves in your new apartment. You open an app on your smartphone and tap “rent drill.” An autonomous car picks up a drill and delivers it outside your apartment in a keypad-protected pod, and your phone vibrates: “drill delivered.” Once you’re done, you put it back in the pod, which sends a message to another autonomous car nearby to come pick it up. The rental costs $5, much less than buying a commercial-quality power drill. This is, of course, not limited to drills: it could have been a saw, fruit dehydrator, bread machine or deep fryer.

You own almost nothing, but have access to almost everything.

Neither you nor your neighbors have a job, at least in the traditional sense. You pick up shifts or client work as needed and maybe manage a few small side businesses. After you finish hanging the shelves, you might sit down at your computer, see what work requests are open, and spend a few hours designing a new graphic or finishing up the monthly financial statements for a client.

This is a world in which triangulation and transfer costs have come down dramatically, resulting in more renting than buying from consumers and more gig work than full-time jobs for producers.

This is a world we are on our way to already, and there aren’t any big, unexpected breakthroughs that need to happen first.

But what about the transaction cost of trust?

In the computer era, the areas that have been affected most are what could be called low-trust industries. If the sleeping mask you order off of Amazon isn’t as high-quality as you thought, that’s not a life or death problem.

What about areas where trust is essential?

Enter stage right: blockchains.

The Blockchain Era: Blockchain Markets Are Eating the World

One area where trust matters a lot is money. Most of the developed world doesn’t think about the possibility of fiat money [23] not being trustworthy, because it hasn’t happened in our lifetimes. For those who have experienced it, including major currency devaluations, being able to trust that your money will be worth roughly the same tomorrow as it is today is a big deal.

Citizens of countries like Argentina and particularly Venezuela have been quicker to adopt bitcoin as a savings vehicle because their economic history made the value of censorship resistance more obvious.

Due to poor governance, the inflation rate in Venezuela averaged 32.42 percent from 1973 until 2017. Argentina was even worse; the inflation rate there averaged 200.80 percent between 1944 and 2017.

The story of North America and Europe is different. In the second half of the 20th century, monetary policy there was stable.

The Bretton Woods Agreement, struck in the aftermath of the Second World War, concentrated control of most of the globe’s monetary policy in the hands of the United States. The European powers acceded to this in part because the U.S. dollar was backed by gold, meaning that the U.S. government was subject to the physics and geology of gold mining. It could not expand the money supply any faster than gold could be taken out of the ground.

With the abandonment of the gold standard under Nixon in the early 1970s, control over money and monetary policy moved into the hands of a historically small group of central bankers and powerful political and financial leaders, no longer restricted by gold.

Fundamentally, the value of the U.S. dollar today is based on trust. There is no gold in a vault that backs the dollars in your pocket. Most fiat currencies today have value because the market trusts that the officials in charge of U.S. monetary policy will manage it responsibly.

It is at this point that the debate around monetary policy devolves into one group that imagines this small group of elitist power brokers sitting in a dark room on large leather couches surrounded by expensive art and mahogany bookshelves filled with copies of The Fountainhead smoking cigars and plotting against humanity using obscure financial maneuvering.

Another group, quite reasonably, points to the economic prosperity of the last half-century under this system and dismisses the former group as quacks.

A better way to understand the tension between a monetary system based on gold and one based on fiat money has been offered by political science professor Bruce Bueno de Mesquita: “Democracy is a better form of government than dictatorships, not because presidents are intrinsically better people than dictators, but simply because presidents have less agency and power than dictators.”

Bueno de Mesquita calls this Selectorate Theory. The selectorate represents the number of people who have influence in a government, and thus the degree to which power is distributed. The selectorate of a dictatorship tends to be very small: the dictator and a few cronies. The selectorate in a democracy tends to be much larger, typically encompassing the Executive, Legislative, and Judicial branches and the voters who elect them.

Historically, the size of the selectorate involves a tradeoff between the efficiency and the robustness of the governmental system. Let’s call this the “Selectorate Spectrum.”

Dictatorships can be more efficient than democracies because they don’t have to get many people on board to make a decision. Democracies, by contrast, are more robust, but at the cost of efficiency.

Conservatives and progressives alike bemoan how little their elected representatives get done but happily observe how little their opponents accomplish. A single individual with unilateral power can accomplish far more (good or bad) than a government of “checks and balances.” The long-run health of a government depends on balancing the tradeoff between robustness and efficiency. The number of stakeholders can be neither so large that nothing gets done and the country never adapts, nor so small that one person or a small group can hijack the government for personal gain.
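The Selectorate Spectrum tradeoff can be sketched as a deliberately toy model. Every function and curve below is invented for illustration and carries no empirical weight; it captures only the direction of the tradeoff: larger selectorates decide more slowly but are harder to capture.

```python
# Toy model of the Selectorate Spectrum (illustrative only): decision
# speed falls as the selectorate grows, while capture risk shrinks.

import math

def decision_speed(selectorate_size: int) -> float:
    """Efficiency: more veto players -> slower decisions (hypothetical curve)."""
    return 1 / math.log2(selectorate_size + 2)

def capture_risk(selectorate_size: int) -> float:
    """Robustness: a larger group is harder to bribe or coerce (hypothetical curve)."""
    return 1 / (selectorate_size + 1)

# A dictator, a junta, and a mass electorate:
for size in (1, 10, 1_000_000):
    print(size, round(decision_speed(size), 3), round(capture_risk(size), 6))
```

The dictator scores highest on speed and worst on capture risk; the mass electorate is the reverse. Where a system should sit on that curve is exactly the tradeoff the essay describes.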

This tension between centralized efficiency and decentralized robustness exists in many other areas. Firms try to make the selectorate large enough that there is some accountability (e.g. a board and shareholder voting) but not so large that it becomes impossible to compete in a market, which is why most decisions are centralized in the hands of a CEO.

We can view both the current monetary system and the internet aggregators through the lens of the selectorate. In both areas, the trend over the past few decades is that the robustness of a large selectorate has been traded away for the efficiency of a small one.[24]

A few individuals (heads of central banks, heads of state, corporate CEOs, and leaders of large financial entities like sovereign wealth funds and pension funds) can move markets and politics globally with even whispers of significant change. This sort of centralizing in the name of efficiency can sometimes lead to long feedback loops with potentially dramatic consequences.

Said another way, much of what appears efficient in the short term may not be efficient at all, but merely hiding risk somewhere, creating the potential for a blow-up. A large selectorate tends to look less efficient in the short term, but can be more robust in the long term, making it more efficient in the long term as well. It is the story of the Tortoise and the Hare: slow and steady may lose the first leg, but win the race.

In the Beginning, There Was Bitcoin

In October 2008, an anonymous individual or group using the pseudonym Satoshi Nakamoto sent an email to a cryptography mailing list describing a new system called bitcoin. The opening line of the paper’s conclusion summed it up:

“We have proposed a system for electronic transactions without relying on trust.”

When the network went live a few months later in January 2009, Satoshi embedded in its first block the headline of a story running that day in The Times of London:

“The Times 03/Jan/2009 Chancellor on brink of second bailout for banks”

Though we can’t know for sure what was going through Satoshi’s mind at the time, the most likely explanation is that Satoshi was reacting against the decisions made in response to the 2008 Global Financial Crisis by the small selectorate in charge of monetary policy.

Instead of impactful decisions about the monetary system, like a bailout, resting with a single individual (the chancellor), Satoshi envisioned bitcoin as a more robust monetary system with a larger selectorate, beyond the control of any single individual.

But why create a new form of money? Throughout history, the most common way for individuals to show their objections to their nation’s monetary policy was by trading their currency for some commodity like gold, silver, or livestock that they believed would hold its value better than the government-issued currency.

Gold, in particular, has been used as a form of money for nearly 6,000 years for one primary reason: its stock-to-flow ratio. Because of how gold is deposited in the Earth’s crust, it is very difficult to mine. Despite all the technological change of the last few hundred years, the amount of new gold mined in a given year (the flow) has averaged between 1 and 2 percent of the total gold supply (the stock), with very little variation year to year.

As a result, the total gold supply has never increased by more than 1 to 2 percent per year. Compared to Venezuela’s 32.4 percent inflation or Argentina’s 200.8 percent inflation, gold’s inflation is far lower and more predictable.
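The stock-to-flow arithmetic above can be sketched in a few lines. The gold figures here are rough, commonly cited estimates (not from this essay), used only to illustrate the calculation:

```python
# Illustrative stock-to-flow calculation. The figures below are rough,
# commonly cited estimates, not precise data.
gold_stock = 190_000  # tonnes of gold mined throughout history (approx.)
gold_flow = 3_000     # tonnes of new gold mined per year (approx.)

# Annual supply growth is the flow expressed as a fraction of the stock.
supply_growth = gold_flow / gold_stock
print(f"{supply_growth:.1%}")  # prints 1.6% -- within the 1-2 percent range
```

The inverse of that figure is the stock-to-flow ratio itself: it would take over 60 years of mining at current rates to reproduce the existing gold stock, which is what makes the supply so hard to inflate.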

Viewed through the lens of Selectorate Theory, we can say that gold or other commodity forms of money have a larger selectorate and are more robust than government-issued fiat currency. In the same way a larger group of stakeholders in a democracy constrains the actions of any one politician, the geological properties of gold constrained governments and their monetary policy.

Whether or not these constraints were “good” or “bad” is still a matter of debate. The Keynesian school of economics, which has come to be the view of mainstream economics, emerged out of John Maynard Keynes’s reaction to the Great Depression, which he thought was greatly exacerbated by the commitment to the gold standard and that governments should manage monetary policy to soften the cyclical nature of markets.

The Austrian and monetarist schools believe that human behavior is too idiosyncratic to model accurately with mathematics and that minimal government intervention is best. Attempts to intervene can be destabilizing and lead to inflation so a commitment to the gold standard is the lesser evil in the long run.

Taken in good faith, these schools represent different beliefs about the ideal point on the Selectorate Spectrum. Keynesians believe that greater efficiency could be gained by giving government officials greater control over monetary policy without sacrificing much robustness. Austrians and monetarists argue the opposite, that any short-term efficiency gains actually create huge risks to the long-term health of the system.

Viewed as a money, bitcoin has many gold-like properties, embodying something closer to the Austrian and monetarist view of ideal money. For one, we know exactly how many bitcoin will be created (21 million) and the rate at which they will be created. Like gold, this schedule is outside the control of any single individual or small group, giving bitcoin a predictable stock-to-flow ratio and making it extremely difficult to inflate.
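The 21 million cap is not a stored constant but a consequence of the issuance rules: the block subsidy starts at 50 BTC and halves every 210,000 blocks, with amounts tracked as integer satoshis. A minimal sketch of that arithmetic:

```python
# Sum bitcoin's total issuance from its halving schedule.
# Amounts are in satoshis (1 BTC = 100,000,000 satoshis); the subsidy
# halves by integer division, as in the reference client.
HALVING_INTERVAL = 210_000   # blocks between halvings
subsidy = 50 * 100_000_000   # initial block subsidy, in satoshis

total = 0
while subsidy > 0:
    total += HALVING_INTERVAL * subsidy
    subsidy //= 2            # halve (rounding down) each era

print(total / 100_000_000)   # about 20,999,999.98 BTC, just shy of 21 million
```

The integer rounding at each halving is why the true maximum supply lands slightly under the round 21 million figure.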

Similar to gold, the core bitcoin protocol also trades away a great deal of efficiency in the name of robustness.[25]

However, bitcoin has two key properties of fiat money which gold lacks: it is very easy to divide and transport. Someone in Singapore can send 1/100th of a bitcoin to someone in Canada in less than an hour. Sending 1/100th of a gold bar would be a bit trickier.

In his 1999 novel, Cryptonomicon, science fiction author Neal Stephenson imagined a bitcoin-like money built by the grandchild of Holocaust survivors who wanted to create a way for individuals to escape totalitarian regimes without giving up all their wealth. It was difficult, if not impossible, for Jews to carry gold bars out of Germany, but what if all they had to do was remember a 12-word password phrase? How might history have been different?

Seen in this way, bitcoin offers a potentially better trade-off between robustness and efficiency. Its programmatically defined supply schedule means its inflation rate will be lower than gold’s (making it more robust), while its digital nature makes it as divisible and transportable as any fiat currency (making it more efficient).

Using a nifty combination of economic incentives for mining (proof-of-work system) and cryptography (including blockchain), bitcoin allowed individuals to engage in a network that was both open (like a market) and coordinated (like a firm) without needing a single or small group of power brokers to facilitate the coordination.
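The proof-of-work incentive works because finding a valid block is expensive while verifying one is cheap. A toy sketch of the idea (purely illustrative; bitcoin’s real scheme hashes a block header against a numeric difficulty target):

```python
import hashlib

# Toy proof-of-work: search for a nonce whose SHA-256 digest of
# data+nonce begins with `difficulty` leading zero hex digits.
def mine(data: str, difficulty: int = 4) -> int:
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{data}{nonce}".encode()).hexdigest()
        if digest.startswith("0" * difficulty):
            return nonce
        nonce += 1

# Finding a valid nonce takes many hash attempts on average (~16**4 here);
# verifying it takes a single hash. That asymmetry lets a large, open set
# of participants check each other's work cheaply.
headline = "The Times 03/Jan/2009 Chancellor on brink of second bailout for banks"
nonce = mine(headline)
assert hashlib.sha256(f"{headline}{nonce}".encode()).hexdigest().startswith("0000")
```

Tying block creation to this kind of measurable, wasteful work is what lets the network coordinate without a central facilitator: whoever spends the work gets to extend the ledger, and everyone else can verify it instantly.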

Said another way, bitcoin was the first example of money going from being controlled by a small group of firm-like entities (central banks) to being market-driven. What cryptocurrency represents is the technology-enabled possibility that anyone can make their own form of money.

Whether or not bitcoin survives, that Pandora’s Box is now open. In the same way computing and the internet opened up new areas of the economy to being eaten by markets, blockchain and cryptocurrency technology have opened up a different area to be eaten by markets: money.

The Future of Public Blockchains

Bitcoin is unique among forms of electronic money because it is both trustworthy and maintained by a large selectorate rather than a small one.

A question naturally arose: could the same underlying technology be used to develop open networks in other areas by reducing the transaction cost of trust?[26]

One group, the monetary maximalists, thinks not. According to them, public blockchains like bitcoin will only ever be useful as money, because money is the domain where trust matters most and so everything else can be traded away for it. The refugee fleeing political chaos does not care that a transaction takes an hour to go through and costs $10 or even $100. They care about having the most difficult-to-seize, censorship-resistant form of wealth.

Bitcoin, as it exists today, enhances coordination scalability by allowing any two parties to transact without relying on a centralized intermediary and by allowing individuals in unstable political situations to store their wealth in the most difficult-to-seize form ever created.

The second school of thought holds that bitcoin is the first example of a canonical, trustworthy ledger with a large selectorate, and that other types of ledgers could emulate it.

At its core, money is just a ledger. The amount of money in your personal bank account is a list of all the transactions coming in (paychecks, deposits, etc.) and all the transactions going out (paying rent, groceries, etc.). When you add all those together, you get a balance for your account.

Historically, this ledger was maintained by a single entity, like your bank. In the case of U.S. dollars, the number in circulation can be figured out by taking how much money the U.S. government has printed and released into the market and subtracting how much it has taken back out.
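The “money is a ledger” point can be made concrete in a few lines. A minimal sketch, with hypothetical figures:

```python
# A bank balance is nothing more than the sum of signed ledger entries:
# money coming in is positive, money going out is negative.
# (Hypothetical amounts for illustration.)
ledger = [
    ("paycheck", +3_000),
    ("rent", -1_200),
    ("groceries", -150),
]
balance = sum(amount for _, amount in ledger)
print(balance)  # prints 1650
```

Your bank maintains exactly this kind of list on your behalf; the question the rest of this section asks is what happens when no single entity has to maintain it.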

What else could be seen as a ledger?

The answer is “nearly everything.” Governments and firms can be seen just as groups of ledgers. Governments maintain ledgers of citizenship, passports, tax obligations, social security entitlements and property ownership. Firms maintain ledgers of employment, assets, processes, customers and intellectual property.

Economists sometimes refer to firms as “a nexus of contracts.” The value of the firm comes from those contracts and how they are structured within the “ledger of the firm.” Google has a contract with users to provide search results, with advertisers to display ads to users looking for specific search terms, and with employees to maintain the quality of their search engine. That particular ledger of contracts is worth quite a lot.

Mechanical time opened up entirely new categories of economic organization. It allowed trade to be synchronized at great distances; without mechanical time, there would have been no railroads (how would you know when to go?) and no Industrial Revolution. Mechanical time also allowed for new modes of employment that lifted people out of serfdom and slavery.[27]

In the same way, it may be that public blockchains make it possible to have ledgers that are trustworthy without requiring a centralized firm to manage them. This would shift the line further in favor of “renting” over “buying” by reducing the transaction cost of trust.

Entrepreneurs may be able to write a valuable app and release it to anyone and everyone who needs that functionality, collecting micro-payments in their wallet. A product designer could release their design into the wild, and consumers could download it to be printed on their 3D printer almost immediately.[28]

For the first 10 years of bitcoin’s existence, this wasn’t possible. Using a blockchain has meant minimizing the transaction cost of trust at the expense of everything else, but that may not always be the case. Proposals are already being built out that allow more transactions to happen without compromising the trust that bitcoin and other crypto-networks offer.

There are widely differing opinions on the best way to scale blockchains. One faction, usually identifying with Web 3/smart-contracting platforms/Ethereum, believes that scaling quickly at the base layer is essential and can be done with minimal security risk; the other camp (bitcoin) believes that scaling should be done slowly and only where it does not sacrifice the censorship-resistant nature of blockchains. Just like the debate between the Keynesian and Austrian/monetarist views of monetary policy, these positions represent different beliefs about the optimal tradeoff point on the Selectorate Spectrum. But both groups believe that significant progress can be made on making blockchains more scalable without sacrificing too much trust.

Public blockchains may allow aggregation without the aggregators. For certain use cases, perhaps few, perhaps many, public blockchains like bitcoin will allow the organization and coordination benefits of firms and the motivation of markets while maintaining a large selectorate.

Ultimately, what we call society is a series of overlapping and interacting ledgers.

In order for ledgers to function, they must be organized according to rules. Historically, rules have required rulers to enforce them. Because of network effects, these rulers tend to become the most powerful people in society. In medieval Europe, the Pope enforced the rules of Christianity and so he was among the most powerful.

Today, Facebook controls the ledger of our social connections. Different groups of elites control the university ledgers and banking ledgers.

Public blockchains allow people to engage in a coordinated and meritocratic network without requiring a small selectorate.

Blockchains may introduce markets into corners of society that have never before been reached. In doing so, blockchains have the potential to replace ledgers previously run by kings, corporations, and aristocracies. They could extend the logic of the long tail to new industries and lengthen the tail for suppliers and producers by removing rent-seeking behavior and allowing for permissionless innovation.

Public blockchains allow for rules without a ruler. It began with money, but they may move on to corporate ledgers, social ledgers and perhaps eventually, the nation-state ledger.[29]

Acknowledgments: Credit for the phrase “Markets Are Eating the World” to Patri Friedman.


  1. https://www.bls.gov/opub/mlr/1981/11/art2full.pdf
  2. https://www.bls.gov/emp/tables/employment-by-major-industry-sector.htm
  3. http://www3.nccu.edu.tw/~jsfeng/CPEC11.pdf
  4. There are, of course, other types of transaction costs than the ones listed here. A frequent one brought up in response to Coase is company culture, which nearly all entrepreneurs and investors agree is an important factor in a firm’s productivity. This is certainly true, but the broader point about the relationship between firm size and transaction costs holds: culture is just another transaction cost.
  5. http://www.fon.hum.uva.nl/rob/Courses/InformationInSpeech/CDROM/Literature/LOTwinterschool2006/szabo.best.vwh.net/synch.html
  6. https://en.wikipedia.org/wiki/Escapement
  7. Fungibility is the property of a good or a commodity whose individual units are interchangeable. For example, one ounce of pure silver is fungible with any other ounce of pure silver. This is not the same for most goods: a dining table chair is not fungible with a fold-out chair.
  8. Piece rates, paying for some measurement of a finished output like bushels of apples or balls of yarn, seem fairer. But they suffer from two issues: For one, the output of the labor depends partially on the skill and effort of the laborer, but also on the vagaries of the work environment. This is particularly true in a society like that of medieval Europe, where nearly everyone worked in agriculture. The best farmer in the world can’t make it rain. The employee wants something like insurance that they will still be compensated for their effort in the case of events outside their control, and the employer, who has more wealth and knowledge of market conditions, takes on these risks in exchange for increased profit potential.
  9. For the worker, time doesn’t specify costs such as effort, skill or danger. A laborer would want to demand a higher time-rate wage for working in a dangerous mine than in a field. A skilled craftsman might demand a higher time-rate wage than an unskilled craftsman.
  10. The advent of the clock was necessary for the shift from farms to cities. Sunup to sundown worked effectively as a schedule for farmers because summer was typically when the most labor on farms was required, so longer days were useful. For craftsmen and others working in cities, work was not as driven by the seasons, so a trusted measure of time that didn’t vary with the seasons was necessary. The advent of a trusted measure of time led to an increase in the quantity, quality and variety of goods and services because urban, craftsman-type work was now more feasible.
  11. https://unenumerated.blogspot.com/2017/02/money-blockchains-and-social-scalability.html. I am using the phrase “coordination scalability” synonymously with how Nick uses “social scalability.” A few readers suggested that social scalability was a confusing term as it made them think of scaling social networks.
  12. 150 is often referred to as Dunbar’s number, referring to a number calculated by University of Oxford anthropologist and psychologist Robin Dunbar using a ratio of neocortical volume to total brain volume and mean group size. For more see  https://www.newyorker.com/science/maria-konnikova/social-media-affect-math-dunbar-number-friendships. The lower band of 15 was cited in Pankaj Ghemawat’s World 3.0
  13. https://www.jstor.org/stable/2938736
  14. http://discovermagazine.com/1987/may/02-the-worst-mistake-in-the-history-of-the-human-race
  15. Because what else would you want to do besides eat bread dipped in fresh olive oil and drink fresh beer and wine?
  16. From The History of Money by Jack Weatherford.
  17. It also allowed them to squeeze out competitors at different places in the supply chain and put them out of business which Standard Oil did many times before finally being broken up by anti-trust legislation.
  18. http://www.paulgraham.com/re.html
  19. Tomorrow 3.0 by Michael Munger
  20. http://www.paulgraham.com/re.html
  21. There were quite a few things, even pre-internet, in the intersection between markets and firms, like approved vendor auction markets for government contracting and bidding, but they were primarily very high ticket items where higher transaction costs could be absorbed. The internet brought down the threshold for these dramatically to something as small as a $5 cab ride.
  22. The Long Tail was a concept WIRED editor Chris Anderson used to describe the proliferation of small, niche businesses that were possible after the end of the “tyranny of geography.” https://www.wired.com/2004/10/tail/
  23. From Wikipedia: “Fiat money is a currency without intrinsic value that has been established as money, often by government regulation. Fiat money does not have use value, and has value only because a government maintains its value, or because parties engaging in exchange agree on its value.” By contrast, “Commodity money is created from a good, often a precious metal such as gold or silver.” Almost all of what we call money today, from dollars to euros to yuan, is fiat.
  24. Small institutions can get both coordination and a larger selectorate by using social norms. This doesn’t enable coordination scalability though as it stops working somewhere around Dunbar’s number of 150.
  25. Visa processes thousands of transactions per second, while the bitcoin network’s decentralized structure processes a mere seven transactions per second. The key difference is that Visa transactions are easily reversed or censored, whereas bitcoin’s are not.
  26. https://medium.com/@cdixon/crypto-tokens-a-breakthrough-in-open-network-design-e600975be2ef
  27. https://medium.com/cryptoeconomics-australia/the-blockchain-economy-a-beginners-guide-to-institutional-cryptoeconomics-64bf2f2beec4
  28. https://medium.com/cryptoeconomics-australia/the-blockchain-economy-a-beginners-guide-to-institutional-cryptoeconomics-64bf2f2beec4
  29. https://twitter.com/naval/status/877467629308395521