Why I (Still) Love Tech: In Defense of a Difficult Industry

Nerds, we did it. We have graduated, along with oil, real estate, insurance, and finance, to the big T. Trillions of dollars. Trillions! Get to that number any way you like: Sum up the market cap of the major tech companies, or just take Apple’s valuation on a good day. Measure the number of dollars pumped into the economy by digital productivity, whatever that is. Imagine the possible future earnings of Amazon.

The things we loved—the Commodore Amigas and AOL chat rooms, the Pac-Man machines and Tamagotchis, the Lisp machines and RFCs, the Ace paperback copies of Neuromancer in the pockets of our dusty jeans—these very specific things have come together into a postindustrial Voltron that keeps eating the world. We accelerated progress itself, at least the capitalist and dystopian parts. Sometimes I’m proud, although just as often I’m ashamed. I am proudshamed.

June 2019.


And yet I still love the big T, by which I mean either “technology” or “trillions of dollars.” Why wouldn’t I? I came to New York City at the age of 21, in the era of Java programming, when Yahoo! still deserved its exclamation point. I’d spent my childhood expecting nuclear holocaust and suddenly came out of college with a knowledge of HTML and deep beliefs about hypertext, copies of WIRED (hello) and Ray Gun bought at the near-campus Uni-Mart. The 1996 theme at Davos was “Sustaining Globalization”; the 1997 theme was “Building the Network Society.” One just naturally follows the other. I surfed the most violent tsunami of capital growth in the history of humankind. And what a good boy am I!

My deep and abiding love of software in all its forms has sent me—me, a humble suburban Pennsylvania son of a hardscrabble creative writing professor and a puppeteer—around the world. I lived in a mansion in Israel, where we tried to make artificial intelligence real (it didn’t work out), and I visited the Roosevelt Room of the White House to talk about digital strategy. I’ve keynoted conferences and camped in the backyard of O’Reilly & Associates, rising as the sun dappled through my tent and emerging into a field of nerds. I’ve been on TV in the morning, where the makeup people, who cannot have easy lives, spackled my fleshy Irish American face with pancake foundation and futilely sought to smash down the antennae-like bristle of my hair, until finally saying in despair, “I don’t know what else to do,” to which I say, “I understand.”

When I was a boy, if you’d come up behind me (in a nonthreatening way) and whispered that I could have a few thousand Cray supercomputers in my pocket, that everyone would have them, that we would carry the sum of human ingenuity next to our skin, jangling in concert with our coins, wallets, and keys? And that this Lilliputian mainframe would have eyes to see, a sense of touch, a voice to speak, a keen sense of direction, and an urgent desire to count my actual footsteps and everything I read and said as I traipsed through the noosphere? Well, I would have just burst, burst. I would have stood up and given the technobarbaric yawp of a child whose voice has yet to change. Who wants jet packs when you can have 256 friggabytes (because in 2019 we measure things in friggin’ gigabytes) resting upon your mind and body at all times? Billions of transistors, attached to green plastic, soldered by robots into a microscopic Kowloon Walled City of absolute technology that we call a phone, even though it is to the rotary phone as humans are to amoebas. It falls out of my hand at night as I drift to sleep, and when I wake up it is nestled into my back, alarm vibrating, small and warm like a twitching baby possum.

I still love software. It partially raised me and is such a patient teacher. Being tall, white, enthusiastic, and good at computers, I’ve ended up the CEO of a software services company, working for various large enterprises to build their digital dreams—which you’d figure would be like being a kid in a candy store for me, sculpting software experiences all day until they ship to the web or into app stores. Except it’s more like being the owner of a candy factory, concerned about the rise in cost of Yellow 5 food coloring and the lack of qualified operators for the gumball-forming machine. And of course I rarely get to build software anymore.

I would like to. Something about the interior life of a computer remains infinitely interesting to me; it’s not romantic, but it is a romance. You flip a bunch of microscopic switches really fast and culture pours out.

A few times a year I find myself walking past 195 Broadway, a New York City skyscraper that has great Roman columns inside. It was once the offices of the AT&T corporation. The fingernail-sized processor in my phone is a direct descendant of the transistor, which was invented in AT&T’s Bell Labs (out in New Jersey). I pat my pocket and think, “That’s where you come from, little friend!” When the building was constructed, the company planned to put in a golden sculpture of a winged god holding forked lightning, called Genius of Telegraphy.

But by the time the building was finished AT&T had sold off the telegraph division, so the company called it Genius of Electricity. But that must have been too specific, because it was renamed Spirit of Communication. And then in 1984, the Bell system, after decades of argument about its monopoly status, broke up (with itself and with America).

Now the New York offices are rented out to, among other things, a wedding planning website and a few media companies. The statue has been relocated to Dallas. Today everyone calls it Golden Boy.

In the late 1990s I was terrified of mailing lists. For years the best way to learn a piece of software—especially some undocumented, open sourced thing you had to use to make websites—was to join its community and subscribe to its mailing lists, tracking the bugs and new releases. Everything was a work in progress. Books couldn’t help you. There was no GitHub or Stack Overflow.

I could only bring myself to lurk, never to contribute. I couldn’t even ask questions. I was a web person, and web people weren’t real programmers. If I piped up, I was convinced they’d yell, “Get off this mailing list! You have no place in the community of libxml2! Naïf!” The very few times I submitted bugs or asked questions were horrible exercises in rewriting and fear. Finally I’d hit Send and—

Silence, often. No reply at all. I’d feel awful, and a little outraged at being ignored. I was trying so hard! I’d read the FAQs!

Eventually I met some of those magical programmers. I’d sneak into conferences. (Just tell the people at the entry you left your badge in the hotel room.) They were a bunch of very normal technologists contributing, through their goodwill and with their spare time, to open source software tools.

“I use your code every day,” I’d say. They were pleased to be recognized. Surprised at my excitement. They weren’t godlike at all. They were, in many ways, the opposite of godlike. But I am still a little afraid to file bug reports, even at my own company. I know I’m going to be judged.


    So much about building software—more than anyone wants to admit—is etiquette. Long before someone tweeted “That’s not OK!” there were netiquette guides and rule books, glossaries, and jargon guides, like The New Hacker’s Dictionary, available in text-only format for download, or Hitchhiker’s Guide to the Internet, first released in 1987. Bibles. There were the FAQs that would aid newcomers to the global decentralized discussion board Usenet. FAQs kept people from rehashing the same conversation. When college freshmen logged on in September—because that’s where the internet happened back in the 1980s and ’90s, at colleges and a few corporations—they would be gently shown the FAQs and told how to behave. But then in 1993, AOL gave its users Usenet access—and that became known as the Eternal September. The ivory tower was overrun. That was the day the real internet ended, 26 years ago. It was already over when I got here.

    The rulemaking will never end. It’s rules all the way down. Coders care passionately about the position of their brackets and semicolons. User experience designers work to make things elegant and simple and accessible to all. They meet at conferences, on message boards, and today in private Slacks to hash out what is good and what is bad, which also means who is in, who is out.

    I keep meeting people out in the world who want to get into this industry. Some have even gone to coding boot camp. They did all the exercises. They tell me about their React apps and their Rails APIs and their page design skills. They’ve spent their money and time to gain access to the global economy in short order, and often it hasn’t worked.

    I offer my card, promise to answer their emails. It is my responsibility. We need to get more people into this industry.

    But I also see them asking, with their eyes, “Why not me?”

    And here I squirm and twist. Because—because we have judged you and found you wanting. Because you do not speak with a confident cadence, because you cannot show us how to balance a binary tree on a whiteboard, because you overlabored the difference between UI and UX, because you do not light up in the way that we light up when hearing about some obscure bug, some bad button, the latest bit of outrageousness on Hacker News. Because the things you learned are already, six months later, not exactly what we need. Because the industry is still overlorded by people like me, who were lucky enough to have learned the etiquette early, to even know there was an etiquette.

    I try to do better, and so does my company. How do you change an industry that will not stop, not even to catch its breath? We have no leaders, no elections. We never expected to take over the world! It was just a scene. You know how U2 was a little band in Ireland with some good albums, and over time grew into this huge, world-spanning band-as-brand, playing stadium shows with giant robotic structures while Bono hung out with Paul Wolfowitz? Tech is like that, but it just kept going. Imagine if you were really into the group Swervedriver in the mid-’90s but by 2019 someone was on CNBC telling you that Swervedriver represented, I don’t know, 10 percent of global economic growth, outpacing returns in oil and lumber. That’s the tech industry.

    No one loves tech for tech’s sake. All of this was about power—power over the way stories were told, the ability to say things on my own terms. The aesthetic of technology is an aesthetic of power—CPU speed, sure, but what do you think we’re talking about when we talk about “design”? That’s just a proxy for power; design is about control, about presenting the menu to others and saying, “These are the options you wanted. I’m sorry if you wanted a roast beef sandwich, but sir, this is not an Arby’s.” That is Apple’s secret: It commoditizes the power of a computer and sells it to you as design.

    Technology is a whole world that looks nothing like the world it seeks to command. A white world, a male world, and—it breaks my heart to say it, for I’ve been to a lot of Meetups (now a WeWork company), and hosted some too—a lonely world. Maybe I’m just projecting some teenage metaphysics onto a lively and dynamic system, but I can’t fully back away from that sense of monolithic loneliness. We’re like a carpenter who spent so long perfecting his tools that he forgot to build the church.

    But not always. One night in October 2014, I had a few drinks and set up a single Linux server in the cloud and called it tilde.club, then tweeted out that I’d give anyone an account who wanted one. I was supposed to be working on something else, of course.

    Suddenly my email was full: Thousands of people were asking for logins. People of all kinds. So I made them accounts and watched in awe as they logged on to that server. You can put hundreds of people on one cheap cloud computer. It’s just plain text characters on a screen, like in the days of DOS, but it works. And they can use that to make hundreds of web pages, some beautiful, some dumb, exactly the way we made web pages in 1996. Hardly anyone knew what they were doing, but explaining how things worked was fun.

    For a few weeks, it was pure frolic. People made so many web pages, formed committees, collaborated. Someone asked if I’d sell it. People made their own tilde servers. It became a thing, but an inclusive thing. Everyone was learning a little about the web. Some were teaching. It moved so fast I couldn’t keep up. And in the end, of course, people went back whence they came—Twitter, Facebook, and their jobs. We’d had a very good party.

    The server is still up. Amazon sends a bill. I wish the party could have kept going.

    But briefly I had made a tiny pirate kingdom, run at a small loss, where people were kind. It was the opposite of loneliness. And that is what I wish for the whole industry. Eternal September is not to be hated, but accepted as the natural order of success. We should invite everyone in. We should say, We’re all new here.

    “Governments of the Industrial World, you weary giants of flesh and steel, I come from Cyberspace, the new home of Mind.” This was John Perry Barlow’s “A Declaration of the Independence of Cyberspace,” a document many people took seriously, although I always found it a little much. Barlow was a prophet of network communication, an avatar of this magazine. “On behalf of the future, I ask you of the past to leave us alone. You are not welcome among us. You have no sovereignty where we gather.” It’s signed from Davos, 1996 (the year of “Sustaining Globalization”).

    Exposure to the internet did not make us into a nation of yeoman mind-farmers (unless you count Minecraft). That people in the billions would self-assemble, and that these assemblies could operate in their own best interests, was … optimistic.

    But maybe! Maybe it could work. There was the Arab Spring, starting in 2010. Twitter and Facebook were suddenly enabling protest, supporting democracy, changing the world for the better. This was the thing we’d been waiting for—

    And then it wasn’t. Autocracy kept rearing its many heads, and people started getting killed. By 2014, Recep Tayyip Erdoğan was shutting off Twitter in Turkey to quell protests, and then it came home, first as Gamergate, wherein an online campaign of sexual harassment against women, somewhat related to videogames, metastasized into an army of enraged bots and threats. And as Gamergate went, so went the 2016 election. It was into this gloomy context that I made tilde.club that night—a blip of nostalgia and cheer fueled by a few Manhattans.

    People—smart, kind, thoughtful people—thought that comment boards and open discussion would heal us, would make sexism and racism negligible and tear down walls of class. We were certain that more communication would make everything better. Arrogantly, we ignored history and learned a lesson that has been in the curriculum since the Tower of Babel, or rather, we made everyone else learn it. We thought we were amplifying individuals in all their wonder and forgot about the cruelty, or at least assumed that good product design could wash that away. We were so hopeful, and we shaved the sides of our heads, and we never expected to take over the world.

    I’m watching the ideologies of our industry collapse. Our celebration of disruption of every other industry, our belief that digital platforms must always uphold free speech no matter how vile. Our transhumanist tendencies, that sci-fi faith in the singularity. Our general belief that software will eat the world and that the world is better for being eaten.

    It’s been hard to accept, at least for me, that our techy ideologies, whatever their individual merits, don’t really add up to a worldview, because technology is not the world. It’s just another layer in the Big Crappy Human System along with religion, energy, government, sex, and, more than anything else, money.

    I don’t know if I can point to any one thing and say “that’s tech” in 2019. (Well, maybe 3D graphics GPU card programming. That’s nerd central.) The cost of our success is that we are no longer unique. The secret club is no longer a gathering of misfits. We are the world. (We are the servers. We are the ones who gather faves and likes, so let’s start clicking. Sorry.)

    I’ve made a mistake, a lifelong one, correlating advancements in technology with progress. Progress is the opening of doors and the leveling of opportunity, the augmentation of the whole human species and the protection of other species besides. Progress is cheerfully facing the truth, whether flooding coastlines or falling teen pregnancy rates, and thinking of ways to preserve the processes that work and mitigate the risks. Progress is seeing calmly, accepting, and thinking of others.

    It’s not that technology doesn’t matter here. It does. We can enable humans to achieve progress. We make tools that humans use. But it might not be our place to lead.

    I wish I could take my fellow CEOs by the hand (they’re not into having their hands held) and show them Twitter, Facebook, Tumblr, and any of the other places where people are angry. Listen, I’d say, you’re safe. No one is coming for your lake house, even if they tweet “I’m coming for your lake house.” These random angry people are merely asking us to keep our promises. We told them 20-some years ago that we’d try to abolish government and bring a world of plenty. We told them we’d make them powerful, that we’d open gates of knowledge and opportunity. We said, “We take your privacy and security seriously at Facebook.” We said we were listening. So listen! They are submitting a specification for a world in which fairness is a true currency, and then they’re trying to hold everyone to the spec (which is, very often, the law). As someone who spent a lot of time validating XML and HTML pages, I empathize. If bitcoin can be real money, then fairness can be a real goal.

    We might have been them, if we’d been born later and read some different websites. And it’s only a matter of time before they become us.

    Every morning I drop off my 7-year-old twins, a boy and a girl, at their public school, and they enter a building that was established a century ago and still functions well for the transmission of learning, a building filled with digital whiteboards but also old-fashioned chalkboards and good, worn books.

    I think often of the things the building has seen. It was built in an age of penmanship and copybooks, shelves of hardbound books and Dick and Jane readers; it made its way through blue mimeographs with their gasoline smell. Milkmen delivered with horses when it was built, and now every parking space is filled with Toyotas and school buses. Teachers and principals come young and retire decades later. There are certain places where craft supplies are stored. The oldest living student just turned 100 years old, and some students walked to his home and sang him “Happy Birthday.” They announced it at the multicultural music event.

    The school hasn’t moved in a century, but it is a white-hot place in time. Ten or twenty thousand little bodies have come through here on their way to what came next. While they are here, it’s their whole world. It feeds the children who need to be fed.

    I watch my kids go through the front doors. (I call this my “cognitive receipt,” because unless I see them I worry that I somehow forgot to drop them off.) Then I walk to the bus stop. The bus comes, and off we go, across an elevated highway and through a tunnel. Then we take the FDR Drive and pass right under three bridges: the Brooklyn, the Manhattan, the Williamsburg. Each bridge has its own story, an artifact of its time; they are products of various forms of hope, necessity, and civic corruption, each one an essay on the nature of gravity and the tensile strength of wire. Everyone on the bus looks at their phone or looks out the window, or sometimes they read a book.

    Sometimes I think of the men who died making the Brooklyn Bridge; sometimes I play a game on my phone. This is as close as it gets to the sacred for me, to be on a public conveyance, in the arms of a transit authority, part of a system, to know that the infrastructure has been designed for my safety. In the winter, I can look down into the icy East River and fantasize about what it would take to push us into the river, because only a small, low concrete barrier keeps us from death. I think of how I’d escape and how I’d help others up. But the bus never hurtles into the water. They made sure of it.

    I know that my privacy is being interfered with, that I’m being watched, monitored, tracked by giant companies, and that I’m on video. (I wish I’d known how often I’d be on video in 2019, how often I’d need to see my own animated face in the corner of the video call.) I know also that I have been anticipated by the mineralogists who study asphalt and that I am surrounded by tolerances and angles, simple and complex machines.

    My children are safe in an old, too-warm building that has seen every system of belief and every kind of education, one that could easily last another 100 years, with glowing lichen on the wall in place of lights. Imagine how many light-emitting sneakers they’ll have by then.

    Maybe I should have moved to the Bay Area to be closer to this industry I love, and just let myself fall backward into tech. I could never muster it, even though I studied maps of San Francisco and pushed my wife to come with me and visit the corporate campuses of Apple, Google, and the like, which meant visiting a lot of parking lots.

    But I didn’t move. I stayed in New York, where on a recent Saturday I went to the library with my kids. It’s a little one-story library, right next to their school, and it’s as much a community center as repository of knowledge. I like quiet, so sometimes I get annoyed at all the computers and kids, the snacking moms and dads. But it’s 2019 and I live in a neighborhood where people need public libraries, and I live in a society.

    When we visited one day in February, there was a man in a vest behind me setting up some devices with wire and speakers. He was trying to connect two little boxes to the devices and also to two screens, and calling gently to a passing librarian for a spare HDMI cable. Kids were coming up and looking. They were particularly interested in the cupcakes he’d brought with him.

    “We’re having a birthday party,” he said, “for a little computer.”

    By which he meant the Raspberry Pi. Originally designed in the UK, it’s smaller than a can of soda and runs Linux. It costs $35. It came into the world in February 2012, sold as a bare green circuit board filled with electronics, with no case, nothing, and became almost instantly popular. Across the original and subsequent versions, 25 million units have been sold. A new one is much faster but basically the same size, and still costs $35.

    But for the terrible shyness that overcame me, I would have turned around right there and grasped that man’s hand. “Sir,” I would like to have said, “thank you for honoring this wonderful device.”

    You get your Raspberry Pi and hook it up to a monitor and a keyboard and a mouse, then you log on to it and … it’s just a Linux system, like the tilde.club machine, and ready for work. A new computer is the blankest of canvases. You can fill it with files. You can make it into a web server. You can send and receive email, design a building, draw a picture, write 1,000 novels. You could have hundreds of users or one. It used to cost tens of thousands of dollars, and now it costs as much as a fancy bottle of wine.

    I should have said hello to the man in the library. I should have asked my questions on the mailing lists. I should have engaged where I could, when I had the chance. I should have written fan letters to the people at Stanford Research Institute and Xerox PARC who bootstrapped the world I live inside. But what do you say? Thank you for creating a new universe? Sorry we let you down?

    We are all children of Moore’s law. Everyone living has spent the majority of their existence in the shadow of automated computation. It has been a story of joy, of mostly men in California and Seattle inventing a future under the occasional influence of LSD, soldering and hot-tubbing, and underneath it all an extraordinary glut of the most important raw material imaginable—processor cycles, the result of a perfect natural order in which the transistors on the chips kept doubling, speeds in the kilo-, mega-, and eventually gigahertz, as if the camera had zoomed in on an old IBM industrial wall clock that sped up until its minute hand was a blur, and then the hour hand, and then the clock caught fire and melted to the ground, at which point money started shooting out of the hole in the wall.

    There is probably no remaining growth like what we’ve seen. Attempts to force a revolution don’t seem to work. Blockchain has yet to pan out. Quantum computing is a long and uncertain road. Apple, Google, and their peers are poised to get the greatest share of future growth. Meanwhile, Moore’s law is coming to its natural conclusion.

    I have no desire to retreat to the woods and hear the bark of the fox. I like selling, hustling, and making new digital things. I like ordering hard drives in the mail. But I also increasingly enjoy the regular old networks: school, PTA, the neighbors who gave us their kids’ old bikes. The bikes represent a global supply chain; when I touch them, I can feel the hum of enterprise resource planning software, millions of lines of logistics code executed on a global scale, bringing the handlebars together with the brakes and the saddle onto its post. Then two kids ride in circles in the supermarket parking lot, yawping in delight. I have no desire to disrupt these platforms. I owe my neighbors a nice bottle of wine for the bikes. My children don’t seem to love computers as I do, and I doubt they will in the same way, because computers are everywhere, and nearly free. They will ride on different waves. Software has eaten the world, and yet the world remains.

    We’re not done. There are many birthdays to come for the Raspberry Pi. I’m at the office on a Sunday as I write this. My monitor is the only light, and if you could see me I’d be blue.

    I’m not sure if I should be a CEO forever. I miss making things. I miss coding. I liked having power over machines. But power over humans is often awkward and sometimes painful to wield. I wish we’d built a better industry.

    I was exceptionally lucky to be born into this moment. I got to see what happened, to live as a child of acceleration. The mysteries of software caught my eye when I was a boy, and I still see it with the same wonder, even though I’m now an adult. Proudshamed, yes, but I still love it, the mess of it, the code and toolkits, down to the pixels and the processors, and up to the buses and bridges. I love the whole made world. But I can’t deny that the miracle is over, and that there is an unbelievable amount of work left for us to do.


    Paul Ford (@ftrain) is a programmer and a National Magazine Award–winning essayist on technology. In 2015 he cofounded Postlight, a digital product studio in New York City.






    Markets Are Eating The World

    For the last hundred years, individuals have worked for firms, and, by historical standards, large ones.

    That many of us live in suburbs and drive our cars into the city to go to work at a large office building is so normal that it seems like it has always been this way. Of course, it hasn’t. In 1870, almost 50 percent of the U.S. population was employed in agriculture.[1] By 2008, less than 2 percent of the population was directly employed in agriculture; instead, many people worked for these relatively new things called “corporations.”[2]

    Many internet pioneers in the ’90s believed that the internet would start to break up corporations by letting people communicate and organize over a vast, open network. This reality has only sort of played out: the “gig economy” and the rise in freelancing are persistent, if not explosive, trends. With the re-emergence of blockchain technology, talk of “the death of the firm” has returned. Is there reason to think this time will be different?

    To understand why this time might (or might not) be different, let us first take a brief look back into Coasean economics and mechanical clocks.

    In his 1937 paper, “The Nature of the Firm,” the economist R. H. Coase asked: If markets were as efficient as economists believed at the time, why do firms exist at all? Why don’t entrepreneurs just go out and hire contractors for every task they need to get done?[3]

    If an entrepreneur hires employees, she has to pay them whether they are working or not. Contractors only get paid for the work they actually do. While the firm itself interacts with the market, buying supplies from suppliers and selling products or services to customers, the employees inside of it are insulated. Each employee does not renegotiate their compensation every time they are asked to do something new. But why not?

    Coase’s answer was transaction costs. Contracting out individual tasks can be more expensive than just keeping someone on the payroll because each task involves transaction costs.

    Imagine if, instead of answering every email yourself, you hired a contractor who was better than you at dealing with the particular issue in that email. It would cost you something to find them. Once you found them, you would have to bargain and agree on a price for their services, then get them to sign a contract, and potentially take them to court if they didn’t answer the email as stipulated in the contract.

    Duke economist Mike Munger calls these three types of transaction costs triangulation (how hard it is to find and measure the quality of a service), transfer (how hard it is to bargain and agree on a contract for the good or service), and trust (whether the counterparty is trustworthy, and whether you have recourse if they aren’t).

    You might as well just answer the email yourself or, as some executives do, hire a full-time executive assistant. Even if the assistant isn’t busy all the time, that’s still better than hiring someone on a one-off basis for every email, or even every day.

    Coase’s thesis was that in the presence of these transaction costs, firms will grow larger as long as they can benefit from doing tasks in-house rather than incurring the transaction costs of having to go out and search, bargain and enforce a contract in the market. They will expand or shrink until the cost of making it in the firm equals the cost of buying it on the market.

    The lower the transaction costs are, the more efficient markets will be, and the smaller firms will be.
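Coase’s expand-or-shrink condition can be sketched as a toy make-or-buy calculation. This is a hypothetical illustration, not anything from Coase’s paper or Munger’s work: the function names and all the numbers are invented, and the three transaction costs are simply summed for convenience.

```python
# A toy sketch of Coase's make-or-buy logic. A firm keeps a task in-house
# when the market price plus transaction costs exceeds the internal cost
# of doing the task itself. All figures below are illustrative.

def transaction_cost(triangulation, transfer, trust):
    """Munger's three Ts, collapsed into one per-task overhead."""
    return triangulation + transfer + trust

def should_outsource(internal_cost, market_price,
                     triangulation, transfer, trust):
    """True when buying on the market is cheaper, all-in."""
    all_in_price = market_price + transaction_cost(triangulation, transfer, trust)
    return all_in_price < internal_cost

# High-friction world: search, bargaining, and enforcement are expensive,
# so the firm does the task itself even though the raw market price is lower.
print(should_outsource(internal_cost=100, market_price=80,
                       triangulation=15, transfer=10, trust=10))  # False

# Low-friction world (think online marketplaces): the same task becomes
# cheaper to contract out, so the firm shrinks.
print(should_outsource(internal_cost=100, market_price=80,
                       triangulation=2, transfer=1, trust=1))  # True
```

The firm’s boundary sits exactly where the two sides of that inequality meet: lowering any of the three Ts pushes more tasks out to the market, which is the mechanism the next paragraphs explore.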

    In a world where markets were extremely efficient, it would be very easy to find and measure things (low triangulation costs), it would be very easy to bargain and pay (low transfer costs), and it would be easy to trust the counterparty to fulfill the contract (low trust costs).

    In that world, the optimal size of the firm is one person (or a very few people). There’s no reason to have a firm because business owners can just buy anything they need on a one-off basis from the market.[4] Most people wouldn’t have full-time jobs; they would do contract work.

    Consumers would need to own very few things. If you needed a fruit dehydrator to prepare for a camping trip twice a year, you could rent one quickly and cheaply. If you wanted to take your family to the beach twice a year, you could easily rent a place just for the days you were there.

    On the other hand, in a world that was extremely inefficient, it would be hard to find and measure things (high triangulation costs), it would be difficult to bargain and pay (high transfer costs) and it would be difficult to trust the counterparty to fulfill the contract (high trust costs).

    In that world, firms would tend to be large. It would be inefficient to buy things from the market, so entrepreneurs would tend to accumulate large payrolls. Most people would work full-time jobs for large firms. If you wanted to take your family to the beach twice a year, you would need to own the beach house, because renting would be too inefficient. This was the reality before online marketplaces like Airbnb showed up.

    Consumers would need to own nearly everything they might conceivably need. Even if they only used their fruit dehydrator twice a year, they’d need to own it because the transaction costs involved in renting it would be too high.

    If the structure of the economy is based on transaction costs, then what determines them?

    Technological Eras and Transaction Costs

    The primary determinant of transaction costs is technology.

    The development of the wheel and the domestication of horses and oxen decreased transfer costs by making it possible to move more goods further. Farmers who could bring their crops to market in an ox cart, rather than carrying them by hand, could charge less and still make the same profit.

    The development of the modern legal system reduced the transaction cost of trust. It was possible to trust that your counterparty would fulfill their contract because they knew you had recourse if they didn’t.

    The list goes on: standardized weights and measures, the sail, the compass, the printing press, the limited liability corporation, canals, phones, warranties, container ships and, more recently, smartphones and the internet.

    It’s hard to appreciate how impactful many of these technologies have been, because most of them had become so common by the time we were born that we take them for granted.

    As the author Douglas Adams said, “Anything that is in the world when you’re born is normal and ordinary and is just a natural part of the way the world works. Anything that’s invented between when you’re fifteen and thirty-five is new and exciting and revolutionary and you can probably get a career in it. Anything invented after you’re thirty-five is against the natural order of things.”

    To see how technology affects transaction costs, and how that in turn shapes the way our society is organized, let’s consider something we all think of as “normal and ordinary,” but which has had a huge impact on our lives: the mechanical clock.

    The Unreasonable Effectiveness of the Mechanical Clock

    In 1314, the city of Caen installed a mechanical clock with the following inscription: “I give the hours voice to make the common folk rejoice.” “Rejoice” is a pretty strong reaction to a clock, but it wasn’t overstated; everyone in Caen was pretty jazzed about the mechanical clock. Why?

    A key reason we have jobs today, rather than laboring as slaves or as serfs bonded to the land as was common under the feudal system, is a direct result of the clock.

    Time was important before the invention of the clock, but it was very hard to measure. Rome was full of sundials, and medieval Europe’s bell towers, where time was tolled, were the tallest structures in town.[5]

    This was not cheap. In the larger and more important belfries, two bell-ringers lived full time, each serving as a check on the other. The bells themselves were usually financed by local guilds that relied on the time kept to tell their workers when they had to start working and when they could go home.

    This system was problematic for a few reasons.

    For one, it was expensive. Imagine if you had to pool funds together with your neighbors to hire two guys to sit in the tower down the street full time and ring the bell to wake you up in the morning.

    For another, the bell could only signal a few events per day. If you wanted to organize a lunch meeting with a friend, you couldn’t ask the belltower to toll just for you. Medieval bell towers had not yet developed snooze functionality.

    Finally, sundials suffered from accuracy problems. Something as common as clouds could make it difficult to tell precisely when dawn, dusk, and midday occurred.

    In the 14th and 15th centuries, the expensive bell towers of Europe’s main cities got a snazzy upgrade that dramatically reduced transaction costs: the mechanical clock.

    The key technological breakthrough that allowed this development was the escapement.

    The escapement transfers energy to the clock’s pendulum to replace the energy lost to friction and keep it on time. Each swing of the pendulum releases a tooth of the escapement’s wheel gear, allowing the clock’s gear train to advance or “escape” by a set amount. This moves the clock’s hands forward at a steady rate.[6]
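    The logic of the escapement can be caricatured in a few lines of code (a toy sketch with hypothetical numbers, not a physical model): timekeeping becomes the counting of discrete releases, each worth a fixed, equal increment.

```python
# Toy sketch of the escapement's logic (hypothetical numbers, not a
# physical model): each pendulum swing releases exactly one tooth of
# the escape wheel, so elapsed time is counted in fixed, equal steps.

PENDULUM_PERIOD_S = 2.0    # a "seconds pendulum": one swing each second
TEETH_PER_REVOLUTION = 60  # hypothetical escape wheel

def elapsed_seconds(swings):
    # One swing = half a period = one released tooth = one counted second.
    return swings * (PENDULUM_PERIOD_S / 2)

def wheel_revolutions(swings):
    # The gear train advances ("escapes") by one tooth per swing.
    return swings // TEETH_PER_REVOLUTION

# After 3,600 swings the clock has counted exactly one hour, at a
# steady rate, regardless of who is driving or observing it.
print(elapsed_seconds(3600))
print(wheel_revolutions(3600))
```

    The point of the mechanism is that every counted hour is built from identical increments — which is precisely what made the struck hour trustworthy and fungible.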

    The accuracy of early mechanical clocks, plus or minus 10 to 15 minutes per day, was not notably better than that of late water clocks, and worse than that of the sandglass, yet mechanical clocks became widespread. Why?

    1. Its automatic striking feature meant the clock could be struck every hour at low cost, making it far easier to schedule events than a bell struck only at dawn, dusk and noon.
    2. It was more provably fair than the alternatives, which gave all parties greater confidence that the time being struck was accurate. (Workers were often suspicious that employers could bribe or coerce the bell-ringers to extend the workday, which was harder to do with a mechanical clock.)

    Mechanical clocks broadcast by bell towers provided a fair (lower trust costs) and fungible[7] (lower transfer costs) measure of time. Each hour rung on the bell tower could be trusted to be the same length as any other hour.

    Most workers in the modern economy earn money based on a time-rate, whether the time period is an hour, a day, a week or a month. This is possible only because we have a measure of time which both employer and employee agree upon. If you hire someone to pressure-wash your garage for an hour, you may argue with them over the quality of the work, but you can both easily agree whether they spent an hour in the garage.

    Prior to the advent of the mechanical clock, slavery and serfdom were the primary economic relationships, in part because the transaction cost of measuring time beyond sunup and sundown was so high that workers were chained to their masters or lords.[8]

    With time-rate wages, the employer is able to use promotions, raises, and firings to incentivize employees to produce quality work during the time they are being paid for.[9]

    In a system based on time-rate wages rather than slavery or serfdom, workers have a choice. If the talented blacksmith can get a higher time-rate wage from a competitor, she’s able to go work for them because there is an objective, fungible measure of time she’s able to trade.

    As history has shown, this was a major productivity and quality-of-life improvement for both parties.[10]

    It gradually became clear that mechanical time opened up entirely new categories of economic organization and productivity that had hitherto been not just impossible, but unimaginable.

    We could look at almost any technology listed above (standardized weights and measures, the sail, the compass, the printing press, etc.) and do a similar analysis of how it affected transaction costs and, eventually, how it affected society as a result.

    The primary effect is an increase in what we will call coordination scalability.

    Coordination Scalability

    “It is a profoundly erroneous truism, repeated by all copy-books and by eminent people when they are making speeches, that we should cultivate the habit of thinking what we are doing. The precise opposite is the case. Civilization advances by extending the number of important operations which we can perform without thinking about them.” – Alfred North Whitehead

    About 70,000 years ago, there were between six and ten species of the genus Homo. Now, of course, there is just one: Homo sapiens. Why did Homo sapiens prevail over the other species, like Homo neanderthalensis?

    Homo sapiens prevailed because of their ability to coordinate. Coordination was made possible by increased neocortical size, which led to an ability to work together in large groups, not just as single individuals. Instead of single individuals hunting, groups could hunt and bring down larger prey more safely and efficiently.[11]

    The brain of Homo sapiens has proven able to invent other, external structures which further increased coordination scalability by expanding the network of other people we could rely on.

    Maybe the most important of these was language, but we have evolved many others since, including the mechanical clock.

    The increased brain size has driven our species through four coordination revolutions: Neolithic, Industrial, Computing, Blockchain.

    Neolithic Era: The Emergence of Division of Labor

    The first economic revolution was the shift from humans as hunter-gatherers to Homo sapiens as farmers.

    Coordination scalability among hunter-gatherers was limited to the size of the band, which tended to range from 15 to 150 individuals.[12] The abandonment of a nomadic way of life and the move to agriculture changed this by allowing specialization and the formation of cities.

    Agriculture meant that people could, for the first time, accumulate wealth. Farmers could save excess crops to eat later or trade them for farming equipment, baskets or decorations. The problem was that this wealth was suddenly worth stealing and so farmers needed to defend their wealth.

    Neolithic societies typically consisted of groups of farmers protected by what Mancur Olson called “stationary bandits,” basically warlords.[13] This allowed the emergence of much greater specialization. Farmers accumulated wealth and paid some to the warlords for protection, but even then there was still some left over, making it possible for individuals to specialize.

    A city of 10,000 people requires, but also makes possible, specialists.

    The limits of coordination scalability increased from 150 to thousands or, in some cases, tens of thousands. This was not necessarily a boon to human happiness. Anthropologist Jared Diamond called the move to agriculture “the worst mistake in the history of the human race.”[14] The quality of life for individuals declined: lifespans shortened, nutrition worsened (leading to smaller stature), and disease became more prevalent.

    But this shift was irresistible: specialization created so much more wealth and power that groups which adopted it came to dominate those that didn’t. The economies of scale in military specialization, in particular, were overwhelming. Hunter-gatherers couldn’t compete.

    In the Neolithic era, the State was the limit of coordination scalability.

    Industrial Era: Division of Labor Is Eating the World

    Alongside the city-state, a new technology started to emerge that would further increase the limits of coordination scalability: money. To illustrate, let us take the European case, from ancient Greece to modernity, though the path in other parts of the world was broadly similar. Around 630 B.C., the Lydian kings recognized the need for small, easily transported coins worth no more than a few days’ labor. They made these ingots in a standard size (about the size of a thumbnail) and weight, and stamped an emblem of a lion’s head on them.

    This eliminated one of the most time-consuming (and highest transaction cost) steps in commerce: weighing gold and silver ingots each time a transaction was made. Merchants could easily count the number of coins without worrying about cheating.

    Prior to the invention of coins, trade had been limited to big commercial transactions, like buying a herd of cattle. With the reduced transfer costs facilitated by coins, Lydians began trading in the daily necessities of life: grain, olive oil, beer, wine, and wood.[15]

    The variety and abundance of goods which could suddenly be traded led to another innovation: the retail market.

    Previously, buyers had to go to the home of whoever sold the thing they needed. If you needed olive oil, you had to walk over to the olive oil lady’s house to get it. With the amount of trade that began happening after coinage, a central market emerged. Small stalls lined the market, where each merchant specialized in (and so could produce more efficiently) a particular good: meat, grain, jewelry, bread, cloth, etc. Instead of having to go to the olive oil lady’s house, you could go to her stall and pick up bread from the baker while you were there.

    From this retail market in Lydia sprang the Greek agora, the medieval market squares of Europe, the suburban shopping mall and, eventually, the “online shopping malls” Amazon and Google. Though markets were around as early as 7th-century BCE Lydia, they really hit their stride during the Industrial Revolution in the 18th century.[16]

    Adam Smith was the first to describe in detail the effect of this marketization of the world. Markets made it possible to promote the division of labor across political units, not just within them. Instead of each city or country manufacturing all the goods they needed, different political entities could further divide labor. Coordination scalability started to stretch across political borders.

    Coming back to Coase: firms will expand or shrink until the cost of “making” equals the cost of “buying.” In the Industrial era, transaction costs made administrative and managerial coordination (making) more efficient than market coordination (buying) for most industries, which led to the rise of large firms.

    The major efficiency gain of Industrial-era companies over their more “artisanal” forebears was that, using the techniques of mass production, they could produce higher-quality products at lower prices. This was possible only if they could enforce standards throughout the supply chain. The triangulation transaction cost can be broken down into search and measurement: a company needed to find the vendor and to be able to measure the quality of the good or service.

    In the early Industrial era, the supply chain was extremely fragmented. By bringing all the pieces into the firm, a large vertically integrated company could be more efficient.[17]

    As an example, in the 1860s and 1870s, the Carnegie Corporation purchased mines to ensure it had reliable access to the iron ore and coke it needed to make steel. The upstream suppliers were unreliable and non-standardized, and Carnegie Corporation could lower the cost of production by simply owning the whole supply chain.

    This was the case in nearly every industry. By bringing many discrete entities under one roof and one system of coordination, firms achieved greater economic efficiencies, and the multi-unit business corporation replaced the small, single-unit enterprise, because administrative coordination enabled greater productivity through lower transaction costs per task than had been possible before. Economies of scale flourished.

    This system of large firms connected by markets greatly increased coordination scalability. Large multinational firms could stretch across political boundaries and provide goods and services more efficiently.

    In Henry Ford’s world, the point where the cost of making equaled the cost of buying implied a very large firm. Ford built a giant plant at River Rouge, just outside Detroit, between 1917 and 1928 that took in iron ore and rubber at one end and sent cars out the other. At the factory’s peak, 100,000 people worked there. These economies of scale allowed Ford to dramatically drive down the cost of an automobile, making it possible for the middle class to own a car.[18]

    As with Carnegie, Ford learned that supplier networks take a while to emerge and grow into something reliable. In 1917, doing everything himself was the only way to get the scale he needed to be able to make an affordable car.

    One of the implications of this model was that industrial businesses required huge startup costs.

    The only way for an entrepreneur to compete was to start out with similarly massive amounts of capital and build a factory large and efficient enough to rival Ford’s.

    For workers, this meant that someone in a specialized role, like an electrical engineer or an underwriter, did not freelance or work for small businesses. Because the most efficient way to produce products was in large organizations, specialized workers could earn the most by working inside large organizations, be they Ford, AT&T or Chase Bank.

    At the peak of the Industrial era, there were two dominant institutions: firms and markets.

    Work inside the firm allowed for greater organization and specialization which, in the presence of high transaction costs, was more economically efficient.

    Markets were more chaotic and less organized, but also more motivating. Henry Ford engaged with the market and made out just a touch better than any of his workers; there just wasn’t room for many Henry Fords.

    This started to dissolve in the second half of the 20th century. Ford no longer takes iron ore and rubber as the inputs to its factories; it has a vast network of upstream suppliers.[19] The design and manufacturing of car parts now happens over a long supply chain, which the car companies ultimately assemble and sell.

    One reason is that supplier networks became more standardized and reliable. Ford can now buy ball bearings and brake pads more efficiently than it can make them, so it does. Each company in the supply chain focuses on what it knows best, and competition forces them all to constantly improve.

    By the 1880s, it cost Carnegie more to operate coke ovens in-house than to buy coke from an independent source, so he sold off the coke ovens and bought coke on the open market. Reduced transaction costs, in the form of more standardized and reliable production technology, caused both the Ford and Carnegie corporations to shrink, as Coase’s theory would suggest.

    The second reason is that if you want to make a car using a network of cooperating companies, you have to be able to coordinate their efforts, and you can do that much better with telecommunication technology broadly and computers specifically. Computers reduce the transaction costs that Coase argued are the raison d’être of corporations. That is a fundamental change.[20]

    The Computing Era: Software Is Eating the World

    Computers, and the software and networks built on top of them, had a new economic logic driven by lower transaction costs.

    Internet aggregators such as Amazon, Facebook, Google, Uber and Airbnb reduced the transaction costs for participants on their platforms. For the industries that these platforms affected, the line between “making” and “buying” shifted toward buying. The line between owning and renting shifted toward renting.

    Primarily, this was done through a reduction in triangulation costs (how hard it is to find and measure the quality of a service), and transfer costs (how hard it is to bargain and agree on a contract for the good or service).

    Triangulation costs came down for two reasons. One was the proliferation of smartphones, which made it possible for services like Uber and Airbnb to exist. The other was the increasing digitization of the economy. Digital goods are both easier to find (think Googling versus going to the library or opening the Yellow Pages) and easier to measure the quality of (I know exactly how many people read my website each day and how many seconds they stay; the local newspaper does not).

    The big improvement in transfer costs was the result of matchmaking: bringing together and facilitating the negotiation of mutually beneficial commercial or retail deals.  

    Take Yelp, the popular restaurant review app. Yelp allows small businesses like restaurants, coffee shops, and bars to advertise to an extremely targeted group: people close enough to come to the restaurant who have searched for a relevant term. A barbecue restaurant in Nashville can show ads only to people searching their zip code for terms like “bbq” and “barbecue.” This enables small businesses that couldn’t afford radio or television advertising to attract customers.

    The existence of online customer reviews gives consumers a more trusted way to evaluate the restaurant.

    All of the internet aggregators, including Amazon, Facebook, and Google, enabled new service providers by creating a market and standardizing the rules of that market to reduce transaction costs.[21]

    The “sharing economy” is more accurately called the “renting economy” from the perspective of consumers, and the “gig economy” from the perspective of producers. Most of the benefits are the result of new markets enabled by lower transaction costs, which allow consumers to rent rather than own, including “renting” someone else’s time rather than employing them full time.

    It’s easier to become an Uber driver than a cab driver, and an Airbnb host than a hotel owner. It’s easier to get your product into Amazon than Walmart. It’s easier to advertise your small business on Yelp, Google or Facebook than on a billboard, radio or TV.

    Prior to the internet, a product designer faced the options of selling locally (often too small a market), trying to get into Walmart (impossible without significant funding and traction), or simply working for a company that already had distribution in Walmart.

    On the internet, they could start distributing nationally or internationally on day one. The “shelf space” of Amazon or Google’s search engine results page was a lot more accessible than the shelf space of Walmart.

    As a result, it became possible for people in certain highly specialized roles to work independently of firms entirely. Product designers and marketers could sell products through the internet and the platforms erected on top of it (mostly Amazon and Alibaba in the case of physical products) and potentially earn as much as, or more than, they could inside a corporation.

    This group is highly motivated because their pay is directly based on how many products they sell. The aggregators and the internet were able to reduce the transaction costs that had historically made it economically inefficient or impossible for small businesses and individual entrepreneurs to exist.

    The result was that, in industries touched by the internet, we saw an industry structure of large aggregators and a long tail[22] of small businesses that used the aggregators to reach previously unreachable, niche segments of the market. There aren’t many cities where a high-end cat furniture retail store makes economic sense; on Google or Amazon, it does.

    source: stratechery.com

    Before (Firms)                  After (Platform-Enabled Markets)
                                    Platform              Long Tail

    Walmart and big box retailers   Amazon                Niche product designers and manufacturers
    Cab companies                   Uber                  Drivers with extra seats
    Hotel chains                    Airbnb                Homeowners with extra rooms
    Traditional media outlets       Google and Facebook   Small offline and niche online businesses

    For these industries, coordination scalability was far greater and could be seen in the emergence of micro-multinational businesses. Businesses as small as a half dozen people could manufacture in China, distribute products in North America, and employ people from Europe and Asia. This sort of outsourcing and the economic efficiencies it created had previously been reserved for large corporations.

    As a result, consumers received cheaper, but also more personalized products from the ecosystem of aggregators and small businesses.

    However, the rental economy still represents a tiny fraction of the overall economy. At any given time, only a thin subset of industries are ready to be marketized. What’s been done so far is only a small fraction of what will be done in the next few decades.

    Yet we can already start to imagine a world which Munger calls “Tomorrow 3.0.” You need a drill to hang some shelves in your new apartment. You open an app on your smartphone and tap “rent drill.” An autonomous car picks up a drill and delivers it outside your apartment in a keypad-protected pod, and your phone vibrates: “drill delivered.” Once you’re done, you put the drill back in the pod, which signals another autonomous car nearby to come pick it up. The rental costs $5, much less than buying a commercial-quality power drill. This is, of course, not limited to drills; it could have been a saw, a fruit dehydrator, a bread machine or a deep fryer.

    You own almost nothing, but have access to almost everything.

    Neither you nor your neighbors have a job, at least in the traditional sense. You pick up shifts or client work as needed and maybe manage a few small side businesses. After you finish hanging the shelves, you might sit down at your computer, see what work requests are open, and spend a few hours designing a new graphic or finishing the monthly financial statements for a client.

    This is a world in which triangulation and transfer costs have come down dramatically, resulting in more renting than buying from consumers and more gig work than full-time jobs for producers.

    This is a world we are on our way to already, and there aren’t any big, unexpected breakthroughs that need to happen first.

    But what about the transaction cost of trust?

    In the computing era, the areas that have been affected most are what could be called low-trust industries. If the sleeping mask you order from Amazon isn’t as high-quality as you thought, that’s not a life-or-death problem.

    What about areas where trust is essential?

    Enter stage right: blockchains.

    The Blockchain Era: Blockchain Markets Are Eating the World

    One area where trust matters a lot is money. Most of the developed world doesn’t think about the possibility of fiat money[23] not being trustworthy, because it hasn’t happened in our lifetimes. For those who have experienced major currency devaluations, trusting that your money will be worth roughly the same tomorrow as it is today is a big deal.

    Citizens of countries like Argentina and particularly Venezuela have been quicker to adopt bitcoin as a savings vehicle because their economic history made the value of censorship resistance more obvious.

    Due to poor governance, the inflation rate in Venezuela averaged 32.42 percent from 1973 until 2017. Argentina was even worse; the inflation rate there averaged 200.80 percent between 1944 and 2017.
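    To make those averages concrete, here is a back-of-the-envelope sketch (assuming, unrealistically, a constant annual rate; the real paths varied wildly year to year) of what such inflation does to savings:

```python
# Back-of-the-envelope illustration of compounding inflation. The rates
# are the long-run averages quoted above, applied (unrealistically) as
# a constant annual rate.

def real_value(initial, annual_inflation_pct, years):
    # Purchasing power of `initial` units of currency after `years`
    # of constant annual inflation.
    return initial / (1 + annual_inflation_pct / 100) ** years

# $100 of savings held for five years:
print(round(real_value(100, 32.42, 5), 2))   # Venezuela's average rate
print(round(real_value(100, 200.80, 5), 2))  # Argentina's average rate
```

    At Venezuela’s average rate, $100 of savings retains about $24.56 of purchasing power after five years; at Argentina’s, about 41 cents. Holding such a currency as a savings vehicle is close to impossible.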

    The story of North America and Europe is different. In the second half of the 20th century, monetary policy there was stable.

    The Bretton Woods Agreement, struck in the aftermath of the Second World War, concentrated control of most of the globe’s monetary policy in the hands of the United States. The European powers acceded to this in part because the U.S. dollar was backed by gold, meaning the U.S. government was subject to the physics and geology of gold mining: it could not expand the money supply any faster than gold could be taken out of the ground.

    With the abandonment of the gold standard under Nixon in 1973, control over money and monetary policy moved into the hands of a historically small group of central bankers and powerful political and financial leaders, no longer restricted by gold.

    Fundamentally, the value of the U.S. dollar today is based on trust. There is no gold in a vault backing the dollars in your pocket. Most fiat currencies today have value because the market trusts that the officials in charge of monetary policy will manage it responsibly.

    It is at this point that the debate around monetary policy devolves. One group imagines this small set of elitist power brokers sitting in a dark room on large leather couches, surrounded by expensive art and mahogany bookshelves filled with copies of The Fountainhead, smoking cigars and plotting against humanity through obscure financial maneuvering.

    Another group, quite reasonably, points to the economic prosperity of the last half-century under this system and dismisses the first group as quacks.

    A better way to understand the tension between a monetary system based on gold and one based on fiat money has been offered by political science professor Bruce Bueno de Mesquita: “Democracy is a better form of government than dictatorships, not because presidents are intrinsically better people than dictators, but simply because presidents have less agency and power than dictators.”

    Bueno de Mesquita calls this Selectorate Theory. The selectorate represents the number of people who have influence in a government, and thus the degree to which power is distributed. The selectorate of a dictatorship tends to be very small: the dictator and a few cronies. The selectorate in a democracy tends to be much larger, typically encompassing the executive, legislative, and judicial branches and the voters who elect them.

    Historically, the size of the selectorate involves a tradeoff between the efficiency and the robustness of the governmental system. Let’s call this the “Selectorate Spectrum.”

    Dictatorships can be more efficient than democracies because they don’t have to get many people on board to make a decision. Democracies, by contrast, are more robust, but at the cost of efficiency.

    Conservatives and progressives alike bemoan how little their elected representatives get done, but happily observe how little their opponents accomplish. A single individual with unilateral power can accomplish far more (good or bad) than a government of “checks and balances.” The long-run health of a government means balancing the tradeoff between robustness and efficiency: the number of stakeholders cannot be so large that nothing gets done and the country never adapts, nor so small that one individual or a small group can hijack the government for personal gain.

    This tension between centralized efficiency and decentralized robustness exists in many other areas. Firms try to make the selectorate large enough that there is some accountability (e.g. a board and shareholder voting) but not so large as to make it impossible to compete in a market, which is why most decisions are centralized in the hands of a CEO.

    We can view both the current monetary system and the internet aggregators through the lens of the selectorate. In both areas, the trend over the past few decades is that the robustness of a large selectorate has been traded away for the efficiency of a small one.[24]

    A few individuals (heads of central banks, heads of state, corporate CEOs, and the leaders of large financial entities like sovereign wealth funds and pension funds) can move markets and politics globally with even a whisper of significant change. This sort of centralization in the name of efficiency can lead to long feedback loops with potentially dramatic consequences.

    Said another way, much of what appears efficient in the short term may not be efficient at all, but merely hiding risk somewhere, creating the potential for a blow-up. A large selectorate tends to look less efficient in the short term, but it can be more robust in the long term, which makes it more efficient in the long term as well. It is the story of the tortoise and the hare: slow and steady may lose the first leg but win the race.

    In the Beginning, There Was Bitcoin

    In October 2008, an anonymous individual or group using the pseudonym Satoshi Nakamoto sent an email to a cryptography mailing list describing a new system called bitcoin. The opening line of the paper’s conclusion summed it up:

    “We have proposed a system for electronic transactions without relying on trust.”

    When the network went live a few months later, in January 2009, Satoshi embedded in its first block the headline of a story running that day in The Times of London:

    “The Times 03/Jan/2009 Chancellor on brink of second bailout for banks”

    Though we can’t know for sure what was going through Satoshi’s mind at the time, the most likely explanation is that Satoshi was reacting against the decisions the small selectorate in charge of monetary policy was making in response to the 2008 Global Financial Crisis.

    Rather than leaving decisions as consequential as a bank bailout in the hands of a single individual, the Chancellor, Satoshi envisioned bitcoin as a more robust monetary system, with a larger selectorate beyond the control of any single individual.

    But why create a new form of money? Throughout history, the most common way for individuals to show their objections to their nation’s monetary policy was by trading their currency for some commodity like gold, silver, or livestock that they believed would hold its value better than the government-issued currency.

    Gold, in particular, has been used as a form of money for nearly 6,000 years for one primary reason: the stock-to-flow ratio. Because of how gold is deposited in the Earth’s crust, it’s very difficult to mine. Despite all the technological changes in the last few hundred years, this has meant that the amount of new gold mined in a given year (the flow) has averaged between 1-2 percent of the total gold supply (stock) with very little variation year to year.

    As a result, the total gold supply has never increased by more than 1-2 percent per year. In comparison to Venezuela’s 32.4 percent inflation and Argentina’s 200.80 percent inflation, gold’s inflation is far lower and more predictable.
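The arithmetic behind this is simple enough to sketch. The figures below are rough, round-number estimates (not precise survey data), but they show how a high stock-to-flow ratio caps supply growth:

```python
# Illustrative stock-to-flow arithmetic for gold.
# The numbers are approximate, round figures for the sake of the example.
stock_tonnes = 190_000  # assumed total above-ground gold ever mined
flow_tonnes = 3_000     # assumed annual new mine production

stock_to_flow = stock_tonnes / flow_tonnes    # years of production held in stock
annual_inflation = flow_tonnes / stock_tonnes # annual supply growth rate

print(f"stock-to-flow: {stock_to_flow:.1f} years")
print(f"annual supply growth: {annual_inflation:.1%}")
```

The point the example makes is structural: even if annual production somehow doubled, supply growth would still be only around 3 percent, because the accumulated stock dwarfs any plausible flow.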

    Viewed through the lens of Selectorate Theory, we can say that gold or other commodity forms of money have a larger selectorate and are more robust than government-issued fiat currency. In the same way a larger group of stakeholders in a democracy constrains the actions of any one politician, the geological properties of gold constrained governments and their monetary policy.

    Whether these constraints were “good” or “bad” is still a matter of debate. The Keynesian school of economics, which has become the mainstream view, emerged from John Maynard Keynes’s reaction to the Great Depression, which he thought was greatly exacerbated by the commitment to the gold standard; he argued that governments should manage monetary policy to soften the cyclical nature of markets.

    The Austrian and monetarist schools believe that human behavior is too idiosyncratic to model accurately with mathematics and that minimal government intervention is best. Attempts to intervene can be destabilizing and lead to inflation, so a commitment to the gold standard is the lesser evil in the long run.

    Taken in good faith, these schools represent different beliefs about the ideal point on the Selectorate Spectrum. Keynesians believe that greater efficiency could be gained by giving government officials greater control over monetary policy without sacrificing much robustness. Austrians and monetarists argue the opposite, that any short-term efficiency gains actually create huge risks to the long-term health of the system.

    Viewed as money, bitcoin has many gold-like properties, embodying something closer to the Austrian and monetarist view of ideal money. For one, we know exactly how many bitcoin will ever be created, 21 million, and the rate at which they will be created. Like gold, this schedule is outside the control of any single individual or small group, giving bitcoin a predictable stock-to-flow ratio and making it extremely difficult to inflate.
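The 21 million figure is not a decree but a consequence of the protocol’s issuance schedule, which is simple enough to reproduce. A minimal sketch, mirroring the actual subsidy rules (50 BTC per block initially, halving every 210,000 blocks, with integer satoshi arithmetic):

```python
# Sketch of bitcoin's issuance schedule: the block subsidy starts at 50 BTC
# and halves every 210,000 blocks. Because subsidies are integer satoshis,
# the subsidy eventually rounds to zero and total supply converges toward
# (but never quite reaches) 21 million BTC.
SATOSHIS_PER_BTC = 100_000_000

subsidy = 50 * SATOSHIS_PER_BTC  # initial reward, in satoshis
total = 0
while subsidy > 0:
    total += 210_000 * subsidy   # every halving era lasts 210,000 blocks
    subsidy //= 2                # integer halving, as in the protocol

print(total / SATOSHIS_PER_BTC)  # just under 21,000,000
```

The hard cap, in other words, is emergent: it falls out of a geometric series of halvings rather than being written down anywhere as “21 million.”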

    Similar to gold, the core bitcoin protocol also makes great trade-offs in terms of efficiency in the name of robustness.[25]

    However, bitcoin has two key properties of fiat money which gold lacks: it is very easy to divide and transport. Someone in Singapore can send 1/100th of a bitcoin to someone in Canada in less than an hour. Sending 1/100th of a gold bar would be a bit trickier.

    In his 1999 book Cryptonomicon, science fiction author Neal Stephenson imagined a bitcoin-like money built by the grandchild of Holocaust survivors, who wanted to create a way for individuals to escape totalitarian regimes without giving up all their wealth. It was difficult, if not impossible, for Jews to carry gold bars out of Germany, but what if all they had to do was remember a 12-word passphrase? How might history have been different?

    Seen in this way, bitcoin offers a potentially better trade-off between robustness and efficiency. Its programmatically defined supply schedule means its inflation rate will be lower than gold’s (making it more robust), while its digital nature makes it as divisible and transportable as any fiat currency (making it more efficient).

    Using a nifty combination of economic incentives for mining (proof-of-work system) and cryptography (including blockchain), bitcoin allowed individuals to engage in a network that was both open (like a market) and coordinated (like a firm) without needing a single or small group of power brokers to facilitate the coordination.
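A toy version of the proof-of-work puzzle illustrates the asymmetry that makes this coordination possible: finding a valid nonce takes many hash attempts, but anyone can verify a claimed solution with a single hash. This sketch uses a simplified difficulty rule (leading zero hex digits) rather than bitcoin’s actual target encoding:

```python
import hashlib

# Toy proof-of-work: find a nonce such that SHA-256(data + nonce) begins
# with `difficulty` zero hex digits. Expected work grows 16x per extra digit,
# while verification is always a single hash. This asymmetry is what lets
# strangers on an open network agree on who did the work without trusting
# each other.
def mine(data: bytes, difficulty: int = 4) -> int:
    nonce = 0
    while True:
        digest = hashlib.sha256(data + str(nonce).encode()).hexdigest()
        if digest.startswith("0" * difficulty):
            return nonce
        nonce += 1

nonce = mine(b"block header")
# Verification: anyone can re-check the claimed nonce in one hash.
check = hashlib.sha256(b"block header" + str(nonce).encode()).hexdigest()
assert check.startswith("0000")
```

Real mining differs in the details (double SHA-256, a compact difficulty target, 80-byte headers), but the economic shape is the same: costly to produce, nearly free to verify.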

    Said another way, bitcoin was the first example of money going from being controlled by a small group of firm-like entities (central banks) to being market-driven. What cryptocurrency represents is the technology-enabled possibility that anyone can make their own form of money.

    Whether or not bitcoin survives, that Pandora’s Box is now open. In the same way computing and the internet opened up new areas of the economy to being eaten by markets, blockchain and cryptocurrency technology have opened up a different area to be eaten by markets: money.

    The Future of Public Blockchains

    Bitcoin is unique among forms of electronic money because it is both trustworthy and maintained by a large selectorate rather than a small one.

    Could the same underlying technology be used to develop open networks in other areas by reducing the transaction cost of trust?[26]

    One group, the monetary maximalists, thinks not. According to them, public blockchains like bitcoin will only ever be useful as money, because money is the area where trust is most important and so you can afford to trade everything else away. The refugee fleeing political chaos does not care that a transaction takes an hour to go through and costs $10 or even $100. They care about having the most difficult-to-seize, censorship-resistant form of wealth.

    Bitcoin, as it exists today, enhances coordination scalability by allowing any two parties to transact without relying on a centralized intermediary and by allowing individuals in unstable political situations to store their wealth in the most difficult-to-seize form ever created.

    The second school of thought is that bitcoin is the first example of a canonical, trustworthy ledger with a large selectorate and that there could be other types of ledgers which are able to emulate it.

    At its core, money is just a ledger. The amount of money in your personal bank account is a list of all the transactions coming in (paychecks, deposits, etc.) and all the transactions going out (paying rent, groceries, etc.). When you add all those together, you get a balance for your account.
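In code, that description is almost literal: a balance is just a fold over a list of signed transactions. A minimal sketch with an invented, hypothetical account:

```python
# A bank balance is nothing more than the running sum of a ledger's entries.
# Positive amounts are transactions coming in; negative amounts are going out.
ledger = [
    ("paycheck", +2_000),
    ("rent", -1_200),
    ("groceries", -150),
    ("deposit", +500),
]

balance = sum(amount for _, amount in ledger)
print(balance)  # 1150
```

The bank’s only essential job here is maintaining this list honestly; everything else about “your money” derives from it.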

    Historically, this ledger was maintained by a single entity, like your bank. In the case of U.S. dollars, the number in circulation can be figured out by adding up how much money the U.S. government has printed and released into the market and how much it has taken back out of the market.

    What else could be seen as a ledger?

    The answer is “nearly everything.” Governments and firms can be seen just as groups of ledgers. Governments maintain ledgers of citizenship, passports, tax obligations, social security entitlements and property ownership. Firms maintain ledgers of employment, assets, processes, customers and intellectual property.

    Economists sometimes refer to firms as “a nexus of contracts.” The value of the firm comes from those contracts and how they are structured within the “ledger of the firm.” Google has a contract with users to provide search results, with advertisers to display ads to users looking for specific search terms, and with employees to maintain the quality of their search engine. That particular ledger of contracts is worth quite a lot.

    Mechanical time opened up entirely new categories of economic organization. It allowed trade to be synchronized at great distances; without mechanical time, there would have been no railroads (how would you know when to go?) and no Industrial Revolution. Mechanical time allowed for new modes of employment that lifted people out of serfdom and slavery.[27]

    In the same way, it may be that public blockchains make it possible to have ledgers that are trustworthy without requiring a centralized firm to manage them. This would shift the line further in favor of “renting” over “buying” by reducing the transaction cost of trust.

    Entrepreneurs may be able to write a valuable app and release it for anyone and everyone who needs that functionality, collecting micro-payments in their wallet. A product designer could release their design into the wild and consumers could download it to be printed on their 3D printers almost immediately.[28]

    For the first 10 years of bitcoin’s existence, this hasn’t been possible. Using a blockchain has meant minimizing the transaction cost of trust at the expense of almost everything else, but that may not always be the case. Different proposals are already being built out that allow more transactions to happen without compromising the trust which bitcoin and other crypto-networks offer.

    There are widely differing opinions on the best way to scale blockchains. One faction, usually identifying with Web 3, smart-contracting platforms, and Ethereum, believes that scaling quickly at the base layer is essential and can be done with minimal security risk; the other believes that scaling should be done slowly and only where it does not sacrifice the censorship-resistant nature of blockchains (bitcoin). Just like the debate between the Keynesian and Austrian/monetarist views of monetary policy, these views represent different beliefs about the optimal tradeoff point on the Selectorate Spectrum. But both groups believe that significant progress can be made on making blockchains more scalable without sacrificing too much trust.

    Public blockchains may allow aggregation without the aggregators. For certain use cases, perhaps few, perhaps many, public blockchains like bitcoin will allow the organization and coordination benefits of firms and the motivation of markets while maintaining a large selectorate.

    Ultimately, what we call society is a series of overlapping and interacting ledgers.

    In order for ledgers to function, they must be organized according to rules. Historically, rules have required rulers to enforce them. Because of network effects, these rulers tend to become the most powerful people in society. In medieval Europe, the Pope enforced the rules of Christianity and so he was among the most powerful.

    Today, Facebook controls the ledger of our social connections. Different groups of elites control the university ledgers and banking ledgers.

    Public blockchains allow people to engage in a coordinated and meritocratic network without requiring a small selectorate.

    Blockchains may introduce markets into corners of society that have never before been reached. In doing so, blockchains have the potential to replace ledgers previously run by kings, corporations, and aristocracies. They could extend the logic of the long tail to new industries and lengthen the tail for suppliers and producers by removing rent-seeking behavior and allowing for permissionless innovation.

    Public blockchains allow for rules without a ruler. It began with money, but they may move on to corporate ledgers, social ledgers and perhaps eventually, the nation-state ledger.[29]

    Acknowledgments: Credit for the phrase “Markets Are Eating the World” to Patri Friedman.


    1. https://www.bls.gov/opub/mlr/1981/11/art2full.pdf
    2. https://www.bls.gov/emp/tables/employment-by-major-industry-sector.htm
    3. http://www3.nccu.edu.tw/~jsfeng/CPEC11.pdf
    4. There are, of course, other types of transaction costs than the ones listed here. A frequent one brought up in response to Coase is company culture, which nearly all entrepreneurs and investors agree is an important factor in a firm’s productivity. This is certainly true, but the broader point about the relationship between firm size and transaction costs holds—culture is just another transaction cost.
    5. http://www.fon.hum.uva.nl/rob/Courses/InformationInSpeech/CDROM/Literature/LOTwinterschool2006/szabo.best.vwh.net/synch.html
    6. https://en.wikipedia.org/wiki/Escapement
    7. Fungibility is the property of a good or a commodity whose individual units are interchangeable. For example, one ounce of pure silver is fungible with any other ounce of pure silver. This is not the same for most goods: a dining table chair is not fungible with a fold-out chair.
    8. Piece rates, paying for some measurement of a finished output like bushels of apples or balls of yarn, seem fairer. But they suffer from two issues. For one, the output of the labor depends partially on the skill and effort of the laborer, but also on the vagaries of the work environment. This is particularly true in a society like that of medieval Europe, where nearly everyone worked in agriculture. The best farmer in the world can’t make it rain. For another, the employee wants something like insurance that they will still be compensated for their effort in the case of events outside their control, and the employer, who has more wealth and knowledge of market conditions, takes on these risks in exchange for increased profit potential.
    9. For the worker, time doesn’t specify costs such as effort, skill or danger. A laborer would want to demand a higher time-rate wage for working in a dangerous mine than in a field. A skilled craftsman might demand a higher time-rate wage than an unskilled craftsman.
    10. The advent of the clock was necessary for the shift from farms to cities. Sunup to sundown worked effectively as a schedule for farmers because summer was typically when the most labor on farms was required, so longer days were useful. For craftsman or others working in cities, their work was not as driven by the seasons and so a trusted measure of time that didn’t vary with the seasons was necessary. The advent of a trusted measure of time led to an increase in the quantity, quality and variety of goods and services because urban, craftsman type work was now more feasible.
    11. https://unenumerated.blogspot.com/2017/02/money-blockchains-and-social-scalability.html. I am using the phrase “coordination scalability” synonymously with how Nick uses “social scalability.” A few readers suggested that social scalability was a confusing term as it made them think of scaling social networks.
    12. 150 is often referred to as Dunbar’s number, referring to a number calculated by University of Oxford anthropologist and psychologist Robin Dunbar using a ratio of neocortical volume to total brain volume and mean group size. For more, see https://www.newyorker.com/science/maria-konnikova/social-media-affect-math-dunbar-number-friendships. The lower band of 15 was cited in Pankaj Ghemawat’s World 3.0.
    13. https://www.jstor.org/stable/2938736
    14. http://discovermagazine.com/1987/may/02-the-worst-mistake-in-the-history-of-the-human-race
    15. Because what else would you want to do besides eat bread dipped in fresh olive oil and drink fresh beer and wine?
    16. From The History of Money by Jack Weatherford.
    17. It also allowed them to squeeze out competitors at different places in the supply chain and put them out of business which Standard Oil did many times before finally being broken up by anti-trust legislation.
    18. http://www.paulgraham.com/re.html
    19. Tomorrow 3.0 by Michael Munger
    20. http://www.paulgraham.com/re.html
    21. There were quite a few things, even pre-internet, in the intersection between markets and firms, like approved vendor auction markets for government contracting and bidding, but they were primarily very high ticket items where higher transaction costs could be absorbed. The internet brought down the threshold for these dramatically to something as small as a $5 cab ride.
    22. The Long Tail was a concept WIRED editor Chris Anderson used to describe the proliferation of small, niche businesses that were possible after the end of the “tyranny of geography.” https://www.wired.com/2004/10/tail/
    23. From Wikipedia: “Fiat money is a currency without intrinsic value that has been established as money, often by government regulation. Fiat money does not have use value, and has value only because a government maintains its value, or because parties engaging in exchange agree on its value.” By contrast, “Commodity money is created from a good, often a precious metal such as gold or silver.” Almost all of what we call money today, from dollars to euros to yuan, is fiat.
    24. Small institutions can get both coordination and a larger selectorate by using social norms. This doesn’t enable coordination scalability though as it stops working somewhere around Dunbar’s number of 150.
    25. Visa processes thousands of transactions per second, while the bitcoin network’s decentralized structure processes a mere seven transactions per second. The key difference is that Visa transactions are easily reversed or censored, whereas bitcoin’s are not.
    26. https://medium.com/@cdixon/crypto-tokens-a-breakthrough-in-open-network-design-e600975be2ef
    27. https://medium.com/cryptoeconomics-australia/the-blockchain-economy-a-beginners-guide-to-institutional-cryptoeconomics-64bf2f2beec4
    28. https://medium.com/cryptoeconomics-australia/the-blockchain-economy-a-beginners-guide-to-institutional-cryptoeconomics-64bf2f2beec4
    29. https://twitter.com/naval/status/877467629308395521