Not just another decentralized web whitepaper?

Given all the hype and noise swirling around crypto and decentralized network projects, which run the full gamut from scams and stupidity to very clever and inspired ideas, the release of yet another whitepaper does not immediately set off an attention klaxon.

But this whitepaper — which details a new protocol for achieving consensus within a decentralized network — is worth paying more attention to than most.

MaidSafe, the team behind it, are also the literal opposite of fly-by-night crypto opportunists. They’ve been working on decentralized networking since long before the space became the hot, hyped thing it is now.

Their overarching mission is to engineer an entirely decentralized Internet which bakes in privacy, security and freedom of expression by design — the ‘Safe’ in their planned ‘Safe Network’ stands for ‘Secure access for everyone’ — meaning it’s encrypted, autonomous, self-organizing, self-healing. And the new consensus protocol is just another piece towards fulfilling that grand vision.

What’s consensus in decentralized networking terms? “Within decentralized networks you must have a way of the network agreeing on a state — such as can somebody access a file or confirming a coin transaction, for example — and the reason you need this is because you don’t have a central server to confirm all this to you,” explains MaidSafe’s COO Nick Lambert, discussing what the protocol is intended to achieve.

“So you need all these decentralized nodes all reaching agreement somehow on a state within the network. Consensus occurs by each of these nodes on the network voting and letting the network as a whole know what it thinks of a transaction.

“It’s almost like consensus could be considered the heart of the networks. It’s required for almost every event in the network.”
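
The mechanism Lambert describes can be sketched in a few lines. This is purely illustrative (the quorum threshold, vote labels and function name are invented here, and a real protocol like Parsec is far more involved), but it shows the core idea of nodes voting a network state into existence without any central server:

```python
# Minimal sketch of decentralized consensus as "nodes voting on a state".
# The 2/3 quorum and the vote labels are illustrative assumptions only.
from collections import Counter

def network_decides(votes, quorum=2/3):
    # votes: each node's opinion on one event, e.g. "valid" / "invalid".
    # The network accepts the majority opinion only if a quorum agrees.
    winner, count = Counter(votes).most_common(1)[0]
    return winner if count >= quorum * len(votes) else None

print(network_decides(["valid"] * 7 + ["invalid"] * 2))  # 'valid'
print(network_decides(["valid"] * 5 + ["invalid"] * 4))  # None (no quorum)
```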

We wrote about MaidSafe’s alternative, server-less Internet in 2014. But they actually began work on the project in stealth all the way back in 2006. So they’re over a decade into the R&D at this point.

The network is p2p because it’s being designed so that data is locally encrypted, broken up into pieces and then stored, distributed and replicated across the network, relying on the users’ own compute resources to take the strain. No servers necessary.

The prototype Safe Network is currently in an alpha testing stage (they opened for alpha in 2016). Several more alpha test stages are planned, with a beta release still a distant, undated prospect at this stage. But rearchitecting the entire Internet was clearly never going to be a day’s work.

MaidSafe also ran a multimillion dollar crowdsale in 2014 — for a proxy token of the coin that will eventually be baked into the network — and did so long before ICOs became a crypto-related bandwagon that all sorts of entities were jumping onto. The SafeCoin cryptocurrency is intended to operate as the incentive mechanism for developers to build apps for the Safe Network and users to contribute compute resource and thus bring MaidSafe’s distributed dream alive.

Their timing on the token sale front, coupled with prudent hodling of some of the Bitcoins they’ve raised, means they’re essentially in a position of not having to worry about raising more funds to build the network, according to Lambert.

A rough, back-of-an-envelope calculation on MaidSafe’s original crowdsale suggests, given they raised $2M in Bitcoin in April 2014 when the price for 1BTC was up to around $500, the Bitcoins they obtained then could be worth between ~$30M-$40M by today’s Bitcoin prices — though that would be assuming they held on to most of them. Bitcoin’s price also peaked far higher last year.
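
That back-of-the-envelope sum is easy to reproduce. All inputs below are the article’s rough figures (not audited accounts), and the mid-2018 price band is an assumption:

```python
# Back-of-envelope check of the article's figures; every number here is
# a rough estimate from the article, not an audited account.
usd_raised = 2_000_000          # crowdsale proceeds, April 2014
btc_price_2014 = 500            # approximate BTC price at the time
btc_held = usd_raised / btc_price_2014   # ~4,000 BTC, assuming all were kept

# Assumed mid-2018 trading band for Bitcoin, roughly $7,500-$10,000.
low, high = 7_500, 10_000
print(f"~{btc_held:,.0f} BTC -> ${btc_held * low / 1e6:.0f}M-"
      f"${btc_held * high / 1e6:.0f}M")
```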

As well as the token sale they also did an equity raise in 2016, via the fintech investment platform bnktothefuture, pulling in around $1.7M from that — in a mixture of cash and “some Bitcoin”.

“It’s gone both ways,” says Lambert, discussing the team’s luck with Bitcoin. “The crowdsale we were on the losing end of Bitcoin price decreasing. We did a raise from bnktothefuture in autumn of 2016… and fortunately we held on to quite a lot of the Bitcoin. So we rode the Bitcoin price up. So I feel like the universe paid us back a little bit for that. So it feels like we’re level now.”

“Fundraising is exceedingly time consuming right through the organization, and it does take a lot of time away from what you want to be focusing on, and so to be in a position where you’re not desperate for funding is a really nice one to be in,” he adds. “It allows us to focus on the technology and releasing the network.”

The team’s headcount is now up to around 33, with founding members based at the HQ in Ayr, Scotland, and other engineers working remotely or distributed (including in a new dev office they opened in India at the start of this year), even though MaidSafe is still not taking in any revenue.

This April they also made the decision to switch from a dual licensing approach for their software — previously offering both an open source license and a commercial license (which let people close source their code for a fee) — to going only open source, to encourage more developer engagement and contributions to the project, as Lambert tells it.

“We always see the SafeNetwork a bit like a public utility,” he says. “In terms of once we’ve got this thing up and launched we don’t want to control it or own it because if we do nobody will want to use it — it needs to be seen as everyone contributing. So we felt it’s a much more encouraging sign for developers who want to contribute if they see everything is fully open sourced and cannot be closed source.”

MaidSafe’s story so far is reason enough to take note of their whitepaper.

But the consensus issue the paper addresses is also a key challenge for decentralized networks so any proposed solution is potentially a big deal — if indeed it pans out as promised.

Protocol for Asynchronous, Reliable, Secure and Efficient Consensus

MaidSafe reckons they’ve come up with a way of achieving consensus on decentralized networks that’s scalable, robust and efficient. Hence the name of the protocol — ‘Parsec’ — being short for: ‘Protocol for Asynchronous, Reliable, Secure and Efficient Consensus’.

They will be open sourcing the protocol under a GPL v3 license — with a rough timeframe of “months” for that release, according to Lambert.

He says they’ve been working on Parsec for the last 18 months to two years — but also drawing on earlier research the team carried out into areas such as conflict-free replicated data types, synchronous and asynchronous consensus, and topics such as threshold signatures and common coin.

More specifically, the research underpinning Parsec is based on the following five papers:

1. Baird L., The Swirlds Hashgraph Consensus Algorithm: Fair, Fast, Byzantine Fault Tolerance, Swirlds Tech Report SWIRLDS-TR-2016-01 (2016)
2. Mostefaoui A., Hamouna M., Raynal M., Signature-Free Asynchronous Byzantine Consensus with t < n/3 and O(n²) Messages, ACM PODC (2014)
3. Micali S., Byzantine Agreement, Made Trivial (2018)
4. Miller A., Xia Y., Croman K., Shi E., Song D., The Honey Badger of BFT Protocols, CCS (2016)
5. Team Rocket, Snowflake to Avalanche: A Novel Metastable Consensus Protocol Family for Cryptocurrencies (2018)

One tweet responding to the protocol’s unveiling just over a week ago wonders whether it’s too good to be true. Time will tell — but the potential is certainly enticing.

Bitcoin’s use of a drastically energy-inefficient ‘proof of work’ method to achieve consensus and write each transaction to its blockchain very clearly doesn’t scale. It’s slow, cumbersome and wasteful. And how to get blockchain-based networks to support the billions of transactions per second that might be needed to sustain the various envisaged applications remains an essential work in progress — with projects investigating various ideas and approaches to try to overcome the limitation.

MaidSafe’s network is not blockchain-based. It’s engineered to function with asynchronous voting of nodes, rather than synchronous voting, which should avoid the bottleneck problems associated with blockchain. But it’s still decentralized. So it needs a consensus mechanism to enable operations and transactions to be carried out autonomously and robustly. That’s where Parsec is intended to slot in.

The protocol does not use proof of work. And it is able, so the whitepaper claims, to achieve consensus even if up to a third of the network consists of malicious nodes, i.e. nodes which are attempting to disrupt network operations or otherwise attack the network.

Another claimed advantage is that decisions made via the protocol are both mathematically guaranteed and irreversible.

“What Parsec does is it can reach consensus even with malicious nodes. And up to a third of the nodes being malicious is what the maths proofs suggest,” says Lambert. “This ability to provide mathematical guarantees that all parts of the network will come to the same agreement at a point in time, even with some fault in the network or bad actors — that’s what Byzantine Fault Tolerance is.”
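
The “up to a third” figure reflects the classic Byzantine fault tolerance bound: agreement among n nodes can tolerate f malicious nodes only when n ≥ 3f + 1, i.e. strictly fewer than a third may misbehave. A one-line helper makes the arithmetic concrete:

```python
def max_faulty(n: int) -> int:
    # Classic BFT bound: n nodes tolerate f Byzantine faults iff n >= 3f + 1,
    # so strictly fewer than a third of the nodes may be malicious.
    return (n - 1) // 3

print(max_faulty(4))    # 1
print(max_faulty(100))  # 33
```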

In theory a blockchain using proof of work could be attacked if any one entity controlled 51% of the network’s mining power (although in reality the amount of energy that would be required makes such an attack pretty much impractical).

So on the surface MaidSafe’s decentralized network — which ‘only’ needs 33% of its nodes to be compromised for its consensus decisions to be attacked — sounds rather less robust. But Lambert says it’s more nuanced than the numbers suggest. And in fact the malicious third would also need to be nodes that have the authority to vote. “So it is a third but it’s a third of well reputed nodes,” as he puts it.

So there’s an element of proof of stake involved too, bound up with additional planned characteristics of the Safe Network — related to dynamic membership and sharding (Lambert says MaidSafe has additional whitepapers on both those elements coming soon).

“Those two papers, particularly the one around dynamic membership, will explain why having a third of malicious nodes is actually harder than just having 33% of malicious nodes. Because the nodes that can vote have to have a reputation as well. So it’s not just purely you can flood the Safe Network with lots and lots of malicious nodes and override it only using a third of the nodes. What we’re saying is the nodes that can vote and actually have a say must have a good reputation in the network,” he says.

“The other thing is proof of stake… Everyone is desperate to move away from proof of work because of its environmental impact. So proof of stake — I liken it to the Scottish landowners, where people with a lot of power have more say. In the cryptocurrency field, proof of stake might be if you have, let’s say, 10 coins and I have one coin your vote might be worth 10x as much authority as what my one coin would be. So any of these mechanisms that they come up with it has that weighting to it… So the people with the most vested interests in the network are also given the more votes.”

Sharding refers to closed groups that allow for consensus votes to be reached by a subset of nodes on a decentralized network. By splitting the network into small sections for consensus voting purposes the idea is you avoid the inefficiencies of having to poll all the nodes on the network — yet can still retain robustness, at least so long as subgroups are carefully structured and secured.

“If you do that correctly you can make it more secure and you can make things much more efficient and faster,” says Lambert. “Because rather than polling, let’s say 6,000 nodes, you might be polling eight nodes. So you can get that information back quickly.

“Obviously you need to be careful about how you do that because with much less nodes you can potentially game the network so you need to be careful how you secure those smaller closed groups or shards. So that will be quite a big thing because pretty much every crypto project is looking at sharding to make, certainly, blockchains more efficient. And so the fact that we’ll have something coming out in that, after we have the dynamic membership stuff coming out, is going to be quite exciting to see the reaction to that as well.”
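
The efficiency argument is easy to quantify. Many asynchronous BFT protocols (including the signature-free scheme cited earlier) exchange on the order of n² messages per decision, so shrinking the voting group from the whole network to a small shard is a quadratic win. The node counts here are the illustrative ones from Lambert’s example:

```python
def messages_per_decision(n: int) -> int:
    # Rough model: every voter gossips its vote to every other voter,
    # giving O(n^2) messages per consensus decision.
    return n * n

print(messages_per_decision(6_000))  # 36,000,000 messages network-wide
print(messages_per_decision(8))      # 64 messages within one 8-node shard
```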

Voting authority on the Safe Network might be based on a node’s longevity, quality and historical activity — so a sort of ‘reputation’ score (or ledger) that can yield voting rights over time.

“If you’re like that then you will have a vote in these closed groups. And so a third of those votes — and that then becomes quite hard to game because somebody who’s then trying to be malicious would need to have their nodes act as good corporate citizens for a time period. And then all of a sudden become malicious, by which time they’ve probably got a vested stake in the network. So it wouldn’t be possible for someone to just come and flood the network with new nodes and then be malicious because it would not impact upon the network,” Lambert suggests.
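
A toy model shows why node-flooding fails against reputation-gated voting. Everything here (the node fields, the age threshold, the function names) is a hypothetical sketch of the scheme Lambert describes, not MaidSafe’s actual design:

```python
# Hypothetical sketch of reputation-gated voting: only long-lived,
# well-behaved nodes get a vote, so flooding the network with fresh
# malicious nodes buys no voting power. All names and thresholds are
# illustrative assumptions, not MaidSafe's real scheme.
from dataclasses import dataclass

@dataclass
class Node:
    node_id: str
    age: int               # how long the node has served the network
    malicious: bool = False

def voters(nodes, min_age=10):
    # Only nodes with sufficient reputation (modeled here as age) may vote.
    return [n for n in nodes if n.age >= min_age]

def consensus_safe(nodes, min_age=10):
    v = voters(nodes, min_age)
    bad = sum(1 for n in v if n.malicious)
    return 3 * bad < len(v)   # fewer than a third of *voters* are malicious

# A flood of brand-new malicious nodes never reaches the voter set:
elders = [Node(f"e{i}", age=50) for i in range(9)]
flood = [Node(f"m{i}", age=0, malicious=True) for i in range(100)]
print(consensus_safe(elders + flood))  # True
```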

The computing power that would be required to attack the Safe Network once it’s public and at scale would also be “really, really significant”, he adds. “Once it gets to scale it would be really hard to co-ordinate anything against it because you’re always having to be several hundred percent bigger than the network and then have a co-ordinated attack on it itself. And all of that work might get you to impact the decision within one closed group. So it’s not even network wide… And that decision could be on who accesses one piece of encrypted shard of data for example… Even the thing you might be able to steal is only an encrypted shard of something — it’s not even the whole thing.”

Other distributed ledger projects are similarly working on Asynchronous Byzantine Fault Tolerant (ABFT) consensus models, including those using directed acyclic graphs (DAGs) — another nascent decentralization technology that’s been suggested as an alternative to blockchain.

And indeed ABFT techniques predate Bitcoin, though MaidSafe says these kinds of models have only more recently become viable thanks to research and the relative maturing of decentralized computing and data types, itself a consequence of increased interest and investment in the space.

However in the case of Hashgraph — the DAG project which has probably attracted the most attention so far — it’s closed source, not open. So that’s one major difference with MaidSafe’s approach. 

Another difference that Lambert points to is that Parsec has been built to work in a dynamic, permissionless network environment (essential for the intended use-case, as the Safe Network is intended as a public network). Whereas he claims Hashgraph has only demonstrated its algorithms working on a permissioned (and therefore private) network “where all the nodes are known”.

He also suggests there’s a question mark over whether Hashgraph’s algorithm can achieve consensus when there are malicious nodes operating on the network. Which — if true — would limit what it can be used for.

“The Hashgraph algorithm is only proven to reach agreement if there’s no adversaries within the network,” Lambert claims. “So if everything’s running well then happy days, but if there’s any maliciousness or any failure within that network then — certainly on the basis of what’s been published — it would suggest that that algorithm was not going to hold up to that.”

“I think being able to do all of these things asynchronously with all of the mathematical guarantees is very difficult,” he continues, returning to the core consensus challenge. “So at the moment we see that we have come out with something that is unique, that covers a lot of these bases, and is a very good use for our use-case. And I think will be useful for others — so I think we like to think that we’ve made a paradigm shift or a vast improvement over the state of the art.”

Paradigm shift vs marginal innovation

Despite the team’s conviction that, with Parsec, they’ve come up with something very notable, early feedback includes some very vocal Twitter doubters.

For example there’s a lengthy back-and-forth between several MaidSafe engineers and Ethereum researcher Vlad Zamfir — who dubs the Parsec protocol “overhyped” and a “marginal innovation if that”… so, er, ouch.

Lambert is, if not entirely sanguine, then solidly phlegmatic in the face of a bit of initial Twitter blowback — saying he reckons it will take more time for more detailed responses to come, i.e. allowing for people to properly digest the whitepaper.

“In the world of async BFT algorithms, any advance is huge,” MaidSafe CEO David Irvine also tells us when we ask for a response to Zamfir’s critique. “How huge is subjective, but any advance has to be great for the world. We hope others will advance Parsec like we have built on others (as we clearly state and thank them for their work).  So even if it was a marginal development (which it certainly is not) then I would take that.”

“All in all, though, nothing was said that took away from the fact Parsec moves the industry forward,” he adds. “I felt the comments were a bit juvenile at times and a bit defensive (probably due to us not agreeing with POS in our Medium post) but in terms of the only part commented on (the coin flip) we as a team feel that part could be much more concrete in terms of defining exactly how small such random (finite) delays could be. We know they do not stop the network and a delaying node would be killed, but for completeness, it would be nice to be that detailed.”

A developer source of our own in the crypto/blockchain space — who’s not connected to the MaidSafe or Ethereum projects — also points out that Parsec “getting objective review will take some time given that so many potential reviewers have vested interest in their own project/coin”.

It’s certainly fair to say the space excels at public spats and disagreements. Researchers pouring effort into one project can be less than kind to rivals’ efforts. (And, well, given all the crypto Lambos at stake it’s not hard to see why there can be no love lost — and, ironically, zero trust — between competing champions of trustless tech.)

Another fundamental truth of these projects is they’re all busily experimenting right now, with lots of ideas in play to try and fix core issues like scalability, efficiency and robustness — often having different ideas over implementation even if rival projects are circling and/or converging on similar approaches and techniques.

“Certainly other projects are looking at sharding,” says Lambert. “So I know that Ethereum are looking at sharding. And I think Bitcoin are looking at that as well, but I think everyone probably has quite different ideas about how to implement it. And of course we’re not using a blockchain which makes that another different use-case where Ethereum and Bitcoin obviously are. But everyone has — as with anything — these different approaches and different ideas.”

“Every network will have its own different ways of doing [consensus],” he adds when asked whether he believes Parsec could be adopted by other projects wrestling with the consensus challenge. “So it’s not like some could lift [Parsec] out and just put it in. Ethereum is blockchain-based — I think they’re looking at something around proof of stake, but maybe they could take some ideas or concepts from the work that we’re open sourcing for their specific case.

“If you get other blockchain-less networks like IOTA, Byteball (I think POA is another one as well), it might be easier for these other projects to implement something like Parsec because they’re not using blockchain. So maybe less of that adaptation required.”

Whether other projects will deem Parsec worthy of their attention remains to be seen at this point with so much still to play for. Some may prefer to expend effort trying to rubbish a rival approach, whose open source tech could, if it stands up to scrutiny and operational performance, reduce the commercial value of proprietary and patented mechanisms also intended to grease the wheels of decentralized networks — for a fee.

And of course MaidSafe’s developed-in-stealth consensus protocol may also turn out to be a relatively minor development. But finding a non-vested expert to give an impartial assessment of complex network routing algorithms conjoined to such a self-interested and, frankly, anarchical industry is another characteristic challenge of the space.

Irvine’s view is that DAG based projects which are using a centralized component will have to move on or adopt what he dubs “state of art” asynchronous consensus algorithms — as MaidSafe believes Parsec is — aka, algorithms which are “more widely accepted and proven”.

“So these projects should contribute to the research, but more importantly, they will have to adopt better algorithms than they use,” he suggests. “So they can play an important part, upgrades! How to upgrade a running DAG based network? How to hard fork a graph? etc. We know how to hard fork blockchains, but upgrading DAG based networks may not be so simple when they are used as ledgers.

“Projects like Hashgraph, Algorand etc will probably use an ABFT algorithm like this as their whole network with a little work for a currency; IOTA, NANO, Byteball etc should. That is entirely possible with advances like Parsec. However adding dynamic membership, sharding, a data layer then a currency is a much larger proposition, which is why Parsec has been in stealth mode while it is being developed.

“We hope that by being open about the algorithm, and making the code open source when complete, we will help all the other projects working on similar problems.”

Of course MaidSafe’s team might be misguided in terms of the breakthrough they think they’ve made with Parsec. But it’s pretty hard to stand up the idea they’re being intentionally misleading.

Because, well, what would be the point of that? While the exact depth of MaidSafe’s funding reserves isn’t clear, Lambert doesn’t sound like a startup guy with money worries. And the team’s staying power cannot be in doubt — over a decade into the R&D needed to underpin their alt network.

It’s true that being around for so long does have some downsides, though. Especially, perhaps, given how hyped the decentralized space has now become. “Because we’ve been working on it for so long, and it’s been such a big project, you can see some negative feedback about that,” as Lambert admits.

And with such intense attention now on the space, injecting energy which in turn accelerates ideas and activity, there’s perhaps extra pressure on a veteran player like MaidSafe to be seen making a meaningful contribution — ergo, it might be tempting for the team to believe the consensus protocol they’ve engineered really is a big deal.

To stand up and be counted amid all the noise, as it were. And to draw attention to their own project — which needs lots of external developers to buy into the vision if it’s to succeed, yet, here in 2018, it’s just one decentralization project among so many. 

The Safe Network roadmap

Consensus aside, MaidSafe’s biggest challenge is still turning the sizable amount of funding and resources the team’s ideas have attracted to date into a bona fide alternative network that anyone really can use. And there’s a very long road to travel still on that front, clearly.

The Safe Network is in its alpha 2 incarnation (which has been up and running since September last year), consisting of around a hundred nodes that MaidSafe is maintaining itself.

The core decentralization proposition of anyone being able to supply storage resource to the network via lending their own spare capacity is not yet live — and won’t come fully until alpha 4.

“People are starting to create different apps against that network. So we’ve seen Jams — a decentralized music player… There are a couple of storage style apps… There is encrypted email running as well, and also that is running on Android,” says Lambert. “And we have a forked version of the Beaker browser — that’s the browser that we use right now. So you can create websites on the Safe Network, which has its own protocol, and if you want to go and view those sites you need a Safe browser to do that, so we’ve also been working on our own browser from scratch that we’ll be releasing later this year… So there’s a number of apps that are running against that alpha 2 network.

“What alpha 3 will bring is it will run in parallel with alpha 2 but it will effectively be a decentralized routing network. What that means is it will be one for more technical people to run, and it will enable data to be passed around a network where anyone can contribute their resources to it but it will not facilitate data storage. So it’ll be a command line app, which is probably why it’ll suit technical people more because there’ll be no user interface for it, and they will contribute their resources to enable messages to be passed around the network. So secure messaging would be a use-case for that.

“And then alpha 4 is effectively bringing together alpha 2 and alpha 3. So it adds a storage layer on top of the alpha 3 network — and at that point it gives you the fully decentralized network where users are contributing their resources from home and they will be able to store data, send messages and things of that nature. Potentially during alpha 4, or a later alpha, we’ll introduce test SafeCoin. Which is the final piece of the initial puzzle to provide incentives for users to provide resources and for developers to make apps. So that’s probably what the immediate roadmap looks like.”

On the timeline front Lambert won’t be coaxed into fixing any deadlines to all these planned alphas. They long ago learnt not to try to predict the pace of progress, he says with a laugh. Though he has no doubt that progress is being made.

“These big infrastructure projects are typically only government funded because the payback is too slow for venture capitalists,” he adds. “So in the past you had things like Arpanet, the precursor to the Internet — that was obviously a US government funded project — and so we’ve taken on a project which has, not grown arms and legs, but certainly there’s more to it than what was initially thought about.

“So we are almost privately funding this infrastructure. Which is quite a big scope, and I would say that’s why it’s taking a bit of time. But we definitely do seem to be making lots of progress.”

Read more: https://techcrunch.com/2018/06/02/not-just-another-decentralized-web-whitepaper/

Related Articles

Markets Are Eating The World

For the last hundred years, individuals have worked for firms, and, by historical standards, large ones.

That many of us live in suburbs and drive our cars into the city to go to work at a large office building is so normal that it seems like it has always been this way. Of course, it hasn’t. In 1870, almost 50 percent of the U.S. population was employed in agriculture.[1] As of 2008, less than 2 percent of the population was directly employed in agriculture, but many people worked for these relatively new things called “corporations.”[2]

Many internet pioneers in the ’90s believed that the internet would start to break up corporations by letting people communicate and organize over a vast, open network. This reality has sort of played out: the “gig economy” and rise in freelancing are persistent, if not explosive, trends. With the re-emergence of blockchain technology, talk of “the death of the firm” has returned. Is there reason to think this time will be different?

To understand why this time might (or might not) be different, let us first take a brief look back into Coasean economics and mechanical clocks.

In his 1937 paper, “The Nature of the Firm,” economist R.H. Coase asked “if markets were as efficient as economists believed at the time, why do firms exist at all? Why don’t entrepreneurs just go out and hire contractors for every task they need to get done?”[3]

If an entrepreneur hires employees, she has to pay them whether they are working or not. Contractors only get paid for the work they actually do. While the firm itself interacts with the market, buying supplies from suppliers and selling products or services to customers, the employees inside of it are insulated. Each employee does not renegotiate their compensation every time they are asked to do something new. But, why not?

Coase’s answer was transaction costs. Contracting out individual tasks can be more expensive than just keeping someone on the payroll because each task involves transaction costs.

Imagine if, instead of answering every email yourself, you hired a contractor who was better than you at dealing with the particular issue in that email. It would, however, cost you something to find them. Once you found them you would have to bargain and agree on a price for their services, then get them to sign a contract, and potentially take them to court if they didn’t answer the email as stipulated in the contract.

Duke economist Mike Munger calls these three types of transaction costs triangulation, how hard it is to find and measure the quality of a service; transfer, how hard it is to bargain and agree on a contract for the good or service; and trust, whether the counterparty is trustworthy and whether you have recourse if they aren’t.

You might as well just answer the email yourself or, as some executives do, hire a full-time executive assistant. Even if the executive assistant isn’t busy all the time, it’s still better than hiring someone one off for every email or even every day.

Coase’s thesis was that in the presence of these transaction costs, firms will grow larger as long as they can benefit from doing tasks in-house rather than incurring the transaction costs of having to go out and search, bargain and enforce a contract in the market. They will expand or shrink until the cost of making it in the firm equals the cost of buying it on the market.

The lower the transaction costs are, the more efficient markets will be, and the smaller firms will be.
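
Coase’s boundary condition can be written as a toy inequality: buy on the market only when the market price plus Munger’s three transaction costs undercuts the in-house cost. All numbers below are invented for illustration:

```python
# Toy make-vs-buy model of Coase's firm boundary. A task goes to the
# market only if its price plus transaction costs (triangulation,
# transfer, trust) undercuts doing it in-house. All figures invented.
def buy_on_market(in_house_cost, market_price,
                  triangulation, transfer, trust):
    return market_price + triangulation + transfer + trust < in_house_cost

# High transaction costs: keep the task inside the firm.
print(buy_on_market(100, 80, triangulation=15, transfer=10, trust=10))  # False
# Technology lowers transaction costs: the market wins, the firm shrinks.
print(buy_on_market(100, 80, triangulation=2, transfer=1, trust=1))     # True
```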

In a world where markets were extremely efficient, it would be very easy to find and measure things (low triangulation costs), it would be very easy to bargain and pay (low transfer costs), and it would be easy to trust the counterparty to fulfill the contract (low trust costs).

In that world, the optimal size of the firm is one person (or a very few people). There’s no reason to have a firm because business owners can just buy anything they need on a one-off basis from the market.[4] Most people wouldn’t have full-time jobs; they would do contract work.

Consumers would need to own very few things. If you needed a fruit dehydrator to prepare for a camping trip twice a year, you could rent one quickly and cheaply. If you wanted to take your family to the beach twice a year, you could easily rent a place just for the days you were there.

On the other hand, in a world that was extremely inefficient, it would be hard to find and measure things (high triangulation costs), it would be difficult to bargain and pay (high transfer costs) and it would be difficult to trust the counterparty to fulfill the contract (high trust costs).

In that world, firms would tend to be large. It would be inefficient to buy things from the market and so entrepreneurs would tend to accumulate large payrolls. Most people would work full-time jobs for large firms. If you wanted to take your family to the beach twice a year, you would need to own the beach house because it would be too inefficient to rent, the reality before online marketplaces like AirBnB showed up.

Consumers would need to own nearly everything they might conceivably need. Even if they only used their fruit dehydrator twice a year, they’d need to own it because the transaction costs involved in renting it would be too high.

If the structure of the economy is based on transaction costs, then what determines them?

Technological Eras and Transaction Costs

The primary determinant of transaction costs is technology.

The development of the wheel and the domestication of horses and oxen decreased transfer costs by making it possible to move more goods further. Farmers who could bring their crops to market in an ox cart rather than carrying them by hand could charge less and still make the same profit.

The development of the modern legal system reduced the transaction cost of trust. It was possible to trust that your counterparty would fulfill their contract because they knew you had recourse if they didn’t.

The list goes on: standardized weights and measures, the sail, the compass, the printing press, the limited liability corporation, canals, phones, warranties, container ships and, more recently, smartphones and the internet.

It’s hard to appreciate how impactful many of these technologies have been, because most of them had become so common by the time most of us were born that we take them for granted.

As the author Douglas Adams said, “Anything that is in the world when you’re born is normal and ordinary and is just a natural part of the way the world works. Anything that’s invented between when you’re fifteen and thirty-five is new and exciting and revolutionary and you can probably get a career in it. Anything invented after you’re thirty-five is against the natural order of things.”

To see how technology affects transaction costs, and how that affects the way our society is organized, let’s consider something which we all think of as “normal and ordinary,”  but which has had a huge impact on our lives: the mechanical clock.

The Unreasonable Effectiveness of the Mechanical Clock

In 1314, the city of Caen installed a mechanical clock with the following inscription: “I give the hours voice to make the common folk rejoice.” “Rejoice” is a pretty strong reaction to a clock, but it wasn’t overstated; everyone in Caen was pretty jazzed about the mechanical clock. Why?

A key reason we have jobs today, rather than working as slaves or serfs bonded to the land as was common under the feudal system, is a direct result of the clock.

Time was important before the invention of the clock but was very hard to measure. Rome was full of sundials, and medieval Europe’s bell towers, where time was tolled, were the tallest structures in town.[5]

This was not cheap. In the larger and more important belfries, two bell-ringers lived full time, each serving as a check on the other. The bells themselves were usually financed by local guilds that relied on the time kept to tell their workers when they had to start working and when they could go home.

This system was problematic for a few reasons.

For one, it was expensive. Imagine if you had to pool funds together with your neighbors to hire two guys to sit in the tower down the street full time and ring the bell to wake you up in the morning.

For another, the bell could only signal a few events per day. If you wanted to organize a lunch meeting with a friend, you couldn’t ask the belltower to toll just for you. Medieval bell towers had not yet developed snooze functionality.

Finally, sundials suffered from accuracy problems. Something as common as clouds could make it difficult to tell precisely when dawn, dusk, and midday occurred.

In the 14th and 15th centuries, the expensive bell towers of Europe’s main cities got a snazzy upgrade that dramatically reduced transaction costs: the mechanical clock.

The key technological breakthrough that allowed their development was the escapement.

The escapement transfers energy to the clock’s pendulum to replace the energy lost to friction and keep it on time. Each swing of the pendulum releases a tooth of the escapement’s wheel gear, allowing the clock’s gear train to advance or “escape” by a set amount. This moves the clock’s hands forward at a steady rate.[6]

The accuracy of early mechanical clocks, plus or minus 10-15 minutes per day, was not notably better than late water clocks and less accurate than the sandglass, yet mechanical clocks became widespread. Why?

  1. Its automatic striking feature meant the clock could be struck every hour at lower cost, making it easier to schedule events than only striking at dawn, dusk and noon.
  2. It was more provably fair than the alternatives, which gave all parties greater confidence that the time being struck was accurate. (Workers were often suspicious that employers could bribe or coerce the bell-ringers to extend the workday, which was harder to do with a mechanical clock.)

Mechanical clocks broadcast by bell towers provided a fair (lower trust costs) and fungible [7] (lower transfer costs) measure of time. Each hour rung on the bell tower could be trusted to be the same length as another hour.

Most workers in the modern economy earn money based on a time-rate, whether the time period is an hour, a day, a week or a month. This is possible only because we have a measure of time which both employer and employee agree upon. If you hire someone to pressure-wash your garage for an hour, you may argue with them over the quality of the work, but you can both easily agree whether they spent an hour in the garage.

Prior to the advent of the mechanical clock, slavery and serfdom were the primary economic relationships, in part because the transaction cost of measuring time beyond sunup and sundown was so high that workers were chained to their masters or lords.[8]

With time-rate wages, the employer is able to use promotions, raises, and firing to incentivize employees to produce quality work during the time they are being paid for.[9]

In a system based on time-rate wages rather than slavery or serfdom, workers have a choice. If the talented blacksmith can get a higher time-rate wage from a competitor, she’s able to go work for them because there is an objective, fungible measure of time she’s able to trade.

As history has shown, this was a major productivity and quality-of-life improvement for both parties.[10]

It gradually became clear that mechanical time opened up entirely new categories of economic organization and productivity that had hitherto been not just impossible, but unimaginable.

We could look at almost any technology listed above (standardized weights and measures, the sail, the compass, the printing press, etc.) and do a similar analysis of how it affected transaction costs and, eventually, how it affected society as a result.

The primary effect is an increase in what we will call coordination scalability.

Coordination Scalability

“It is a profoundly erroneous truism, repeated by all copy-books and by eminent people when they are making speeches, that we should cultivate the habit of thinking what we are doing. The precise opposite is the case. Civilization advances by extending the number of important operations which we can perform without thinking about them.” – Alfred North Whitehead

About 70,000 years ago, there were between six and ten species of the genus homo. Now, of course, there is just one: Homo sapiens. Why did Homo sapiens prevail over the other species, like Homo neanderthalensis?

Homo sapiens prevailed because of their ability to coordinate. Coordination was made possible by increased neocortical size, which led to an ability to work together in large groups, not just as single individuals. Instead of single individuals hunting, groups could hunt and bring down larger prey more safely and efficiently.[11]

The brain of Homo sapiens has proven able to invent other, external structures which further increased coordination scalability by expanding the network of other people we could rely on.

Maybe the most important of these was language, but we have evolved many others since, including the mechanical clock.

The increased brain size has driven our species through four coordination revolutions: Neolithic, Industrial, Computing, Blockchain.

Neolithic Era: The Emergence of Division of Labor

The first economic revolution was the shift from Homo sapiens as hunter-gatherers to Homo sapiens as farmers.

Coordination scalability among hunter-gatherers was limited to the size of the band, which tended to range from 15 to 150 individuals.[12] The abandonment of a nomadic way of life and move to agriculture changed this by allowing specialization and the formation of cities.

Agriculture meant that people could, for the first time, accumulate wealth. Farmers could save excess crops to eat later or trade them for farming equipment, baskets or decorations. The problem was that this wealth was suddenly worth stealing and so farmers needed to defend their wealth.

Neolithic societies typically consisted of groups of farmers protected by what Mancur Olson called “stationary bandits,” basically warlords.[13] This allowed the emergence of much greater specialization. Farmers accumulated wealth and paid some to the warlords for protection, but even then there was still some left over, making it possible for individuals to specialize.

A city of 10,000 people requires, but also makes possible, specialists.

The limits of coordination scalability increased from 150 to thousands or, in some cases, tens of thousands. This was not necessarily a boon to human happiness. Anthropologist Jared Diamond called the move to agriculture “the worst mistake in the history of the human race.”[14] The quality of life for individuals declined: lifespans shortened, nutrition was worse leading to smaller stature, and disease was more prevalent.

But this shift was irresistible: specialization created so much more wealth and power that groups which adopted it came to dominate those that didn’t. The economies of scale in military specialization, in particular, were overwhelming. Hunter-gatherers couldn’t compete.

In the Neolithic era, the State was the limit of coordination scalability.

Industrial Era: Division of Labor Is Eating the World

Alongside the city-state, a new technology started to emerge that would further increase the limits of coordination scalability: money. To illustrate, let us take the European case, from ancient Greece to modernity, though the path in other parts of the world was broadly similar. Around 630 B.C., the Lydian kings recognized the need for small, easily transported coins worth no more than a few days’ labor. They made these ingots in a standard size (about the size of a thumbnail) and weight, and stamped an emblem of a lion’s head on them.

This eliminated one of the most time-consuming (and highest transaction cost) steps in commerce: weighing gold and silver ingots each time a transaction was made. Merchants could easily count the number of coins without worrying about cheating.

Prior to the invention of coins, trade had been limited to big commercial transactions, like buying a herd of cattle. With the reduced transfer cost facilitated by coins, Lydians began trading in the daily necessities of life: grain, olive oil, beer, wine, and wood.[15]

The variety and abundance of goods which could suddenly be traded led to another innovation: the retail market.

Previously, buyers had to go to the home of the seller of whatever they needed. If you needed olive oil, you had to walk over to the olive oil lady’s house to get it. With the amount of trade that began happening after coinage, a central market emerged. Small stalls lined the market, where each merchant specialized in (and so could produce more efficiently) a particular good: meat, grain, jewelry, bread, cloth, etc. Instead of having to go to the olive oil lady’s house, you could go to her stall and pick up bread from the baker while you were there.

From this retail market in Lydia sprang the Greek agora, the medieval market squares of Europe, the suburban shopping mall and, eventually, the “online shopping malls” Amazon and Google. Though markets were around as early as 7th-century B.C. Lydia, they really hit their stride in the Industrial Revolution of the 18th century.[16]

Adam Smith was the first to describe in detail the effect of this marketization of the world. Markets made it possible to promote the division of labor across political units, not just within them. Instead of each city or country manufacturing all the goods they needed, different political entities could further divide labor. Coordination scalability started to stretch across political borders.

Coming back to Coase, firms will expand or shrink until the cost of “making” equals the cost of “buying.” In the Industrial era, transaction costs made administrative and managerial coordination (making) more efficient than market coordination (buying) for most industries, which led to the rise of large firms.

The major efficiency gain of Industrial-era companies over their more “artisanal” forebears was that, using the techniques of mass production, they could produce higher-quality products at a lower price. This was possible only if they could enforce standards throughout the supply chain. The triangulation transaction cost can be broken down into search and measurement: a company needed to find the vendor and to be able to measure the quality of the good or service.

In the early Industrial era, the supply chain was extremely fragmented. By bringing all the pieces into the firm, a large vertically integrated company could be more efficient.[17]

As an example, in the 1860s and 1870s, the Carnegie Corporation purchased mines to ensure it had reliable access to the iron ore and coke it needed to make steel. The upstream suppliers were unreliable and non-standardized, and Carnegie could lower the cost of production by simply owning the whole supply chain.

This was the case in nearly every industry. By bringing many discrete entities under one roof and one system of coordination, greater economic efficiencies were gained and the multi-unit business corporation replaced the small, single-unit enterprise because administrative coordination enabled greater productivity through lower transaction costs per task than was possible before. Economies of scale flourished.

This system of large firms connected by markets greatly increased coordination scalability. Large multinational firms could stretch across political boundaries and provide goods and services more efficiently.

In Henry Ford’s world, the point where making equaled the cost of buying was pretty big. Ford built a giant plant at River Rouge just outside Detroit between 1917 and 1928 that took in iron ore and rubber at one end and sent cars out the other. At the factory’s peak, 100,000 people worked there. These economies of scale allowed Ford to dramatically drive down the cost of an automobile, making it possible for the middle class to own a car.[18]

As with Carnegie, Ford learned that supplier networks take a while to emerge and grow into something reliable. In 1917, doing everything himself was the only way to get the scale he needed to be able to make an affordable car.

One of the implications of this model was that industrial businesses required huge startup costs.

The only way for an entrepreneur to compete was to start out with a similarly massive amount of capital, enough to build a factory large and efficient enough to rival Ford’s.

For workers, this meant that someone in a specialized role, like an electrical engineer or an underwriter, did not freelance or work for small businesses. Because the most efficient way to produce products was in large organizations, specialized workers could earn the most by working inside large organizations, be they Ford, AT&T or Chase Bank.

At the peak of the Industrial era, there were two dominant institutions: firms and markets.

Work inside the firm allowed for greater organization and specialization which, in the presence of high transaction costs, was more economically efficient.

Markets were more chaotic and less organized, but also more motivating. Henry Ford engaged with the market and made out just a touch better than any of his workers; there just wasn’t room for many Henry Fords.

This started to dissolve in the second half of the 20th century. Ford no longer takes iron ore and rubber as the inputs to its factories, but has a vast network of upstream suppliers.[19] The design and manufacturing of car parts now happens over a long supply chain, the output of which the car companies ultimately assemble and sell.

One reason is that supplier networks became more standardized and reliable. Ford can now buy ball bearings and brake pads more efficiently than he can make them, so he does. Each company in the supply chain focuses on what they know best and competition forces them to constantly improve.

By the 1880s, it cost Carnegie more to operate the coke ovens in-house than to buy coke from an independent source, so he sold off the coke ovens and bought coke on the open market. Reduced transaction costs, in the form of more standardized and reliable production technology, caused both Ford and the Carnegie Corporation to shrink, as Coase’s theory would suggest.

The second reason is that if you want to make a car using a network of cooperating companies, you have to be able to coordinate their efforts, and you can do that much better with telecommunication technology broadly and computers specifically. Computers reduce the transaction costs that Coase argued are the raison d’être of corporations. That is a fundamental change.[20]

The Computing Era: Software Is Eating the World

Computers, and the software and networks built on top of them, had a new economic logic driven by lower transaction costs.

Internet aggregators such as Amazon, Facebook, Google, Uber and Airbnb reduced the transaction costs for participants on their platforms. For the industries that these platforms affected, the line between “making” and “buying” shifted toward buying. The line between owning and renting shifted toward renting.

Primarily, this was done through a reduction in triangulation costs (how hard it is to find and measure the quality of a service), and transfer costs (how hard it is to bargain and agree on a contract for the good or service).

Triangulation costs came down for two reasons. One was the proliferation of smartphones, which made it possible for services like Uber and Airbnb to exist. The other was the increasing digitization of the economy. Digital goods are both easier to find (think Googling versus going to the library or opening the Yellow Pages) and easier to measure the quality of (I know exactly how many people read my website each day and how many seconds they are there; the local newspaper does not).

The big improvement in transfer costs was the result of matchmaking: bringing together and facilitating the negotiation of mutually beneficial commercial or retail deals.  

Take Yelp, the popular restaurant review app. Yelp allows small businesses like restaurants, coffee shops, and bars to advertise to an extremely targeted group: individuals close enough to come to the restaurant who have searched for a relevant term. A barbecue restaurant in Nashville can show ads only to people searching their zip code for terms like “bbq” and “barbecue.” This enables small businesses that couldn’t afford radio or television advertising to attract customers.

The existence of online customer reviews gives consumers a more trusted way to evaluate the restaurant.

All of the internet aggregators, including Amazon, Facebook, and Google, enabled new service providers by creating a market and standardizing the rules of that market to reduce transaction costs.[21]

The “sharing economy” is more accurately called the “renting economy” from the perspective of consumers, and the “gig economy” from the perspective of producers. Most of the benefits are the result of new markets enabled by lower transaction costs, which allow consumers to rent rather than own, including “renting” someone else’s time rather than employing them full time.

It’s easier to become an Uber driver than a cab driver, and an Airbnb host than a hotel owner. It’s easier to get your product into Amazon than Walmart. It’s easier to advertise your small business on Yelp, Google or Facebook than on a billboard, radio or TV.

Prior to the internet, the product designer was faced with the option of selling locally (which was often too small a market), trying to get into Walmart (which was impossible without significant funding and traction), or simply working for a company that already had distribution in Walmart.

On the internet, they could start distributing nationally or internationally on day one. The “shelf space” of Amazon or Google’s search engine results page was a lot more accessible than the shelf space of Walmart.

As a result, it became possible for people in certain highly specialized roles to work independently of firms entirely. Product designers and marketers could sell products through the internet and the platforms erected on top of it (mostly Amazon and Alibaba in the case of physical products) and have the potential to make as much or more as they could inside a corporation.

This group is highly motivated because their pay is directly based on how many products they sell. The aggregators and the internet were able to reduce the transaction costs that had historically made it economically inefficient or impossible for small businesses and individual entrepreneurs to exist.

The result was that in industries touched by the internet, we saw an industry structure of large aggregators and a long tail [22] of small businesses that were able to use the aggregators to reach previously unreachable, niche segments of the market. Though there aren’t many cities where a high-end cat furniture retail store makes economic sense, on Google or Amazon, it does.

source: stratechery.com

| Before: Firms | After: Platform | After: Long Tail |
| --- | --- | --- |
| Walmart and big box retailers | Amazon | Niche product designers and manufacturers |
| Cab companies | Uber | Drivers with extra seats |
| Hotel chains | Airbnb | Homeowners with extra rooms |
| Traditional media outlets | Google and Facebook | Small offline and niche online businesses |

For these industries, coordination scalability was far greater and could be seen in the emergence of micro-multinational businesses. Businesses as small as a half dozen people could manufacture in China, distribute products in North America, and employ people from Europe and Asia. This sort of outsourcing and the economic efficiencies it created had previously been reserved for large corporations.

As a result, consumers received cheaper, but also more personalized products from the ecosystem of aggregators and small businesses.

However, the rental economy still represents a tiny fraction of the overall economy. At any given time, only a thin subset of industries are ready to be marketized. What’s been done so far is only a small fraction of what will be done in the next few decades.

Yet, we can already start to imagine a world which Munger calls “Tomorrow 3.0.” You need a drill to hang some shelves in your new apartment. You open an app on your smartphone and tap “rent drill.” An autonomous car picks up a drill and delivers it outside your apartment in a keypad-protected pod and your phone vibrates: “drill delivered.” Once you’re done, you put it back in the pod, which sends a message to another autonomous car nearby to come pick it up. The rental costs $5, much less than buying a commercial-quality power drill. This is, of course, not limited to drills; it could be a saw, fruit dehydrator, bread machine or deep fryer.

You own almost nothing, but have access to almost everything.

Neither you nor your neighbors have a job, at least in the traditional sense. You pick up shifts or client work as needed and maybe manage a few small side businesses. After you finish drilling in the shelves, you might sit down at your computer, see what work requests are open, and spend a few hours designing a new graphic or finishing up the monthly financial statements for a client.

This is a world in which triangulation and transfer costs have come down dramatically, resulting in more renting than buying from consumers and more gig work than full-time jobs for producers.

This is a world we are on our way to already, and there aren’t any big, unexpected breakthroughs that need to happen first.

But what about the transaction cost of trust?

In the computer era, the areas that have been affected most are what could be called low-trust industries. If the sleeping mask you order off of Amazon isn’t as high-quality as you thought, that’s not a life or death problem.

What about areas where trust is essential?

Enter stage right: blockchains.

The Blockchain Era: Blockchain Markets Are Eating the World

One area where trust matters a lot is money. Most of the developed world doesn’t think about the possibility of fiat money [23] not being trustworthy because it hasn’t happened in our lifetimes. For those that have experienced it, including major currency devaluations, trusting that your money will be worth roughly the same tomorrow as it is today is a big deal.

Citizens of countries like Argentina and particularly Venezuela have been quicker to adopt bitcoin as a savings vehicle because their economic history made the value of censorship resistance more obvious.

Due to poor governance, the inflation rate in Venezuela averaged 32.42 percent from 1973 until 2017. Argentina was even worse; the inflation rate there averaged 200.80 percent between 1944 and 2017.

The story of North America and Europe is different. In the second half of the 20th century, monetary policy has been stable.

The Bretton Woods Agreement, struck in the aftermath of the Second World War, aggregated control of most of the globe’s monetary policy in the hands of the United States. The European powers acceded to this in part because the U.S. dollar was backed by gold, meaning that the U.S. government was subject to the laws of physics and geology of gold mining. They could not expand the money supply any faster than gold could be taken out of the ground.

With Nixon’s abandonment of the gold standard in 1971, control over money and monetary policy moved into the hands of a historically small group of central bankers and powerful political and financial leaders, no longer restricted by gold.

Fundamentally, the value of the U.S. dollar today is based on trust. There is no gold in a vault that backs the dollars in your pocket. Most fiat currencies today have value because the market trusts that the officials in charge of U.S. monetary policy will manage it responsibly.

It is at this point that the debate around monetary policy devolves into one group that imagines this small group of elitist power brokers sitting in a dark room on large leather couches surrounded by expensive art and mahogany bookshelves filled with copies of The Fountainhead smoking cigars and plotting against humanity using obscure financial maneuvering.

Another group, quite reasonably, points to the economic prosperity of the last half-century under this system and insists on the quackery of the former group.

A better way to understand the tension between a monetary system based on gold and one based on fiat money has been offered by political science professor Bruce Bueno de Mesquita: “Democracy is a better form of government than dictatorships, not because presidents are intrinsically better people than dictators, but simply because presidents have less agency and power than dictators.”

Bueno de Mesquita calls this Selectorate Theory. The selectorate represents the number of people who have influence in a government, and thus the degree to which power is distributed. The selectorate of a dictatorship will tend to be very small: the dictator and a few cronies. The selectorate in democracy tends to be much larger, typically encompassing the Executive, Legislative, and Judicial branches and the voters which elect them.

Historically, the size of the selectorate involves a tradeoff between the efficiency and the robustness of the governmental system. Let’s call this the “Selectorate Spectrum.”

Dictatorships can be more efficient than democracies because they don’t have to get many people on board to make a decision. Democracies, by contrast, are more robust, but at the cost of efficiency.

Conservatives and progressives alike bemoan how little their elected representatives get done but happily observe how little their opponents accomplish. A single individual with unilateral power can accomplish far more (good or bad) than a government of “checks and balances.” The long-run health of a government means balancing the tradeoff between robustness and efficiency: the number of stakeholders can be neither so large that nothing gets done and the country never adapts, nor so small that one individual or a small group can hijack the government for personal gain.

This tension between centralized efficiency and decentralized robustness exists in many other areas. Firms try to balance the size of the selectorate to make it large enough that there is some accountability (e.g. a board and shareholder voting) but not so large as to make it impossible to compete in a market, by centralizing most decisions in the hands of a CEO.

We can view both the current monetary system and the internet aggregators through the lens of the selectorate. In both areas, the trend over the past few decades is that the robustness of a large selectorate has been traded away for the efficiency of a small one.[24]

A few individuals (heads of central banks, heads of state, corporate CEOs, and the leaders of large financial entities like sovereign wealth funds and pension funds) can move markets and politics globally with even whispers of significant change. This sort of centralizing in the name of efficiency can sometimes lead to long feedback loops with potentially dramatic consequences.

Said another way, much of what appears efficient in the short term may not be efficient but hiding risk somewhere, creating the potential for a blow-up. A large selectorate tends to appear to be working less efficiently in the short term, but can be more robust in the long term, making it more efficient in the long term as well. It is a story of the Tortoise and the Hare: slow and steady may lose the first leg, but win the race.

In the Beginning, There Was Bitcoin

In October 2008, an anonymous individual or group using the pseudonym Satoshi Nakamoto sent an email to a cryptography mailing list, explaining a new system called bitcoin. The opening line of the conclusion summed up the paper:

“We have proposed a system for electronic transactions without relying on trust.”

When the network went live a few months later in January 2009, Satoshi embedded the headline of a story running that day in The London Times:

“The Times 03/Jan/2009 Chancellor on brink of second bailout for banks”

Though we can’t know for sure what was going through Satoshi’s mind at the time, the most likely explanation is that Satoshi was reacting against the decisions being made in response to the 2008 Global Financial Crisis by the small selectorate in charge of monetary policy.

Instead of impactful decisions about the monetary system, like a bailout, resting with a single individual (the Chancellor), Satoshi envisioned bitcoin as a more robust monetary system with a larger selectorate, beyond the control of any single individual.

But why create a new form of money? Throughout history, the most common way for individuals to show their objections to their nation’s monetary policy was by trading their currency for some commodity like gold, silver, or livestock that they believed would hold its value better than the government-issued currency.

Gold, in particular, has been used as a form of money for nearly 6,000 years for one primary reason: the stock-to-flow ratio. Because of how gold is deposited in the Earth’s crust, it’s very difficult to mine. Despite all the technological changes in the last few hundred years, this has meant that the amount of new gold mined in a given year (the flow) has averaged between 1-2 percent of the total gold supply (stock) with very little variation year to year.

As a result, the total gold supply has never increased by more than 1 to 2 percent per year. In comparison to Venezuela’s 32.4 percent inflation and Argentina’s 200.8 percent inflation, gold’s inflation is far lower and more predictable.
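The stock-to-flow arithmetic above can be sketched in a few lines. The figures below are rough, illustrative approximations, not precise estimates:

```python
# Rough sketch of stock-to-flow: annual supply growth is the new
# supply (flow) divided by the existing supply (stock).
# Figures are illustrative approximations, not exact data.

gold_stock_tonnes = 190_000  # roughly all gold ever mined
gold_flow_tonnes = 3_000     # roughly one year's mining output

stock_to_flow = gold_stock_tonnes / gold_flow_tonnes          # ~63 years of flow
annual_supply_growth = gold_flow_tonnes / gold_stock_tonnes   # ~1.6 percent

print(f"stock-to-flow: {stock_to_flow:.1f}")
print(f"annual supply growth: {annual_supply_growth:.1%}")
```

A high stock-to-flow ratio and a low, stable growth rate are two ways of saying the same thing: no single year's production can meaningfully dilute the existing supply.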

Viewed through the lens of Selectorate Theory, we can say that gold or other commodity forms of money have a larger selectorate and are more robust than government-issued fiat currency. In the same way a larger group of stakeholders in a democracy constrains the actions of any one politician, the geological properties of gold constrained governments and their monetary policy.

Whether or not these constraints were “good” or “bad” is still a matter of debate. The Keynesian school of economics, which has come to be the view of mainstream economics, emerged out of John Maynard Keynes’s reaction to the Great Depression. He thought the Depression was greatly exacerbated by the commitment to the gold standard, and that governments should manage monetary policy to soften the cyclical nature of markets.

The Austrian and monetarist schools believe that human behavior is too idiosyncratic to model accurately with mathematics and that minimal government intervention is best: attempts to intervene can be destabilizing and lead to inflation, so a commitment to the gold standard is the lesser evil in the long run.

Taken in good faith, these schools represent different beliefs about the ideal point on the Selectorate Spectrum. Keynesians believe that greater efficiency could be gained by giving government officials greater control over monetary policy without sacrificing much robustness. Austrians and monetarists argue the opposite, that any short-term efficiency gains actually create huge risks to the long-term health of the system.

Viewed as a money, bitcoin has many gold-like properties, embodying something closer to the Austrian and monetarist view of ideal money. For one, we know exactly how many bitcoin will ever be created (21 million) and the rate at which they will be created. As with gold, changing this is outside the control of any single individual or small group, giving bitcoin a predictable stock-to-flow ratio and making it extremely difficult to inflate.
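The 21 million cap is not a number written down anywhere in the protocol; it emerges from the issuance schedule, in which the block subsidy starts at 50 bitcoin and halves every 210,000 blocks. A minimal sketch of that arithmetic, using integer satoshis as the protocol does:

```python
SATOSHIS_PER_BTC = 100_000_000
BLOCKS_PER_HALVING = 210_000

def total_supply_btc() -> float:
    """Sum bitcoin's issuance schedule: 50 BTC per block, halving
    every 210,000 blocks, until the subsidy rounds down to zero."""
    subsidy = 50 * SATOSHIS_PER_BTC  # initial block subsidy, in satoshis
    supply = 0
    while subsidy > 0:
        supply += BLOCKS_PER_HALVING * subsidy
        subsidy //= 2  # the "halving": integer division, as in the protocol
    return supply / SATOSHIS_PER_BTC

print(total_supply_btc())  # just under 21,000,000
```

Because the subsidy is truncated to whole satoshis at each halving, the total ends up slightly below 21 million; the geometric series converges, which is what gives the supply its hard cap.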

Similar to gold, the core bitcoin protocol also makes great trade-offs in terms of efficiency in the name of robustness.[25]

However, bitcoin has two key properties of fiat money which gold lacks: it is very easy to divide and transport. Someone in Singapore can send 1/100th of a bitcoin to someone in Canada in less than an hour. Sending 1/100th of a gold bar would be a bit trickier.

In his 1999 book Cryptonomicon, science fiction author Neal Stephenson imagined a bitcoin-like money built by the grandchild of Holocaust survivors who wanted to create a way for individuals to escape totalitarian regimes without giving up all their wealth. It was difficult, if not impossible, for Jews to carry gold bars out of Germany, but what if all they had to do was remember a 12-word password phrase? How might history have been different?

Seen in this way, bitcoin offers a potentially better trade-off between robustness and efficiency. Its programmatically defined supply schedule means its inflation rate will be lower than gold’s (making it more robust), while its digital nature makes it as divisible and transportable as any fiat currency (making it more efficient).

Using a nifty combination of economic incentives for mining (proof-of-work system) and cryptography (including blockchain), bitcoin allowed individuals to engage in a network that was both open (like a market) and coordinated (like a firm) without needing a single or small group of power brokers to facilitate the coordination.
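The proof-of-work half of that combination is conceptually simple: keep hashing the data with a changing nonce until the hash meets a difficulty target. A toy sketch follows; real mining hashes a block header twice with SHA-256 against a numeric target, not leading hex zeros, so treat this as an illustration of the idea only:

```python
import hashlib

def mine(data: bytes, difficulty: int) -> int:
    """Find a nonce such that sha256(data + nonce) begins with
    `difficulty` leading zero hex digits. Toy proof-of-work."""
    nonce = 0
    prefix = "0" * difficulty
    while True:
        digest = hashlib.sha256(data + str(nonce).encode()).hexdigest()
        if digest.startswith(prefix):
            return nonce
        nonce += 1

nonce = mine(b"block data", 3)
# Finding the nonce took thousands of tries on average;
# verifying it takes a single hash.
assert hashlib.sha256(b"block data" + str(nonce).encode()).hexdigest().startswith("000")
```

The asymmetry is the point: producing the work is expensive, but any node can verify it with one hash, which is what lets a very large selectorate cheaply check the work of miners.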

Said another way, bitcoin was the first example of money going from being controlled by a small group of firm-like entities (central banks) to being market-driven. What cryptocurrency represents is the technology-enabled possibility that anyone can make their own form of money.

Whether or not bitcoin survives, that Pandora’s Box is now open. In the same way computing and the internet opened up new areas of the economy to being eaten by markets, blockchain and cryptocurrency technology have opened up a different area to be eaten by markets: money.

The Future of Public Blockchains

Bitcoin is unique among forms of electronic money because it is both trustworthy and maintained by a large selectorate rather than a small one.

Some began to wonder whether the same underlying technology could be used to develop open networks in other areas by reducing the transaction cost of trust.[26]

One group, the monetary maximalists, thinks not. According to them, public blockchains like bitcoin will only ever be useful as money, because money is the area where trust matters most and so you can afford to trade everything else away. The refugee fleeing political chaos does not care that a transaction takes an hour to go through and costs $10 or even $100. They care about having the most difficult-to-seize, censorship-resistant form of wealth.

Bitcoin, as it exists today, enhances coordination scalability by allowing any two parties to transact without relying on a centralized intermediary and by allowing individuals in unstable political situations to store their wealth in the most difficult-to-seize form ever created.

The second school of thought is that bitcoin is the first example of a canonical, trustworthy ledger with a large selectorate and that there could be other types of ledgers which are able to emulate it.

At its core, money is just a ledger. The amount of money in your personal bank account is a list of all the transactions coming in (paychecks, deposits, etc.) and all the transactions going out (paying rent, groceries, etc.). When you add all those together, you get a balance for your account.
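The “money is a ledger” point can be made concrete in a few lines (the transactions below are, of course, made up):

```python
# A bank balance is nothing more than a sum over a ledger of
# signed transactions: money in is positive, money out is negative.
transactions = [
    ("paycheck", +3_000),
    ("rent", -1_200),
    ("groceries", -150),
    ("deposit", +500),
]

balance = sum(amount for _, amount in transactions)
print(balance)  # 2150
```

The interesting question is never the arithmetic; it is who gets to append entries to the list, which is exactly what the selectorate framing is about.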

Historically, this ledger was maintained by a single entity, like your bank. In the case of U.S. dollars, the number in circulation can be figured out by adding up how much money the U.S. government has printed and released into the market and how much it has taken back out of the market.

What else could be seen as a ledger?

The answer is “nearly everything.” Governments and firms can be seen just as groups of ledgers. Governments maintain ledgers of citizenship, passports, tax obligations, social security entitlements and property ownership. Firms maintain ledgers of employment, assets, processes, customers and intellectual property.

Economists sometimes refer to firms as “a nexus of contracts.” The value of the firm comes from those contracts and how they are structured within the “ledger of the firm.” Google has a contract with users to provide search results, with advertisers to display ads to users looking for specific search terms, and with employees to maintain the quality of their search engine. That particular ledger of contracts is worth quite a lot.

Mechanical time opened up entirely new categories of economic organization. It allowed for trade to be synchronized at great distances: without mechanical time, there would have been no railroads (how would you know when to go?) and no Industrial Revolution. Mechanical time allowed for new modes of employment that lifted people out of serfdom and slavery.[27]

In the same way, it may be that public blockchains make it possible to have ledgers that are trustworthy without requiring a centralized firm to manage them. This would shift the line further in favor of “renting” over “buying” by reducing the transaction cost of trust.

Entrepreneurs may be able to write a valuable app and release it for anyone and everyone who needs that functionality, collecting micro-payments in their wallet. A product designer could release their design into the wild, and consumers could download it to be printed on their 3D printers almost immediately.[28]

For the first 10 years of bitcoin’s existence, this hasn’t been possible. Using a blockchain has meant minimizing the transaction cost of trust at the expense of everything else, but that may not always be the case. Different proposals are already being built out that allow for more transactions to happen without compromising the trust which bitcoin and other crypto-networks offer.

There are widely differing opinions on the best way to scale blockchains. One faction, usually identifying with Web 3/smart-contracting platforms/Ethereum, believes that scaling quickly at the base layer is essential and can be done with minimal security risk, while the other camp (bitcoin) believes that scaling should be done slowly and only where it does not sacrifice the censorship-resistant nature of blockchains. Just like the debate between Keynesian and Austrian/monetarist views of monetary policy, these views represent different beliefs about the optimal trade-off point on the Selectorate Spectrum. But both groups believe that significant progress can be made on making blockchains more scalable without sacrificing too much trust.

Public blockchains may allow aggregation without the aggregators. For certain use cases, perhaps few, perhaps many, public blockchains like bitcoin will allow the organization and coordination benefits of firms and the motivation of markets while maintaining a large selectorate.

Ultimately, what we call society is a series of overlapping and interacting ledgers.

In order for ledgers to function, they must be organized according to rules. Historically, rules have required rulers to enforce them. Because of network effects, these rulers tend to become the most powerful people in society. In medieval Europe, the Pope enforced the rules of Christianity and so he was among the most powerful.

Today, Facebook controls the ledger of our social connections. Different groups of elites control the university ledgers and banking ledgers.

Public blockchains allow people to engage in a coordinated and meritocratic network without requiring a small selectorate.

Blockchains may introduce markets into corners of society that have never before been reached. In doing so, blockchains have the potential to replace ledgers previously run by kings, corporations, and aristocracies. They could extend the logic of the long tail to new industries and lengthen the tail for suppliers and producers by removing rent-seeking behavior and allowing for permissionless innovation.

Public blockchains allow for rules without a ruler. It began with money, but they may move on to corporate ledgers, social ledgers and perhaps eventually, the nation-state ledger.[29]

Acknowledgments: Credit for the phrase “Markets Are Eating the World” to Patri Friedman.


  1. https://www.bls.gov/opub/mlr/1981/11/art2full.pdf
  2. https://www.bls.gov/emp/tables/employment-by-major-industry-sector.htm
  3. http://www3.nccu.edu.tw/~jsfeng/CPEC11.pdf
  4. There are, of course, other types of transaction costs than the ones listed here. A frequent one brought up in response to Coase is company culture, which nearly all entrepreneurs and investors agree is an important factor in a firm’s productivity. This is certainly true, but the broader point about the relationship between firm size and transaction costs holds: culture is just another transaction cost.
  5. http://www.fon.hum.uva.nl/rob/Courses/InformationInSpeech/CDROM/Literature/LOTwinterschool2006/szabo.best.vwh.net/synch.html
  6. https://en.wikipedia.org/wiki/Escapement
  7. Fungibility is the property of a good or a commodity whose individual units are interchangeable. For example, one ounce of pure silver is fungible with any other ounce of pure silver. This is not the same for most goods: a dining table chair is not fungible with a fold-out chair.
  8. Piece rates, paying for some measurement of a finished output like bushels of apples or balls of yarn, seems fairer. But they suffer from two issues: For one, the output of the labor depends partially on the skill and effort of the laborer, but also on the vagaries of the work environment. This is particularly true in a society like that of medieval Europe, where nearly everyone worked in agriculture. The best farmer in the world can’t make it rain. The employee wants something like insurance that they will still be compensated for the effort in the case of events outside their control, and the employer who has more wealth and knowledge of market conditions takes on these risks in exchange for increased profit potential.
  9. For the worker, time doesn’t specify costs such as effort, skill or danger. A laborer would want to demand a higher time-rate wage for working in a dangerous mine than in a field. A skilled craftsman might demand a higher time-rate wage than an unskilled craftsman.
  10. The advent of the clock was necessary for the shift from farms to cities. Sunup to sundown worked effectively as a schedule for farmers because summer was typically when the most labor on farms was required, so longer days were useful. For craftsmen and others working in cities, work was not as driven by the seasons, so a trusted measure of time that didn’t vary with the seasons was necessary. The advent of a trusted measure of time led to an increase in the quantity, quality and variety of goods and services because urban, craftsman-type work was now more feasible.
  11. https://unenumerated.blogspot.com/2017/02/money-blockchains-and-social-scalability.html. I am using the phrase “coordination scalability” synonymously with how Nick uses “social scalability.” A few readers suggested that social scalability was a confusing term as it made them think of scaling social networks.
  12. 150 is often referred to as Dunbar’s number, referring to a number calculated by University of Oxford anthropologist and psychologist Robin Dunbar using a ratio of neocortical volume to total brain volume and mean group size. For more see  https://www.newyorker.com/science/maria-konnikova/social-media-affect-math-dunbar-number-friendships. The lower band of 15 was cited in Pankaj Ghemawat’s World 3.0
  13. https://www.jstor.org/stable/2938736
  14. http://discovermagazine.com/1987/may/02-the-worst-mistake-in-the-history-of-the-human-race
  15. Because what else would you want to do besides eat bread dipped in fresh olive oil and drink fresh beer and wine?
  16. From The History of Money by Jack Weatherford.
  17. It also allowed them to squeeze out competitors at different places in the supply chain and put them out of business which Standard Oil did many times before finally being broken up by anti-trust legislation.
  18. http://www.paulgraham.com/re.html
  19. Tomorrow 3.0 by Michael Munger
  20. http://www.paulgraham.com/re.html
  21. There were quite a few things, even pre-internet, in the intersection between markets and firms, like approved vendor auction markets for government contracting and bidding, but they were primarily very high ticket items where higher transaction costs could be absorbed. The internet brought down the threshold for these dramatically to something as small as a $5 cab ride.
  22. The Long Tail was a concept WIRED editor Chris Anderson used to describe the proliferation of small, niche businesses that were possible after the end of the “tyranny of geography.” https://www.wired.com/2004/10/tail/
  23. From Wikipedia: “Fiat money is a currency without intrinsic value that has been established as money, often by government regulation. Fiat money does not have use value, and has value only because a government maintains its value, or because parties engaging in exchange agree on its value.” By contrast, “Commodity money is created from a good, often a precious metal such as gold or silver.” Almost all of what we call money today, from dollars to euros to yuan, is fiat.
  24. Small institutions can get both coordination and a larger selectorate by using social norms. This doesn’t enable coordination scalability though as it stops working somewhere around Dunbar’s number of 150.
  25. Visa processes thousands of transactions per second, while the bitcoin network’s decentralized structure processes a mere seven transactions per second. The key difference is that Visa transactions are easily reversed or censored, whereas bitcoin’s are not.
  26. https://medium.com/@cdixon/crypto-tokens-a-breakthrough-in-open-network-design-e600975be2ef
  27. https://medium.com/cryptoeconomics-australia/the-blockchain-economy-a-beginners-guide-to-institutional-cryptoeconomics-64bf2f2beec4
  28. https://medium.com/cryptoeconomics-australia/the-blockchain-economy-a-beginners-guide-to-institutional-cryptoeconomics-64bf2f2beec4
  29. https://twitter.com/naval/status/877467629308395521