Mark Warner Takes on Big Tech and Russian Spies

In mid-November, as his House colleagues on Capitol Hill were consumed with questions about Ukraine and impeachment, Senator Mark Warner took to CNBC’s Squawk Box to discuss what he saw as one of the most important problems facing the country: Fitbit.

Or, more specifically, the danger of allowing Google to swallow up the personal fitness and health-monitoring gadget and its terabytes of consumer data. “The Fitbit deal needs a high, high level of scrutiny,” the Democratic senator from Virginia told anchor Andrew Ross Sorkin. “Large platform companies have not had a very good record of protecting the data or being transparent with consumers. I can’t totally blame them. If Congress doesn’t set rules of the road, asking them to self-regulate is, frankly, just not a viable option.”

Across the board, we as a country need to be hitting pause on the advances of Big Tech, Warner argued, channeling his inner trust-busting Teddy Roosevelt. He sees warning signs flashing all around, like Facebook’s fledgling attempt to launch a digital currency, Libra, and Google’s move into banking. Too much is happening without competition and government oversight, he said. “We have these giant tech platforms entering into new fields before there are some regulatory rules of the road,” Warner told CNBC’s viewers. “Once they get in, the ability to extract them out is going to be virtually impossible.”

The words coming out of Mark Warner’s mouth throughout 2019 would surely have stunned the Mark Warner who joined the Senate in 2009, amid the wave of techno-optimism that marked Barack Obama’s presidential victory and the early years of his administration. Back then it seemed everyone in Washington, Warner chief among them, thought tech was the solution, not the problem.

Yet more recently, in Donald Trump’s Washington, Warner has evolved into Capitol Hill’s most reluctant and thoughtful tech critic, grilling Facebook, Twitter, and Google executives, lashing out in private and public over their intransigence, and pressing the companies to confront the role their platforms have played in undermining democracy.


As the vice-chair of the Senate Intelligence Committee, he’s also become one of Capitol Hill’s most vocal advocates urging the country to take foreign technology threats seriously: the possibility of kinetic, real-world cyberattacks (such as disabling power plants or water systems), the already-underway information influence operations like the ones that upended the 2016 presidential election, and the looming challenges next-generation technologies pose to national security.

Earlier this month, even as the president’s impeachment trial loomed for the Senate, he introduced—along with the chair of the Senate Intelligence Committee, North Carolina’s Richard Burr—new legislation aimed at closing the United States’ gap in 5G technologies with China by investing in Western alternatives to Huawei.

“Every month that the US does nothing, Huawei stands poised to become the cheapest, fastest, most ubiquitous global provider of 5G,” Warner said in announcing the new bill. “Widespread adoption of 5G technology has the potential to unleash sweeping effects for the future of internet-connected devices, individual data security, and national security.”

Together, his views, advocacy, and legislative work over the past three years have put Warner at the intersection of the biggest stories in American politics—foreign interference in US elections, the evolving consensus that Big Tech is out of control, and the growing tech rift between the US and China.

It’s an unexpected role for a onetime venture capitalist who made a nine-figure fortune helping to usher in the technological age in which we all now live. And the 65-year-old senator remains enmeshed in the culture and politics of technology. His state is one of the top destinations for tech companies outside of the Bay Area (Amazon’s new HQ2 is being built in Arlington). He wears Allbirds—the official sneaker of startup bros—sports an Apple Watch, and dabbles in winemaking. Warner’s dotcom-billionaire friends compare their new Teslas and their private helicopters even as Warner’s political career has thrived thanks to his commitment to the same rural Americans who supported Trump.


Now, as the man who represents one of the country’s most defense-heavy states—home to the Pentagon and the headquarters of 10 of the nation’s 17 intelligence agencies—Warner is pressing his colleagues and the US government to reckon with a new age of asymmetric warfare and information operations, a geopolitical landscape where America’s massive Air Force wings, naval fleets, and Army tanks are of little help against Twitter trolls, Facebook bots, deep fakes, and all manner of emerging threats.

As Warner reminds people in almost every set of public remarks, Russia surely spent less on its 2016 election attack than the cost of a single US F-35 fighter jet. “We’re buying 20th-century military stuff, when the conflict in the 21st century is going to be disproportionately in the realm of cyber and misinformation, disinformation, the ability to take down someone’s water system,” he says.

While his current role certainly isn’t where he expected to end up, there’s a deeply familiar aspect to his evolution for someone who has invested in hundreds of startups as a venture capitalist: His experience in the tech world, after all, taught him the art of the pivot. Warner is the first to say that he’s never invested in an entrepreneur who succeeded with their original business plan. “It’s the ones who can shift that succeed,” he told students at his alma mater, George Washington University, while recounting his days in the business world.

He’s followed his own advice in politics too: His recent legislative success and leadership on Capitol Hill occurred only after several aborted attempts to carve out his future in politics. After all, it wasn’t too long ago that Mark Warner—who lately is spending his days as one of the jurors sitting in President Trump’s historic impeachment trial—probably ranked as the most miserable and frustrated man in the Senate.

Warner says the US is fighting a new age of warfare with the wrong weapons: "We’re buying 20th-century military stuff, when the conflict in the 21st century is going to be disproportionately in the realm of cyber and misinformation, disinformation."

Photograph: Jared Soares


Warner has long been an intensely political—and social—animal. Raised in a working-class family in Indiana, he came to Washington to attend GWU; he interned on Capitol Hill, and became valedictorian and the first in his family to graduate college. He attended Harvard Law School, where he excelled more outside the classroom—as his class’ unofficial social coordinator and the official women’s intramural basketball coach—than in.

Politics, not law, was always his goal. But first he set out to make his fortune in business. Working at the Democratic National Committee in 1980, he found himself haunted by the plight of an unsuccessful Connecticut congressional candidate who finished his race $300,000 in debt. He promised himself that he wouldn’t enter politics until he could afford to lose. Amazingly, after two ventures that failed quickly and brought him to near-ruin—friends recall him couchsurfing with just a 1963 Buick to his name—he succeeded wildly.


With help from a former Atlanta Hawks player, Tom McMillen, Warner realized in the early 1980s that the government was all but giving away radio spectrum that would prove key to the emerging technology of cellular phones. At the time, the licenses were distributed by lottery—but many winners had no real ability to use the spectrum they’d won, so Warner positioned himself as a crucial connector and middleman in what emerged as a market for licenses, perfecting a model where he assembled teams of investors and navigated the license bureaucracy, keeping parts of each deal for himself. In 1987 he helped a business associate found a company called Fleet Call, which grew into Nextel.

Money would never be a concern again.

Freed from financial worry, he turned back to politics, helping to run then Virginia lieutenant governor Doug Wilder’s campaign for the state’s top office, a race Wilder won narrowly, becoming the nation's first elected African American governor. Wilder made Warner the head of the state Democratic Party, and by 1996 he felt ready to challenge the state’s veteran senator, Capitol Hill powerhouse John Warner (no relation). He poured $10 million of his own money into the challenge. Campaign signs that year read “Mark, Not John,” a message that didn’t resonate with every voter: One driver stopped on the side of the road and asked candidate Mark: “Excuse me, sir, is that a biblical reference?”

Mark lost the election, leading to his second stroke of business luck: He was unemployed just as the dotcom boom started. Closely tied into the Northern Virginia tech world—his friends were busy founding a company called America Online—Warner helmed a venture capital fund and was a key figure in an elite social group known as Capital Investors, which brought together the area’s tech leaders for monthly startup pitches. (The group was as much an adult frat as an investment club; one club social at Warner’s house featured AOL’s Steve Case jumping on his bed.) Warner also kept an eye on his political profile, using his tech money to help build job-training and computer skills programs for southwest and Southside Virginia, the commonwealth’s rural regions that had been hit hard by the collapse of the tobacco and textile industries.

That commitment to rural Virginia—which has long been part political strategy and part genuine, sincere cause—proved key to his 2001 run for governor, which he undertook at a moment when Democrats did not control a single statewide office in the commonwealth. Warner, though, bonded with a redneck political consultant named David “Mudcat” Saunders, who built a campaign that paved a path for Democrats in a red state—recruiting sportsmen and hunters in rural Virginia, sponsoring a NASCAR truck, and even penning a bluegrass song that emphasized how Warner, who is rarely seen out of a button-down shirt and khakis, “understands our people, the folks up in the hills.”

The song promoted Warner’s determination “to keep our children home,” a message that he would deliver jobs to rural communities used to seeing their most promising graduates leave for opportunities in the big city. He even convinced the NRA to stay out of the race, promising he’d take care of gun rights. Warner won, handily. “We had a good horse,” Saunders told me, years ago. “You can’t win the Kentucky Derby with a mule.”

Warner, though, actually did deliver for rural Virginia—building more than 700 miles of broadband cable that brought the internet to 700,000 Virginians (and, for good measure, closing the $3.8 billion budget deficit he inherited). He pitched “farm-shoring,” or the idea that rural America could be cost-competitive with emerging offshore tech hubs like Bangalore, and brought the state’s jobless rate to the second lowest in the country.


I first met Warner in 2005, as he was preparing to leave office as Virginia’s governor—the commonwealth has a unique, single four-year term limit—and mulling a run for the presidency, a job he had long coveted. On our first day together—the first of many, as I followed him for months on the campaign trail, to New Hampshire, Iowa, Nevada, and a host of other stops—I accompanied him as he took a victory lap through rural Virginia, helping to lay broadband cable outside Appomattox.

Warner—wherever he is—is a talker, the life of the party, and that day he was only supposed to ride the cable-laying bulldozer a few yards. But he got to chatting with the ’dozer crew, and as the press, local officials, and assembled schoolkids watched, the governor and the bulldozer got farther and farther away, eventually disappearing over the hill, cable steadily unspooling behind. When he finally returned, he sighed. “The truth is, if I had another go-round [at being governor], I’d take it. There’s a lot of unfinished business.”

Warner set out on the presidential campaign trail in 2006 with a message ahead of its time, talking about how globalization was reshaping work and how towns and cities skipped by broadband would be more economically disadvantaged in the 21st century than those bypassed by the railroads in the 1800s. He said the social safety net needed to be reimagined for an age when workers hopped across jobs and professions.

As he explained at every stop, America’s political differences weren’t between Democrats and Republicans; they were between those who wanted to reclaim the glories of the industrial 1950s versus those who understood the coming technological upheaval: “It’s not about left versus right,” he was fond of saying. “It’s about future versus past.” His hopeful and full-throated embrace of the future was, in many respects, the precise opposite of the backward-looking “Make America Great Again” message that would propel Donald Trump to the presidency a decade later.

As he criss-crossed the country testing the presidential waters, Warner brought along a built-in party—and not the political kind. By nature gregarious and fun, he almost always had one of his wealthy tech friends along for campaign swings. At the end of a long day on the trail, in whatever small place he found himself after another day of “future versus past” speechifying, he was ready to party.

The entourage would drop their bags at the hotel and then find a nearby bar, rolling in for a night of drinking and pool. The evening excursions were a reporter’s dream, as his wealthy friends would compete to pick up the tab. Warner seems to view almost every human interaction as a chance to make a new friend. “Some days I say, ‘Aren’t our current friends enough?’” his wife, Lisa Collis, told me years ago. (Apropos for the party-loving Warner, the couple met at a keg bash in 1984.)

His cell phone fortune became a stump speech punch line on the campaign trail—he’d joke with crowds that he was fine if people left their phones on while he spoke. “Most people consider them an annoyance, but I just hear ‘cha-ching, cha-ching,’” he’d tease. (Today, he’s the fourth-wealthiest member of the Senate—behind Georgia’s Kelly Loeffler, Utah’s Mitt Romney, and Florida’s Rick Scott—with a net worth of about $90 million.)

Ultimately, though, Warner passed on the presidential race. Over dinner at a Virginia restaurant with friends in fall 2006, he walked through the pros and cons and decided he’d rather spend the next years present with his family; his three daughters were growing up fast, and he didn’t want to miss their childhood.


So instead of the White House, Warner pivoted and set his sights on Virginia’s 2008 US Senate race; a red-state success story, he delivered the keynote at the 2008 Democratic convention, the same slot that Barack Obama had used in 2004 to catapult himself to national notice. Warner trounced his opponent, helping to deliver Virginia’s electoral votes to Obama along the way.

Office of Senator Mark Warner in the Hart Senate Office Building in Washington, D.C.

Photograph: Jared Soares

Among the keepsakes in his Senate office, Warner treasures a hat commemorating the USS John Warner.

Photograph: Jared Soares


Warner became a member of the institution known as the world’s greatest deliberative body in January 2009—and quickly discovered that he hated almost every single minute of being a senator. By experience and predisposition he was an executive, not a legislator. He liked disruptive ideas, sweeping change, and quick action—not long negotiations marked by tiny advances. “One of the conclusions that I’ve unfortunately come to is that so many of the issues have been litigated and relitigated and relitigated—from tax policy to the deficit to health care to education. One party or another might make some incremental progress, but short of some massive election swing, we’re still fighting in the same place,” he says.

At first, Warner’s interests trended toward finance, but he didn’t get the slot he coveted on the Finance Committee, and he chafed in a Senate led by Nevada’s Harry Reid. In the summer of 2010, he and Georgia’s Saxby Chambliss, a Republican, got to chatting on the Senate floor and saw an opportunity for a big breakthrough on fiscal issues: They gathered a group of moderates into what became known as the “Gang of Six” to attempt a grand compromise on the nation’s debt and deficits.

By negotiating on taxes, spending cuts, Social Security, and deficits, they saw a path to saving the treasury $3.7 trillion. The grand bargain failed. Only later did Warner realize that entrenched leadership on both sides of the aisle had little interest in such an effort. “The forces of the status quo on both sides came crashing down,” he says.

It was a dark time for Warner, who felt he was suffering through what should have been a dream job. “I was frustrated, but I was also self-aware enough to know that I get to do this job on terms very few people get to do,” he says. “I needed a better attitude.”

So the one-time entrepreneur turned his attention to the gig economy, championing legislation known as the Startup Act, and confronting questions he summarized as “Can you make capitalism work a better way? What’s the new social contract?”

Changes in the US economy and on Wall Street, he saw, meant that workplace instability was rising. He talked about how business incentives now favored short-term results over long-term investments—in both humans and capital. “I’m not sure the American post–World War II business environment could have been created if it had all started in the year ’95 or 2000,” he says. He notes that celebrated companies like Google and Facebook featured different classes of stock that have protected them from short-termism.

He was confronting what he calls “a growing feeling that modern American capitalism is not working for enough people.” As he says, “That is a pretty radical statement from somebody who’s been an entrepreneur.” He helped launch a new “Future of Work” initiative, housed at the Aspen Institute (where I also work on a separate, unrelated cybersecurity initiative), and proselytized about how to reshape the employer-employee relationship for the 21st century. In almost every conversation on the subject, he cites research from the Kauffman Foundation that found that since 1990 almost all net new jobs in the US have come from startups.

Until 2016, Warner thought that reforming the gig economy through legislation would be his life’s new cause. “I had found something I thought could suck up my energy, time, and curiosity,” he says. “As an old venture capitalist, I felt like I was in a brand new space.”


A couple of years ago, I ran into Warner at a glitzy party at the French Embassy in DC and teased him about how he was still using the same stump speech talking points—not left or right but future versus past. He argued, forcefully, that the world was finally catching up to where he knew things were heading. The theoretical problems he’d been talking about in 2006, the looming problems of automation, the workforce, and the gig economy, were now all coming clear a decade later. “Come on now, there’s some meat on those bones,” he told me.

But Warner’s career was set for one more big pivot.

Warner says he’s confronting “a growing feeling that modern American capitalism is not working for enough people.”

Photograph: Jared Soares

Another memento in Warner’s Senate office: a custom walking stick he received as a gift.

Photograph: Jared Soares


In 2016, Donald Trump won the office Warner had long coveted for himself, helped along by a Russian cyberattack and campaigning on a message about economic insecurity that raised many of the issues Warner had been talking about for a decade.

As the nation reeled—both from Trump’s surprise electoral college victory and the unprecedented attack by Russia on the foundations of American democracy—Warner, through a reshuffling of committee assignments, found himself the new vice-chair of Senate Intelligence, the top representative of the Democratic minority on the committee that would lead the body’s inquiry into Russia’s efforts.

Warner had never meant to end up on the Intelligence Committee. But his old Gang of Six partner Chambliss had served as the committee’s top Republican and recruited him onto it during the previous Congress. Warner was encouraged by his old campaign adversary, John Warner, to embrace the new assignment. (The two one-time opponents have become good friends, and Mark proudly keeps a USS John Warner hat in his Senate office.)

Chambliss says he had wanted Warner’s expertise on the committee in the wake of the Edward Snowden revelations, as telecommunications policy moved to the center of the intelligence community’s concerns. The body needed someone who knew telecoms, Chambliss thought, especially the constellation of communications satellites that represents the single priciest line item in the annual intelligence budget of $60 billion the committee oversees. “He understood satellites, and nobody else on the committee understood them beyond them being very expensive,” Chambliss recalls. “In the intelligence world, we deal with the telecom industry every day.”

After the 2016 election, as Warner and the new Republican chair, North Carolina’s Richard Burr, started their own investigation into Russia’s attack, the two senators made a pact: We’re not going to agree on everything, but no surprises. As the parallel investigation by the House Intelligence Committee devolved into a circuslike partisan farce under its Republican chair, Devin Nunes, the team behind the Senate inquiry worked tirelessly to maintain at least the veneer of bipartisanship. “It was obvious they were going to be the adults in the room,” says Chambliss, explaining that he gave Burr and Warner a firm message together at the start of their probe: “At the end of the day, it’s too important for the country for y’all to do this together and have a document that you can both sign.”

The committee’s bipartisan, we-do-everything-together approach wasn’t always easy, but, incredibly, Warner and Burr’s deal held. Whereas Nunes parroted Trump talking points and ultimately published a “final report” that Democrats refused to accept, exonerating the administration while ignoring and never examining large swaths of the swirling questions about Russia’s role in the attacks, Burr and Warner forged ahead with in-depth examinations of the information influence operations by the Internet Research Agency and the GRU. In the end they published two massive reports detailing precisely how Facebook trolls and Twitter bots amplified divisive messages, spread propaganda, and seeded disinformation into social media platforms.


Those reports remain a touchstone of the reality-based political establishment today, even as the president and his supporters continue to cast doubt on Russia’s involvement in election interference, preferring instead to cast blame on Ukraine—an obsession that led directly to Trump’s impeachment.

“My Republican colleagues knew I was not going to be a partisan flamethrower,” Warner says. “I’m proud of that traditional behavior in a world where there’s very little traditional behavior. The fact that that looks so good—that the bar had been set so low—was rewarding but a little surprising.”

As the Russia probe continued, the problem the country faced expanded in Warner’s mind from “just” a Russia problem to broader questions about the roles and power of the big tech platforms. “The Russian disinformation efforts opened the door to a whole host of Warner concerns about social media,” says Rachel Cohen, one of his senior staff. “People were telling him this wasn’t a disinformation problem—this was a platform problem.”

A particular turning point for Warner came when he hired as his senior policy adviser Rafi Martina, a one-time corporate tech lawyer who was beginning to argue for a radical rebalancing of tech’s power. As Warner explains, “Before all this unsavory behavior was starting to take place, he was already pointing out to me behavior by Google and Facebook and Twitter that was not great policy, not being fair to users. He was opening my eyes.” Then came the scandals over Cambridge Analytica’s use of Facebook data, broadening Warner’s concerns to not just the platforms and their algorithms but their use and retention of private user data too.

It was a moment of reckoning for someone who had long championed the new economy. “I was probably pretty naive,” Warner says. “I bought the story that these are only forces for good and are going to help everyone communicate better and build new communities. But in retrospect I think I'm probably pretty naive to not have thought through that anything that’s this big, there’s going to be a dark underbelly.”

Warner’s sense that major reforms and new legislation were needed to govern the tech landscape only grew last spring, when Mark Zuckerberg’s Senate hearing horrified him—both because of the lack of contrition from the Facebook cofounder and because his Capitol Hill colleagues fumbled even basic tech questions.

By last summer, he had come to believe that both Facebook and Twitter had been less than forthcoming to his committee, downplaying the extent of Russian efforts in the election—efforts made all too clear in special counsel Robert Mueller’s 2018 indictments of the Internet Research Agency and Russian military intelligence officers from the GRU. “They were just not straight with me for a long time,” Warner says.

Warner’s frustration with the tendency of the platforms to misuse and abuse their growing clout was evident in a groundbreaking 20-point white paper released by Warner’s office in 2018. It decried the power amassed by Twitter, Facebook, Google, and other tech platforms and proposed reforms to combat disinformation, protect user privacy, and promote competition.

Blandly titled “Potential Policy Proposals for Regulation of Social Media and Technology Firms,” the document actually represented one of the most serious attempts to outline a regulation regime for tech ever to come off Capitol Hill. As the 23-page paper—drafted by Warner and Martina—argued, “The speed with which these products have grown and come to dominate nearly every aspect of our social, political, and economic lives has in many ways obscured the shortcomings of their creators in anticipating the harmful effects of their use.”

The paper reflected Warner’s rising concern that there’s a fundamental rot at the center of the major sites dominating today’s online life: “These maybe were not only forces for good,” he says. Facebook, Twitter, Google, and other big players may trumpet how they’re changing the world, but, Warner argues, they don’t operate in the public interest—to inform people, to protect users’ privacy, to further our freedoms. They’re engineered to be addictive.


As he says, “You don’t follow a story about a bloody car wreck with a story about how somebody’s promoting good driving techniques. You follow it with something that’s slightly even more gruesome. That's what’s happening.”

In his paper, Warner speaks about the “duty” of tech platforms to police bots, calls for new disclosure requirements on online political advertisements and auditable algorithms, proposes new powers for the Federal Trade Commission to regulate privacy, and calls for comprehensive, European-style privacy legislation in the US.

He cites tech thinkers like Tristan Harris, Wael Ghonim, and Tom Wheeler, all of whom are part of the informal network of advisers Warner regularly consults. He endorses a concept floated by Yale Law professor Jack Balkin for tech platforms to become “information fiduciaries,” service providers with a special duty to protect and manage user data.

The irony of Warner leading the crackdown on the tech platforms is that he’s really a free-market capitalist at heart, viewing policy challenges often as market opportunities. During a visit to Norfolk, Virginia, he once pivoted from talking about the danger of flooding caused by climate change to suggest that the market for sump pumps looked bright. Similarly, he talks excitedly about how giving people stronger ownership over their own data online might open the way to new “data middlemen” who help negotiate prices and access with the platforms and advertisers.

The commitment to these wider questions, from deepfakes to quantum computing, is part of what has made Warner’s efforts in tech stand apart in a body that too often seems populated by Luddites. (In one particularly egregious example a year ago, Google’s CEO Sundar Pichai had to explain to one congressmember that Google didn’t make iPhones.)

When I tagged along on one of Warner’s visits to a home state tech company in Arlington, Warner enthusiastically lectured me on the insanity of the federal government’s internet of things procurement policies.

The senator, who seemingly can’t contain his own energy even when he wants to, mixed the tech talk with turn-by-turn directions for the aide driving our car. He’s been pushing the government to raise the security bar for incorporating IoT devices into federal networks and installations—he’s worried that too many technologies and devices are racing ahead of the government’s rules.

The federal government, he said, has proven it can’t even get many of the basics of cybersecurity right when it comes to securing databases and personnel information, so why, he asks, is it racing to incorporate IoT devices into government infrastructure? “If I worked at a rational place, we wouldn’t be increasing the attack surface exponentially,” he said, then he interrupted himself to tell the aide to move over to the middle lane.

It’s no surprise he knows the road better than the aide’s Google Maps. Warner, after all, is a creature of the capital—he’d never call it a swamp—and has made the region home almost continuously since he arrived to attend GW. His annual Pilgrim’s Lunch—a rowdy day-before-Thanksgiving gathering of the region’s elite at DC’s fading power lunch spot, The Palm, where the meal stretches to five hours or more—has been a tradition in the capital for decades. He watched the Pentagon burn on 9/11 from the roof of his gubernatorial campaign headquarters, and unlike most of his congressional colleagues, who commute in from their districts on Mondays and race home on Thursdays, he lives in Old Town Alexandria, not far from his workplace. “To misquote Sarah Palin, I can see the Capitol from the third floor of my house,” he jokes.



Warner’s own evolving views on technology have proven to be in the vanguard of a sweeping sea change in the way tech is seen on Capitol Hill and on the presidential campaign trail. The idea—all but unthinkable just a few years ago—that Big Tech is, well, too big, dangerous to our democracy, and dangerous for our health as consumers, has spread rapidly.

In May, one of Warner’s Senate colleagues, Missouri’s Josh Hawley, labeled platforms like Facebook, Instagram, and Twitter a “digital drug” in a USA Today op-ed, arguing, “Maybe social media’s innovations do our country more harm than good. Maybe social media is best understood as a parasite on productive investment, on meaningful relationships, on a healthy society. Maybe we’d be better off if Facebook disappeared.”

Other measures on Capitol Hill have tried to rein in just-around-the-corner technologies like facial recognition. The current 116th Congress has seen Senate efforts both to regulate corporate use of facial recognition and to put limits on its use by police.

In some ways, Warner’s views—as unexpected as they are for someone of his background and as radical as they would have sounded even three years ago—now represent the moderate view of his party.

As she’s campaigned for president, his Senate colleague Elizabeth Warren has gone even further, calling for the outright breakup of Amazon, Facebook, and Google. As she wrote in a post on Medium in March, “Today’s big tech companies have too much power—too much power over our economy, our society, and our democracy. They’ve bulldozed competition, used our private information for profit, and tilted the playing field against everyone else. And in the process, they have hurt small businesses and stifled innovation.” Senator Bernie Sanders has backed similar ideas, and one of the main lines of attack on Pete Buttigieg has been his friendship with Harvard classmate Mark Zuckerberg.

Yet even as it seemed that most of his fellow Democratic senators—Warren, Sanders, Michael Bennet, and, until recently, Kamala Harris and Cory Booker—were running for president, and even as Mike Bloomberg and Deval Patrick leapt into an ever-shifting field of candidates, Warner’s name never surfaced for 2020. He has long harbored presidential ambitions, but today they appear all but on ice.

Instead, Warner has committed himself to remaining in the Senate, where he’s happier than he’s ever been, even amid the administration’s daily chaos and the negativity that pervades the capital.

“What’s given me the new lease on energy and enthusiasm about the job has clearly been the Russian issue,” he says. “Even if I’m not successful in these other areas, getting [the Russia probe] right, at least for the time being, is probably the most important thing I’ve done, which I don’t say lightly considering all the aspirations I’ve had my whole career.”

It’s those big geopolitical questions—and their intersection with technology and national security—that most animate Warner. One of the oddities of work on the Senate Intelligence Committee is that much of the heavy lifting is done by the members themselves—the classified nature of the work severely limits how much can be delegated to staff—and so he’s spent many hours sifting through the evidence of Russia’s attack in 2016, listening to briefings from government officials on emerging threats, and examining how and where the US government is spending its resources. He’s clearly not happy with what he’s learning.

As much as his work publicly has focused on the Russia probe, he says that what he’s hearing behind the closed doors of the Intelligence Committee’s workspace makes him worry as much, or even more, about China. “Russia is a more malicious actor. China is a more insidious actor,” he tells me. “My views on China are radically different than what they were three years ago.”


It’s another area where he’s found common bipartisan ground, working closely on China and 5G issues with Florida’s Marco Rubio. Together, they asked the intelligence community in early March for a report on how China was exerting pressure and influence on international standard-setting bodies related to 5G, noting “anecdotal concerns” that China is undermining what have long been “technological meritocracies.” He’s also been cohosting, with Rubio, classified briefings for tech leaders and venture capitalists to hear about the threat from China and discuss how the US should counter China’s efforts in areas like artificial intelligence and quantum computing.

He fears that the US is moving too slowly to counter China’s march in tech, repeating mistakes the government made in failing to recognize legitimate fears about embracing Russian technology. “We would sit in the intel committee and hear for years about Kaspersky Labs. It took us three or four years to push the intelligence community to say, ‘You can’t just tell us. You’ve got to get the stuff off the damn GSA acquisition list.’ You take that times 20 with the Chinese,” he says. “If we don’t do more of this, people will look back on Congress and the intel community and certain business leadership and say, ‘What in the hell were you people thinking?’”

He sees his new 5G legislation to combat Huawei—known in the always-acronymized style of Capitol Hill as the Utilizing Strategic Allied (USA) Telecommunications Act—as a step on that path. The proposed bill also comes with the backing of Republican senators Rubio and John Cornyn, along with Democrats Bob Menendez and Colorado moderate Michael Bennet, all of whom serve on the body’s intelligence or foreign relations committees. It attempts to counter Huawei’s perceived lead on 5G by earmarking at least $1 billion in investments in Western alternatives and encouraging the development of an open-architecture model to allow companies to bite off smaller pieces of the 5G network.

Not even the heated, partisan impeachment trial can distract Warner from raising the alarm: The day after Chief Justice John Roberts swore Warner and 98 other senators in as jurors, Warner again took to the airwaves to push his effort to confront China’s technological advances. His message was clear: “5G and the issue of Huawei has, over the last year, been a bipartisan issue,” he told Bloomberg TV in the rotunda of the Russell Senate Office Building. “This is one area where there are a lot of us who are in agreement with the administration.”

One reason Warner says he’s so committed to regulating Big Tech is, paradoxically, the need to preserve tech as a uniquely American strength. As Warner sees it, America’s failure to act has ceded its traditional leadership role to others—to Europe on consumer privacy and to the UK and Australia on content restrictions.

In the absence of federal action, individual states like California are now taking the lead on regulating tech, a potentially troublesome precedent that Warner fears could lead to a patchwork of laws that slow innovation and retard growth. Letting others—whether Europe or China—set the rules of the road for technology is dangerous, he says, both in terms of American values and economic growth. In Warner’s mind, saving tech as an economic driver for the United States might mean blowing up Big Tech as we know it.

He hopes that his work will help lead the nation forward into the next tech age. Warner tells me that he’s already seen the conversation shift, dramatically even, as the country has reckoned with the twin scandals of the Russia attack and general abuse of the tech platforms. “On the Hill, there’s very much a mind change. We can’t continue to be victims all the time online,” he says. “We can’t keep getting pummeled.” Or, to put it another way, the senior senator from Virginia may have found his next bulldozer to ride over the hill.


Read more: https://www.wired.com/story/mark-warner-takes-on-big-tech-and-russian-spies/

Related Articles

Markets Are Eating The World

For the last hundred years, individuals have worked for firms, and, by historical standards, large ones.

That many of us live in suburbs and drive our cars into the city to go to work at a large office building is so normal that it seems like it has always been this way. Of course, it hasn’t. In 1870, almost 50 percent of the U.S. population was employed in agriculture.[1] As of 2008, less than 2 percent of the population was directly employed in agriculture, but many people worked for these relatively new things called “corporations.”[2]

Many internet pioneers in the ’90s believed that the internet would start to break up corporations by letting people communicate and organize over a vast, open network. This reality has sort of played out: the “gig economy” and rise in freelancing are persistent, if not explosive, trends. With the re-emergence of blockchain technology, talk of “the death of the firm” has returned. Is there reason to think this time will be different?

To understand why this time might (or might not) be different, let us first take a brief look back into Coasean economics and mechanical clocks.

In his 1937 paper, “The Nature of the Firm,” economist R.H. Coase asked, “If markets were as efficient as economists believed at the time, why do firms exist at all? Why don’t entrepreneurs just go out and hire contractors for every task they need to get done?”[3]

If an entrepreneur hires employees, she has to pay them whether they are working or not. Contractors only get paid for the work they actually do. While the firm itself interacts with the market, buying supplies from suppliers and selling products or services to customers, the employees inside of it are insulated. Each employee does not renegotiate their compensation every time they are asked to do something new. But, why not?

Coase’s answer was transaction costs. Contracting out individual tasks can be more expensive than just keeping someone on the payroll because each task involves transaction costs.

Imagine if, instead of answering every email yourself, you hired a contractor who was better than you at dealing with the particular issue in that email. However, it costs you something to find them. Once you found them, you would have to bargain and agree on a price for their services, then get them to sign a contract, and potentially take them to court if they didn’t answer the email as stipulated in the contract.

Duke economist Mike Munger calls these three types of transaction costs triangulation (how hard it is to find and measure the quality of a service), transfer (how hard it is to bargain and agree on a contract for the good or service), and trust (whether the counterparty is trustworthy and whether you have recourse if they aren’t).

You might as well just answer the email yourself or, as some executives do, hire a full-time executive assistant. Even if the executive assistant isn’t busy all the time, it’s still better than hiring someone one-off for every email or even every day.

Coase’s thesis was that in the presence of these transaction costs, firms will grow larger as long as they can benefit from doing tasks in-house rather than incurring the transaction costs of having to go out and search, bargain and enforce a contract in the market. They will expand or shrink until the cost of making it in the firm equals the cost of buying it on the market.

The lower the transaction costs are, the more efficient markets will be, and the smaller firms will be.
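To make Coase’s margin concrete, here is a minimal, purely illustrative sketch in Python. The Task structure and the numbers are invented for this example (they come from neither Coase nor Munger); the point is only that a task stays inside the firm while doing it in-house is cheaper than the market price plus the three transaction costs.

```python
# Illustrative only: invented numbers, not data from the essay.
from dataclasses import dataclass

@dataclass
class Task:
    in_house_cost: float   # cost of doing the task inside the firm
    market_price: float    # what a contractor charges for the same task
    triangulation: float   # cost to find the contractor and judge quality
    transfer: float        # cost to bargain and agree on a contract
    trust: float           # cost of enforcement, or the risk of being burned

    def buy_cost(self) -> float:
        """Total cost of buying the task on the market."""
        return self.market_price + self.triangulation + self.transfer + self.trust

    def keep_in_house(self) -> bool:
        """Coase's margin: make it as long as making is cheaper than buying."""
        return self.in_house_cost < self.buy_cost()

# High transaction costs: the firm absorbs the task.
email = Task(in_house_cost=40, market_price=25, triangulation=10, transfer=10, trust=10)
print(email.keep_in_house())        # True  -> do it inside the firm

# Technology lowers the transaction costs: the same task moves to the market.
email_online = Task(in_house_cost=40, market_price=25, triangulation=2, transfer=2, trust=2)
print(email_online.keep_in_house()) # False -> buy it on the market
```

Shrink the three transaction-cost terms and the same task flips from “make” to “buy,” which is the essay’s claim in miniature: cheaper transactions mean smaller firms.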

In a world where markets were extremely efficient, it would be very easy to find and measure things (low triangulation costs), it would be very easy to bargain and pay (low transfer costs), and it would be easy to trust the counterparty to fulfill the contract (low trust costs).

In that world, the optimal size of the firm is one person (or a very few people). There’s no reason to have a firm because business owners can just buy anything they need on a one-off basis from the market.[4] Most people wouldn’t have full-time jobs; they would do contract work.

Consumers would need to own very few things. If you needed a fruit dehydrator to prepare for a camping trip twice a year, you could rent one quickly and cheaply. If you wanted to take your family to the beach twice a year, you could easily rent a place just for the days you were there.

On the other hand, in a world that was extremely inefficient, it would be hard to find and measure things (high triangulation costs), it would be difficult to bargain and pay (high transfer costs) and it would be difficult to trust the counterparty to fulfill the contract (high trust costs).

In that world, firms would tend to be large. It would be inefficient to buy things from the market, and so entrepreneurs would tend to accumulate large payrolls. Most people would work full-time jobs for large firms. If you wanted to take your family to the beach twice a year, you would need to own the beach house, because it would be too inefficient to rent, which was the reality before online marketplaces like Airbnb showed up.

Consumers would need to own nearly everything they might conceivably need. Even if they only used their fruit dehydrator twice a year, they’d need to own it because the transaction costs involved in renting it would be too high.

If the structure of the economy is based on transaction costs, then what determines them?

Technological Eras and Transaction Costs

The primary determinant of transaction costs is technology.

The development of the wheel and the domestication of horses and oxen decreased transfer costs by making it possible to move more goods further. Farmers who could bring their crops to market using an ox cart rather than carrying them by hand could charge less and still make the same profit.

The development of the modern legal system reduced the transaction cost of trust. It was possible to trust that your counterparty would fulfill their contract because they knew you had recourse if they didn’t.

The list goes on: standardized weights and  measures, the sail, the compass, the printing press, the limited liability corporation, canals, phones, warranties, container ships and, more recently, smartphones and the internet.

It’s hard to appreciate how impactful many of these technologies have been, because most of them had become so common by the time most of us were born that we take them for granted.

As the author Douglas Adams said, “Anything that is in the world when you’re born is normal and ordinary and is just a natural part of the way the world works. Anything that’s invented between when you’re fifteen and thirty-five is new and exciting and revolutionary and you can probably get a career in it. Anything invented after you’re thirty-five is against the natural order of things.”

To see how technology affects transaction costs, and how that affects the way our society is organized, let’s consider something which we all think of as “normal and ordinary,”  but which has had a huge impact on our lives: the mechanical clock.

The Unreasonable Effectiveness of the Mechanical Clock

In 1314, the city of Caen installed a mechanical clock with the following inscription: “I give the hours voice to make the common folk rejoice.” “Rejoice” is a pretty strong reaction to a clock, but it wasn’t overstated; everyone in Caen was pretty jazzed about the mechanical clock. Why?

The fact that we have jobs today, as opposed to working as slaves or serfs bonded to the land as was common in the feudal system, is in large part a direct result of the clock.

Time was important before the invention of the clock but was very hard to measure. Rome was full of sundials, and medieval Europe’s bell towers, where time was tolled, were the tallest structures in town.[5]

This was not cheap. In the larger and more important belfries, two bell-ringers lived full time, each serving as a check on the other. The bells themselves were usually financed by local guilds that relied on the time kept to tell their workers when they had to start working and when they could go home.

This system was problematic for a few reasons.

For one, it was expensive. Imagine if you had to pool funds together with your neighbors to hire two guys to sit in the tower down the street full time and ring the bell to wake you up in the morning.

For another, the bell could only signal a few events per day. If you wanted to organize a lunch meeting with a friend, you couldn’t ask the belltower to toll just for you. Medieval bell towers had not yet developed snooze functionality.

Finally, sundials suffered from accuracy problems. Something as common as clouds could make it difficult to tell precisely when dawn, dusk, and midday occurred.

In the 14th and 15th centuries, the expensive bell towers of Europe’s main cities got a snazzy upgrade that dramatically reduced transaction costs: the mechanical clock.

The key technological breakthrough that made this development possible was the escapement.

The escapement transfers energy to the clock’s pendulum to replace the energy lost to friction and keep it on time. Each swing of the pendulum releases a tooth of the escapement’s wheel gear, allowing the clock’s gear train to advance or “escape” by a set amount. This moves the clock’s hands forward at a steady rate.[6]
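Purely as an illustration of the mechanism just described (with invented parameters, not historical ones), a tiny simulation shows why the escapement produces a steady rate: each swing releases exactly one tooth, so elapsed time and hand position advance by a fixed amount per swing.

```python
# Toy model of an escapement; the numbers are invented for illustration.

SECONDS_PER_SWING = 2.0    # assume one pendulum swing takes two seconds
DEGREES_PER_TOOTH = 6.0    # assume each released tooth advances the minute hand six degrees

def run_clock(swings: int) -> tuple[float, float]:
    """Advance the clock by a number of pendulum swings.

    Each swing releases exactly one tooth of the escape wheel, so the
    gear train (and the hands) move forward by a fixed step every time.
    Returns (elapsed_seconds, minute_hand_position_in_degrees).
    """
    elapsed = 0.0
    minute_hand = 0.0
    for _ in range(swings):
        elapsed += SECONDS_PER_SWING                         # time accumulates at a constant rate
        minute_hand = (minute_hand + DEGREES_PER_TOOTH) % 360.0
    return elapsed, minute_hand

print(run_clock(30))   # 30 swings -> (60.0, 180.0) in this toy model
```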

The accuracy of early mechanical clocks, plus or minus 10-15 minutes per day, was not notably better than late water clocks and less accurate than the sandglass, yet mechanical clocks became widespread. Why?

  1. Its automatic striking feature meant the clock could be struck every hour at lower cost, making it easier to schedule events than when the bell was struck only at dawn, dusk, and noon.
  2. It was more provably fair than the alternatives, which gave all parties greater confidence that the time being struck was accurate. (Workers were often suspicious that employers could bribe or coerce the bell-ringers to extend the workday, which was harder to do with a mechanical clock.)

Mechanical clocks broadcast by bell towers provided a fair (lower trust costs) and fungible [7] (lower transfer costs) measure of time. Each hour rung on the bell tower could be trusted to be the same length as another hour.

Most workers in the modern economy earn money based on a time-rate, whether the time period is an hour, a day, a week or a month. This is possible only because we have a measure of time which both employer and employee agree upon. If you hire someone to pressure-wash your garage for an hour, you may argue with them over the quality of the work, but you can both easily agree whether they spent an hour in the garage.

Prior to the advent of the mechanical clock, slavery and serfdom were the primary economic relationships, in part because the transaction cost of measuring time beyond just sunup and sundown was so high that workers were chained to their masters or lords.[8]

The employer is then able to use promotions, raises, and firing to incentivize employees to produce quality services during the time they are being paid for.[9]

In a system based on time-rate wages rather than slavery or serfdom, workers have a choice. If the talented blacksmith can get a higher time-rate wage from a competitor, she’s able to go work for them because there is an objective, fungible measure of time she’s able to trade.

As history has shown, this was a major productivity and quality-of-life improvement for both parties.[10]

It gradually became clear that mechanical time opened up entirely new categories of economic organization and productivity that had hitherto been not just impossible, but unimaginable.

We could look at almost any technology listed above (standardized weights and measures, the sail, the compass, the printing press, etc.) and do a similar analysis of how it affected transaction costs and eventually how it affected society as a result.

The primary effect is an increase in what we will call coordination scalability.

Coordination Scalability

“It is a profoundly erroneous truism, repeated by all copy-books and by eminent people when they are making speeches, that we should cultivate the habit of thinking what we are doing. The precise opposite is the case. Civilization advances by extending the number of important operations which we can perform without thinking about them.” –Alfred North Whitehead

About 70,000 years ago, there were between six and ten species of the genus Homo. Now, of course, there is just one: Homo sapiens. Why did Homo sapiens prevail over the other species, like Homo neanderthalensis?

Homo sapiens prevailed because of their ability to coordinate. Coordination was made possible by increased neocortical size, which led to an ability to work together in large groups, not just as single individuals. Instead of single individuals hunting, groups could hunt and bring down larger prey more safely and efficiently.[11]

The brain of Homo sapiens has proven able to invent other, external structures which further increased coordination scalability by expanding the network of other people we could rely on.

Maybe the most important of these was language, but we have evolved many others since, including the mechanical clock.

The increased brain size has driven our species through four coordination revolutions: Neolithic, Industrial, Computing, Blockchain.

Neolithic Era: The Emergence of Division of Labor

The first economic revolution was the shift of Homo sapiens from hunter-gatherers to farmers.

Coordination scalability among hunter-gatherers was limited to the size of the band, which tended to range from 15 to 150 individuals.[12] The abandonment of a nomadic way of life and move to agriculture changed this by allowing specialization and the formation of cities.

Agriculture meant that people could, for the first time, accumulate wealth. Farmers could save excess crops to eat later or trade them for farming equipment, baskets or decorations. The problem was that this wealth was suddenly worth stealing and so farmers needed to defend their wealth.

Neolithic societies typically consisted of groups of farmers protected by what Mancur Olson called “stationary bandits,” basically warlords.[13] This allowed the emergence of much greater specialization. Farmers accumulated wealth and paid some to the warlords for protection, but even then there was still some left over, making it possible for individuals to specialize.

A city of 10,000 people requires, but also makes possible, specialists.

The limits of coordination scalability increased from 150 to thousands or, in some cases, tens of thousands. This was not necessarily a boon to human happiness. Anthropologist Jared Diamond called the move to agriculture “the worst mistake in the history of the human race.”[14] The quality of life for individuals declined: lifespans shortened, nutrition was worse leading to smaller stature, and disease was more prevalent.

But this shift was irresistible because specialization created so much more wealth and power that groups which adopted it came to dominate those that didn’t. The economies of scale in military specialization, in particular, were overwhelming. Hunter-gatherers couldn’t compete.

In the Neolithic era, the State was the limit of coordination scalability.

Industrial Era: Division of Labor Is Eating the World

Alongside the city-state, a new technology started to emerge that would further increase the limits of coordination scalability: money. To illustrate, let us take the European case, from ancient Greece to modernity, though the path in other parts of the world was broadly similar. Around 630 B.C., the Lydian kings recognized the need for small, easily transported coins worth no more than a few days’ labor. They made these ingots in a standard size (about the size of a thumbnail) and weight, and stamped an emblem of a lion’s head on them.

This eliminated one of the most time-consuming (and highest transaction cost) steps in commerce: weighing gold and silver ingots each time a transaction was made. Merchants could easily count the number of coins without worrying about cheating.

Prior to the invention of coins, trade had been limited to big commercial transactions, like buying a herd of cattle. With the reduced transfer cost facilitated by coins, Lydians began trading in the daily necessities of life: grain, olive oil, beer, wine, and wood.[15]

The variety and abundance of goods which could suddenly be traded led to another innovation: the retail market.

Previously, buyers had to go to the home of sellers of whatever they needed. If you needed olive oil, you had to walk over to the olive oil lady’s house to get it. With the amount of trade that began happening after coinage, a central market emerged. Small stalls lined the market where each merchant specialized in (and so could produce more efficiently) a particular good: meat, grain, jewelry, bread, cloth, etc. Instead of having to go to the olive oil lady’s house, you could go to her stall and pick up bread from the baker while you were there.

From this retail market in Lydia sprang the Greek agora, the medieval market squares of Europe, the suburban shopping mall and, eventually, the “online shopping malls” Amazon and Google. Though markets were around as early as 7th-century BCE Lydia, they really hit their stride with the Industrial Revolution in the 18th century.[16]

Adam Smith was the first to describe in detail the effect of this marketization of the world. Markets made it possible to promote the division of labor across political units, not just within them. Instead of each city or country manufacturing all the goods they needed, different political entities could further divide labor. Coordination scalability started to stretch across political borders.

Coming back to Coase, firms will expand or shrink until the cost of “making” equals the cost of “buying.” In the Industrial era, transaction costs made administrative and managerial coordination (making) more efficient than market coordination (buying) for most industries, which led to the rise of large firms.

The major efficiency gain of Industrial-era companies over their more “artisanal” forebears was that, using the techniques of mass production, they could produce higher-quality products at a lower price. This was possible only if they were able to enforce standards throughout the supply chain. The triangulation transaction cost can be broken down into search and measurement: a company needed to find the vendor and to be able to measure the quality of the good or service.

In the early Industrial era, the supply chain was extremely fragmented. By bringing all the pieces into the firm, a large vertically integrated company could be more efficient.[17]

As an example, in the 1860s and 1870s, the Carnegie Corporation purchased mines to ensure it had reliable access to the iron ore and coke it needed to make steel. The upstream suppliers were unreliable and non-standardized, and the Carnegie Corporation could lower the cost of production by simply owning the whole supply chain.

This was the case in nearly every industry. By bringing many discrete entities under one roof and one system of coordination, firms captured greater economic efficiencies. The multi-unit business corporation replaced the small, single-unit enterprise because administrative coordination enabled greater productivity through lower transaction costs per task than had been possible before. Economies of scale flourished.

This system of large firms connected by markets greatly increased coordination scalability. Large multinational firms could stretch across political boundaries and provide goods and services more efficiently.

In Henry Ford’s world, the point where the cost of making equaled the cost of buying encompassed nearly the entire production process. Ford built a giant plant at River Rouge, just outside Detroit, between 1917 and 1928 that took in iron ore and rubber at one end and sent cars out the other. At the factory’s peak, 100,000 people worked there. These economies of scale allowed Ford to dramatically drive down the cost of an automobile, making it possible for the middle class to own a car.[18]

As with Carnegie, Ford learned that supplier networks take a while to emerge and grow into something reliable. In 1917, doing everything himself was the only way to get the scale he needed to be able to make an affordable car.

One of the implications of this model was that industrial businesses required huge startup costs.

Any entrepreneur who hoped to compete had to start with a similarly massive amount of capital, enough to build a factory large and efficient enough to rival Ford’s.

For workers, this meant that someone in a specialized role, like an electrical engineer or an underwriter, did not freelance or work for small businesses. Because the most efficient way to produce products was in large organizations, specialized workers could earn the most by working inside large organizations, be they Ford, AT&T or Chase Bank.

At the peak of the Industrial era, there were two dominant institutions: firms and markets.

Work inside the firm allowed for greater organization and specialization, which, in the presence of high transaction costs, was more economically efficient.

Markets were more chaotic and less organized, but also more motivating. Henry Ford engaged with the market and made out just a touch better than any of his workers; there just wasn’t room for many Henry Fords.

This started to dissolve in the second half of the 20th century. Ford no longer takes iron ore and rubber as the inputs to its factories; instead it has a vast network of upstream suppliers.[19] The design and manufacturing of car parts now happens over a long supply chain, which the car companies ultimately assemble and sell.

One reason is that supplier networks became more standardized and reliable. Ford can now buy ball bearings and brake pads more efficiently than it can make them, so it does. Each company in the supply chain focuses on what it knows best, and competition forces it to constantly improve.

By the 1880s, it cost Carnegie more to operate the coke ovens in-house than to buy coke from an independent source, so he sold off the coke ovens and bought on the open market. Reduced transaction costs, in the form of more standardized and reliable production technology, caused both Ford and the Carnegie Corporation to shrink, just as Coase’s theory would suggest.

The second reason is that if you want to make a car using a network of cooperating companies, you have to be able to coordinate their efforts, and you can do that much better with telecommunication technology broadly and computers specifically. Computers reduce the transaction costs that Coase argued are the raison d’être of corporations. That is a fundamental change.[20]

The Computing Era: Software Is Eating the World

Computers, and the software and networks built on top of them, had a new economic logic driven by lower transaction costs.

Internet aggregators such as Amazon, Facebook, Google, Uber and Airbnb reduced the transaction costs for participants on their platforms. For the industries that these platforms affected, the line between “making” and “buying” shifted toward buying. The line between owning and renting shifted toward renting.

Primarily, this was done through a reduction in triangulation costs (how hard it is to find and measure the quality of a service), and transfer costs (how hard it is to bargain and agree on a contract for the good or service).

Triangulation costs came down for two reasons. One was the proliferation of smartphones, which made it possible for services like Uber and Airbnb to exist. The other was the increasing digitization of the economy. Digital goods are both easier to find (think Googling versus going to the library or opening the Yellow Pages) and easier to measure the quality of (I know exactly how many people read my website each day and how many seconds they stay; the local newspaper does not).

The big improvement in transfer costs was the result of matchmaking: bringing together and facilitating the negotiation of mutually beneficial commercial or retail deals.  

Take Yelp, the popular restaurant review app. Yelp allows small businesses like restaurants, coffee shops, and bars to advertise to an extremely targeted group: individuals close enough to come to the restaurant who searched for some relevant term. A barbecue restaurant in Nashville can show ads only to people searching its zip code for terms like “bbq” and “barbecue.” This enables small businesses that couldn’t afford radio or television advertising to attract customers.

The existence of online customer reviews gives consumers a more trusted way to evaluate the restaurant.

All of the internet aggregators, including Amazon, Facebook, and Google, enabled new service providers by creating a market and standardizing the rules of that market to reduce transaction costs.[21]

The “sharing economy” is more accurately called the “renting economy” from the perspective of consumers, and the “gig economy” from the perspective of producers. Most of the benefits are the result of new markets enabled by lower transaction costs, which allow consumers to rent rather than own, including “renting” someone else’s time rather than employing them full time.

It’s easier to become an Uber driver than a cab driver, and an Airbnb host than a hotel owner. It’s easier to get your product into Amazon than Walmart. It’s easier to advertise your small business on Yelp, Google or Facebook than on a billboard, radio or TV.

Prior to the internet, the product designer was faced with the option of selling locally (which was often too small a market), trying to get into Walmart (which was impossible without significant funding and traction), or simply working for a company that already had distribution in Walmart.

On the internet, they could start distributing nationally or internationally on day one. The “shelf space” of Amazon or Google’s search engine results page was a lot more accessible than the shelf space of Walmart.

As a result, it became possible for people in certain highly specialized roles to work independently of firms entirely. Product designers and marketers could sell products through the internet and the platforms erected on top of it (mostly Amazon and Alibaba in the case of physical products), with the potential to make as much as, or more than, they could inside a corporation.

This group is highly motivated because their pay is directly based on how many products they sell. The aggregators and the internet were able to reduce the transaction costs that had historically made it economically inefficient or impossible for small businesses and individual entrepreneurs to exist.

The result was that in industries touched by the internet, we saw an industry structure of large aggregators and a long tail [22] of small businesses that were able to use the aggregators to reach previously unreachable, niche segments of the market. There aren’t many cities where a high-end cat furniture retail store makes economic sense, but on Google or Amazon, it does.

Before (Firms)                | After: Platform      | After: Long Tail
------------------------------|----------------------|-------------------------------------------
Walmart and big box retailers | Amazon               | Niche product designers and manufacturers
Cab companies                 | Uber                 | Drivers with extra seats
Hotel chains                  | Airbnb               | Homeowners with extra rooms
Traditional media outlets     | Google and Facebook  | Small offline and niche online businesses

(source: stratechery.com)

For these industries, coordination scalability was far greater and could be seen in the emergence of micro-multinational businesses. Businesses as small as a half dozen people could manufacture in China, distribute products in North America, and employ people from Europe and Asia. This sort of outsourcing and the economic efficiencies it created had previously been reserved for large corporations.

As a result, consumers received cheaper, but also more personalized products from the ecosystem of aggregators and small businesses.

However, the rental economy still represents a tiny fraction of the overall economy. At any given time, only a thin subset of industries is ready to be marketized. What’s been done so far is only a small fraction of what will be done in the next few decades.

Yet, we can already start to imagine a world which Munger calls “Tomorrow 3.0.” You need a drill to hang some shelves in your new apartment. You open an app on your smartphone and tap “rent drill.” An autonomous car picks up a drill and delivers it outside your apartment in a keypad-protected pod, and your phone vibrates: “drill delivered.” Once you’re done, you put it back in the pod, which sends a message to another autonomous car nearby to come pick it up. The rental costs $5, much less than buying a commercial-quality power drill. This is, of course, not limited to drills; it could just as easily be a saw, fruit dehydrator, bread machine or deep fryer.

You own almost nothing, but have access to almost everything.

Neither you nor your neighbors have a job, at least in the traditional sense. You pick up shifts or client work as needed and maybe manage a few small side businesses. After you finish hanging the shelves, you might sit down at your computer, see what work requests are open, and spend a few hours designing a new graphic or finishing up the monthly financial statements for a client.

This is a world in which triangulation and transfer costs have come down dramatically, resulting in more renting than buying from consumers and more gig work than full-time jobs for producers.

This is a world we are on our way to already, and there aren’t any big, unexpected breakthroughs that need to happen first.

But what about the transaction cost of trust?

In the computer era, the areas that have been affected most are what could be called low-trust industries. If the sleeping mask you order off of Amazon isn’t as high-quality as you thought, that’s not a life or death problem.

What about areas where trust is essential?

Enter stage right: blockchains.

The Blockchain Era: Blockchain Markets Are Eating the World

One area where trust matters a lot is money. Most of the developed world doesn’t think about the possibility of fiat money [23] not being trustworthy, because it hasn’t happened in our lifetimes. For those who have lived through it, including major currency devaluations, trusting that your money will be worth roughly the same tomorrow as it is today is a big deal.

Citizens of countries like Argentina and particularly Venezuela have been quicker to adopt bitcoin as a savings vehicle because their economic history made the value of censorship resistance more obvious.

Due to poor governance, the inflation rate in Venezuela averaged 32.42 percent from 1973 until 2017. Argentina was even worse; the inflation rate there averaged 200.80 percent between 1944 and 2017.

The story of North America and Europe is different. Over the second half of the 20th century, monetary policy there was comparatively stable.

The Bretton Woods Agreement, struck near the end of the Second World War, concentrated control of most of the globe’s monetary policy in the hands of the United States. The European powers acceded to this in part because the U.S. dollar was backed by gold, meaning that the U.S. government was subject to the physics and geology of gold mining: it could not expand the money supply any faster than gold could be taken out of the ground.

With the end of dollar-gold convertibility under Nixon in 1971 and the final collapse of Bretton Woods in 1973, control over money and monetary policy moved into the hands of a historically small group of central bankers and powerful political and financial leaders, no longer restricted by gold.

Fundamentally, the value of the U.S. dollar today is based on trust. There is no gold in a vault backing the dollars in your pocket. Like most fiat currencies, the dollar has value because the market trusts that the officials in charge of U.S. monetary policy will manage it responsibly.

It is at this point that the debate around monetary policy devolves. One group imagines this small set of elitist power brokers sitting in a dark room on large leather couches, surrounded by expensive art and mahogany bookshelves filled with copies of The Fountainhead, smoking cigars and plotting against humanity with obscure financial maneuvering.

Another group, quite reasonably, points to the economic prosperity of the last half-century under this system and dismisses the former group as quacks.

A better way to understand the tension between a monetary system based on gold and one based on fiat money has been offered by political science professor Bruce Bueno de Mesquita: “Democracy is a better form of government than dictatorships, not because presidents are intrinsically better people than dictators, but simply because presidents have less agency and power than dictators.”

Bueno de Mesquita calls this Selectorate Theory. The selectorate represents the number of people who have influence in a government, and thus the degree to which power is distributed. The selectorate of a dictatorship tends to be very small: the dictator and a few cronies. The selectorate in a democracy tends to be much larger, typically encompassing the executive, legislative, and judicial branches and the voters who elect them.

Historically, the size of the selectorate involves a tradeoff between the efficiency and the robustness of the governmental system. Let’s call this the “Selectorate Spectrum.”

Dictatorships can be more efficient than democracies because they don’t have to get many people on board to make a decision. Democracies, by contrast, are more robust, but at the cost of efficiency.

Conservatives and progressives alike bemoan how little their elected representatives get done but happily observe how little their opponents accomplish. A single individual with unilateral power can accomplish far more (good or bad) than a government of “checks and balances.” The long-run health of a government means balancing the tradeoff between robustness and efficiency: the number of stakeholders cannot be so large that nothing gets done and the country never adapts, nor so small that one individual or a small group can hijack the government for personal gain.

This tension between centralized efficiency and decentralized robustness exists in many other areas. Firms try to balance the size of the selectorate: large enough that there is some accountability (e.g. a board and shareholder voting) but not so large as to make it impossible to compete in a market, which is why most decisions are centralized in the hands of a CEO.

We can view both the current monetary system and the internet aggregators through the lens of the selectorate. In both areas, the trend over the past few decades is that the robustness of a large selectorate has been traded away for the efficiency of a small one.[24]

A few individuals (heads of central banks, leaders of state, corporate CEOs, and leaders of large financial entities like sovereign wealth funds and pension funds) can move markets and politics globally with even whispers of significant change. This sort of centralizing in the name of efficiency can sometimes lead to long feedback loops with potentially dramatic consequences.

Said another way, much of what appears efficient in the short term may not be efficient at all, but merely hiding risk somewhere, creating the potential for a blow-up. A large selectorate tends to look less efficient in the short term, but can be more robust in the long term, making it more efficient in the long term as well. It is the story of the Tortoise and the Hare: slow and steady may lose the first leg, but win the race.

In the Beginning, There Was Bitcoin

In October 2008, an anonymous individual or group using the pseudonym Satoshi Nakamoto sent an email to a cryptography mailing list explaining a new system called bitcoin. The opening line of the paper’s conclusion summed it up:

“We have proposed a system for electronic transactions without relying on trust.”

When the network went live a few months later, in January 2009, Satoshi embedded in the first block the headline of a story running that day in The Times of London:

“The Times 03/Jan/2009 Chancellor on brink of second bailout for banks”

Though we can’t know for sure what was going through Satoshi’s mind at the time, the most likely explanation is that Satoshi was reacting against the decisions made in response to the 2008 Global Financial Crisis by the small selectorate in charge of monetary policy.

Instead of impactful decisions about the monetary system, like a bailout, resting on a single individual such as the Chancellor, Satoshi envisioned bitcoin as a more robust monetary system with a larger selectorate, one beyond the control of any single person.

But why create a new form of money? Throughout history, the most common way for individuals to show their objections to their nation’s monetary policy was by trading their currency for some commodity like gold, silver, or livestock that they believed would hold its value better than the government-issued currency.

Gold, in particular, has been used as a form of money for nearly 6,000 years, for one primary reason: its stock-to-flow ratio. Because of how gold is deposited in the Earth’s crust, it is very difficult to mine. Despite all the technological changes of the last few hundred years, the amount of new gold mined in a given year (the flow) has averaged between 1 and 2 percent of the total gold supply (the stock), with very little variation year to year.

As a result, the total gold supply has never increased by more than 1 to 2 percent per year. Compared to Venezuela’s 32.42 percent inflation and Argentina’s 200.80 percent inflation, gold’s inflation is far lower and more predictable.
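
A rough, purely illustrative calculation, using the average rates quoted above, shows why this difference matters to anyone holding money over time:

```python
# Rough illustration of how quickly purchasing power erodes under different
# average rates of monetary expansion or inflation (rates taken from the text).

def purchasing_power_after(annual_rate, years):
    """Fraction of today's purchasing power left after `years` at a given rate."""
    return 1 / (1 + annual_rate) ** years

for label, rate in [("gold (~1.5% annual supply growth)", 0.015),
                    ("Venezuela (32.42% average inflation)", 0.3242),
                    ("Argentina (200.80% average inflation)", 2.0080)]:
    print(f"{label}: {purchasing_power_after(rate, 10):.2%} of purchasing power left after 10 years")
```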

Viewed through the lens of Selectorate Theory, we can say that gold or other commodity forms of money have a larger selectorate and are more robust than government-issued fiat currency. In the same way a larger group of stakeholders in a democracy constrains the actions of any one politician, the geological properties of gold constrained governments and their monetary policy.

Whether these constraints were “good” or “bad” is still a matter of debate. The Keynesian school of economics, which has become the mainstream view, emerged out of John Maynard Keynes’s reaction to the Great Depression. He believed the Depression was greatly exacerbated by the commitment to the gold standard, and that governments should manage monetary policy to soften the cyclical nature of markets.

The Austrian and monetarist schools believe that human behavior is too idiosyncratic to model accurately with mathematics and that minimal government intervention is best. Attempts to intervene can be destabilizing and lead to inflation, so a commitment to the gold standard is the lesser evil in the long run.

Taken in good faith, these schools represent different beliefs about the ideal point on the Selectorate Spectrum. Keynesians believe that greater efficiency could be gained by giving government officials greater control over monetary policy without sacrificing much robustness. Austrians and monetarists argue the opposite, that any short-term efficiency gains actually create huge risks to the long-term health of the system.

Viewed as money, bitcoin has many gold-like properties, embodying something closer to the Austrian and monetarist view of ideal money. For one, we know exactly how many bitcoin will be created (21 million) and the rate at which they will be created. As with gold, changing this is outside the control of any individual or small group, giving bitcoin a predictable stock-to-flow ratio and making it extremely difficult to inflate.
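
The supply schedule is simple enough to sketch. The following is an illustration of the issuance logic (not consensus code): the block reward starts at 50 bitcoin and halves every 210,000 blocks, which is what caps the total supply at just under 21 million.

```python
# Sketch of bitcoin's issuance schedule: the block subsidy starts at 50 BTC and
# halves every 210,000 blocks until it rounds down to zero.

SATOSHIS_PER_BTC = 100_000_000
BLOCKS_PER_HALVING = 210_000

subsidy = 50 * SATOSHIS_PER_BTC   # initial block reward, in satoshis
total = 0
while subsidy > 0:
    total += subsidy * BLOCKS_PER_HALVING
    subsidy //= 2                 # halving, with integer rounding

print(total / SATOSHIS_PER_BTC)   # ~20,999,999.98 BTC: the "21 million" cap
```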

Similar to gold, the core bitcoin protocol also makes great trade-offs in terms of efficiency in the name of robustness.[25]

However, bitcoin has two key properties of fiat money which gold lacks: it is very easy to divide and transport. Someone in Singapore can send 1/100th of a bitcoin to someone in Canada in less than an hour. Sending 1/100th of a gold bar would be a bit trickier.

In his 1999 novel Cryptonomicon, science fiction author Neal Stephenson imagined a bitcoin-like money built by the grandchild of Holocaust survivors who wanted to create a way for individuals to escape totalitarian regimes without giving up all their wealth. It was difficult, if not impossible, for Jews to carry gold bars out of Germany, but what if all they had to do was remember a 12-word passphrase? How might history have been different?

Seen in this way, bitcoin offers a potentially better trade-off between robustness and efficiency. Its programmatically defined supply schedule means its inflation rate will be lower than gold’s (making it more robust), while its digital nature makes it as divisible and transportable as any fiat currency (making it more efficient).

Using a nifty combination of economic incentives for mining (the proof-of-work system) and cryptography (including the blockchain), bitcoin allowed individuals to engage in a network that was both open (like a market) and coordinated (like a firm) without needing a power broker, or small group of them, to facilitate the coordination.

Said another way, bitcoin was the first example of money going from being controlled by a small group of firm-like entities (central banks) to being market-driven. What cryptocurrency represents is the technology-enabled possibility that anyone can make their own form of money.

Whether or not bitcoin survives, that Pandora’s Box is now open. In the same way computing and the internet opened up new areas of the economy to being eaten by markets, blockchain and cryptocurrency technology have opened up a different area to be eaten by markets: money.

The Future of Public Blockchains

Bitcoin is unique among forms of electronic money because it is both trustworthy and maintained by a large selectorate rather than a small one.

There was a group that started to wonder whether the same underlying technology could be used to develop open networks in other areas by reducing the transaction cost of trust.[26]

One group, the monetary maximalists, thinks not. According to them, public blockchains like bitcoin will only ever be useful as money, because money is the area where trust is most important, and so you can afford to trade everything else away. The refugee fleeing political chaos does not care that a transaction takes an hour to go through and costs $10 or even $100. They care about having the most difficult-to-seize, censorship-resistant form of wealth.

Bitcoin, as it exists today, enhances coordination scalability by allowing any two parties to transact without relying on a centralized intermediary and by allowing individuals in unstable political situations to store their wealth in the most difficult-to-seize form ever created.

The second school of thought is that bitcoin is the first example of a canonical, trustworthy ledger with a large selectorate and that there could be other types of ledgers which are able to emulate it.

At its core, money is just a ledger. The amount of money in your personal bank account is a list of all the transactions coming in (paychecks, deposits, etc.) and all the transactions going out (paying rent, groceries, etc.). When you add all those together, you get a balance for your account.

Historically, this ledger was maintained by a single entity, like your bank. In the case of U.S. dollars, the number in circulation can be figured out by adding up how much money the U.S. government has printed and released into the market and subtracting how much it has taken back out.
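
A toy example (all account names hypothetical) makes the point: a balance is nothing more than the sum of a ledger’s entries in and out of an account.

```python
# Minimal toy ledger illustrating "money is just a ledger": an account balance
# is simply the entries flowing in minus the entries flowing out.

ledger = []  # each entry: (sender, receiver, amount)

def transfer(sender, receiver, amount):
    ledger.append((sender, receiver, amount))

def balance(account):
    credits = sum(amt for _, to, amt in ledger if to == account)
    debits = sum(amt for frm, _, amt in ledger if frm == account)
    return credits - debits

transfer("employer", "alice", 3000)   # paycheck coming in
transfer("alice", "landlord", 1200)   # rent going out
transfer("alice", "grocer", 150)      # groceries going out

print(balance("alice"))  # 1650
```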

What else could be seen as a ledger?

The answer is “nearly everything.” Governments and firms can be seen just as groups of ledgers. Governments maintain ledgers of citizenship, passports, tax obligations, social security entitlements and property ownership. Firms maintain ledgers of employment, assets, processes, customers and intellectual property.

Economists sometimes refer to firms as “a nexus of contracts.” The value of the firm comes from those contracts and how they are structured within the “ledger of the firm.” Google has a contract with users to provide search results, with advertisers to display ads to users looking for specific search terms, and with employees to maintain the quality of their search engine. That particular ledger of contracts is worth quite a lot.

Mechanical time opened up entirely new categories of economic organization. It allowed trade to be synchronized at great distances; without mechanical time, there would have been no railroads (how would you know when to go?) and no Industrial Revolution. Mechanical time allowed for new modes of employment that lifted people out of serfdom and slavery.[27]

In the same way, it may be that public blockchains make it possible to have ledgers that are trustworthy without requiring a centralized firm to manage them. This would shift the line further in favor of “renting” over “buying” by reducing the transaction cost of trust.

Entrepreneurs may be able to write a valuable app and release it for anyone and everyone who needs that functionality, collecting micro-payments in their wallet. A product designer could release their design into the wild, and consumers could download it to be printed on their 3D printer almost immediately.[28]

For the first 10 years of bitcoin’s existence, this hasn’t been possible. Using a blockchain has meant minimizing the transaction cost of trust at all costs, but that may not always be the case. Different proposals are already being built out that allow for more transactions to happen without compromising the trust which bitcoin and other crypto-networks offer.

There are widely differing opinions on the best way to scale blockchains. One faction, usually identifying with Web 3/smart contracting platforms/Ethereum, believes that scaling quickly at the base layer is essential and can be done with minimal security risk, while other groups believe that scaling should be done slowly and only where it does not sacrifice the censorship-resistant nature of blockchains (bitcoin). Just like the debate between Keynesian and Austrian/monetarist views of monetary policy, these views represent different beliefs about the optimal tradeoff point on the Selectorate Spectrum. But both groups believe that significant progress can be made on making blockchains more scalable without sacrificing too much trust.

Public blockchains may allow aggregation without the aggregators. For certain use cases, perhaps few, perhaps many, public blockchains like bitcoin will allow the organization and coordination benefits of firms and the motivation of markets while maintaining a large selectorate.

Ultimately, what we call society is a series of overlapping and interacting ledgers.

In order for ledgers to function, they must be organized according to rules. Historically, rules have required rulers to enforce them. Because of network effects, these rulers tend to become the most powerful people in society. In medieval Europe, the Pope enforced the rules of Christianity and so he was among the most powerful.

Today, Facebook controls the ledger of our social connections. Different groups of elites control the university ledgers and banking ledgers.

Public blockchains allow people to engage in a coordinated and meritocratic network without requiring a small selectorate.

Blockchains may introduce markets into corners of society that have never before been reached. In doing so, blockchains have the potential to replace ledgers previously run by kings, corporations, and aristocracies. They could extend the logic of the long tail to new industries and lengthen the tail for suppliers and producers by removing rent-seeking behavior and allowing for permissionless innovation.

Public blockchains allow for rules without a ruler. It began with money, but they may move on to corporate ledgers, social ledgers and perhaps eventually, the nation-state ledger.[29]

Acknowledgments: Credit for the phrase “Markets Are Eating the World” to Patri Friedman.


  1. https://www.bls.gov/opub/mlr/1981/11/art2full.pdf
  2. https://www.bls.gov/emp/tables/employment-by-major-industry-sector.htm
  3. http://www3.nccu.edu.tw/~jsfeng/CPEC11.pdf
  4. There are, of course, other types of transaction costs than the ones listed here. A frequent one brought up in response to Coase is company culture, which nearly all entrepreneurs and investors agree is an important factor in a firm’s productivity. This is certainly true, but the broader point about the relationship between firm size and transaction costs holds; culture is just another transaction cost.
  5. http://www.fon.hum.uva.nl/rob/Courses/InformationInSpeech/CDROM/Literature/LOTwinterschool2006/szabo.best.vwh.net/synch.html
  6. https://en.wikipedia.org/wiki/Escapement
  7. Fungibility is the property of a good or a commodity whose individual units are interchangeable. For example, one ounce of pure silver is fungible with any other ounce of pure silver. This is not the same for most goods: a dining table chair is not fungible with a fold-out chair.
  8. Piece rates, paying for some measurement of a finished output like bushels of apples or balls of yarn, seems fairer. But they suffer from two issues: For one, the output of the labor depends partially on the skill and effort of the laborer, but also on the vagaries of the work environment. This is particularly true in a society like that of medieval Europe, where nearly everyone worked in agriculture. The best farmer in the world can’t make it rain. The employee wants something like insurance that they will still be compensated for the effort in the case of events outside their control, and the employer who has more wealth and knowledge of market conditions takes on these risks in exchange for increased profit potential.
  9. For the worker, time doesn’t specify costs such as effort, skill or danger. A laborer would want to demand a higher time-rate wage for working in a dangerous mine than in a field. A skilled craftsman might demand a higher time-rate wage than an unskilled craftsman.
  10. The advent of the clock was necessary for the shift from farms to cities. Sunup to sundown worked effectively as a schedule for farmers because summer was typically when the most labor on farms was required, so longer days were useful. For craftsman or others working in cities, their work was not as driven by the seasons and so a trusted measure of time that didn’t vary with the seasons was necessary. The advent of a trusted measure of time led to an increase in the quantity, quality and variety of goods and services because urban, craftsman type work was now more feasible.
  11. https://unenumerated.blogspot.com/2017/02/money-blockchains-and-social-scalability.html. I am using the phrase “coordination scalability” synonymously with how Nick uses “social scalability.” A few readers suggested that social scalability was a confusing term as it made them think of scaling social networks.
  12. 150 is often referred to as Dunbar’s number, referring to a number calculated by University of Oxford anthropologist and psychologist Robin Dunbar using a ratio of neocortical volume to total brain volume and mean group size. For more see  https://www.newyorker.com/science/maria-konnikova/social-media-affect-math-dunbar-number-friendships. The lower band of 15 was cited in Pankaj Ghemawat’s World 3.0
  13. https://www.jstor.org/stable/2938736
  14. http://discovermagazine.com/1987/may/02-the-worst-mistake-in-the-history-of-the-human-race
  15. Because what else would you want to do besides eat bread dipped in fresh olive oil and drink fresh beer and wine?
  16. From The History of Money by Jack Weatherford.
  17. It also allowed them to squeeze out competitors at different places in the supply chain and put them out of business, which Standard Oil did many times before finally being broken up by antitrust legislation.
  18. http://www.paulgraham.com/re.html
  19. Tomorrow 3.0 by Michael Munger
  20. http://www.paulgraham.com/re.html
  21. There were quite a few things, even pre-internet, in the intersection between markets and firms, like approved vendor auction markets for government contracting and bidding, but they were primarily very high ticket items where higher transaction costs could be absorbed. The internet brought down the threshold for these dramatically to something as small as a $5 cab ride.
  22. The Long Tail was a concept WIRED editor Chris Anderson used to describe the proliferation of small, niche businesses that were possible after the end of the “tyranny of geography.” https://www.wired.com/2004/10/tail/
  23. From Wikipedia: “Fiat money is a currency without intrinsic value that has been established as money, often by government regulation. Fiat money does not have use value, and has value only because a government maintains its value, or because parties engaging in exchange agree on its value.” By contrast, “Commodity money is created from a good, often a precious metal such as gold or silver.” Almost all of what we call money today, from dollars to euros to yuan, is fiat.
  24. Small institutions can get both coordination and a larger selectorate by using social norms. This doesn’t enable coordination scalability though as it stops working somewhere around Dunbar’s number of 150.
  25. Visa processes thousands of transactions per second, while the bitcoin network’s decentralized structure processes a mere seven transactions per second. The key difference being that Visa transactions are easily reversed or censored whereas bitcoin’s are not.
  26. https://medium.com/@cdixon/crypto-tokens-a-breakthrough-in-open-network-design-e600975be2ef
  27. https://medium.com/cryptoeconomics-australia/the-blockchain-economy-a-beginners-guide-to-institutional-cryptoeconomics-64bf2f2beec4
  28. https://medium.com/cryptoeconomics-australia/the-blockchain-economy-a-beginners-guide-to-institutional-cryptoeconomics-64bf2f2beec4
  29. https://twitter.com/naval/status/877467629308395521