Last week, a global gaggle of billionaires, academics, thought leaders, and other power brokers gathered in Davos, Switzerland, for the World Economic Forum’s signature annual event. Climate change! The global economy! Health! The agenda was packed with discussion of the most pressing issues of our time. True to form, much of the musing ventured away from root causes.
Climate change—barring strong words from Greta Thunberg and other activists—was customarily discussed in the context of financial markets and the economy. Rising inequality was predictably repackaged as a threat to “already-fragile economic growth.” It was an echo of Davos 2019, when in a viral clip Dutch historian Rutger Bregman compared the event’s silence on corporate tax avoidance to a firefighting conference with a gag rule about water.
Justin Sherman (@jshermcyber) is a Fellow at the Atlantic Council’s Cyber Statecraft Initiative.
Davos is hardly known for its modesty—nor for favoring government solutions to publicly shared problems. That is why it might seem unexpected, even counterintuitive, that companies like Microsoft and IBM spoke at this year’s summit about the need for technology regulation.
But this perfectly captures the US tech industry’s shift toward talking regulation—just in a way that benefits itself—and the related risks of allowing private corporations to set the American (or even global) agenda on technology governance.
Microsoft CEO Satya Nadella caught the media’s attention when he said at Davos, “I think we should be thinking a lot harder about regulation” of facial recognition and object recognition technology. IBM CEO Ginni Rometty hosted a Davos panel on precision regulation of AI, in line with IBM’s push to “guide regulation” in the space. Palantir CEO Alex Karp even joined the fray, criticizing both Silicon Valley’s aversion to regulation and its reluctance to work with the US government.
It’s possible some of these stances are well-intentioned. Tech firms have felt the heat for the harm their technologies and behaviors have inflicted; that may have kicked slight attitude shifts into gear. At the same time, however, these calls for tech regulation from private tech firms have a sharp corporate twist.
First, when a corporation calls for regulation, it can nudge the public to over-focus on a technology itself and under-focus on the nature of the technology’s development and use. This gives the corporation leverage. Take facial recognition, for example.
Certain things about facial recognition are unique, like its use of a face as the mechanism of identification—not so easily changed as a password, a phone number, or even a home address. In this way, bans on facial recognition technology could have positive effects. As Google CEO Sundar Pichai said recently, artificial intelligence shouldn’t be used to “support mass surveillance or violate human rights,” and facial recognition could certainly play a role in those practices.
Yet, as Bruce Schneier laid out in The New York Times, “focusing on one particular identification method misconstrues the nature of the surveillance society we’re in the process of building.” Facial recognition is “just one identification technology among many.” In other words, prohibitions on facial recognition are one thing for a company’s bottom line: they target a single identification method or technology (depending on how you want to define it).
But regulating the underlying data collection and analysis? That’s an entirely different animal, one that could challenge core business models of major search engines, social media platforms, and AI product developers. Effects would be far more disruptive for those firms—albeit welcomed by citizens wanting legally protected data privacy. Zeroing in on the singular technology thus pivots regulation dialogue in the corporate favor, away from talk of more fundamental, government-driven change.
Second, tech companies rhetorically pushing for tech regulation can also be a way for those firms to further shape outcomes. There are many culprits here, but let’s look at Facebook as a telling example.
For years the company was averse to any privacy legislation. During a 2010 interview at The Wall Street Journal’s D8 conference—when Kara Swisher famously made Mark Zuckerberg sweat on stage—the Facebook CEO said that “privacy is a really important issue for us.” But Zuckerberg pushed back strongly against questions on privacy concerns and touted the user “experience.” That same year, responding to public discontent, he claimed that privacy was a dying social norm.
Times have changed, and so too have the pressures on the company to take responsibility for problems. Hate speech, disinformation, algorithmic discrimination—the list goes on. Included in these pressures is growing public support, and Congressional effort, around US federal data privacy legislation. And when Facebook realized the inevitability of some laws on this front, the company changed its tune.
In 2018, Zuckerberg expressed an openness to data regulation, caveated quickly by arguing for only limited-scope rules that allow Facebook to compete with Chinese firms. (This was based on a flawed argument.) Last year, Zuckerberg again called for certain kinds of regulation on Big Tech, while essentially arguing that Facebook isn’t a social networking monopoly and veering far away from proposals that would seriously alter its data collection and microtargeting.
Recasting US-China geopolitical tensions (once again) in a self-serving light, Zuckerberg is now advancing the argument that his firm champions free-and-open internet values. To keep doing so—in contrast to values advanced by Beijing—there mustn’t be more than just a light dusting of regulation. Some public oversight is OK, we hear again, but not that much, and not of all kinds.
Fears about Chinese censorship, surveillance, and surging geopolitical influence are pronounced in Washington. With full knowledge of this reality, Facebook can paint itself as pro-regulation while simultaneously steering policymakers away from some of the public reforms most urgently needed. It’s tech regulation with a corporate twist—propelled at Davos but hardly originating there.
The risks of allowing such firms to capture public conversation on tech regulation are serious. For all the comparisons one could draw between, say, a private social media or search engine company and a government, the fact remains that corporations answer to a bottom line. They all have shareholders; they lobby enormously in Washington, just like in every other industry. Everyday citizens can’t vote tech executives out of office.
The fact also remains that many of the problems of today’s technology sphere—from easy and large-scale foreign influence in democratic elections to the ever-expanding behemoth of corporate data collection—were in large part caused by the pursuit of growth at all costs, absent the requisite government interventions to ensure the protection of privacy and other rights.
Is this really what we’d call democratic accountability?
Congressmembers and policy wonks in Washington, to be clear, do need outside experts to help them solve technology issues. Firms might also legitimately want governments (or, at least, certain governments) to direct their actions in areas like political advertising—where compliance with government action is an easy-out reply to complaints about unethical or black-box decisionmaking.
Policy positions like support for robust commercial encryption may also command consensus among companies, consumers, and citizens. It’s not always a clash of interests.
But Marietje Schaake, a former member of the European Parliament, said it best when she warned, “beware of tech companies playing government.” Countries like China, Russia, and Iran are pursuing techno-enhanced authoritarianism. Countries like the US and the UK need to curb a growing domestic surveillance state, address algorithmic discrimination, and prevent companies from engaging in harms like selling surveillance tools to human-rights abusers. At such a time, citizens and policymakers alike should want democratic technology regulation more than ever, establishing strong norms at home and abroad. And that includes a public conversation about technology governance not captured by those who’d be regulated in the first place.