Part One: “They Were on Our Side”
Once upon a time in 2003, in a dorm room at Harvard, an ambitious undergraduate named Mark Zuckerberg created a platform that would transform the world by bringing people together in a digital public square where sharing and communication and cute cat videos could thrive.
That’s the origin story Zuckerberg likes to tell. In fact, Zuckerberg’s initial creation was “Facemash,” a site filled with purloined photos of Harvard women whom users were encouraged to rank for hotness. An online comparison site of female undergraduates might briefly have seemed like a viable business model to Zuckerberg, but he quickly figured out that if he let users decide for themselves what they wanted to post, and offered them a place to do that for free, his platform would have much broader appeal. Facemash became Facebook in 2004, and proved that people were indeed eager to share a lot of information about themselves: photos, what they ate for lunch, the shoes they bought, birthday wishes for friends and family members.
In Release 2.0, published in 1997, the futurist Esther Dyson described the Internet as a place that would allow ever greater numbers of people “to design a world that is more open, more accessible to everyone and just a nicer place to live in.” At first, Facebook seemed to be just that sort of virtual space. It was a benign, perpetual high-school reunion—albeit one where, like high school, people measured one another’s worth by the number of “friends” they accrued, and where the free data they provided to Facebook was monetized by the company in the form of directed advertising.
People loved being on Facebook, and they didn’t seem to mind that the site’s optimistic message about connection was married to the hubristic vision of its founder. As Chris Hughes, one of Facebook’s co-founders and Zuckerberg’s college roommate, wrote in the New York Times, “from our earliest days, Mark used the word ‘domination’ to describe our ambitions, with no hint of irony or humility.”
Thanks in part to Section 230 of the Communications Decency Act of 1996, which protects online platforms from being sued indiscriminately for content their users post to their sites, Facebook grew rapidly. Six million people were signed up for Facebook in 2005. The popularization of the smartphone a few years later vastly increased Facebook’s reach, and by 2010, Facebook had 500 million users.
Politicians loved Facebook, too. In 2008, Barack Obama’s presidential campaign heavily mined Facebook user data through its Facebook page and app (with users’ permission), and his quants were praised for their technological savvy. The New York Times described the Obama for America data-analytics team as “digital masterminds.” In 2012, Obama’s campaign manager boasted that the skillful use of micro-targeted ads and data-mining to persuade undecided voters made it “the most data-driven campaign ever.”
Facebook encouraged politicians’ engagement with the platform. As Carol Davidsen, former director of integration for media analytics for Obama for America, tweeted about the 2012 campaign, “Facebook was surprised we were able to suck out the whole social graph, but they didn’t stop us once they realized that was what we were doing. They came to [the] office in the days following election recruiting & were very candid that they allowed us to do things they wouldn’t have allowed someone else to do because they were on our side.”
Media outlets also bought into Facebook’s message that the platform was on their side. The company encouraged major print and broadcast media companies to “partner” with Facebook and integrate its tools on their sites as a way to gain new and more engaged audiences. Soon, editors were tailoring story assignments and choosing headlines with an eye to making them more Facebook-friendly and, they hoped, more appealing to readers—a necessity at a time of declining print-advertising revenue.
Part Two: “More Open and Connected”
By 2010, Zuckerberg had become even fonder of the idea that Facebook was not merely a successful, growing business but an existential force for good in the world. “If people share more, the world will become more open and connected,” he wrote in an op-ed in the Washington Post. “And a world that’s more open and connected is a better world.” But it was connection untethered from the traditional responsibilities of moderation or editorial oversight. Facebook, its leaders frequently reminded us, was a neutral platform, not a publisher or media outlet, and so not ultimately responsible for managing what its connected “community” of users posted on the site.
What was heavily managed by Facebook was the experience of its users. Building on the work of scholars in persuasive technology and behavioral science, Facebook added features to increase user “engagement”—that is, users’ reactions to something they had seen—and data about whether or not they shared it. As the technology critic Jaron Lanier puts it, engagement “is not meant to serve any particular purpose other than its own enhancement.” It didn’t matter whether the content was true, false, happy, or sad, so long as it elicited strong reactions among users.
Increasing “engagement” is the reason Facebook in 2006 launched News Feed, a constantly updated list of posts by a user’s Facebook friends. It didn’t go over well at first. Many users complained that because News Feed included dates and time stamps, it violated their privacy. Others asked why there was no off switch for the Feed. A 284,000-member Facebook group calling itself “Students Against Facebook News Feed” quickly formed and petitioned Facebook to end News Feed and threatened to boycott the platform. Zuckerberg responded to complaints with a blog post titled, “Calm Down. Breathe. We Hear You,” before stating that there was nothing wrong with News Feed and it wouldn’t be changed.
In fact, Facebook engineers constantly finessed the platform’s proprietary algorithms to improve the experience of users’ News Feeds while simultaneously harvesting ever more granular data about each user—the better to serve the uniquely targeted ads, called “featured posts,” it had started showing in those same News Feeds. Like network television, Facebook is “free” to users because it makes money by selling users’ attention to advertisers. News Feed became an important way to serve up ads for eyeballs; it is the first thing users of the Facebook mobile app see when they go to Facebook, for example.
Efforts to boost revenue were also behind the creation of the Facebook Like button in 2009. The button not only increased the time people spent on the site but also, as more websites included Like buttons on their own pages, allowed Facebook to track its users all across the Internet, generating even more data about users’ emotional reactions to what they were seeing. As Roger McNamee, an early Facebook investor, wrote in his book Zucked, “where Facebook asserts that users control their experience by picking the friends and sources that populate their News Feed, in reality an artificial intelligence, algorithms, and menus created by Facebook engineers control every aspect of that experience.”
The Like button did not spring fully formed from the head of Mark Zuckerberg. Rather, he copied it from a smaller social network, FriendFeed. It wasn’t the only thing Facebook copied. As Mashable reported, 2009 was “the year of no-holds barred FriendFeed emulation on Facebook’s part. First Facebook cloned FriendFeed’s comments feature, then the like feature, and eventually they decided to roll out a FriendFeed-like real-time homepage. In fact, the Facebook we know today is very similar to the first iterations of the real-time aggregator. . . . Clearly FriendFeed is innovating, and Facebook is following.”
In fact, Facebook was acquiring. It bought FriendFeed in 2009. It tried but failed to buy Twitter around the same time. In 2010, Facebook bought a smaller photo-sharing site called Divvyshot. It was the beginning of what would become an established pattern for the company: First, Facebook copied the most popular features of smaller, rival social-media sites; then it moved to acquire them. Zuckerberg was putting his early talk of market domination into action.
Part Three: “We Never Meant to Upset You”
The company’s reputation took a small hit in 2014, when Facebook admitted that, in a bid to come to a better understanding of whether emotions were “contagious” across its platform, it had run a massive and undisclosed behavioral experiment on its users by manipulating what they saw in their News Feeds. The company had deliberately placed content that was either positive or negative and then tracked users’ responses. Even after a loud public outcry, Facebook COO Sheryl Sandberg was not moved by claims that the company had overreached, calling the experiment a common industry practice and saying, “We never meant to upset you.”
At the same time, other Facebook users had noticed that the range of what they saw on their News Feeds had slowly narrowed to include mainly like-minded people and groups. As Eli Pariser argued in The Filter Bubble, the gatekeeping architecture of Facebook’s algorithms, which were entirely opaque to outsiders because they were proprietary, tended to serve up more of what you had already demonstrated you liked. As a result, Facebook users existed in a kind of “filter bubble” free from exposure to contrary or disagreeable opinions or people. In 2014, Facebook released a standalone Groups app, billing Groups as the “new public square” and promising users even more of what they already knew they liked. Every time a Facebook user went on the site, Facebook promised, they would find “groups suggested to you based on Pages you’ve liked, groups your friends are in, and where you live.”
By now 10 years old, Facebook also diversified its portfolio of technologies and ramped up its anti-competitive behavior, buying more potential rivals and deepening its commitment to the development of better facial-recognition and virtual-reality technologies. Over the grumblings of anti-trust advocates, Facebook purchased Instagram in 2012 and reportedly tried, but failed, to purchase Snapchat in 2013. It bought the Israeli analytics company Onavo in 2013 and the messaging service WhatsApp in 2014, as well as the virtual-reality company Oculus VR. Company executives announced that Facebook had reached “near-human accuracy” in the development of its facial-recognition software, a useful tool for a platform that has access to the 350 million photos per day uploaded by its billions of users.
Perhaps as a nod to its users’ complaints about growing polarization on the site, Facebook executives began paying lip service to the ideal of the deliberative, democratic, digital public square—while doing nothing substantive to change the platform in a way that might threaten profits. During a town-hall-style Q&A at Facebook’s headquarters at the end of 2014, Zuckerberg tried to assuage concerns about echo chambers and filter bubbles by claiming that Facebook took “diversity of opinion” seriously. “One thing we care a lot about is making sure people get exposed to a diversity of opinion,” he said. “That builds a stronger community. One thing we’re proud of is basically on Facebook, even if you’re Republican or Democrat, you have some friends in the other camp.”
Part Four: “The Single Most Important Platform”
It turns out Facebook users in the U.S. had an unexpected bounty of friends in other countries, too.
As scholar Siva Vaidhyanathan describes in his book Anti-Social Media, in 2016, “Russian agents targeted content using the Facebook advertising system to mess with American democracy. They created bogus Facebook groups and pages devoted to such issues as opposing gun control, [and] opposing immigration.” They effectively used Facebook’s architecture to disseminate propaganda, and their efforts reached an estimated 126 million people in the lead-up to the 2016 election.
Liberal fantasies to the contrary, Russian-sponsored fake news did not hand Trump the election; a study in the journal Science Advances by researchers at Princeton University and New York University found that fewer than 9 percent of Americans shared fake-news links during the election. But the ease with which foreign adversaries exploited the site raises questions about Facebook’s understanding of the vulnerabilities of its own platform.
In the 2016 election, Facebook was more necessary to the shaping of the political message of candidates than it had been during the Obama campaigns. Donald Trump’s election effort relied on tools such as Facebook’s Custom Audiences (which didn’t exist until 2014) to micro-target voters in key states and sow doubts about Hillary Clinton among Democratic voters who had favored Bernie Sanders in the primary. The aim was to discourage them from voting at all. Facebook was also a crucial Trump fundraising tool. As Trump’s digital campaign strategist, Brad Parscale, told BuzzFeed, “Facebook was the single most important platform to help grow our fundraising base.”
Facebook found itself newly implicated in geopolitics, as well. In 2018, Reuters reported that Facebook was being used by the Buddhist majority in Myanmar to organize and carry out targeted attacks on the minority Rohingya Muslim population; a UN report about the conflict criticized Facebook specifically for enabling “ethnic cleansing.” In February 2019, the New York Times reported that Facebook had taken down Iranian propaganda pages that had been active on the site for nearly a decade and that had engaged in a steady disinformation campaign meant to undermine American democracy.
None of these campaigns required much skill or sophistication on the part of the bad actors behind them. The open secret about Facebook, as anyone who knows a Holocaust denier, an anti-vaxxer, or a flat-earth proponent can attest, is that disinformation and propaganda thrive in the Facebook ecosystem. The tools necessary for successfully disseminating disinformation have always been baked into the system, which makes the manipulation of that system a fairly straightforward and inexpensive task. Conspiracy theorists with delusions of grandeur no longer need to buy expensive full-page ads in major newspapers to find an audience for their cause or go through the laborious process of organizing meetings or recruiting door-to-door. They can just form a Facebook group, start posting incendiary content, and watch the converts roll in.
The onslaught of fake news and propaganda also flourished on Facebook because the media institutions that might have pushed back against them had become shells of their former selves. By 2016, with ad revenue now permanently flowing to online outlets, print publications and traditional news organizations struggled to compete. Having committed themselves to the Facebook model of attention-seeking, they found themselves at the mercy of a platform whose primary motivation was profit on its own terms, not journalistic truth-seeking.
When Facebook announced with great fanfare in 2015 that it was “pivoting to video,” for example, media outlets dutifully shifted their resources to producing Facebook-friendly videos in an effort to lure more advertisers—and fired traditional reporters and editors to make way for the “golden age of online video” that Zuckerberg promised them. This left media outlets hobbled when, less than a year later, Facebook admitted that it had woefully overestimated user engagement with videos and was pivoting away from them.
As a Nieman Lab report found, “plenty of news publishers made major editorial decisions and laid off writers based on what they believe to be unstoppable trends that would apply to the news business.” Among those publishers were Mic, Vice, MTV News, Fox Sports, Vocativ, Bleacher Report, and Mashable, all of which fired traditional writers and editors to free up resources for video production. Mic went out of business largely because it was so reliant on Facebook. As Fast Company noted, “Facebook presented itself as the place where Mic could meet its audience. But the publisher never bargained for what would happen if it lost the platform’s support.”
More recently, when Facebook announced plans to give priority to friends rather than news outlets on Facebook users’ News Feeds, Facebook-referred traffic to news sites took a deeper nosedive. Facebook continues to refuse to pay publishers to license their content for use on its platform. When asked about this refusal, Zuckerberg has professed his admiration for journalism before allowing that since there was really nothing Facebook could do beyond its many in-house initiatives, perhaps the federal government could “support” the languishing field of journalism with taxpayer money.
Part Five: “Failing to Keep Privacy Promises”
From the company’s earliest days, Facebook’s leaders have adopted a remarkably consistent approach to the exposure of problems and missteps: a mercenary variation of the “ask for forgiveness, not permission” strategy. Any time the company does something irresponsible or privacy-violating, Zuckerberg issues an apology on Facebook and Sandberg appears on television programs to reassure an anxious world that Facebook will do better. As Zeynep Tufekci observed in Wired: “By 2008, Zuckerberg had written only four posts on Facebook’s blog: Every single one of them was an apology or an attempt to explain a decision that had upset users.”
But he kept doing it because it worked—until 2018. That year, a major hack of Facebook exposed the data of approximately 29 million users; the company issued its standard sorry-we-promise-to-do-better statement and changed nothing about its core business model. Then, in early 2019, Apple revoked Facebook’s enterprise developer certificate for violating Apple’s privacy rules: Facebook had used a research app to pay teenagers and some adults to let the company monitor everything they did on their mobile phones.
Around the same time, as TechCrunch reported, Zuckerberg and other Facebook executives secretly “disappeared” their sent messages from their accounts, removing years’ worth of Facebook correspondence from other users’ mailboxes with no warning. Once caught out, the company claimed that it had done so for vaguely defined security reasons, but the lack of transparency by a company that insists that everyone should share everything was seen as hypocritical by many Facebook users.
The most notable scandal involved a political consulting firm based in the UK called Cambridge Analytica, which, reports revealed, improperly used data it had scraped from more than 50 million Facebook user profiles in the U.S. to craft political ads for candidates such as Trump and Ted Cruz during the Republican presidential primary in 2016. (Less well publicized was the fact that Cambridge Analytica’s effort to micro-target and manipulate voters was largely unsuccessful; Republican consultants ended up relying on more traditional and publicly available demographic data.)
The Cambridge Analytica news sparked a louder outcry from the public than previous Facebook scandals had done, in part because it offered an easily digestible, quasi-conspiratorial explanation for why Donald Trump had won the White House. Many people declared their intention to #DeleteFacebook, and celebrities such as Cher and Will Ferrell publicly announced that they were leaving the platform as a result of the scandal. As one previously avid Facebook user told the New York Times: “We have surpassed the tipping point, where the benefit now fails to outweigh the cost. But I will definitely miss what the promise of Facebook used to be—a way to connect to community in a very global and local context.”
But the real scandal wasn’t that Cambridge Analytica had misused Facebook data. No, it was that for years Facebook had made it easy for third parties to gain access to users’ information even though the company had agreed not to do so in a settlement with the Federal Trade Commission in 2011. The FTC investigation concluded that Facebook had “deceived consumers by failing to keep privacy promises.” As part of the 2011 consent decree, Facebook agreed to take a number of steps to prevent privacy violations from happening in the future. As the Cambridge Analytica scandal revealed, Facebook had not taken that legal obligation seriously.
There are signs that the public is now less willing to forgive and forget Facebook’s behavior. A March 2019 Wall Street Journal/NBC poll found that 57 percent of respondents felt social media does more to divide the country than to bring it together, and 54 percent “said they aren’t satisfied with the amount of federal government regulation and oversight of social-media companies.” More than 90 percent of respondents said that “companies that operate online should get permission before sharing or selling access to a consumer’s personal information, and that they should be required by law to delete it on request.” Respondents had noticeably more negative feelings about Facebook than other big tech companies (it had the highest negative rating), and only about 5 percent said they trusted it to protect their personal information. This suggests that the company’s strategy—public hand-wringing but no meaningful action—isn’t working as well as it once did.
The public (and some lawmakers who were previously loath to criticize Big Tech for fear of losing campaign contributions) have also become impatient with Facebook’s many evasions about its policies for dealing with extremist, violent, or propagandistic content on the platform. The killer who attacked worshippers at a mosque in Christchurch, New Zealand, in March 2019 streamed the killings in real time on Facebook Live; indeed, he appears to have tailored his actions to suit the demands of the platform on which he broadcast the murders. Facebook took down the video as quickly as it could, but that didn’t prevent it from rocketing around the globe.
Facebook then boasted about the number of uploads of the video that its A.I. system had detected and successfully blocked; the company said nothing of substance about how it planned to prevent extremism from thriving on Facebook in the future. This prompted some well-deserved criticism from policymakers. “Your systems are simply not working and quite frankly it’s a cesspit,” UK MP Stephen Doughty told a Facebook representative during a parliamentary hearing about hate crimes not long after the killings in Christchurch. “It feels like your companies don’t give a damn. You give a lot of rhetoric but don’t take action.”
From Facebook’s perspective, why should they? As Peter Kafka observed in Recode, and as the Christchurch killer obviously expected, social-media platforms “did exactly what they’re designed to do: allow humans to share whatever they want, whenever they want, to as many people as they want.”
The problem lies within the architecture of Facebook itself. It has built a platform with no incentive to block content or even to judge its veracity or worth. In the universe of Facebook, content, like connection, is an unalloyed good. Facebook’s leaders defend themselves by making reference to the nonhuman algorithm serving up content to its users. At an advertising conference in New York in 2016, Sandberg told the audience, “One of the theories out there is that we are controlling the news. We’re not a media company, we don’t have an editorial team deciding what’s on the front page. Our algorithms determine that based on the connections you have.”
But company engineers do make those decisions. Gizmodo reported in 2016 that “Facebook workers routinely suppressed news stories of interest to conservative readers from the social network’s influential ‘trending’ news section.” Curators also “injected” stories into trending news—the subjects Facebook tells us are of most interest to its users at that moment—to make the company look like it cared about the right things. “People stopped caring about Syria,” one curator told Gizmodo. “[And] if it wasn’t trending on Facebook, it would make Facebook look bad.” The curator also claimed that the same thing was done with the Black Lives Matter movement. “Facebook got a lot of pressure about not having a trending topic for Black Lives Matter. . .They realized it was a problem, and they boosted it in the ordering. They gave it preference over other topics.” It’s odd, then, that Zuckerberg told Recode’s Kara Swisher in 2018, “I don’t think that we should be in the business of having people at Facebook who are deciding what is true and what isn’t,” when that is precisely the business he is in. (Facebook denied claims that it had allowed curators to manipulate trending stories.)
Zuckerberg’s aversion to controlling content is the logical conclusion of the beloved Silicon Valley parable about how “information wants to be free,” a phrase attributed to Stewart Brand, the original tech hippie and founder of the Whole Earth Catalog. Interpreted another way, of course, it meant the abolition of anything remotely like intellectual property—which was welcomed as progress in the new world of perpetual sharing and connecting.
According to Facebook’s value system, it doesn’t matter if the content is true, manufactured, or intentionally false. Content is content because content is data, which is the only thing Facebook can translate into profit for itself, and thus the only thing that matters. Truth or falsehood is of no concern so long as the content keeps people engaged with Facebook. Describing Facebook’s decision not to remove a video of House Speaker Nancy Pelosi that had been doctored to make her appear to be slurring her words, Atlantic contributor Ian Bogost noted: “The problem is, a business like Facebook doesn’t believe in fakes. For it, a video is real so long as it’s content. And everything is content.”
Facebook did attach a milquetoast warning to the doctored Pelosi video, but, as Monika Bickert, Facebook’s vice president of product policy, explained to CNN: “The conversation on Facebook, on Twitter, offline as well, is about the video being manipulated as evidenced by my appearance today. This is the conversation.”
But it doesn’t have to be.
Facebook’s power could be reined in. Limits could be placed on its interference in elections and its manipulation of voters, its violations of user privacy, and its enabling of extremism and violence and misinformation. We could be having better conversations.
Part Six: “Facebook Is More Like a Government”
Today, Facebook’s annual revenue is approximately $56 billion; according to CNBC, it boasts 1.52 billion daily active users and 2.32 billion monthly active users. If you include all of Facebook’s platforms (such as WhatsApp, Instagram, and Messenger), more than 2 billion people use its services every day. According to a Pew Research survey from 2017, 67 percent of American adults get at least some of their news from social media. Excluding China, Facebook controls four out of the top five social-media platforms on earth. By nearly any economic measurement, Facebook is a global success.
By nearly any measurement, it is also a predatory monopoly.
Not surprisingly, Facebook shows no interest in changing anything about its core business model in a way that might address consumers’ and regulators’ concerns. “Success should not be penalized,” Nick Clegg, Facebook’s recently appointed vice president for global affairs and communications, wrote in a New York Times opinion piece. “Our success has given billions of people around the globe access to new ways of communicating with one another.”
He’s right; the government shouldn’t penalize private companies for doing what they do very well. But what Facebook is doing isn’t what traditional companies have ever done, so perhaps the old rules can’t apply.
Traditional anti-trust theory suggests that because companies like Facebook offer services “free” to the consumer, there can be no harm to the consumer because there are no price impacts on him. This fails to assess Facebook’s impact on innovation and competition in the marketplace. As Lina Khan argued in a seminal Yale Law Journal article about Amazon: “Antitrust law and competition policy should promote not welfare but competitive markets. By refocusing attention back on process and structure, this approach would be faithful to the legislative history of major antitrust laws. It would also promote actual competition—unlike the present framework, which is overseeing concentrations of power that risk precluding real competition.”
Cash-rich companies such as Facebook dominate the market not by crushing rivals or overcharging consumers (because Facebook is “free”), but by buying them. Would-be social-media entrepreneurs in Silicon Valley today don’t dream of being the next Facebook; that’s impossible under current conditions. They can only hope to be bought by Facebook.
Facebook has responded to increasing pressure not by taking a hard look at the structure of its platform and its downstream effects, but by announcing that the company is now embracing privacy as a core value. In March, Zuckerberg released a “privacy-focused vision for social networking” that outlined Facebook’s future plans for protecting the privacy of its users. Close readers of the new vision noticed that it was heavy on promises and light on fundamental changes to Facebook’s business model.
This prompted Missouri Senator Josh Hawley to send a letter to Zuckerberg requesting more detail about Facebook’s “pivot to privacy,” given the company’s business model, which is built on harvesting as much data as possible from its users. Facebook refused to engage Hawley’s concerns. “I am frankly shocked by Facebook’s response,” Hawley said in a statement issued by his office. “I thought they’d swear off the creepier possibilities I raised. But instead, they doubled down.”
It’s a warning we should heed. Because it has no meaningful competitors, Facebook has no checks and balances on its reach and power at a time when its leaders have announced ambitious plans to expand the platform’s reach around the world. “Facebook is more like a government than a traditional company,” Zuckerberg told Time magazine a few years ago. “We have this large community of people, and more than other technology companies we’re really setting policies.”
But Facebook promotes the policies of the engineer, not the statesman. According to the Guardian, Zuckerberg once told a group of software developers, “You know, I’m an engineer, and I think a key part of the engineering mindset is this hope and this belief that you can take any system that’s out there and make it much, much better than it is today.” Things don’t always go as the engineer intends, however. Zuckerberg was appropriately criticized in 2017 when he showed off the company’s Oculus Rift virtual-reality headsets by launching a virtual tour of hurricane-ravaged Puerto Rico. “One of the things that’s really magical about VR is you can get the feeling you’re really in a place,” Zuckerberg’s grinning avatar said while “scenes of flooding and destruction” unspooled behind him, according to the Guardian.
One way Facebook plans to make the world much, much better is through the expansion of its Internet.org program, which Zuckerberg launched in 2013 by declaring that connectivity was a “human right.” Facebook claims it has spent more than $1 billion “to connect people in the developing world”—in Myanmar, across Africa, and, until regulators there banned the program, in India. But as David Talbot at MIT Technology Review noted, Zuckerberg prefers to speak in vaguely uplifting terms of money spent while avoiding any discussion of specifics. Of the $1 billion spent, Talbot asks, “Spent on what, to connect whom, and to what? . . . On closer inspection, that statement apparently means ‘connect people to Facebook.’”
Facebook also reportedly plans to release a bitcoin-style global currency, which some people are calling GlobalCoin, in 2020, and has been talking to bankers, creditors, online retailers, and regulators around the world about accepting it. According to the BBC, Facebook “is hoping to disrupt existing networks by breaking down financial barriers, competing with banks and reducing consumer costs.” The Economist reported that Zuckerberg has told software developers, “It should be as easy to send money to someone as it is to send a photo.”
Or to send robots: In May 2019, Facebook filed a European patent application for an “emotionally sensitive robot.” As one A.I. researcher at Facebook told Wired of the company’s efforts to teach a six-legged robot to walk, “what we wanted to try out is to instill this notion of curiosity” in Facebook’s robots.
Despite all of these globally ambitious projects, Facebook’s leaders show no curiosity about the dangers posed by the company’s immensity. Facebook has already announced plans to further integrate all of its services—including WhatsApp, Instagram, and Facebook Messenger—to allow for greater data consolidation and, one assumes, profitability. The move prompted the founders of Instagram to leave Facebook over their differences with Zuckerberg’s vision. In 2017, Brian Acton, a co-founder of WhatsApp, also left Facebook in protest over Zuckerberg’s plans to “prioritize monetization over user privacy,” according to the Verge, and he has repeatedly urged people to stop using Facebook.
Other companies—even big-tech companies such as Amazon—pose similar challenges with regard to their size and reach. But at least Amazon, which sells consumers products and services, has some incentive to innovate and improve those services to retain its customers. Because Facebook captures your attention in order to sell it to others, any innovations it creates don’t redound to the individual user except as ephemeral on-site efficiencies; instead, they benefit Facebook.
Part Seven: “We Don’t Want Our Services to Be Used to Manipulate People”
Taming Facebook is a bipartisan challenge, because Facebook’s downstream negative effects are shared across the ideological divide. Whatever fixes Facebook made after the 2016 election to prevent foreign agents from using its platform to undermine elections don’t seem to be working. In late May, for example, Facebook announced that an outside cybersecurity firm called FireEye had alerted the company to potentially nefarious activity on the site by foreign agents. Facebook announced that it had removed “51 accounts, 36 pages, 7 groups, and 3 Instagram accounts involved in coordinated inauthentic behavior that originated in Iran.”
Although Facebook didn’t mention it in its statement, FireEye issued its own report that noted the accounts used “fake American personas that espoused both progressive and conservative political stances” and that some “impersonated real American individuals, including a handful of Republican political candidates that ran for House of Representatives seats in 2018.”
Facebook offered its standard sorry-not-sorry defense: “We’re constantly working to detect and stop this type of activity because we don’t want our services to be used to manipulate people.” Saying you “don’t want” your services to be used to manipulate people isn’t the same thing as taking responsibility for the mistake and committing to successfully preventing it from happening in the future. Hospitals “don’t want” patients to get sick in the hospital, but if the way a hospital is administered puts patients at greater risk of complications, it’s reasonable to assume the hospital would change its practices.
But Facebook’s priority is protecting its business model and profits, not protecting its users from attempts at manipulation by adversarial foreign governments trying to undermine our democracy. This is why Zuckerberg continues to talk about his creation with Dr. Frankenstein–like obliviousness, as if Facebook were merely the inevitable manifestation of a progressive new vision of technology-enabled global connectedness that he has created and that everyone should agree is all for the good. Meanwhile, his creation, now full-grown, lurches around frightening the villagers, and all Zuckerberg can say in response is that he’s excited to see that villager “engagement” is high.
It is true that among many users of Facebook, engagement is still high, and lots of people continue to benefit individually from their use of the platform. But collectively, the harms caused by Facebook now outweigh the benefits. Because the company exercises a form of alternative power that can influence our democratic institutions, it affects all of us—even those who don’t use it. Given its scale and complexity, Facebook can’t regulate itself or solve the problems that it has created.
The same architecture and business model that have made Facebook an extraordinary success have had deleterious effects on elections, debate, and free-market competition. Whether you are an advertiser intent on persuading someone to buy batteries or an authoritarian regime trying to undermine an election or an extremist intent on killing as many people as possible, Facebook will work for you. As Vaidhyanathan put it, “the problem with Facebook is Facebook.”
This wasn’t how it was supposed to be. As Shoshana Zuboff, author of The Age of Surveillance Capitalism, told the New York Times, something significant changed from Facebook’s (and Google’s) founding to today: “We saw these digital services were free, and we thought, you know, ‘We’re making a reasonable trade-off with giving them valuable data.’ But now that’s reversed. They’ve decided that we’re free, that they can take our experience for free and translate it into behavioral data. And so we are just the source of raw material.”
Part Eight: “Don’t Worry About Making Mistakes Too Much”
In the 2014 town-hall meeting, Zuckerberg was asked what advice he would give someone who wanted to start a new business. “Don’t worry about making mistakes too much,” he said. “The number one question I’m asked is, what mistakes do you wish you could have avoided? I really don’t think that’s the right question; that’s how you learn. The real question is how you learn from them, and not which things you can avoid.”
Learning from one’s mistakes is, of course, a useful process. But Zuckerberg has done something else: He assumes that the mistakes Facebook makes are external or unrelated to the core principles and architecture of Facebook itself, when in fact they are the logical conclusion of it. With each new grandiose announcement of his plans for Facebook, Zuckerberg seems to inch further and further into a kind of technology-induced Walter Mitty syndrome.
For too long, Facebook’s leaders have avoided the negative political, social, and cultural effects Facebook has had on the world. Zuckerberg has long insisted that Facebook is a “global community,” as if a community is a morally neutral unit. Does ticking a box agreeing to Facebook’s deliberately opaque terms of service constitute membership in a “community”? If so, then terrorists organizing attacks on innocent people using WhatsApp, and Russians seeding people’s News Feeds with fake stories meant to incite them, and bored employees sharing cat videos are all members of the same “community.” Even if you share Zuckerberg’s belief that Facebook is a community, communities are built on trust, and Facebook has lost ours.
Its power must be tamed.
How? First, antitrust action must be taken to force Facebook to divest itself of WhatsApp, Instagram, and Facebook Messenger so that they can function as independent, stand-alone businesses that compete with Facebook and one another.
Like the breakup of AT&T in the 20th century, which led to further innovations in the telecommunications industry, a breakup of Facebook could spur innovation and competition in the social-media landscape and end Facebook’s unfair exercise of monopoly power. Facebook has made it impossible for competition to flourish. Even if half of Facebook’s users closed their accounts tomorrow, they would have nowhere else to go. We need legitimate alternatives to Facebook.
Fearful of antitrust action, Facebook executives have taken to raising the threat of Chinese social-media companies as an argument against breakup or regulation. In an interview in May on CNBC, Sheryl Sandberg claimed Facebook was committed to earning back people’s trust but also warned, “While people are concerned with the size and power of tech companies, there’s also a concern in the United States with the size and power of Chinese companies, and the realization that those companies are not going to be broken up.”
But raising the specter of WeChat is a distraction. Antitrust action such as requiring Facebook to relinquish control of Instagram and WhatsApp wouldn’t mean the destruction of those services or a takeover of the social-media sector by Chinese companies. It would mean the beginning of genuine competition. Competition would blunt the worst tendencies of Facebook while continuing to protect free-speech rights. As Glenn Harlan Reynolds argues in his new book, The Social Media Upheaval, many of the problems created by social-media platforms, such as polarization and disinformation, could be solved with competition. “If Twitter or Facebook were competing with five or ten other similar services, or maybe even two or three,” Reynolds writes, “this sort of thing would be more likely to damp out, after the fashion of the old, loosely coupled blogosphere.” Competition would also “promote greater attention to matters of privacy, algorithmic integrity, and so on because users could more easily leave for another service.”
This solution would also sidestep the free-speech questions that inevitably arise in any attempt to moderate content on these platforms—a task that is, in practice, impossible at the scale of a platform like Facebook, even with sophisticated A.I.
Second, the United States needs stronger data-privacy and -protection laws, including laws that grant users access to the data dossiers that Facebook has compiled on them and the option to deny Facebook the ability to share that data with third parties unless explicit permission (as opposed to byzantine terms of service agreements) is given. There are plentiful models for such laws, most notably data- and privacy-protection laws now in force in Europe. Another proposal would have Facebook safeguard user information the same way that other “information fiduciaries” such as lawyers and medical providers and financial advisers do for their clients, and face fines and other punishments if it did not.
Third, we need stronger and more consistent enforcement by the Federal Trade Commission of the existing agreements it has signed with Facebook, and harsher punishments when Facebook is found to be in breach of those agreements (which, if history is any guide, will be often). The time for symbolic punishments and slaps on the wrist and weakly enforced consent agreements is long past. At the beginning of June, the FTC announced that it was planning an antitrust investigation of Facebook, which suggests that Facebook’s long honeymoon period with regulators might finally be over.
Fourth, Facebook’s users should reconsider giving their attention and data to a company that is at best amoral and at worst actively harmful in its effects. Young people are already leaving the platform; the Wall Street Journal reported that Edison Research found Facebook had lost approximately 15 million users since 2017, and that “most of these were in the coveted 12-to-34-year-old demographic.” (Some of these younger users are probably on Instagram, however, so they are still part of the Facebook ecosystem.)
It is true that individual protests against Facebook have only a symbolic effect at this point. Even if every American user of Facebook quit the platform tomorrow, it wouldn’t end the company’s dominance; Facebook now has more users outside the U.S. than in it. But as recent studies of Facebook use attest, quitting might be healthier for us as individuals even if it has no impact on the company at large.
One of the most thorough recent studies of Facebook use by researchers at Stanford University and New York University offered some intriguing clues about how quitting Facebook affects well-being. Study participants were paid to abstain from using Facebook for a month. As the New York Times reported, abstainers “freed about an hour a day on average, and more than twice that for heavy users. They also spent more time offline, including with friends and family.” There is some evidence that the abstainers also scored lower on measures of political polarization and slightly increased their feelings of well-being. Those who abstained for the experiment ended up using Facebook less when the study ended.
What won’t work is trusting Facebook to control its own worst impulses, because Facebook’s leaders refuse to acknowledge what their platform has become and have no incentive to control or minimize its corrosive effects on its users and on society.
In The Life of the Mind, Hannah Arendt observed, “The sad truth is that most evil is done by people who never make up their minds to be either good or evil.” Facebook’s leadership will continue to insist that Facebook is a neutral platform for connecting people and thus a force for good. We now know otherwise, and it’s time we make up our minds and do something about it.