Can We Fix What’s Wrong with Social Media?


Yale Law School alumni, faculty, and students are grappling with some of the most difficult questions in the online environment.


As the daughter of immigrants from Pakistan, Nabiha Syed ’10 had been awed — and disturbed — by the power of mainstream media outlets to craft narratives about Muslim Americans in the wake of the terrorist attacks of Sept. 11, 2001. At Yale Law School, one of her final papers was about the “liberating effects” of blogs, which allow people to speak for themselves by sharing their perspectives directly with the world.

She doubled down on those ideas at the University of Oxford, writing her thesis on a relatively new website called WikiLeaks and even interviewing its now infamous founder, Julian Assange, in the process. She was intrigued by the internet’s ability to negate the need for “gatekeepers,” like journalists.

“I was excited,” she recalled. “When truth is filtered through traditional institutions, some kinds of truths never make it.”

But she became wary of the WikiLeaks strategy of indiscriminately releasing information it obtained. “I thought, ‘Who’s verifying this? Who’s doing the work of context-building that you rely on journalists to do?’” she said. “It was my first moment thinking, ‘None of this is going to work in the way we think it will.’”

She was right.

The internet changed everything; then, social media changed the internet. Aspirational ideas about free speech and democratic access to information and power are wavering in the online environment. The digital world has opened up space for communities of all shapes and sizes to bond and build connections; it has also proven susceptible to misinformation, disinformation, hate speech, and a host of other problems.

“No part of the First Amendment classes I took in law school captured this,” said Syed, President of The Markup, a nonprofit news outlet that conducts data-driven investigations of new technologies. “Which prompts the question: Did we build the right theories for the people we are, under the technologies we use? It feels like the answer is not yet.”

Looking for Governance Models

Questions about how social media companies should handle potentially harmful content — how they should govern themselves and be governed — are now ubiquitous, with calls for reform coming from Congress and around the globe. That debate is also playing out within the companies themselves. Part of The Markup’s job is to examine such efforts. Last year, for example, it published a series of stories about Google’s attempts to prevent advertisers from linking to hate speech on YouTube. Reporters found that many of the terms on the list — including “white power” — were not actually blocked.

Meanwhile, the company’s blocklist did prevent advertisers from using social and racial justice terms such as “Black Lives Matter,” “Black Power,” “sex work,” and “American Muslim.”

“It’s not our job to ascribe intent” but to “surface” inconsistencies, Syed explained. “I don’t know why you can’t get things for Black Lives Matter, but someone should have to answer for that.”
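The Markup’s approach rests on this kind of systematic comparison between what a platform says it does and what it actually does. Below is a minimal Python sketch of how such an audit might be organized; it is purely illustrative, assuming a hypothetical audit_blocklist helper and probe callable rather than any real Google or YouTube interface, and it is not The Markup’s actual methodology.

```python
from typing import Callable


def audit_blocklist(
    claimed_blocked: list[str],      # terms the platform says are on its blocklist
    control_terms: list[str],        # terms an advertiser would expect to be usable
    probe: Callable[[str], bool],    # returns True if the ad tool rejects the term
) -> dict[str, list[str]]:
    """Report inconsistencies between a stated blocklist and observed behavior."""
    return {
        # Terms the platform claims to block but the probe shows are allowed.
        "claimed_but_not_blocked": [t for t in claimed_blocked if not probe(t)],
        # Terms not on the stated blocklist that the probe shows are rejected.
        "blocked_without_notice": [t for t in control_terms if probe(t)],
    }


if __name__ == "__main__":
    # Fake probe standing in for the advertiser-facing interface a real audit would drive.
    observed = {"white power": False, "Black Lives Matter": True}
    print(audit_blocklist(
        claimed_blocked=["white power"],
        control_terms=["Black Lives Matter"],
        probe=lambda term: observed.get(term, False),
    ))
    # -> {'claimed_but_not_blocked': ['white power'],
    #     'blocked_without_notice': ['Black Lives Matter']}
```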

So far, one of the highest profile efforts at balancing speech and governance has been Facebook’s Oversight Board, which launched in 2020. The stated purpose of the board is to “promote free expression” by making decisions about whether Facebook and Instagram content can stay up or must come down. Sterling Professor of Law and former Dean Robert C. Post ’77 was an early advisor on the project and now serves as a trustee.

Post, who specializes in constitutional law with an emphasis on the First Amendment, said the Oversight Board appealed to him because it represented a “third way” of approaching online content regulation, distinct both from the view of platforms as an extension of the government, under which regulation would “be virtually precluded,” and from one that leaves everything up to “the entirely arbitrary control of profit-making corporations.”

The Oversight Board, Post said, “held the potential to infuse private social media platforms with public law values.”

“This seemed to me an experiment worth trying,” he said.

One of the Oversight Board’s first cases — referred from Facebook — was a review of the company’s decision to indefinitely suspend former President Donald Trump for two posts he made during the attack on the U.S. Capitol on Jan. 6, 2021. The first was a 161-word video, saying, among other things, “I know your pain. I know you’re hurt. We had an election that was stolen from us…but you have to go home now…”

Facebook removed that post for violating its Community Standards on Dangerous Individuals and Organizations, which states, “We do not allow organizations or individuals that proclaim a violent mission or are engaged in violence to have a presence on Facebook.”

Later, Trump posted: “These are the things and events that happen when a sacred landslide election victory is so unceremoniously viciously stripped away from great patriots who have been badly unfairly treated for so long. Go home with love in peace. Remember this day forever!”

That post was up for eight minutes before Facebook removed it for violating the same standard.

After reviewing the case, the board upheld the decision to restrict Trump’s access but found that the penalty’s indefinite length was inappropriate. In response, Facebook announced new enforcement protocols and amended the suspension to two years.

The case drew enormous attention around the world, highlighting a problem that was years in the making: how to manage the power of social media to amplify harmful messages. And Facebook acknowledged as much, writing in its response that the matter “confirms our view that Facebook shouldn’t be making so many decisions about content by ourselves.”

Jennifer Broxmeyer ’09 leads Facebook’s work on the Oversight Board as Director of Governance at Facebook’s parent company, Meta.

“There was just a real recognition around that time that we needed a new system of governance,” she recalled. “It’s sort of an accident that these private companies ended up responsible for the speech of billions of people. The models we have just don’t work. It can’t be right that a couple of people in China or Menlo Park or Mountain View should be making decisions [about content] on behalf of the world.”

Broxmeyer said the Oversight Board, which consists of 20 global experts and civic leaders, will continue to evolve. Originally, for instance, it was analogized to a sort of Supreme Court for Facebook, designed to weigh individual expressions of speech against any potential harms. But in practice, it has also played the role of a pseudo-regulatory agency, weighing in — sometimes at Facebook’s request — on policy matters such as the sharing of private residential information and cross-check, one of the company’s systems for deciding when to remove content.

“The board gives us wide-ranging recommendations on changes to our policies, products, and processes, many of which overlap with proposed regulations,” Broxmeyer explained, adding that Facebook has also been thinking internally about how the board can “weigh in on product decisions,” such as how to rebuild Facebook’s penalties system.

Broxmeyer said the mere existence of the board — and Facebook’s commitment to it, including in the form of a $130 million irrevocable trust to ensure its independence — is “incredible.” And even some of the company’s fiercest critics agree.

“I do think it’s an innovation…it’s hard to say that the board is just a PR campaign and isn’t having some systemic impact on Facebook,” prominent online speech expert Evelyn Douek told Broxmeyer during an episode of The Lawfare Podcast in August 2021. “Do I think it could be more, do I think that it should be more, and that the remit should be expanded?…Absolutely, and I will keep being grouchy about it, but I do think it’s also fair to say that no other company has stepped up.”

With its community standards and the new Oversight Board, Broxmeyer said, Facebook is “basically creating new paradigms for governance of online speech.”

“How do you set rules about what is and isn’t allowed online, given the vast differences between ‘real-world’ speech and online speech, which can be amplified or viewed by billions of people around the world?” she asked.

The company isn’t pretending it can come up with answers on its own. When she started in her role with the board, Broxmeyer had only a couple of staffers borrowed from other departments. Today, she has around 50 people working for her, including an entire team responsible for keeping track of scholarship about best practices for handling speech and other issues on Meta’s platforms.

“We need government participation, we need civil society, we need academics,” Broxmeyer said. The problem is “too hard — and frankly too important — to solve on our own.”

Fresh Thinking about Behavior Online

At Yale Law School, questions about social media governance are being explored in a variety of ways, including through the Social Media Governance Initiative (SMGI) at the Law School’s Justice Collaboratory. Founded in 2015, the Justice Collaboratory seeks to advance evidence-based public policies that build strong and safe communities, including policies that promote positive behaviors rather than punish negative ones.

SMGI applies that same framework to online communities, starting from the premise that social media is generally good for society rather than generally bad. Its research is based on evidence that most people follow the rules, not because anyone tells them to or because they’ll be punished if they don’t, but because they believe it’s the right thing to do.

Many social media governance models don’t take that reality into account. Instead, they focus on finding and punishing the bad actors.

“We want to promote desirable behaviors,” said Macklin Fleming Professor of Law and Professor of Psychology Tom R. Tyler, founding codirector of the Justice Collaboratory. “Part of that is traditional regulation — the stopping of bad things. But we’re also very interested in promoting good things.”

SMGI faculty and students have conducted studies on Facebook, Twitter, and Nextdoor, among other companies. The initiative also offers feedback to platforms. When Facebook was collecting input on its Oversight Board, for instance, SMGI argued for more user involvement in its content moderation efforts.

“We were lobbying Mark Zuckerberg in a different direction than he went,” Tyler said.

This spring, Tyler and SMGI Director Matt Katsaros taught a new SMGI lab in which students wrote policy papers based on their interviews with people working at social media platforms. The students examined what’s wrong with current content moderation practices and what models might work better. The topics included how different companies address gendered harassment, how platforms communicate their rules to users, and technology ethics.

“We are on the cutting edge of efforts to think more systematically about getting ahead of these problems,” Tyler said.

Confronting the Business Model

Part of what is problematic about internet content is the business model that supports most of it: Many social media platforms are free, but users pay for that access with their privacy. Platforms make money by using detailed data about their users to sell advertisers the ability to target their marketing campaigns at the most relevant people.

Syed has examined this model in her research. In 2017, she wrote “Real Talk about Fake News: Towards a Better Theory for Platform Governance,” a piece for the Yale Law Journal juxtaposing the realities of online speech with traditional First Amendment theories, including the so-called marketplace of ideas model, which maintains that a pro-speech environment will eventually produce the truth. That theory isn’t helpful, she argues, in a world driven by algorithms in which the most provocative content pays, and prevails — whether it’s true or not.
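To make that dynamic concrete, here is a toy sketch in Python; the post texts and engagement scores are invented for illustration, and no real platform’s ranking code is shown. It models a feed sorted purely by predicted engagement, an objective that contains no term for accuracy.

```python
# Toy illustration: an engagement-only ranking never consults accuracy.
posts = [
    {"text": "carefully sourced report", "predicted_engagement": 0.12, "accurate": True},
    {"text": "outrage-bait rumor",       "predicted_engagement": 0.87, "accurate": False},
    {"text": "measured correction",      "predicted_engagement": 0.05, "accurate": True},
]

# Rank the feed by predicted engagement alone; truth never enters the sort key.
feed = sorted(posts, key=lambda p: p["predicted_engagement"], reverse=True)

for post in feed:
    print(f'{post["predicted_engagement"]:.2f}  accurate={post["accurate"]}  {post["text"]}')
# The rumor tops the feed: under this objective, the most provocative content
# prevails whether or not it is true.
```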

In her article, Syed cited the work of Knight Professor of Constitutional Law and the First Amendment Jack M. Balkin, among others.

Balkin founded and directs Yale Law School’s Information Society Project (ISP), an interdisciplinary center that explores issues at the intersection of law and technology. ISP is also home to the Abrams Institute’s Media Freedom and Information Access Clinic, which filed a lawsuit last year against an outlet that repeatedly published false stories. Balkin has written extensively about ways to effectively regulate social media, including by encouraging and facilitating new competitors in the digital sphere and through new antitrust, privacy, and consumer protection laws.

“If you want to reform social media, you have to reform the basic business practices these companies have developed over the years,” Balkin said. There’s no “silver bullet” for the challenges that come from providing access to the largest speech platforms in human history while also protecting people’s safety and the integrity of democratic processes, Balkin added. For instance, he said Facebook’s Oversight Board is “nice” but ultimately inadequate for the scale of the problem.

“Imagine an enormous, hideous beast,” he said. “And atop this beast is a tiny gorgeous hat with a beautiful flower. What’s wrong with the hat? Nothing. The hat is lovely — the flower is beautiful — but it’s sitting on top of this huge beast.”

Syed agreed, calling the Oversight Board “a wonderful experiment.”

“It’s not going to be the solution,” she said. “I don’t think they think it’s the solution, but it is a show of good-faith effort internally to try to move toward a solution.”

Indeed, there are no obvious answers to questions about online content governance, even though the questions are coming from the very highest levels.

After she wrote her 2017 Yale Law Journal article, Syed had the chance to brief former President Barack Obama on her argument that existing free speech models have failed to account for the social media era.

“It was terrifying and very exciting,” she recalled of the experience. Then Obama asked her what a better model would be. Syed didn’t have an answer for him. But she hopes to, eventually.

“For my next act, I want to return to something more scholarly,” she said. “We need to constrain the technology — or at least interrogate it — then craft theories that strike the balance of what we’re seeing.”