Should Facebook and Twitter Be Regulated Under the First Amendment?
Donald Trump’s Twitter account now has 40 million followers. It ranks 21st worldwide among 281.3 million or so accounts. It’s no secret that Trump is proud of his ability to use the account to communicate directly with his constituents. This summer, the president tweeted, “My use of social media is not Presidential—it’s MODERN DAY PRESIDENTIAL.” He meant that his tweets are official statements of the president of the United States. The National Archives concurs: It says they must be preserved under the Presidential Records Act. When Trump’s aides have tweeted about the president’s agenda, they have referred to it as the agenda of @realDonaldTrump.
Trump has used Twitter to announce his plan to ban transgender people from the military, to blow air kisses to Russia’s Vladimir Putin, to attack the so-called fake media’s coverage of his administration, and to assert his version of the facts (“No WH chaos!”). Almost every week, he takes to Twitter to feud with new foes: NFL players, for kneeling during the national anthem; the mayor of San Juan, Puerto Rico, for deriding the federal government’s sluggish response to the island’s dire needs after the monstrous Hurricane Maria; Senator Bob Corker of Tennessee, for tweeting, “It’s a shame the White House has become an adult day care center.” Trump’s following climbs about 80,000 a day—a rate of almost 30 million add-ons a year.
Anyone with a Twitter account can follow the president.
Well, almost anyone.
In June, Rebecca Buckwalter-Poza, a writer and legal analyst in Washington, DC, was blocked from reading and replying to the president’s account and from reading other related comments. This happened after Trump tweeted, “Sorry folks, but if I would have relied on the Fake News of CNN, NBC, ABC, CBS washpost or nytimes, I would have had ZERO chance winning WH,” and Buckwalter-Poza tweeted back, “To be fair you didn’t win the WH: Russia won it for you.” Also in June, a police officer in Houston, Texas, named Brandon Neely was similarly blocked after the president tweeted, “Congratulations! First new Coal Mine of Trump Era Opens in Pennsylvania,” and Neely replied, “Congrats and now black lung won’t be covered under #TrumpCare.” Many other people have been blocked, apparently for similar dissents.
This raises the question: Are social media platforms like Twitter subject to the First Amendment? Is there a right to free speech on platforms owned by private corporations?
The Knight First Amendment Institute thinks so. In July, the institute sued the president, his director of social media, and his press secretary to unblock the blocked. By blocking these users based on the views they expressed about his tweets, the institute argues, Trump violated their right to free speech. (I am affiliated with the institute, but not directly involved in the lawsuit.) Two weeks ago, as part of this litigation, lawyers for the president acknowledged that he personally blocked the Twitter users “because the Individual Plaintiffs posted tweets that criticized the president or his policies”—what free speech law calls “viewpoint discrimination.” In places where the First Amendment applies—such as public forums—it bars the government or its officials from such bias.
The president’s Twitter account is not a traditional public forum, like a town hall or public park, where citizens are said to exchange views in the “marketplace of ideas” on which, it’s also said, democracy depends. In those forums, the government can restrict speech based on its content only if the restriction serves a strong interest of the government, like preventing violence. But here’s the thing: In an age when so much public discourse happens on platforms like Twitter, @realDonaldTrump should be subject to the same strict standard as a designated, or limited, public forum used for expressing views of the president.
Otherwise, the Knight Institute argues, the government could turn the marketplace of ideas into an echo chamber, where the only opinions heard are favorable to the president and his administration. That would contradict the bedrock idea of the First Amendment about free speech, which Justice William J. Brennan Jr. summarized 53 years ago, in New York Times v. Sullivan, as “the principle that debate on public issues should be uninhibited, robust, and wide-open, and that it may well include vehement, caustic, and sometimes unpleasantly sharp attacks on government and public officials.”
In early June, a month before the Knight Institute filed its lawsuit, the group wrote a widely publicized letter to Trump asking him to unblock the accounts of its clients and others blocked for similar reasons. Some of the country’s leading constitutional scholars responded, explaining that they thought the institute’s legal argument was wrong. One wrote that @realDonaldTrump is a personal account—“the work of Trump-the-man (albeit a man to whom people pay attention because he is president), just as it was before November [of 2016], and not Trump-the-president. His decisions about that account are therefore not constrained by the First Amendment.”
Harvard Law School’s Noah Feldman added his voice to the dissenters. “There’s no right to free speech on Twitter,” he asserted. “The only rule is that Twitter Inc. gets to decide who speaks and listens—which is its right under the First Amendment. If Twitter wants to block Trump, it can. If Trump wants to block followers, he can. Trump’s account can’t be a ‘designated public forum,’ as the center claims, because it isn’t public at all. Rather, Trump’s account is a stream of communication that’s wholly owned by Twitter, a private company with First Amendment rights of its own.”
The institute replied that the “fact that Twitter is a private company doesn’t mean the First Amendment is inapplicable to President Trump’s Twitter account. The key question is whether the president has opened up a forum for expressive activity to the public.” This view is about the account’s function and the president’s use of it, not Twitter’s form as a company. The lawsuit is against the government, not Twitter.
The Supreme Court indirectly supported that view in late June. It struck down a North Carolina law that made it a felony for a registered sex offender “to access a commercial social networking Web site where the sex offender knows that the site permits minor children to become members or to create or maintain personal Web pages.” The law, Justice Anthony M. Kennedy wrote for a majority of the Court, violated a “fundamental principle of the First Amendment”—namely “that all persons have access to places where they can speak and listen, and then, after reflection, speak and listen once more.” Now, Kennedy wrote, quoting a prior Court opinion, the most important of those places is “cyberspace—the ‘vast democratic forums of the internet’ in general, and social media in particular.”
Still, Feldman’s argument is significant because it reflects the dominant view of free speech law, which builds on that law’s fundamental purpose. As part of the Bill of Rights, whose role is to protect the rights of individuals against incursions of the government, the First Amendment and its clause protecting “the freedom of speech” do so only by prohibiting government action restricting that freedom. Last year, Floyd Abrams, a venerated lawyer and advocate for free speech, published a book called The Soul of the First Amendment. The soul, he says, is “anticensorial,” against all but narrowly defined “government interference with and control over free expression” (I added the italics for emphasis).
In the 21st century, however, that view addresses only some of the major challenges to free speech. It doesn’t address ones posed by Twitter, Facebook, and other social media as new colossuses of communications: About seven of every 10 American adults used at least one social media site in 2016. Among lawyers, scholars, and activists who focus on the First Amendment and its safeguards of free speech, one of the most divisive and pressing questions is whether America’s understanding of them must change radically, so the “vast democratic forums of the internet” don’t drown the country’s system of government. As the legal scholar Tim Wu explains in a provocative new essay, “it is no longer speech itself that is scarce, but the attention of listeners.” Fake news and its handmaidens—propaganda robots and paid trolls—are the enemy of free speech because, as Wu says, they use “‘flooding’ tactics (sometimes called ‘reverse censorship’) that distort or drown out” speech.
A hallmark of American free speech law is that it prohibits the government from censoring or punishing hate speech. In public forums, the law allows hate speech and expressions of hate—verbal attacks on homosexuals near the site of a funeral of a military veteran, burning a cross on the lawn of an African American couple, calls for the overthrow of the US government by a member of the Ku Klux Klan. It’s all protected under the Constitution.
When the Supreme Court reaffirmed this view in June, Justice Samuel A. Alito Jr. wrote for a majority of the Court that “speech that demeans on the basis of race, ethnicity, gender, religion, age, disability, or any other similar ground is hateful; but the proudest boast of our free speech jurisprudence is that we protect the freedom to express ‘the thought that we hate.’” (The internal quotation is from the great Justice Oliver Wendell Holmes Jr., the architect of that jurisprudence.)
But Twitter, Facebook, and other social media platforms treat information and opinion very differently from how the Supreme Court says the government must under the First Amendment. Facebook, for example, explicitly bans hate speech. “Our mission is to give people the power to build community and bring the world closer together,” reads the site’s community standards page. To encourage respectful behavior, “Facebook removes hate speech, which includes content that directly attacks people based on their: race, ethnicity, national origin, religious affiliation, sexual orientation, sex, gender, or gender identity, or serious disabilities or diseases.” It does not allow organizations and people to use Facebook if they are “dedicated to promoting hatred against these protected groups.”
Applying these standards fairly and uniformly is not as simple as Facebook’s statement on community standards would suggest. In June, ProPublica published an investigation into how the company has applied its rules, an exposé of “the secret guidelines that Facebook’s censors use to distinguish between hate speech and legitimate political expression.” Among other things, the article revealed a Facebook training document that included a slide titled “Quiz!” It asked which of three groups should be protected against slurs and harsh language: female drivers, black children, or white men. The correct answer was white men, because “white” is an ethnicity and “men” is a gender, and both traits were protected. Since “female” was a protected trait but “drivers” weren’t, and “black” was a protected trait but “children” weren’t, attacks on female drivers and black children could stand.
As of September, about two billion people used Facebook every month, up 17 percent from the year before, and 1.3 billion people—one-sixth of the world’s population—used the platform every day. Facebook deletes about 66,000 hate speech posts a month worldwide, “what may well be the most far-reaching global censorship operation in history,” ProPublica wrote. As Kate Klonick, a PhD candidate at Yale Law School, told ProPublica, those decisions were often strikingly inconsistent. People deemed worthy of protection, she said, were “disproportionately the people who have the power to update the rules” and persuade Facebook to reverse a decision about hate speech.
The Facebook investigation shows a cardinal reason why American law has prided itself on allowing hate speech: It is hard to define. Policing it inevitably involves disapproving one version of speech while letting another pass. And that violates the free speech axiom that, in public discourse, all speech is equal.
Along with Twitter and other social media companies, however, Facebook has been pressed by the European Union and some of its members to combat hate speech, which is illegal in Europe. Speech that degrades, insults, or threatens people because of their race, religion, or ethnicity is prohibited—even if the speech isn’t likely to ignite violence. Two weeks ago, the EU gave social media companies an ultimatum: Take down hate speech faster or get severely fined. In much of Europe, the answer to hate speech is to erase it. In the US, the answer is more speech: The law protects both hate speech and the speech that condemns it, and the two sides are supposed to fight it out in the marketplace of ideas.
In the US, in addition to the fiery debate about whether the First Amendment should protect hate speech, there is a fundamental argument going on about how to handle the excesses of digital communication. The stakes in that debate became enormous during the 2016 presidential election, amid waves of alleged facts that were simply made up, relentless conspiracy theories, and free-flowing propaganda. The destructive effect of these excesses on American democracy has led a growing set of First Amendment scholars to propose new interpretations of the law.
All of this plays out in the context of a law that governs digital media—the 1996 Communications Decency Act. It says that online services like Facebook and Twitter aren’t legally responsible for content posted by their users, even if it’s illegal: If a Facebook user posts something defamatory, the injured person can sue the user, but not Facebook.
The scholars urging a reexamination of this law argue that unless social media platforms are treated as publishers responsible for the content they distribute, enemies of democracy will increasingly use speech on social media as a weapon to attack or suppress truthful speech. The traditional role of the First Amendment is to protect speech about public affairs against coercive control or suppression by the government—to make sure that speech gets heard. But that position isn’t helpful when the culprits are social media outlets owned by private corporations rather than the government.
To be sure, these excesses are not the only cause of democracy’s disruptions. The political parties have weakened as politics has become more polarized; independent spending has made elections more combative and swelled the influence of wealthy, often anonymous, organizations and individuals; forces of racism, populism, nativism, isolationism, and xenophobia have almost overwhelmed the forces of moderation. But the impact of transformations in technology and communications is enormous. A major premise of these recent proposals to reinterpret the First Amendment is what the Supreme Court said in June: Social media are today’s town halls and public parks, where ideas compete for influence on topics “as diverse as human thought.”
For a rapidly growing segment of Americans now uses social media as the primary means of obtaining news. In 2016, the Pew Research Center found that only 20 percent of American adults got their news primarily from newspapers. (Among those 18 to 29, it was only 5 percent.) More people (about half) preferred to watch the news rather than read it. But among readers, most preferred reading online (59 percent) to reading in print (26 percent). As a result, about four out of every 10 adult Americans got their news online. Counting infrequent as well as frequent users, about six out of every 10 got news from social media. The point is: What happens online is enormously important to having an informed citizenry.
And that’s where the hidden influence of supposedly neutral online platforms has attracted the attention of First Amendment revisionists. Olivier Sylvain of Fordham Law School argues that, rather than the “passive conduits” they claim to be, social media platforms are often active shapers of the content posted by their users, “constantly managing the design of their applications in order to structure the manner in which user content gets shared and manipulated by others.” With these algorithms, sometimes called computational propaganda, they are “ever more determinative of online conduct.” The filter they impose through hate speech rules, he explains, is only one of many ways they influence content.
For that reason, he goes on, the broad interpretation that courts have given the Communications Decency Act is often wrong, since it’s based on the mistaken premises that social media platforms are not publishers, speakers, or otherwise producers of content. “Courts,” Sylvain proposes, “should be far more attentive to the designs that determine online content than the prevailing doctrine has allowed to this point. They should shield providers from liability for third-party online conduct only to the extent such providers truly operate as neutral conduits.”
Sylvain concentrates on the meaning of a single statute, in a narrow and lawyerly argument. But the same facts lead other scholars to say that social media companies should be held responsible for content posted on their platforms, just as newspapers, broadcast outlets, and other forms of old media are responsible for what they publish or broadcast. They even suggest that it’s time to rewrite the Communications Decency Act—to impose responsibility on social media for the factual accuracy of content they host.
That would be a radical change in American law. But so was the 1964 Sullivan ruling, which held that a factual error, on its own, was no longer the basis for a libel judgment against a publisher. In his account of the case, Make No Law, Anthony Lewis wrote that “libelous utterances had always been regarded as outside the First Amendment, an exception to ‘the freedom of speech’ it guarantees.” (There are many other exceptions: blackmail; plagiarism; child pornography; harassment in the workplace; and more.) But southern officials were using libel lawsuits against news outlets, most prominently The New York Times, to stop them from covering segregation in the South, because those outlets were “educating the country about the nature of racism.” Lewis wrote that “in the broadest sense the libel suits were a challenge to the principles of the First Amendment.”
The function of news organizations, the Supreme Court held, is so important to American democracy that public officials (the court later expanded the group to include public figures) can win a libel judgment against a news outlet only when they can prove the outlet knew, or suspected, that what it published was false. The court’s (and the country’s) conception of the First Amendment fundamentally changed when it realized that America’s most important news outlets—central to “free political discussion,” in the court’s words, “the very foundation of constitutional government”—were in jeopardy of being silenced by libel judgments. By putting the burden on the defendant in a libel lawsuit to prove the truth of an allegedly defamatory statement, Alabama’s libel law abridged freedom of speech, and of the press in particular, unduly restricting criticism of the conduct of government officials.
The Sullivan ruling is a paean to the press, a high point of court decisions and opinions about the role of journalism in American life. It doesn’t spell out what it and virtually every other influential institution in the country took for granted: The mainstream media, as we now think of it, imposed filters of reporting, fact-checking, and editing in deciding what was newsworthy. While nonmainstream views found niche audiences, most of the content that found large audiences was filtered through the methods and means of respectable journalism.
Today—even with the breakdown of business models in journalism, controversies over whether major outlets should publish information leaked to them from hackers, and the propensity of journalists to overstate the honor of our profession—the content that the mainstream media produces is still filtered through the methods of journalism. But the mainstream media has much less control as gatekeepers, because technologies of digital communication make it possible for anyone to post content on the internet. The vast expanse of content makes it hard even for content creators with well-known names to get attention. Still, a lot of content produced by cranks, rogues, and hackers who don’t fit the model the Supreme Court had in mind in Sullivan finds an audience. Sometimes it’s an enormous audience.
Users of social media are inundated with misinformation, which often spreads virally because of the networks of networks that the platforms generate. The effect is that tens of millions of Americans, across the political spectrum, now believe a universe of things that are demonstrably false. During the 2016 election, some of the most damaging of this was old-fashioned propaganda—disinformation posted to unglue American democracy. But even without propaganda, misinformation is a grave threat to democracy. “You cannot run a democratic system unless you have a well-informed public, or a public prepared to defer to well-informed elites,” says Larry Kramer, president of the Hewlett Foundation and an expert in constitutional law. “And we are now rapidly heading toward neither. Without one or the other, our constitutional system and our liberal democracy will end, perhaps not imminently, but over time.”
One solution would be for the major social media platforms, like Facebook and Twitter, and other major digital platforms, like Google and Apple, to go beyond automated fact-checking and software algorithms that flag content as “fake news,” and to take responsibility for the content they distribute by regulating themselves, as the mainstream media does.
But, Kramer tells me, leaders at Google, Facebook, and Apple have said that this would require them to be “censors,” in violation of the First Amendment. In Kramer’s view, that’s clearly wrong, for the reason Noah Feldman underscores: The First Amendment and its clause protecting free speech prohibit only government action restricting it, not private action by those companies. That’s why the companies can choose to filter out hate speech without violating the First Amendment, even though that filter amounts to censorship—suppression of speech. Yet the choice not to impose a wider filter is still a choice, Kramer says: “The platforms cannot escape that their decisions about what to allow through their pipelines define what the public sees and gets—meaning they must accept responsibility for the consequences.”
Another solution might be for the government to impose standards of care through regulation. From Feldman’s perspective, that would require changing the meaning of the First Amendment—for example, the view that the answer to hate speech, false speech, or propagandistic speech is more speech in the form of counter-speech. As Floyd Abrams wrote, the free speech clause is “anticensorial,” a negative liberty that forbids the government from abridging the freedom of speech. That was Justice Holmes’ laissez-faire conception.
But among free speech advocates, there is a vigorous and growing counterview that the First Amendment protects a positive liberty, rooted in the free speech clause’s purpose of providing the American people with the information, opinion, and opportunity to speak and listen necessary for self-governance in a democracy. That was Justice Louis D. Brandeis’ civic conception—which, the legal scholar Cass Sunstein wrote, “reflects a commitment to a kind of deliberative process” and “calls for government protection of public discourse.”
The rhetoric of free speech is so libertarian that it obscures the government’s paternalism in regulating news organizations. From 1949 until 1987, the government rule called the fairness doctrine required radio and TV stations to present controversial issues of public importance in a way that was fair—honest, equitable, and balanced. The Supreme Court held that the rule didn’t violate the First Amendment: Nothing in the amendment “prevents the Government from requiring a licensee to share his frequency with others.” The Court emphasized, “it is the right of the viewers and listeners, not the right of the broadcasters, which is paramount.”
Should the government regulate social media and other major digital platforms because these new technologies have enormous power to harm American democracy, and are doing so? Should it pass what Tim Wu describes as “new laws or regulations requiring that major speech platforms behave as public trustees, with general duties to police fake users, remove propaganda robots, and promote a robust speech environment surrounding matters of public concern”?
The obvious counterargument is that allowing the government to impose such standards could lead to even more government censorship of speech, when there is far too much of it already in the name of national security and other interests, which the government imposes with almost total impunity. And there is no consensus even among experts who favor this kind of regulation, or at least endorse exploring it, about what it should entail or cover.
But as it stands, the country’s libertarian conception of free speech is allowing, and even ferociously feeding, an erosion of the democracy it is supposed to be essential in making work—and some government regulation of speech on social media may be required to save it.
Lincoln Caplan is the Truman Capote Visiting Lecturer in Law at Yale Law School and the author of six books about legal affairs, most recently American Justice 2016: The Political Supreme Court.