In letters sent to Parler about their decisions, Amazon, Apple, and Google all cited the social media company’s lack of a workable system to keep violent content off its platform. “The processes Parler has put in place to moderate or prevent the spread of dangerous and illegal content have proved insufficient,” wrote Apple. “Specifically, we have continued to find direct threats of violence and calls to incite lawless action.”
You can see why these companies wouldn’t want to expose app store customers to a social media platform whose moderation system has failed to prevent the spread of harmful material. But then you’ve got to wonder what’s keeping them from banning the likes of Facebook, Twitter, and YouTube. The past few years of social media history have been nothing if not a relentless cycle of platforms failing to live up to their claims about how well they police themselves. Facebook was used to facilitate ethnic cleansing in Myanmar, and with its vastly larger user base was almost certainly a greater vector of “Stop the steal” disinformation than Parler. Journalists and academics have credibly accused YouTube of driving right-wing radicalization. Twitter was long notorious for permitting heaps of sexist and racist abuse.
Facebook, Twitter, and YouTube have, to varying degrees, imposed stricter policies over the past year due to the coronavirus pandemic and the election. But it remains easy to find content that seems to violate the letter of the rules. Even days before the attack on the Capitol, journalists found groups on Facebook and Twitter calling for revolution. Amazon’s letter to Parler notes that the company flagged 98 examples “of posts that clearly encourage and incite violence.” It’s hard to imagine that Facebook doesn’t eclipse that number.
All of which makes the decision to ban Parler seem somewhat capricious.
“I think the public perception is that all those scary people who gathered on Capitol Hill, they met up and continue to meet up on Parler, whereas Facebook and Twitter are doing something about it,” said Danielle Citron, a law professor at the University of Virginia and an expert on online harms. “And so Parler is the lowest hanging fruit.”
To be clear, there are big differences between Parler, whose entire raison d’être is to provide a space of almost completely uninhibited expression, and the mainstream platforms, which now boast of their efforts to combat certain types of misinformation and their sophisticated AI moderation tools. Parler did have a few minimal rules, including prohibitions on fraud, doxing, and threats of violence. But the company’s stated mission was to create an online platform where content is governed by the principles of the First Amendment. “Parler doesn’t have a hate speech policy,” Jeffrey Wernick, Parler’s COO, told me last week, before the Capitol riot. “Hate speech has no definition, OK? It’s not a legal term.”
Wernick is right. The First Amendment—which, I feel compelled to remind you, applies to the government, not private companies—protects a lot of material that most people don’t want to see on social media. It allows pornography. It allows glorification of violence. It allows explicit racism. And so does Parler.
By tracking the First Amendment, however, Parler’s policies were incompatible on their face with those of Apple, Google, and Amazon, even aside from the matter of enforcement. Google and Apple, for example, both explicitly prohibit apps in their stores from allowing hate speech.
Perhaps Parler’s biggest problem was that it provided much more latitude for the type of material that the big platforms define as threatening violence. That’s because, under First Amendment doctrine, the government can only criminalize very narrow categories of speech, such as so-called “true threats”—roughly, language explicitly intended to make an individual or group fear for their life or safety. Arguing that people should rise up in arms, or that a politician or celebrity should be shot, wouldn’t meet the criteria for incitement or true threat. Believe it or not, that sort of speech is legally protected. (It can still earn you an inquisitive knock on the door from the Secret Service. I don’t recommend it.) Parler’s community guidelines mirrored that standard.