Facebook’s fact-checking just isn’t working
The Facebook part of each breaking news cycle is pretty familiar to journalists: A story breaks, and it seems just a matter of time until a false or misleading bit of information appears on Facebook’s “trending” section or otherwise makes its way around the platform with alarming speed.
Facebook’s fake news problem, which the company initially downplayed, was on display during the 2016 election, when the platform was found to have enabled propaganda and profitable lies to spread to millions of people.
In theory, Facebook has taken steps to prevent this. The company has partnered with journalists at the Associated Press, FactCheck.org, and other organizations to verify the truth of supposed news items on Facebook. When a story doesn’t meet the journalists’ standards, it gets a “disputed” tag.
But journalists working on behalf of Facebook told The Guardian that this “disputed” tag doesn’t do much. A year after these journalists began their partnership with the platform, they spoke to The Guardian out of concern that Facebook wasn’t serious about stopping disinformation.
“They have a big problem, and they are leaning on other organizations to clean up after them,” one journalist told The Guardian.
These journalists reportedly don’t have access to stats that would tell them whether or not their work stems the tide of misinformation. They reportedly often can’t tell when a “disputed” tag gets slapped on an article, and, even if they see that tag, they don’t know whether it even slows that article’s spread.
In an emailed statement, a Facebook spokesperson wrote that “once a news story has been labeled false by our third-party fact-checking partners, we typically see its future impressions drop by 80 percent.” A study on the effects of the “disputed” tag, however, showed it did little to deter Facebook users from clicking. The study also found that tagging some, but not all, false stories may make users more likely to believe any story that isn’t tagged.
Facebook and other tech giants such as Google and Twitter have recently been scrutinized for their role in spreading misinformation during and after the 2016 United States presidential election. That scrutiny is largely about how the Russian government manipulated these platforms to spread propaganda through Facebook pages, Twitter bots, and more, but it has since become clearer just how easily misinformation spreads on these gigantic oceans of the internet whenever a story consumes the country.
After October’s mass shooting in Las Vegas that left 59 people dead, many Facebook users were treated to a story claiming the shooter was, as The New York Times put it, “an anti-Trump liberal who liked Rachel Maddow and MoveOn.org, that the F.B.I. had already linked him to the Islamic State, and that mainstream news organizations were suppressing that he had recently converted to Islam.” None of this was true, and it was obviously aimed at riling up right-wing Americans.
Facebook was unable to distinguish news from 4chan garbage despite its supposed effort to combat misinformation. The platform has announced huge numbers of hires to review who’s buying ads on its platform and to combat violent messages. Maybe it should hire a bunch of its own fact-checkers, too.