The European Union wants companies like Facebook and Twitter to detect and eliminate hate speech and “terrorist material” as soon as those types of messages appear on their social platforms.
The EU’s European Commission, the bloc’s executive arm, has long put the onus on tech companies to police videos and other content that pops up on their platforms. On Thursday it upped the pressure by requesting “automatic detection technologies.”
In theory, such capabilities would allow platforms such as Facebook to pick out hate speech and “terrorist material” as soon as it’s published, so the company could remove the content as quickly as possible. The commission also made it clear that such technology should be used to block banned content someone is trying to re-upload.
“We cannot accept a digital Wild West, and we must act,” said Věra Jourová, EU Commissioner for Justice, Consumers and Gender Equality. “The code of conduct I agreed [to] with Facebook, Twitter, Google, and Microsoft shows that a self-regulatory approach can serve as a good example and can lead to results. However, if the tech companies don’t deliver, we will do it.”
Those four companies had signed on to an EU commitment to review “the majority of valid notifications of illegal hate speech in less than 24 hours” back in June of 2016. In June of this year, the commission reported that tech companies were removing significantly more “illegal hate speech” than they had before the agreement, and that the share of “notifications reviewed within 24 hours” had risen above 50 percent.
But the commission has still expressed displeasure with the pace of removal. Facebook, the commission wrote in June, was “the only company that fully achieves the target of reviewing the majority of notifications within the day.” On Thursday, the commission reported that platforms take more than a week to remove “illegal content” 28 percent of the time.
“The situation is not sustainable,” said Mariya Gabriel, the Commissioner for the Digital Economy and Society.
The commission insists that its plan includes “safeguards to prevent the risk of over-removal,” but groups such as the Electronic Frontier Foundation were concerned when the governing body announced the 24-hour plan in 2016.
“While some of the speech that concerns the Commission may very well qualify as illegal under some countries’ laws, the method by which they’ve sought to limit it will surely have a chilling effect on free speech,” Jillian York, EFF’s director for international freedom of expression, wrote at the time.
For now, the EU’s battle with tech companies continues apace.