Facebook on Tuesday announced a series of changes to limit hate speech and extremism on the social network, expanding its definition of terrorist organizations and planning to deploy artificial intelligence to better spot and block live videos of shooters. The company is also expanding a program that redirects users searching for extremism to resources intended to help them leave hate groups behind. The New York Times reports: The announcement came the day before a hearing on Capitol Hill on how Facebook, Google and Twitter handle violent content. Lawmakers are expected to ask executives how they are handling posts from extremists. In its announcement post, Facebook said the Christchurch tragedy "strongly" influenced its updates. And the company said it had recently developed an industry plan with Microsoft, Twitter, Google and Amazon to address how technology is used to spread terrorist content.
Facebook said that it had previously focused mostly on identifying organizations such as separatist, Islamist militant and white supremacist groups. The company said that it would now consider all people and organizations that proclaim or are engaged in violence leading to real-world harm. The team leading its efforts to counter extremism on its platform has grown to 350 people, Facebook said, and includes experts in law enforcement, national security and counterterrorism, as well as academics studying radicalization. To detect more content relating to real-world harm, Facebook said it was updating its artificial intelligence to better catch first-person shooting videos. The company said it was working with American and British law enforcement officials to obtain camera footage from their firearms training programs to help its A.I. learn what real, first-person violent events look like.