In a recent blog post, Facebook announced its intention to begin using AI to find and remove terrorist content before it reaches users. The announcement comes after the social media giant was widely criticized around the world for appearing to do little to prevent and remove such content, and for not doing enough to tackle online extremism.
In the blog post, Facebook makes clear that this technological tool is only one part of what needs to be a multi-partner effort to tackle online extremism. Alongside the introduction of AI, Facebook will be working with other industry experts and governments around the world, and calling upon its own user community to report content that breaks the community standards and to remain vigilant while online.
Commenting on this, Homer Strong, Director of Data Science at Cylance, says: ‘Overall this direction is promising. A major issue with using humans to provide ground truth for AI is that humans are not perfect either. There need to be processes for evaluating human judgement in parallel with machine judgement. Otherwise the AI can end up learning the subjectivities of individual reviewers, distracting the AI from learning properly.’ He adds: ‘Both the confidence and the decision of sufficiently sophisticated AI can be bypassed using adversarial learning techniques. A terrorist who is blocked by Facebook is more likely to switch to some other platform rather than bypass the AI, but Facebook can never completely remove terrorist content.’
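Strong's point about evaluating human judgement in parallel with machine judgement is commonly addressed by measuring inter-annotator agreement before treating human labels as ground truth. The sketch below is purely illustrative and is not Facebook's actual process: it computes Cohen's kappa, a standard chance-corrected agreement score, over hypothetical labels from two content reviewers.

```python
from collections import Counter

def cohens_kappa(labels_a, labels_b):
    """Cohen's kappa: agreement between two reviewers, corrected for chance."""
    assert len(labels_a) == len(labels_b)
    n = len(labels_a)
    # Observed agreement: fraction of items where the reviewers match.
    observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    # Chance agreement: product of each reviewer's marginal label frequencies.
    freq_a = Counter(labels_a)
    freq_b = Counter(labels_b)
    expected = sum(freq_a[k] * freq_b.get(k, 0) for k in freq_a) / (n * n)
    return (observed - expected) / (1 - expected)

# Hypothetical labels: 1 = reviewer flagged the post, 0 = reviewer did not.
reviewer_1 = [1, 0, 1, 1, 0, 0, 1, 0, 1, 0]
reviewer_2 = [1, 0, 0, 1, 0, 1, 1, 0, 1, 0]
print(round(cohens_kappa(reviewer_1, reviewer_2), 2))  # → 0.6
```

A kappa well below 1 signals that the reviewers disagree beyond chance, which is exactly the scenario where, as Strong warns, an AI trained on those labels risks learning one reviewer's subjectivity rather than the task itself.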