Facebook: “We are removing 99 percent of ‘terror content’”

The social media network with 2.1 billion users is deploying artificial intelligence to detect posts from groups like IS and al-Qaeda.

Facebook said on Wednesday that it was removing 99 percent of content related to the Islamic State (IS) group and al-Qaeda before anyone reported it, as it prepared for a meeting with European authorities on tackling extremist content online.

Eighty-three percent of “terror content” is removed within one hour of being uploaded, Monika Bickert, head of global policy management, and Brian Fishman, head of counter-terrorism policy at Facebook, wrote in a blog post.

The world’s largest social media network, with 2.1 billion users, has faced pressure both in the United States and Europe to tackle militant content on its platform more effectively.

In June, Facebook said it had ramped up use of artificial intelligence, such as image matching and language understanding, to identify and remove content quickly.
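To give a sense of what image matching involves, the sketch below shows one simple, generic form of the technique: perceptual hashing, where an image is reduced to a short bit string so that near-duplicate copies (re-encoded or lightly edited uploads) produce nearly identical hashes. This is a minimal illustration only; the function names and threshold are hypothetical, and Facebook has not disclosed the details of its own system.

```python
# Minimal sketch of perceptual image matching via an "average hash":
# each bit records whether a pixel is brighter than the image's mean.
# Names and the max_distance threshold are illustrative assumptions.

def average_hash(pixels):
    """Hash a grayscale image (list of rows of 0-255 ints).
    Bit i is 1 if pixel i is brighter than the image's mean."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return sum(1 << i for i, p in enumerate(flat) if p > mean)

def hamming(a, b):
    """Count the bits on which two hashes differ."""
    return bin(a ^ b).count("1")

def matches_known_content(pixels, known_hashes, max_distance=5):
    """True if the image's hash is within max_distance bits of the
    hash of any previously removed image."""
    h = average_hash(pixels)
    return any(hamming(h, k) <= max_distance for k in known_hashes)
```

Because small edits to an image flip only a few bits of its hash, comparing Hamming distance catches re-uploads that exact byte-for-byte matching would miss.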

“It is still early, but the results are promising, and we are hopeful that AI (artificial intelligence) will become a more important tool in the arsenal of protection and safety on the internet and on Facebook,” Bickert and Fishman wrote.

“Today, 99 percent of the ISIS and al-Qaeda-related terror content we remove from Facebook is content we detect before anyone in our community has flagged it to us, and in some cases, before it goes live on the site.”

The blog post comes a week before Facebook and other social media companies like Alphabet Inc’s Google and Twitter meet with European Union governments and the EU executive to discuss how to remove extremist content and hate speech online.

“Deploying AI for counter-terrorism is not as simple as flipping a switch … A system designed to find content from one terrorist group may not work for another because of language and stylistic differences in their propaganda,” Facebook said.

The European Commission in September told social media firms to find ways to remove the content faster, including through automatic detection technologies, or face possible legislation forcing them to do so.

In June, Germany passed a law allowing fines of up to $59m on Facebook and other social media platforms that fail to remove “obviously illegal” content within 24 hours.

Source: Middle East Eye