Facebook on Thursday outlined what it’s doing to combat terrorism, pulling back the curtain on how the world’s largest social network is tackling the challenge as tech firms face pressure to do more.
The company is using artificial intelligence to identify terrorist photos or videos before they are uploaded, analyze text, detect fake accounts and find pages, groups or profiles that support terrorism.
But the Menlo Park tech firm isn’t only relying on technology to help.
Facebook has more than 150 counterterrorism specialists. It also recently announced it’s hiring 3,000 workers to review posts that could run afoul of its online rules.
“Figuring out what supports terrorism and what does not isn’t always straightforward, and algorithms are not yet as good as people when it comes to understanding this kind of context,” wrote Monika Bickert, the company’s director of global policy management, and counterterrorism policy manager Brian Fishman in a blog post. “A photo of an armed man waving an ISIS flag might be propaganda or recruiting material, but could be an image in a news story.”
The company has also been working with other tech firms, including Microsoft, Twitter and YouTube, as well as with governments and other organizations.
Following terror attacks in London, U.K. Prime Minister Theresa May has been calling on social media companies to increase their efforts to combat terrorism.
Working with French government officials, May said this week that they were exploring whether to create a “new legal liability” for tech firms that fail to remove this offensive content.
Meanwhile, tech firms have faced lawsuits from victims of terrorist attacks, including from family members of those who died in the 2015 San Bernardino terror attacks.
Bickert and Fishman said they want to answer head-on some of the questions raised about tech firms.
“We want to be very clear how seriously we take this — keeping our community safe on Facebook is critical to our mission,” they wrote.
Photo Credit: Associated Press