Facebook bans some ‘misleading’ deepfake videos

Jan 8, 2020 | Regulation, Social media

Facebook is banning some misleading videos from its site in a push against deepfake content and online misinformation campaigns.

The new ban targets videos that are manipulated to make it appear someone said words they didn’t actually say.

Under the new policy, Facebook won’t allow videos on its site if they’ve been either edited or computer-generated in ways that the average person couldn’t detect.

However, Facebook will allow manipulated video to be used in parody or satire, and it will also allow clips that were edited only to cut out or change the order of words.

Those exceptions could open up a grey area in which fact-checkers must decide what content is allowed and what is taken down.

Deepfake videos can be created in several ways, from basic video editing software to sophisticated artificial intelligence tools.

“While these videos are still rare on the internet, they present a significant challenge for our industry and society as their use increases,” Facebook’s vice president of global policy management, Monika Bickert, said in a blog post about the new policy Monday night.

Facebook is pushing back on deepfake videos as the 2020 presidential campaign ramps up — and the company clearly hopes to avoid a repeat of the fallout from the 2016 election, when it was accused of allowing voter manipulation from fake accounts and thousands of Russian-backed political ads.

In its announcement, Facebook laid out two main criteria for banning deepfake videos: “It has been edited or synthesized — beyond adjustments for clarity or quality — in ways that aren’t apparent to an average person and would likely mislead someone into thinking that a subject of the video said words that they did not actually say.” And:

“It is the product of artificial intelligence or machine learning that merges, replaces or superimposes content onto a video, making it appear to be authentic.”

In two high-profile cases from last summer, highly manipulated videos featured House Speaker Nancy Pelosi and Facebook’s own CEO, Mark Zuckerberg. In Pelosi’s case, the video and audio were altered to make it seem as if she was slurring her words. The Zuckerberg deepfake was more satirical, showing the billionaire gloating about using data to control the future.

Facebook says it will work with governments, academia and tech companies to go after manipulated media and the people who produce it. It also says it’s working with experts to improve its ability to detect manipulated media.
