What People Are Getting Wrong This Week: the Dangers of Deepfakes

Even before the term “deepfake” was coined in 2017, fears that digitally doctored audio, photos, and videos would be used by bad actors to manipulate public opinion were common. Now that artificial intelligence can produce a convincing fake instantly, with little or no effort, the specter of villains using tampered evidence to rewrite history, shape public opinion, and destroy the very concept of truth is widely treated as inevitable. But that (probably) won’t happen.

You can never say never, but over the past decade there have been no serious examples of deepfake images or videos successfully manipulating public opinion at scale. Even though fakes are ridiculously easy to create and spread, and plenty of people are clearly interested in shaping public opinion, deepfakes haven’t changed many minds.

There is good reason to believe the worst will never happen: not because the better angels of our nature will prevail or some well-intentioned law will be passed, but because evidence, falsified or real, rarely changes people’s opinions and beliefs.

There’s nothing new about “post-truth”

Manipulating images and sounds and distributing them around the world is nothing new. Photographers have been doctoring photographs since the invention of the camera. Films have been staged or edited to obscure or change what they depict, and audio is just as easy to manipulate. And people have been spreading lies through the written word ever since the printing press was invented around 1440.

None of these older technologies created a post-truth culture in which no one believes their own eyes or ears. There have been a few notable exceptions, such as the panic over Orson Welles’s War of the Worlds radio broadcast, but for the most part fake images, audio, and films have not been widely accepted as true, because most people are fairly good at using context and common sense to judge the authenticity of what they see and hear. Even when something genuinely misleads people, like the fake image of the Pope in a puffer jacket, once it’s debunked, most people move on.

For deepfakes, we rely on the same truth-detecting tools we’ve always used for photographs: context and common sense. You can create a pornographic video of Taylor Swift, but no matter how good it looks, almost no one will be fooled. It makes no sense for Swift to appear in a video like that, and if she had, it would be widely reported. No one actually thinks our former presidents hang out together playing Black Ops, even if there’s a video of it on YouTube.

There are practically no dangerous deepfakes

As Walter Scheirer notes in his book A History of Fake Things on the Internet, academic researchers have such a hard time finding convincing deepfakes in the wild that they have to create their own in order to study them. The internet is full of fake photos and videos, of course, but they are not deepfakes in the sense of being created to deceive anyone. Almost all of them are memes, jokes, or porn, designed to make people laugh or masturbate, not to make them think or change their minds. Mainly because that wouldn’t work.

Deep stories are stronger than deepfakes

Fake evidence doesn’t change anyone’s mind. But neither does real evidence. People form their opinions based on emotions, not facts or photo manipulation. About 15% of Americans believe the United States is controlled by a cabal of Satan-worshipping pedophiles who run a global child sex-trafficking ring, not because someone created an AI image of Joe Biden presiding over a black mass, but because it is a compelling story that reinforces pre-existing biases. This is a “deep story,” as opposed to a deepfake, and deep stories are almost impossible to fight.

The most effective fake stories have always been simple and broad enough that no evidence can confirm or refute them. Claims like “the election was stolen” or “9/11 was an inside job” stick in people’s minds. The really good ones also last a long time. As Daniel Immerwahr points out in The New Yorker, people still believe Catherine the Great had sex with a horse.

Realistic evidence can actually make conspiracy theories less believable. It is widely believed in far-right circles that there is a video of Hillary Clinton killing a child as part of a satanic ritual. The “evidence” consists of trust-me-bro descriptions of the footage on message boards and breathless speculation about when it will be released. But no faked version has ever surfaced. No matter how “realistic” AI-generated footage might look, actually depicting the scene would likely ruin the illusion by highlighting how inherently absurd it is. Viewers could pick it apart, and they would be forced to consider the details of how something like this would happen on planet Earth rather than in the darkest corners of their minds.
