Think deepfakes don’t fool you? Sorry, you’re wrong

Deepfakes are basically undetectable to human eyes

Causing chaos

Deepfakes are based on a technology known as generative adversarial networks (GANs), in which two algorithms, a generator and a discriminator, train against each other to produce images.
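A toy sketch of that dynamic, written in PyTorch, looks like this. It is not the code behind any real deepfake app: to keep it self-contained, the “real” data is a simple number distribution rather than face images, but the generator-versus-discriminator tug-of-war is the same one that face-swapping systems scale up.

```python
# Toy GAN: a generator forges samples, a discriminator judges them,
# and each one's training signal comes from beating the other.
import torch
import torch.nn as nn

generator = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
discriminator = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1))

g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

for step in range(2000):
    real = 3.0 + 0.5 * torch.randn(64, 1)   # "real" data: samples near 3.0
    fake = generator(torch.randn(64, 8))    # the generator's forgeries

    # The discriminator learns to label real samples 1 and fakes 0...
    d_loss = (loss_fn(discriminator(real), torch.ones(64, 1)) +
              loss_fn(discriminator(fake.detach()), torch.zeros(64, 1)))
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # ...while the generator learns to make the discriminator say 1.
    g_loss = loss_fn(discriminator(fake), torch.ones(64, 1))
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()

# After training, generated samples should cluster near the real data.
print(generator(torch.randn(1000, 8)).mean().item())
```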

While the technology behind deepfakes may sound complicated, it is a simple matter to produce one. There are numerous online applications such as Faceswap and ZAO Deepswap that can produce deepfakes within minutes.

Google Colaboratory, an online platform for writing and running code in several programming languages, includes examples of code that can be used to generate fake images and videos. With software this accessible, it’s easy to see how average users could wreak havoc with deepfakes without realizing the potential security risks.

The popularity of face-swapping apps and online services like Deep Nostalgia shows how quickly and widely deepfakes could be adopted by the general public. In 2019, approximately 15,000 deepfake videos were detected, and that number is expected to grow.

Deepfakes are the perfect tool for disinformation campaigns because they produce believable fake news that takes time to debunk. Meanwhile, the damage they cause, especially to people’s reputations, is often long-lasting and irreversible.

Is seeing believing?

Perhaps the most dangerous ramification of deepfakes is how they lend themselves to disinformation in political campaigns.

We saw this when Donald Trump designated any unflattering media coverage as “fake news.” By accusing his critics of circulating fake news, Trump was able to use misinformation in defense of his wrongdoings and as a propaganda tool.

Trump’s strategy allowed him to maintain support in an environment filled with distrust and disinformation by claiming that “true events and stories are fake news or deepfakes.”

The credibility of authorities and the media is being undermined, creating a climate of distrust. And with the rising proliferation of deepfakes, politicians could easily deny culpability in any emerging scandal. How can someone’s identity in a video be confirmed if they deny it?

Combating disinformation, however, has always been a challenge for democracies as they try to uphold freedom of speech. Human-AI partnerships can help deal with the rising risk of deepfakes by having people verify the information. Introducing new legislation or applying existing laws to penalize producers of deepfakes for falsifying information and impersonating people could also be considered.
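As a rough illustration of what such a human-AI partnership could look like, the sketch below triages videos with an automated detector and routes uncertain cases to human fact-checkers. Everything in it is hypothetical: deepfake_score stands in for a real trained classifier, and the thresholds are arbitrary.

```python
# Hypothetical triage pipeline: an automated detector scores videos,
# and anything it is unsure about goes to a human reviewer.
import random

def deepfake_score(video_path: str) -> float:
    """Stand-in for a trained deepfake classifier.

    Returns an estimated probability that the video is synthetic.
    """
    return random.random()  # placeholder; a real system would run a model

def triage(video_path: str,
           flag_threshold: float = 0.9,
           review_threshold: float = 0.3) -> str:
    score = deepfake_score(video_path)
    if score >= flag_threshold:
        return "flag as likely synthetic"
    if score >= review_threshold:
        return "send to human fact-checker"  # the human in the loop
    return "no evidence of manipulation"

for clip in ["speech.mp4", "interview.mp4", "rally.mp4"]:
    print(clip, "->", triage(clip))
```

The design choice is that the machine handles volume while people handle ambiguity: only the middle band of scores, where the model is least reliable, consumes scarce human attention.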

Multidisciplinary approaches by national and international governments, private companies, and other organizations are all vital to protecting democratic societies from false information.

Article by Sze-Fung Lee, Research Assistant, Department of Information Studies, McGill University, and Benjamin C. M. Fung, Professor and Canada Research Chair in Data Mining for Cybersecurity, McGill University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Story by The Conversation

An independent news and commentary website produced by academics and journalists.
