ArcaMax

‘Inoculation’ helps people spot political deepfakes, study finds

By Bingbing Zhang, University of Iowa, The Conversation

Published in Political News

Both text-based information about political deepfakes and interactive games improve people’s ability to spot AI-generated video and audio that falsely depict politicians, according to a study my colleagues and I conducted.

Although researchers have focused primarily on advancing technologies for detecting deepfakes, there is also a need for approaches that address the potential audiences for political deepfakes. Deepfakes are becoming increasingly difficult to identify, verify and combat as artificial intelligence technology improves.

Is it possible to inoculate the public against deepfakes, raising their awareness before they encounter them? My recent research with fellow media studies researchers Sang Jung Kim and Alex Scott at the Visual Media Lab at the University of Iowa found that inoculation messages can help people recognize deepfakes and even make them more willing to debunk them.

Inoculation theory proposes that psychological inoculation – analogous to getting a medical vaccination – can immunize people against persuasive attacks. The idea is that by explaining to people how deepfakes work, they become primed to recognize them when they encounter them.

In our experiment, we exposed one-third of participants to passive inoculation: traditional text-based warning messages about the threat and the characteristics of deepfakes. We exposed another third to active inoculation: an interactive game that challenged participants to identify deepfakes. The remaining third were given no inoculation.

Participants were then randomly shown either a deepfake video featuring Joe Biden making pro-abortion rights statements or a deepfake video featuring Donald Trump making anti-abortion rights statements. We found that both types of inoculation were effective in reducing the credibility participants gave to the deepfakes, while also increasing people’s awareness and intention to learn more about them.

Deepfakes are a serious threat to democracy because they use AI to create very realistic fake audio and video. These deepfakes can make politicians appear to say things they never actually said, which can damage public trust and cause people to believe false information. For example, some voters in New Hampshire received a phone call that sounded like Joe Biden, telling them not to vote in the state’s primary election.

Because AI technology is becoming more common, it is especially important to find ways to reduce the harmful effects of deepfakes. Recent research shows that labeling deepfakes with fact-checking statements is often ineffective, especially in political contexts: people tend to accept or reject fact-checks based on their existing political beliefs. In addition, false information often spreads faster than accurate information, making after-the-fact corrections too slow to fully contain the damage.

As a result, researchers are increasingly calling for ways to prepare people to resist misinformation in advance. Our research contributes to developing more effective strategies to help people withstand AI-generated misinformation.

Most research on inoculation against misinformation relies on passive media literacy approaches that mainly provide text-based messages. However, more recent studies show that active inoculation can be more effective. For example, online games that involve active participation have been shown to help people resist violent extremist messages.

In addition, most previous research has focused on protecting people from text-based misinformation. Our study instead examines inoculation against multimodal misinformation, such as deepfakes that combine video, audio and images. Although we expected active inoculation to work better for this type of misinformation, our findings show that both passive and active inoculation can help people cope with the threat of deepfakes.

Our research shows that inoculation messages can help people recognize and resist deepfakes, but it is still unclear whether these effects last over time. In future studies, we plan to examine the long-term effect of inoculation messages.

We also aim to explore whether inoculation works in other areas beyond politics, including health. For example, how would people respond if a deepfake showed a fake doctor spreading health misinformation? Would earlier inoculation messages help people question and resist such content?

The Research Brief is a short take on interesting academic work.

This article is republished from The Conversation, a nonprofit, independent news organization bringing you facts and trustworthy analysis to help you make sense of our complex world. It was written by: Bingbing Zhang, University of Iowa

Read more:
AI-generated political videos are more about memes and money than persuading and deceiving

Indian election was awash in deepfakes – but AI was a net positive for democracy

Marco Rubio impersonator contacted officials using AI voice deepfakes – computer security experts explain what they are and how to avoid getting fooled

Bingbing Zhang receives funding from the School of Journalism and Mass Communication at the University of Iowa.

