
Research on the “Liar’s Dividend” Gains Attention  

Kaylyn Jackson Schiff

“The Liar’s Dividend: Can Politicians Claim Misinformation to Evade Accountability?,” recently published in the American Political Science Review by Dr. Kaylyn Jackson Schiff, Dr. Daniel S. Schiff, and Dr. Natália S. Bueno, is gaining attention from journalists and think tanks seeking to understand the potential impacts of artificial intelligence on elections. In a CNN interview with Michael Smerconish, Kaylyn explained that the research found evidence that politicians can retain voter support by claiming negative stories about them are “fake news,” exploiting widespread confusion around AI-generated content and misinformation. By falsely claiming to be the target of a misinformation campaign, candidates facing real scandals can create uncertainty in voters’ minds about whether the scandal actually occurred and can even rally their supporters. Notably, the authors find that false claims of misinformation are more effective than other responses to a scandal, such as apologizing or remaining silent.

While the findings are concerning, Kaylyn notes that this kind of misinformation-about-misinformation can be combatted through fact-checking, and that emerging efforts to watermark AI-generated images, video, audio, and text can help the public discern what is real and what is fake. The research article was covered in Political Science Now, and the authors also wrote a commentary piece for the Brookings Institution about their findings, titled “Watch out for false claims of deepfakes, and actual deepfakes this election year.” The authors’ research has also been featured in an expert brief prepared by the Brennan Center for Justice and has supported additional work by Brookings on generative AI and the liar’s dividend in an election year.

Daniel Schiff

This research is part of a broader agenda at the Governance and Responsible AI Lab (GRAIL) at Purdue, co-directed by Kaylyn and Daniel. GRAIL supports multiple research projects investigating the social, ethical, and governance implications of AI. Their recent article in The Conversation, “Generative AI like ChatGPT could help boost democracy – if it overcomes key hurdles,” explores potential benefits of generative AI for civic knowledge and constituent communication, but cautions that AI is not a panacea for longstanding issues in political participation and representation. Other projects include building the AI Governance and Regulatory Archive (AGORA) and the Political Deepfakes Incidents Database. Their research has been funded by the National Institute of Justice, Google, and Arnold Ventures, and their work appears in leading journals in public policy, public administration, political science, criminology, and education.
