More and More Common

AI-generated deepfakes of elected officials are increasingly common, often misleading, and difficult to detect. Recent tests show just how readily AI tools produce them. In short, they are created far too easily.
Fakes Are More Common
AI-generated voice recordings of elected officials are becoming more and more common. New programs have made it easier than ever to produce an ad featuring your favorite, or least favorite, elected official.
An in-depth test of Midjourney, a popular AI image generator, revealed that it regularly produces misleading material. In one test, fake candidate images were created 50% of the time; another found misleading material produced 80% of the time. Earlier this year, fake robocalls featuring an AI-generated voice of President Joe Biden urged New Hampshire voters not to bother showing up at the polls. London Mayor Sadiq Khan was the victim of a deepfake audio recording in which he appeared to disparage Remembrance Day in favor of a pro-Palestine rally.
This is a worldwide phenomenon: deepfakes proliferated in India’s election this week and are showing up elsewhere. One video takes aim at Narendra Modi, showing him dancing on a stage while referring to him as a dictator. In another, Rahul Gandhi, Modi’s most prominent opponent, is shown swearing himself into office, layered with an AI-generated audio track.
AI regulation is moving slowly, leaving some frustrated with the lack of oversight of AI programs. “AI companies are clambering over each other to launch as fast as possible – in order to take first mover advantage and build brand and market share. But just like social media companies, they have completely failed to put into place sensible guardrails or safety features. Move fast and break things is a terrible mantra for products that have the power to undermine democracy and faith in elections,” said Imran Ahmed, CEO of the Center for Countering Digital Hate.
Meta and Google have announced they will require labels on images created with AI. TikTok has introduced similar requirements.
However, the question remains: how do we make sure these labels appear where they’re intended? Some fakes are convincing enough to defeat the programs designed to detect them. As they proliferate worldwide, deepfakes will only become harder to stop.
A Verified Deepfake
Follow Up
Senate Bill 3897, which would require the Election Assistance Commission to develop guidelines addressing the risks of AI in elections, has gained a second sponsor. In addition to Sen. Amy Klobuchar, Arizona Democrat Mark Kelly has signed on as a cosponsor. The bill was covered in Politics of AI two weeks ago.
Worth It
—”TikTok users being fed misleading election news, BBC finds” by Marianna Spring of BBC
—”Deepfakes and their threat to global democracy” by Billie Gay Jackson of The Bureau of Investigative Journalism
—”How spammers, scammers and creators leverage AI-generated images on Facebook for audience growth” by Renee DiResta and Josh A. Goldstein of Stanford Internet Observatory
—”Warning to UK politicians over risk of audio deepfakes that could derail the election” by Tamara Cohen of Sky News
—”A small army combating a flood of deepfakes in India’s election” by Alex Travelli of The New York Times
Random News
—”The arm of a 19th century mummy came off after mishandling by museum staff” by the Associated Press
—”Man with suspended driver's license dials into court hearing while driving” by David Moye of the Huffington Post
Authored by Daniel Dean with the assistance of AI.