The Looming Threat of AI in the 2024 Presidential Election
A New Era of Electoral Manipulation
As the 2024 presidential election approaches, a new challenge looms on the horizon: the widespread use of generative AI to manipulate voters. While deceptive tactics like doctored images, altered videos, and misleading robocalls have long been a part of American politics, the accessibility and affordability of AI tools are set to take these practices to unprecedented levels. This election cycle will test the boundaries of these emerging technologies, the public’s ability to discern truth from fiction, and regulators’ capacity to keep pace.
Election Officials Grapple with AI Impersonation
The recent incident involving a deepfake robocall of President Joe Biden urging New Hampshire voters to abstain from the primary has alarmed election officials across the country. Many now fear that they, too, could fall victim to AI-powered impersonation during this election cycle. Arizona’s secretary of state, who experimented with a deepfake of himself last year, expressed his concerns to Politico:
“It has the potential to do a lot of damage.”
AI Deepfakes: Accessible, Affordable, and Poised to Disrupt
Our latest episodes of Decoder delve into the most pressing issues in the news, with a particular focus on generative AI. Last week, we examined the potential impact of copyright lawsuits on the industry’s future. While the outcomes of these cases remain uncertain, a more immediate concern is the ability of AI systems to generate convincingly fake images and audio, with the possibility of video manipulation on the horizon thanks to tools like OpenAI’s Sora.
FCC Cracks Down on AI-Generated Robocalls
The Federal Communications Commission has taken a decisive step to combat the use of AI-generated voices in robocalls. In a ruling issued on Thursday, the FCC clarified that AI voice cloning technology falls under the Telephone Consumer Protection Act (TCPA), which restricts the use of “artificial or prerecorded voices” for non-emergency purposes without prior consent. This decision empowers state attorneys general to take action against callers who employ AI-generated voices in their robocalls.
Texas Companies Linked to AI Biden Robocall Scandal
New Hampshire Attorney General John Formella has revealed that two Texas-based companies, Lingo Telecom and Life Corporation, are connected to the robocall campaign that used an AI voice clone of President Joe Biden to dissuade New Hampshire voters from participating in the primary. Both companies have been served with cease-and-desist orders and subpoenas, and both have previously been the subject of illegal robocall investigations, as noted by the FCC.
The Road Ahead
As the 2024 presidential election draws nearer, it is clear that AI-driven manipulation will play a significant role in shaping the political landscape. Regulators, election officials, and the public must work together to navigate this uncharted territory, ensuring that the integrity of the democratic process is upheld in the face of unprecedented technological challenges.
Congressman Calls for DOJ Investigation into Fake Biden Robocall
Representative Joseph Morelle (D-NY) is urging the Department of Justice to investigate an apparently AI-generated robocall impersonating President Joe Biden, which spread false information to New Hampshire voters. The incident underscores growing concern that AI-generated disinformation will remain a problem in future elections. The New Hampshire Department of Justice has already launched its own investigation into the matter. In Morelle’s words:
“This clear bid to interfere in the New Hampshire primary demands a thorough investigation and a forceful response.”
Microsoft Offers Politicians Protection Against Deepfakes
As concerns mount over AI’s potential to facilitate the spread of misinformation, Microsoft is providing services to help safeguard against deepfakes and bolster cybersecurity in anticipation of several global elections. These services include a new tool that utilizes the Content Credentials watermarking system developed by the Coalition for Content Provenance and Authenticity (C2PA). The aim is to assist candidates in protecting their content and likeness from misuse and to prevent the dissemination of misleading information.
Meta Requires Disclosure of AI-Generated Content in Political Ads
Meta has announced that advertisers must disclose when AI-generated or altered content that could be misleading is used in political, electoral, or social issue ads on Facebook and Instagram. This rule applies to advertisements featuring “realistic” images, videos, or audio that falsely depict someone’s actions or imagine a real event unfolding differently than it actually did. The policy, set to take effect next year, also covers content portraying realistic-looking fake people or events.
Facebook Oversight Board Reviews Doctored Biden Clip
The Facebook Oversight Board is examining a case involving an altered video of President Joe Biden, which could impact Meta’s policies on “manipulated media” in the lead-up to the 2024 election. The video in question shows an edited clip of Biden placing an “I Voted” sticker on his granddaughter’s chest and kissing her cheek during the 2022 midterm elections, set to a suggestive song lyric. It was posted on Facebook with a caption calling Biden “a sick pedophile.”
Democrat Introduces Bill to Address AI-Generated Political Ads
In response to a fake AI-generated attack ad from the Republican National Committee (RNC), Representative Yvette Clarke (D-NY) has introduced a bill that would mandate disclosures of AI-generated content in political advertisements. The RNC ad, released shortly after President Biden announced his 2024 reelection campaign, provided Congress with a glimpse into how AI could be employed in the upcoming election cycle.
TikTok Updates Content Guidelines, Banning AI-Generated Deepfakes and Fake Endorsements
As the likelihood of a TikTok ban in the United States increases, the popular video-sharing platform has updated its content moderation policies. While the majority of the rules regarding what content can be posted and promoted remain largely the same, TikTok has introduced new restrictions on the sharing of AI-generated deepfakes, which have gained significant popularity on the app in recent months.
Key Updates to TikTok’s Community Guidelines
- The core of TikTok’s moderation policies, known as “Community Guidelines,” remains mostly unchanged and in line with expectations.
- Graphic violence, hate speech, and overtly sexual content are strictly prohibited, with varying rules for the latter based on the subject’s age.
- A newly expanded section addresses “synthetic and manipulated media,” specifically targeting AI-generated deepfakes that have become increasingly prevalent on the platform.
The Rise of AI Deepfakes on TikTok
In recent months, TikTok has witnessed a surge in the popularity of AI-generated deepfakes. These manipulated videos, created using advanced artificial intelligence techniques, can convincingly superimpose faces onto existing videos or generate entirely new content. While some deepfakes are created for entertainment purposes, others have the potential to spread misinformation or cause harm to individuals and organizations.
TikTok’s Stance on Deepfakes and Fake Endorsements
TikTok’s updated guidelines aim to curb the spread of malicious deepfakes and protect users from being misled by fake endorsements. The platform now explicitly bans deepfakes featuring non-public figures and prohibits the use of AI-generated content to create false endorsements or misleading advertisements. By implementing these measures, TikTok seeks to maintain the integrity of its platform and foster a safer environment for its users.