# The Impact of AI and Deepfakes on Elections
### Introduction to AI in Politics
This week on Politics Lab, we examine how artificial intelligence (AI) might influence the upcoming US elections and the challenges of regulating the technology nationwide.
### The Rise of Deepfakes
Deepfake technology has advanced significantly, making it easier to create realistic but fake videos. These can be used maliciously to spread misinformation or manipulate public opinion during elections.
### Challenges in Regulation
#### Difficulties in Nationwide Regulation
Regulating AI and deepfake technologies is complex because state laws and enforcement capabilities vary widely; that inconsistency makes a unified regulatory framework across the country difficult to implement.
“The rapid advancement of deepfake technology poses a significant threat to democratic processes.”
#### The Rise of Generative AI
A few months ago, there was widespread worry about how advanced AI might influence elections. The technology had improved so much that creating and spreading fake content became incredibly easy.
#### Deepfakes in Political Campaigns
Despite some reduction in panic around AI's role, deepfakes featuring prominent politicians like Kamala Harris, Joe Biden, and Donald Trump are still prevalent. These deepfakes pose significant challenges for legislation and regulation.
### Legislation Challenges
#### Regulating Political Deepfakes
Creating laws to manage political deepfakes is complex. As we approach the election season, it's crucial to understand what changes have occurred regarding these regulations and how much concern we should have about AI's influence.
### Conclusion
As technology continues to evolve rapidly, staying informed about its implications for politics is essential.
### Tackling AI-Generated Porn: A State-by-State Approach
#### Introduction
In a recent discussion, WIRED's Leah Feiger and Vittoria Elliott, joined by senior writer Will Knight, take on the pressing issue of AI-generated porn and how various US states are addressing it.
#### The Current Landscape
##### Lack of National Regulation
Vittoria Elliott highlights the fragmented approach to regulating AI-generated porn in the US. Without national regulation, efforts are piecemeal. Congresswoman [Alexandria Ocasio-Cortez](https://ocasio-cortez.house.gov/) introduced the Defiance Act to allow victims to sue creators of nonconsensual deepfake porn. Similarly, Senator [Ted Cruz](https://www.cruz.senate.gov/) proposed the Take It Down Act for removing such content from platforms. However, these bills have seen little progress.
“We see this technology that's being deployed; we want to protect young people and women from being abused on the internet.”
##### Rising Concerns Among Youth
The issue has gained attention due to incidents involving middle and high school students using generative AI for bullying by creating explicit images of their peers.
### State-Level Actions
#### Varied Approaches Across States
States like Michigan have introduced bills focusing on minors. These laws would allow victims to sue creators of explicit deepfake content involving young people. Some states even propose criminal liability for offenders.
#### Legislative Challenges
Elliott explains that while 23 states have some form of legislation against nonconsensual deepfakes, these laws often don't align well across state lines. This inconsistency complicates enforcement beyond local levels.
### Why Now?
#### Increased Political Awareness
This year has seen heightened awareness about AI's role in politics and its misuse in creating nonconsensual porn targeting women. Republican state legislator Matthew Bierlein initially focused on political ads but shifted his attention after incidents like a widely circulated nonconsensual deepfake involving Taylor Swift highlighted the urgency.
“For someone so powerful and so rich to still be powerless in controlling her own image really hammered home that this was the moment.”
### The Role of Tech Companies
Will Knight discusses how open-source applications make it easy for anyone to create fake images or videos, even as big companies impose restrictions on their own programs.
#### Propaganda Tools
Knight notes that while initial fears were about deepfakes fooling people completely, they are now more commonly used as propaganda tools or mockery rather than sophisticated deception.
“It's fascinating because it's not really fooling people; it's more about mass-producing propaganda-style images.”
### Future Directions
Feiger questions whether larger AI companies like [OpenAI](https://openai.com/) are collaborating with state officials to establish boundaries around this technology.
Knight believes there is some degree of cooperation but emphasizes that comprehensive solutions require collective effort from both tech companies and legislators.
### The Evolving Challenge of Deepfakes
#### The Growing Threat
Deepfakes are becoming increasingly sophisticated, making it harder to detect them. Hany Farid, a leading expert in this field, suggests that we might soon need technology similar to anti-malware or spam filters to combat deepfakes. This technology could become essential for both companies and individuals.
"We're going to end up with a situation where it's similar to sort of anti-malware or spam restrictions."
#### Beyond Politics: A Broader Impact
While deepfakes initially seemed like a political issue, their impact is spreading. Financial scams, for example, have used deepfake videos of CEOs asking employees to transfer money, showing how the threat extends beyond nonconsensual porn into other sectors.
### Legislative Challenges
#### Federal Government's Role
There is growing pressure on the federal government in the U.S. to regulate AI-generated content, especially nonconsensual pornography involving public figures like Alexandria Ocasio-Cortez. However, progress has been slow, in part because enforcement is so difficult.
"It's not just an issue that people do or do not care about this; thinking about how you might actually enforce this is really difficult."
#### Proving Intent
One major hurdle in regulating deepfakes is proving intent. Many creators of nonconsensual deepfakes claim they are fans rather than abusers. This makes it challenging to demonstrate harmful intent legally.
### AI's Role in Elections
#### Current Landscape
Despite initial fears, AI hasn't played as disruptive a role in elections as expected. However, concerns remain about last-minute convincing deepfakes potentially impacting election outcomes.
"You might have quite a convincing deepfake very late on in the election that could have a big impact."
#### Propaganda and Misinformation
AI-generated content has been used for propaganda. Images like "Comrade Harris," for instance, aim to erode trust by presenting false narratives convincingly enough to sway some viewers.
### Subtle Uses of AI
#### Beyond Deepfakes: Other Applications
AI's role isn't limited to creating deceptive content; it's also being used more subtly in campaigns worldwide:
- Speech Writing: Tools like ChatGPT help write speeches.
- Automated Outreach: In countries like India, automated phone calls reach constituents.
These applications aren't necessarily deceptive but show how deeply integrated AI has become in political processes.
"I think people default to being like 'ahh the deepfakes,' but there are many other uses of AI."
### Market Reception and Future Prospects
#### Mixed Success So Far
AI companies have had mixed success selling their products directly for campaign use. When American voters realized they were talking to an AI bot during campaign calls, many simply hung up. Still, these companies continue experimenting with ways to make the tools persuasive and effective over time.
"We are just at the beginning of this widespread use of language models."
### The Rise of AI in Social Interactions and Persuasion
#### Emotional AI Interfaces
AI technology is evolving to include emotional social cues, making interactions feel more genuine. This development is evident in the popularity of AI companions, such as virtual girlfriends, which users find emotionally engaging.
“Perhaps people will always just reject it, especially if they know it's AI-generated.”
#### Potential for Persuasion
Research indicates that large language models (LLMs) can influence perceptions. Companies might leverage this capability for advertising and political persuasion; such chatbots could sway opinions by providing misinformation or convincing arguments.
“It seems very likely that that's where things would lead unless there's kind of efforts to really restrict that.”
### Guardrails and Ethical Concerns
#### Current Safeguards
Efforts are underway to monitor and control the use of LLMs in politics. However, these measures are still in their infancy and are largely being tested in the wild rather than in controlled settings.
“This is arguably the scariest part of our conversation so far.”
### Historical Context: Social Media Evolution
#### Early Internet Skepticism
Reflecting on past attitudes towards social media highlights how initial skepticism can evolve into significant societal impacts. Early platforms like MySpace were once dismissed but later became central to political discourse.
“If we had judged how people were going to perceive the information ecosystem by the early days of social media...”
### Deepfake Detection Challenges
#### Technological Limitations
Deepfake detection technologies range from analyzing the files themselves to scrutinizing image or audio signals. Despite advancements, these tools often fail to catch deepfakes reliably.
“The truth is the detection is not that great.”
#### Global Disparities
Detection tools struggle outside Western contexts because their training data sets skew white and English-speaking, producing higher false-positive and false-negative rates for non-Western subjects.
“It's a real challenge...”
### Future Implications: Elections and Misinformation
#### Upcoming Election Concerns
As elections approach, companies may claim they can detect or debunk AI-generated content, even as the political landscape shifts rapidly.
“I mean, we're nine weeks out from election day...”
#### The "Liar's Dividend"
The mere existence of deepfake technology lets individuals, politicians included, easily dismiss incriminating evidence as fake. This phenomenon undermines trust and a shared sense of reality among the public.
### The Role of AI in Propaganda
#### Increasing Influence of AI-Generated Content
AI technology is increasingly being used to create and spread propaganda. While some people can identify fake images or recognize that movements like "Swifties for Trump" are not genuine, these creations still have the potential to influence public opinion.
#### Election Misinformation
As we approach critical election periods, the stakes are higher than ever. Questions about election integrity continue to arise, especially from those who deny the results of past elections. These communities are particularly susceptible to misinformation, and AI serves as a powerful tool for spreading false narratives.
"I think it's never been more important to have some shared truth and some commitment to defining it."
#### The Importance of Shared Truth
Will Knight emphasizes the need for a common understanding of truth, which has been under unprecedented attack. He references a book titled "The Death of Truth," which discusses how undermining truth can be used as a means of control.
### Conspiracy Theories: A Growing Concern
#### Introduction
Leah Feiger expresses her eagerness to revisit this topic in the future, highlighting how fluid our understanding of truth has become in today's political landscape.
### Conspiracy Theory Segment
#### Welcome Back!
Leah Feiger welcomes listeners back to WIRED Politics Lab's segment called "Conspiracy of the Week." In this segment, guests share their favorite conspiracy theories they've encountered recently or in the past.
#### Vittoria Elliott's Contribution
Vittoria Elliott presents two options centered around RFK Jr., whom she humorously refers to as her "real boyfriend."
---
For more information on related topics:
- [WIRED](https://www.wired.com)