Safeguarding Elections from AI-Generated FOIA Requests
Concerns Over Chatbots and Election Interference
As the 2024 elections approach, experts are raising concerns about the potential for election deniers to misuse generative AI systems like OpenAI’s ChatGPT and Microsoft’s Copilot. These chatbots can easily generate Freedom of Information Act (FOIA) requests, even referencing state-specific laws, which could overwhelm local election officials and hinder their ability to administer elections smoothly.
“We know that FOIA requests have been used in bad faith previously in a number of different contexts, not just elections, and that [large language models] are really good at doing stuff like writing FOIAs. At times, the point of the records requests themselves seem to have been that they require work to respond to. If someone is working to respond to a records request, they’re not working to do other things like administering an election.”
Zeve Sanderson, director of New York University’s Center for Social Media and Politics, emphasizes the potential for bad actors to exploit these AI tools to disrupt election processes.
Generating FOIA Requests with Ease
WIRED tested several AI chatbots, including Meta’s Llama 2, OpenAI’s ChatGPT, and Microsoft’s Copilot, and found that they could easily generate FOIA requests for battleground states, specifically inquiring about voter fraud. Notably, Copilot generated a request about voter fraud in the 2020 elections without being prompted to do so, and even included the necessary contact information for submitting the request.
Lack of Adequate Safeguards
When questioned about measures to prevent abuse by election deniers, Microsoft stated that it has “detections in place to help prevent bots from scraping our services to create and spread spam,” but did not provide further details. Google’s Gemini refused to generate a FOIA request, while OpenAI and Meta did not respond to requests for comment.
The Need for AI Content Watermarking
Distinguishing between human-generated and AI-generated content can be challenging. Some experts suggest implementing watermarking techniques to identify AI-generated text, for example by nudging the model to use certain words slightly more often than statistical chance would predict, leaving a detectable fingerprint. However, these systems are still in development, and their effectiveness depends on government and local officials having the technology and training needed to detect the watermarks.
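To make the idea concrete, here is a minimal sketch of how detection works in one proposed family of schemes, in which the generator favors a pseudorandom “green list” of words at each step. All names and the green-list fraction below are illustrative assumptions, not any vendor’s actual implementation; real systems operate on model tokens, not whitespace-split words.

```python
import hashlib

# Assumed fraction of the vocabulary favored ("green") at each step.
GREEN_FRACTION = 0.5

def is_green(prev_word: str, word: str) -> bool:
    """Deterministically assign `word` to the green list, seeded by the
    preceding word. A watermarking generator would prefer green words."""
    digest = hashlib.sha256(f"{prev_word}|{word}".encode()).digest()
    return digest[0] < 256 * GREEN_FRACTION

def watermark_z_score(text: str) -> float:
    """Z-score of the observed green-word count against chance.

    Unwatermarked text should score near zero; text from a generator
    that favors green words would score well above typical thresholds
    (proposals often cite z > 4 as strong evidence).
    """
    words = text.split()
    if len(words) < 2:
        return 0.0
    n = len(words) - 1  # words that have a preceding word
    greens = sum(is_green(words[i - 1], words[i])
                 for i in range(1, len(words)))
    mean = GREEN_FRACTION * n
    variance = n * GREEN_FRACTION * (1 - GREEN_FRACTION)
    return (greens - mean) / variance ** 0.5
```

The caveat in the paragraph above shows up directly in this sketch: a records clerk can only run such a test if they have the tooling, and the detector is only meaningful for the specific scheme a given model actually uses.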
“We don’t have the luxury of time to figure this out if we want safe and secure elections.”
David Levine, senior elections integrity fellow at the Alliance for Securing Democracy, emphasizes the urgency of addressing this issue to ensure the integrity of future elections.
Balancing Legitimate Inquiries and Potential Abuse
While chatbots can be helpful for individuals with legitimate inquiries about the public records process, distinguishing between good-faith requests and those intended to overwhelm election officials is a difficult task, especially given the legally mandated response times for FOIA requests.
“I think the position election officials now find themselves in is—I’ll just say it—if Mike Lindell and his affiliated associates want to try and do a functional DDoS-style attack for FOIA, this is something that election officials have to be at least aware of and trying to plan for.”
Ongoing Threats to Election Workers
In addition to the potential misuse of AI-generated FOIA requests, election workers continue to face violent threats and intimidation in the wake of former President Donald Trump’s false claims about the 2020 presidential election. Some states, like Washington, have passed laws making it a felony to harass election workers. Representative Kederer notes that these threats have prompted state lawmakers to allocate more funds for local auditors to enhance security measures.
The Impact on Election Worker Retention
The ongoing threats and scrutiny have led to increased turnover among election workers, as evidenced by a recent report from the Bipartisan Policy Center. This trend is particularly concerning as inexperienced officials replace departing workers, making it even more challenging to respond to a flood of FOIA requests, especially during critical periods like early voting or on election day.
As the 2024 elections draw near, it is crucial for governments and AI companies to collaborate on developing effective safeguards against the potential misuse of generative AI systems by election deniers. Striking a balance between facilitating legitimate inquiries and preventing abuse will be essential to ensuring the integrity and smooth operation of future elections.