The Evolution of Bellingcat: A Decade of Uncovering Truths
The Early Days of Eliot Higgins and Bellingcat
Ten years ago, Eliot Higgins could enjoy hotel room service without fearing for his safety. At that time, he hadn’t yet been labeled a foreign agent by Russia, nor had he exposed the plot to poison Russian dissident Alexei Navalny. Today, Bellingcat, the organization he founded, operates as an NGO based in the Netherlands and is highly sought after globally. Its staff trains newsrooms, conducts workshops, uncovers war crimes, and provides forensic evidence increasingly used in court trials.
Bellingcat’s Impact and Recognition
When I met Higgins in April at a pub near his home, he had just returned from the Netherlands, where he received an award for Bellingcat’s contributions to free speech. Soon, he would be heading back to collect another award for peace and human rights. Bellingcat’s journey highlights the complex nature of truth in the 21st century. Higgins, who began blogging as Brown Moses, quickly realized the internet’s potential for both good and evil. He discovered that the court of public opinion is fractured, with hard facts often devalued. Online, anyone can present their own narratives, even if they are false. Higgins has been on a quest to find places where truth still holds value and can empower the weak while holding the guilty accountable.
Challenges and Future Prospects
The upcoming year may be Bellingcat’s most significant yet. In addition to monitoring conflicts in Ukraine and Gaza, its analysts are inundated with falsified artifacts from elections in the US, UK, India, and many other countries. The rise of artificial intelligence (AI) poses another challenge. While AI is not yet sophisticated enough to deceive Bellingcat’s experts, it is increasingly capable of fooling the general public. Higgins is concerned that governments, social media platforms, and tech companies are not taking this threat seriously enough. He fears they will only act when “there’s been a big incident where AI-generated imagery causes real harm”—by then, it may be too late.
The Birth of Bellingcat
From Blog to Global Platform
WIRED: You now preside over the world’s largest open-source, citizen-run intelligence agency. A decade ago, when you switched from your blog to the Bellingcat website, what path did you see this taking?
ELIOT HIGGINS: At that point, I was still trying to figure out exactly how I could turn this into a proper job. I’d been blogging for a couple of years. But I had children, and it was getting more important to earn a living. When I launched Bellingcat, the goal was to have a space where people could come publish their own stuff. Because at that point, I had several people who’d asked to publish on my blog. I needed a better-looking website. I also wanted a place where people could come together. But that was the extent of my strategy. There was no grand plan beyond that. It was all, “What’s happening next week?”
The Catalyst: MH17
Bellingcat launched on July 14, 2014, and three days later, MH17 was shot down over eastern Ukraine. The community that formed around that event became a massive catalyst for open-source investigation: it helped develop new techniques and raised Bellingcat’s profile. Today, Bellingcat’s Discord server has more than 28,000 members who discuss potential investigations, and articles are published based on the community’s work.
Current Operations and Challenges
Adapting to New Conflicts
WIRED: The world is never boring these days. What has it been like at Bellingcat since October 7, for example?
HIGGINS: We’ve hired more people. We’re bringing in more editors. We’ve shifted people from other projects. We’ve already got one person who’s specifically working on archiving footage. But what’s different is that you don’t get the same kind of footage that we’ve gotten from, say, Ukraine or Syria. There’s actually a lot less coming from the ground.
WIRED: Because of internet blackouts?
HIGGINS: Yeah, and a lot of the stuff we find actually comes from Israeli soldiers who are misbehaving and doing things that I would say are clear violations of international law. But that comes from their own social media accounts—they post it themselves.
The Role of AI in Misinformation
WIRED: Are there things you haven’t seen before, coming from this conflict?
HIGGINS: It’s certainly the first time I’ve seen AI-generated content being used as an excuse to ignore real content. When a lot of people think about AI, they think, “Oh, it’s going to fool people into believing stuff that’s not true.” But what it’s really doing is giving people permission to not believe stuff that is true. Because they can say, “Oh, that’s an AI-generated image. AI can generate anything now: video, audio, the entire war zone re-created.” They will use it as an excuse. It’s just easy for them to say.
The Quest for Accountability
Legal Accountability
WIRED: You have this entirely transparent process, where you put all your evidence and investigations online so anyone can double-check it. But it’s a feature of the world we live in that people who’re convinced of certain things will just remain convinced in the face of all the facts. Does the inability to change minds frustrate you?
HIGGINS: I’ve gotten used to it, unfortunately. That’s why we’re moving toward legal accountability and how to use open source evidence for that. We have a team that’s just working on that. You can have the truth, but the truth is not valuable without accountability.
WIRED: What do you mean by legal accountability?
HIGGINS: Well, you have people on the ground capturing evidence of war crimes. How do you actually take that from YouTube to a courtroom? No one has actually gone to court and said, “Here’s a load of open source evidence the court has to consider.” So we’ve been doing mock trials using evidence from investigating Saudi air strikes in Yemen.
Educating Legal Professionals
A lot of our work is educating people: Lawyers in general don’t know much about open source investigation. They need the education to understand how investigators work, what they’re looking for—and what is bad analysis.
WIRED: Because there’s more and more bad analysis built on open source evidence. Do you know Nexta TV? They’re a Belarusian media organization, and after the attack on the concert hall in Moscow, they posted a series of tweets saying that a lot of people in the scene were wearing blue jumpers and could be FSB agents [members of Russia’s Federal Security Service]. But where’s the proof they were FSB agents in the first place? That was terrible analysis, yet it went viral and convinced people there was something going on. If you can draw colored boxes around something and say you’re doing open source investigation, some people will believe you.
Preparing for Future Challenges
WIRED: There are elections this year in the US and in the UK and in India. Are you preparing to deal with these three big election events as you deal with Ukraine and Gaza?
HIGGINS: There’s a lot to prepare for, and we’re constantly adapting to new challenges. The rise of AI and the spread of misinformation make our work more critical than ever. We’re committed to uncovering the truth and holding those in power accountable, no matter the obstacles.
The Growing Threat of AI-Generated Disinformation
The Scale of Disinformation
The rise of AI-generated imagery and disinformation is a growing concern. In the US, for example, the DeSantis campaign used AI-generated images of Trump hugging Dr. Fauci—crossing a line, and a reminder that these tools are available to the general public, not just political operatives.
The Role of Supporters
The real issue isn’t just what political campaigns decide to do, but what their supporters might do with these tools. This makes the situation much worse.
“It’s not what the campaigns decide to do, it’s what their supporters decide to do.”
The Impact on Fact-Checking
Given the flood of AI-generated content, there’s a concern that organizations like Bellingcat might be reduced to fact-checking viral fakes rather than conducting deeper investigations—as with the viral TikTok videos about Kate Middleton, which spread misinformation rapidly.
Conspiracy Theories and Disinformation
People who believe in conspiracy theories often see themselves as truth-seekers fighting against authorities who have betrayed them. This belief is sometimes rooted in a personal experience of “traumatic moral injury.”
“People who believe in conspiracy theories have previously suffered some kind of ‘traumatic moral injury.’”
The Role of Communities
During the Covid pandemic, alternative health communities, already distrustful of medical professionals, amplified anti-vaxxer voices. This distrust was reinforced daily through their social media groups.
The Challenge of AI-Generated Imagery
The Need for Skepticism
In an era of proliferating AI images, it’s crucial for people to be alert and skeptical. However, constant skepticism can lead to distrust in everything, which is detrimental to democratic debate.
Verification Processes
To distinguish real images from fakes, verification draws on multiple data points: geolocation, shadows, metadata, and corroborating sources. The fake AI image of an explosion at the Pentagon that briefly caused a stock market dip, for instance, was quickly debunked because no corroborating photos, videos, or witnesses emerged.
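The shadow check mentioned here can be made concrete with a little trigonometry: from the height of an object and the length of its shadow on flat ground, you can estimate the sun’s elevation angle and compare it against astronomical data for the claimed time and place. A minimal illustrative sketch (the function name and the example measurements are hypothetical):

```python
import math

def sun_elevation_deg(object_height_m: float, shadow_length_m: float) -> float:
    """Estimate the sun's elevation angle in degrees from an object's height
    and the length of the shadow it casts on flat, level ground."""
    return math.degrees(math.atan2(object_height_m, shadow_length_m))

# A 2 m pole casting a 2 m shadow implies the sun sits 45 degrees above the horizon.
print(round(sun_elevation_deg(2.0, 2.0), 1))  # 45.0
```

If the computed angle is far from what a solar-position calculator gives for the claimed timestamp and coordinates, the image’s stated time or location is suspect.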
The Risk of Coordinated Campaigns
The real danger lies in coordinated social media campaigns using bot networks and fake news websites to create confusion and impact real-world events like the stock market.
Solutions and Responsibilities
Social Media Companies
Social media platforms must be legislatively required to implement AI detection and flagging as part of the posting process. A voluntary system won’t suffice; there need to be consequences for non-compliance.
“I think my worry is that we’re only going to figure this out when something really terrible has happened.”
Personal Insights and Experiences
Shifting Roles
The founder of Bellingcat no longer does much investigative work but focuses on PR and communications. Overcoming social anxiety has been a significant personal journey, with public speaking becoming more manageable over time.
Online Activities
Removing Twitter from his phone has helped reduce stress. Engaging in debates used to be a way to test knowledge, but the persistent myths about Bellingcat have made such discussions less fruitful.
AI for Entertainment
In his spare time, he uses AI tools like Suno AI and Udio for music creation. These tools have advanced significantly, allowing for creative expression through custom lyrics and style prompts.
“I like it especially when the AI generator really gets weird, goes completely off the rails.”
Conclusion
The rise of AI-generated disinformation poses a significant threat, requiring both heightened public awareness and legislative action to ensure social media platforms take responsibility.
Understanding Music Generation and Cyber-Miserabilism
Music Generation with AI
When creating music using AI, you can’t directly prompt it to mimic specific bands like the Beastie Boys due to legal concerns. Instead, you can use a workaround by asking ChatGPT for style tags related to a band, such as Kraftwerk. These tags can then be used in the music-generation program to achieve the desired sound.
The Concept of Cyber-Miserabilism
The term “cyber-miserabilism” refers to the nostalgia for a time before the internet. Many people feel their minds were calmer before the constant scrolling through feeds. This continuous connectivity can be traumatizing, as seen during events like the Ukraine conflict in 2022, where people felt overwhelmed by the content stream.
Coping with Traumatic Content
Exposure to Disturbing Imagery
In the early days of Bellingcat, staff were frequently exposed to graphic content, including footage of dead bodies. Coping requires compartmentalizing and dissociating from the traumatic material—and recognizing personal triggers. While analyzing the wreckage of MH17, for instance, an investigator might come across something like a familiar doll and must know when to stop.
The Impact of Online Skepticism
Analyzing distressing content, like the victims of the 2013 sarin attacks in Syria, can be emotionally taxing. The trauma is compounded by online skeptics who claim such events are fake. This compulsive need to witness and acknowledge suffering can lead to self-traumatization without effecting real change.
Psychological Support and Security Measures
Access to Therapy
Bellingcat offers psychological support, including free therapy, to help staff cope with the emotional toll of their work. This support is essential not only for dealing with graphic content but also for managing aggressive reactions from governments.
Enhanced Security Protocols
After Russia declared it a foreign agent in 2021, Bellingcat implemented stringent security measures, including cybersecurity reviews, physical security briefings, and staff training on handling surveillance. Transparency about funding has also been scaled back to protect donors with ties to Russia.
Navigating Interactions and Data Security
Vetting Meetings
Before agreeing to meetings, thorough research is conducted to ensure safety. This includes verifying the identity of the person and being cautious of unusual questions or scenarios that could indicate a setup.
Data as an Equalizer
Data can level the playing field between individuals and states. Governments may learn to hide their data better, but investigative techniques evolve too. After Bellingcat exposed Russian GRU officers, for example, subsequent identity documents appeared with altered photos—yet the alterations themselves confirmed the officers’ GRU affiliation. Investigators adapt by finding new methods and carry on.
“Data is the great equalizer between an individual and the state.”
Conclusion
The continuous evolution of investigative techniques and the importance of psychological support highlight the resilience required in this field. Despite challenges, the pursuit of truth and transparency remains unwavering.