Network Rail’s AI Trials: Privacy and Security Concerns
Lack of Transparency
Network Rail did not answer WIRED’s questions about the trials, including questions about the current status of its AI deployments, the use of emotion detection, and privacy concerns.
“We take the security of the rail network extremely seriously and use a range of advanced technologies across our stations to protect passengers, our colleagues, and the railway infrastructure from crime and other threats,” a Network Rail spokesperson says. “When we deploy technology, we work with the police and security services to ensure that we’re taking proportionate action, and we always comply with the relevant legislation regarding the use of surveillance technologies.”
Emotion Detection and Privacy
How widely the emotion detection analysis was deployed remains unclear. Internal documents suggest the use case should be treated with caution, and reports from stations note it is “impossible to validate accuracy.” Gregory Butler, CEO of Purple Transform, a data analytics and computer vision company working with Network Rail, says the capability was discontinued during the trials and that no images were stored while it was active.
AI Use Cases and Surveillance
Network Rail’s documents outline a range of AI use cases, including automated alerts for specific behaviors. The systems initially blurred the faces of suspected fare dodgers, but the approach later shifted to unblurring the photos and retaining images for longer than originally planned.
“There is a very instinctive drive to expand surveillance,” Véliz says. “Human beings like seeing more, seeing further. But surveillance leads to control, and control to a loss of freedom that threatens liberal democracies.”