EU Lawmakers Approve Landmark AI Regulation
The European Union has taken a significant step towards regulating artificial intelligence by officially approving the AI Act. This legislation aims to prohibit certain uses of AI technology and require transparency from providers. The act categorizes AI systems based on their level of risk to society, with higher-risk applications facing stricter requirements.
Implications for Businesses
The AI Act is expected to have far-reaching implications for businesses looking to sell AI services to the public. However, the EU’s transparent and heavily debated development process has given the AI industry a sense of what to expect. According to Lothar Determann, a data privacy and information technology partner at Baker McKenzie, the fact that the AI Act builds on existing data rules could encourage governments to review their current regulations.
OneTrust's chief strategy officer, Blake Brannon, noted that more mature AI companies have already established privacy protection guidelines in compliance with laws like GDPR and in anticipation of stricter policies. For these companies, the AI Act serves as an additional layer to their existing strategies.
Contrasting Approaches: EU vs. US
While the EU has made significant progress in AI regulation, the United States has largely failed to get similar legislation off the ground, despite being home to major players like Meta, Amazon, Adobe, Google, Nvidia, and OpenAI. The Biden administration's most notable action has been a voluntary AI "bill of rights," which large AI players have signed onto. The few bills introduced in the Senate have primarily focused on deepfakes and watermarking, while closed-door AI forums held by Sen. Chuck Schumer (D-NY) have provided little clarity on the government's direction for governing the technology.
Lessons from the EU’s Approach
As the EU moves forward with the AI Act, policymakers in other regions may look to learn from its approach. While the US may not adopt the same risk-based strategy, it could consider expanding data transparency rules or allowing general-purpose AI (GPAI) models more leniency.
Navrina Singh, founder of Credo AI and a national AI advisory committee member, believes that while the AI Act is a significant moment for AI governance, change will not happen rapidly, and there is still much work to be done. She emphasizes the importance of regulators on both sides of the Atlantic assisting organizations of all sizes in the safe design, development, and deployment of AI that is both transparent and accountable.
Looking Ahead
Although the AI Act does not retroactively regulate existing models or apps, future versions of AI systems like OpenAI’s GPT, Meta’s Llama, or Google’s Gemini will need to consider the transparency requirements set by the EU. While the act may not produce dramatic changes overnight, it clearly demonstrates the EU’s stance on AI and sets the stage for future developments in AI governance.