EU Lawmakers Approve Landmark AI Regulation
The European Union has taken a significant step towards regulating artificial intelligence by officially approving the AI Act. This legislation aims to prohibit certain uses of AI technology and require transparency from providers. The act categorizes AI systems based on their level of risk to society, with higher-risk applications facing stricter requirements.
Implications for Businesses
The AI Act is expected to have far-reaching implications for businesses looking to sell AI services to the public. However, the EU’s transparent and heavily debated development process has given the AI industry a sense of what to expect. According to Sam Aaronson, a partner at Baker McKenzie, the provisional text shows that the EU has listened and responded to public concerns surrounding the technology.
Building on Existing Data Rules
Lothar Determann, a data privacy and information technology partner at Baker McKenzie, believes that because the AI Act builds on existing data rules, it could encourage governments to reassess their current regulations. Blake Brannon, chief strategy officer at OneTrust, noted that more mature AI companies have already established privacy protection guidelines in compliance with laws like GDPR and in anticipation of stricter policies. For these companies, the AI Act adds a layer on top of strategies already in place.
The US Approach to AI Regulation
In contrast to the EU, the United States has struggled to make significant progress on AI regulation, despite being home to major players such as Meta, Amazon, Adobe, Google, Nvidia, and OpenAI. The Biden administration’s most notable actions have been the Blueprint for an AI Bill of Rights and a set of voluntary safety commitments signed by large AI developers. The few bills introduced in the Senate have primarily focused on deepfakes and watermarking, while the closed-door AI forums held by Sen. Chuck Schumer (D-NY) have offered little clarity on the government’s direction in governing the technology.
Lessons from the EU’s Approach
As the EU moves forward with the AI Act, US policymakers may look to the EU’s approach and draw lessons from it. While the US may not adopt the same risk-based framework, it could consider expanding data transparency rules or treating general-purpose AI (GPAI) models more leniently.
The Road Ahead
Navrina Singh, the founder of Credo AI and a member of the National AI Advisory Committee, believes that while the AI Act is a significant moment for AI governance, change will not happen quickly, and considerable work still lies ahead. In an interview with The Zero Byte in December, Singh said regulators on both sides of the Atlantic should focus on helping organizations of all sizes safely design, develop, and deploy AI that is both transparent and accountable. She also pointed to the lack of standards and benchmarking processes, particularly around transparency.
Impact on Existing Models and Apps
The AI Act does not retroactively regulate existing models or apps. However, future versions of AI systems like OpenAI’s GPT, Meta’s Llama, or Google’s Gemini will need to meet the transparency requirements the EU has set. While the act may not produce dramatic changes overnight, it makes the EU’s stance on AI clear.
Update, March 12th, 8:30AM ET: This article has been updated following the AI Act’s official adoption.