In a groundbreaking move, the European Union’s three branches have tentatively agreed on the AI Act. This landmark regulation could reshape the landscape of artificial intelligence within the economic bloc. While this marks a significant step towards governing AI technologies, the details of the changes required from AI companies remain hazy, and enforcement is likely years away.
Proposed in 2021, the AI Act has yet to receive full approval. Last-minute compromises softened some of its stringent regulatory measures, and enforcement is not expected to commence until around 2025. According to Paul Barrett, deputy director of the NYU Stern Center for Business and Human Rights, the compromise's immediate impact on established AI designers in the US may be limited.
Major AI players such as OpenAI, Microsoft, Google, and Meta are anticipated to continue their competitive pursuits, especially as they navigate regulatory uncertainties in the US. The AI Act’s inception predates the surge in general-purpose AI (GPAI) tools like OpenAI’s GPT-4, presenting complexities in regulating these advanced systems.
The AI Act categorizes rules based on the societal risk posed by AI systems, emphasizing that “the higher the risk, the stricter the rules.” However, concerns from member states, including France, Germany, and Italy, led to compromises during negotiations, resulting in a two-tier system and law enforcement exceptions for prohibited uses like remote biometric identification.
Despite these compromises, French President Emmanuel Macron criticized the AI Act, claiming it creates a regulatory environment that hampers innovation. Some argue that the current rules may pose challenges for new European AI companies in raising capital, potentially providing an advantage to American counterparts.
Notably, the provisional rules do not introduce new laws on data collection, leaving AI models trained on publicly available but sensitive and potentially copyrighted data as a point of contention. The AI Act requires transparency summaries, sometimes described as data "nutrition labels," but it doesn't significantly alter companies' behavior around data, according to Susan Ariel Aaronson, director of the Digital Trade and Data Governance Hub.
The AI Act’s potentially stiff fines won’t apply to open-source developers, researchers, and smaller companies further down the value chain, a move applauded by open-source developers. GitHub’s chief legal officer, Shelley McKinley, sees this as a positive development for open innovation.
As the EU sets the stage for AI regulation, observers believe it could influence policymakers globally, urging them to accelerate their own regulatory efforts. While the AI Act is not finalized, it signals the EU's stance on AI governance and highlights the need for global standards and benchmarking processes.
In contrast, the US has struggled to implement comprehensive AI regulation, with its biggest move to date being an executive order directing government agencies to develop safety standards. The AI Act serves as a reminder of the EU's commitment to transparency and accountability in AI development, offering a glimpse into the future landscape of AI governance on both sides of the Atlantic.