Governments from around the world came together at the UK’s AI Safety Summit to sign a landmark agreement aimed at addressing the potentially catastrophic risks posed by artificial intelligence (AI). The summit, hosted at Bletchley Park, drew tech experts, global leaders, and representatives from 27 countries and the European Union.
The “Bletchley Declaration on AI Safety” was signed by 28 parties, including major players such as the US, China, and the EU. This world-first agreement focuses on mitigating the risks posed by advanced AI models, particularly frontier AI: powerful language models such as the one underlying ChatGPT.
The declaration seeks to identify shared AI safety concerns and establish risk-based policies across participating countries. It acknowledges the potential for serious harm, whether deliberate or unintentional, stemming from the most capable of these advanced AI models.
While the agreement is seen as a significant milestone, some experts argue it lacks concrete policies and accountability mechanisms. Open-source and open-science approaches are among the strategies that experts and scientists advocate in the pursuit of AI safety.
The UK government also unveiled plans to invest in an AI supercomputer known as Isambard-AI, which is expected to be ten times faster than the country’s current fastest machine. The investment aligns with the UK’s ambition to lead in AI, although its regulatory approach has yet to be fully determined.
Notably, tech entrepreneur Elon Musk, a co-founder of OpenAI, emphasized the need for a “referee” for tech companies while advocating cautious AI regulation that avoids stifling the technology’s positive potential.
The European Commission, represented by Ursula von der Leyen, called for objective scientific checks and balances, global AI safety standards, and an independent scientific community. European AI regulations, including the AI Act, are in the final stages of the legislative process.
Meanwhile, US Vice President Kamala Harris emphasized the importance of addressing the full spectrum of AI risks. King Charles III of Britain highlighted AI’s potential for clean energy and called for a collaborative approach to mitigate its significant risks.
In the tech community, concerns were raised about potential moral panic regarding new technologies, with calls for open, responsible, and transparent approaches to AI safety. The discussion on AI safety continues as countries and organizations work towards securing the future of AI technologies.
The UK government also announced plans for future AI safety summits, with South Korea and France set to host the next events.