Main Image: Bletchley Park, where the AI Summit is being hosted
The Bletchley Declaration on AI Safety, agreed by countries attending the UK’s AI Safety Summit, commits them to the urgent need to understand and collectively manage potential risks through a new joint global effort, so that AI is developed and deployed in a safe, responsible way for the benefit of the global community.
The government, which has positioned the UK as an AI mediator as well as practitioner, said the countries involved – represented by politicians and organisations including the US, China, India, Nigeria and Brazil, along with the European Union – had reached a ‘world-first agreement’ today at Bletchley Park ‘establishing a shared understanding of the opportunities and risks posed by frontier AI and the need for governments to work together to meet the most significant challenges’.
Countries agreed that substantial risks may arise from potential intentional misuse of frontier AI, or from unintended issues of control, with particular concern over risks relating to cybersecurity, biotechnology and disinformation.
The declaration also recognises the need to deepen understanding of AI capabilities and risks that are not yet fully understood, and attendees agreed to work together to support a network of scientific research on frontier AI safety.
Part of the declaration reads: “Recognition of the transformative positive potential of AI, and as part of ensuring wider international cooperation on AI, we resolve to sustain an inclusive global dialogue that engages existing international fora and other relevant initiatives and contributes in an open manner to broader international discussions, and to continue research on frontier AI safety to ensure that the benefits of the technology can be harnessed responsibly for good and for all.”
Rashik Parmar, CEO of BCS, The Chartered Institute for IT, said: “The declaration takes a more positive view of the potential of AI to transform our lives than many thought, and that’s also important to build public trust.
“I’m also pleased to see a focus on AI issues that are a problem today – particularly disinformation, which could result in ‘personalised fake news’ during the next election – we believe this is more pressing than speculation about existential risk. The emphasis on global co-operation is vital to minimise differences in how countries regulate AI.
“After the summit, we would like to see government and employers insisting that everyone working in a high-stakes AI role is a licensed professional, and that they and their organisations are held to the highest ethical standards. It’s also important that CEOs who make decisions about how AI is used in their organisation are held to account as much as the AI experts; that should mean they are more likely to heed the advice of technologists.
“We also need to see a greater emphasis on the role of AI in education because young people have the right to be taught about its capabilities, risks and potential to ensure they can thrive in life and work.”
Last week Prime Minister Rishi Sunak said the UK would establish the world’s first AI Safety Institute to ‘create an evidence base for managing the risks while unlocking the benefits of the technology, including through the UK’s AI Safety Institute which will look at the range of risks posed by AI’.
As part of agreeing a forward process for international collaboration on frontier AI safety, the Republic of Korea has agreed to co-host a mini virtual summit on AI within the next six months.
France will host the next in-person summit in autumn 2024.