


The CogX blog

Thought leadership on the most pressing issues of our time

The UK AI Safety Summit | 2023 - Review


Everything you need to know about the AI Safety Summit 


TLDR: The UK AI Safety Summit in November 2023, hosted by PM Rishi Sunak at Bletchley Park, was a landmark event focusing on frontier AI safety. It gathered global leaders and experts to discuss risks and strategies for managing advanced AI. Highlights included the signing of the Bletchley Declaration for international AI collaboration, with notable contributions from Elon Musk and others. The Summit was crucial for global AI safety consensus but faced criticism for its narrow focus and lack of immediate action on regulation.


What was it?


The AI Safety Summit took place over two days, from 1-2 November 2023, and was hosted by UK PM Rishi Sunak. Its objective was to discuss AI safety, specifically frontier AI, and to gain an informed understanding from a diverse group of experts of how best to approach frontier AI risks. The event, hosted by the UK government, marked the first global Summit dedicated to the subject.


What is Frontier AI, and why was it the focus? 


Frontier AI models are highly capable foundation models whose capabilities exceed those of today's most advanced AI. While these capabilities offer immense opportunities for innovation and problem-solving, they also pose potentially severe risks to public safety. These include manipulation and misuse by bad actors, spanning cyberattacks, biothreats, and terrorism.


This has led to a growing focus on the need for regulation and careful management of these emerging risks, to ensure that the development and deployment of frontier AI are carried out responsibly and safely.


The Summit aimed to define these risks and establish how to mitigate them.


Where was it? 


The Summit took place at Bletchley Park, honouring the site’s historical technological significance and paying homage to Alan Turing, the renowned computer scientist celebrated for cracking the Enigma code and for devising the famous Turing Test.


Who was there?


The AI Safety Summit's guest list was international, albeit weighted towards UK-based organisations, blending a rich mix of academics, civil society members, lab representatives, and global leaders. Notable figures included Elon Musk, US Vice President Kamala Harris, and EU Commission President Ursula von der Leyen, alongside a taped address from King Charles and award-winning computer scientists and executives from top AI firms.


There was a notable lack of start-up attendees, attributed to the 150-person limit placed upon the Summit. 


What was said?


The first day saw eight roundtables covering the safety implications of frontier AI misuse, particularly in biosecurity and cybersecurity; the unforeseen pace of advances in AI capabilities; the potential for humans to lose control over advanced AI systems; and the societal ramifications of integrating AI.


The Summit also touched upon the strategies AI developers should adopt to scale responsibly, recommendations for national and international policymaking, and the scientific community's approach to ensuring AI safety.


The second day featured an address from the PM and a conversation with Elon Musk. In the conversation, Musk emphasised the need for a third-party referee to monitor AI development companies and raise concerns if needed. He stressed the importance of insight before oversight and expressed concern that governments might prematurely enforce rules without sufficient understanding.


In a taped address, King Charles stressed the urgency of addressing AI risks, stating they must be tackled with "a sense of urgency, unity and collective strength" and drawing parallels with the fight against climate change.


What is the Bletchley Declaration, and why is it important?


The Bletchley Declaration represented a rare moment of international agreement, signed by 28 countries and the European Union, including the UK, US, Australia, and China.


The declaration is a crucial step towards global recognition of the potentially catastrophic risks associated with frontier AI. It emphasises the need for international collaboration in AI safety research, recognising the borderless nature of AI and its impacts, and advocating a collective approach to effective management and regulation.


The Bletchley Declaration could serve as a basis for developing international policies and regulatory frameworks aimed at safely managing the advancement and application of AI technologies. It sets a precedent for adopting preventive measures against potential misuse or unintended consequences of AI.


This declaration is likely to stimulate more in-depth discussions and actions among a diverse group of stakeholders in the AI field, including governments, technology companies, academia, and civil society. It underscores the need to balance technological innovation with ethical considerations, public safety, and human rights.


What’s next? 


Different countries are still approaching AI regulation at their own pace. While the EU is advancing its AI Act and the US has issued an executive order on AI, UK officials are more hesitant, viewing the industry as moving too fast for regulation.


However, there is general agreement on the importance of international Summits for defining and tackling AI challenges: 2024 will see two new Summits, to be hosted by South Korea in March and France in November, reflecting a commitment to maintaining momentum and responding to technological change.


The US has unveiled plans to establish its own AI Safety Institute to assess the risks associated with frontier AI models. The announcement urged those in academia and industry to join the effort, emphasising the importance of private sector involvement, and highlighted a forthcoming partnership with the UK AI Safety Institute. Meanwhile, President Biden's new Executive Order set standards for AI safety, equity, and privacy, while fostering innovation and global collaboration.


Will it be enough? 


Whilst this landmark Summit was the first to promote international cooperation on frontier AI safety, it also highlighted a distinct lack of concrete regulation.


There is also controversy about the Summit's exclusive focus on frontier AI risk, as opposed to more immediately pressing matters such as job losses, bias, and geopolitical control. The Summit revealed a division of opinion over the existential risks posed by AI, with Nick Clegg of Meta expressing concern about more immediate threats to democratic processes.


The Summit also drew criticism for its guest list and focus. Many civil society representatives were absent, prompting an open letter with over 100 signatories voicing their concerns.



If you enjoyed this post, you’ll love our weekly briefings on Preparing for AI. Check out some previous editions here, or just cut straight to the chase and subscribe to our newsletters exploring AI, net zero, the future of work, investing, cinema, and deeptech.
