Preparing for AI
Your weekly CogX newsletter on AI Safety and Ethics
AI Safety Summit Special | 03.11.23
If you’re enjoying this briefing, sign up here — and share it with a friend. Reading time: ~5 minutes
The AI world descended on Bletchley Park this week for the first international AI Safety Summit. Across two days, world leaders, CEOs of AI labs, academics and research organisations discussed the risks of frontier AI. The result? A tangible first step toward tackling AI risks, and real optimism going forward.
28 countries from across the globe — including the US and China — along with the EU signed the landmark Bletchley Declaration, an agreement recognising the risks posed by AI and committing to collaborate on addressing them.
And the Summit's legacy was secured with the announcement of follow-up events — a mini-Summit in Korea in early 2024 and a full Summit in France in November 2024 — giving the effort a real sense of momentum.
Read on for more on the Summit — from King Charles’s intervention to the US’s AI Safety Institute — in the CogX Must Reads.
I hope you enjoy it!
Charlie and the Research and Intelligence team
P.S. After an incredible CogX Festival 2023, we're gearing up for another year of cutting-edge innovation, game-changing breakthroughs, and endless inspiration. Don't miss out – grab your super early bird tickets now and secure your spot at CogX Festival 2024 today!
CogX Must Reads
The Bletchley Declaration
Countries from around the world signed The Bletchley Declaration: an agreement recognising the importance of developing safe, human-centric and trustworthy AI. This landmark agreement marks the first time major powers — including the US and China — have jointly recognised and committed to addressing the challenges posed by AI. Commitments also include building a shared scientific and evidence-based understanding of the risks. (GOV.UK)
Government to test AI models
Leading AI labs including OpenAI, Google DeepMind and Anthropic signed an agreement to allow governments to test their models for national security risks before they are released to the public. The UK, US and Singaporean governments are all already setting up capabilities to conduct rigorous evaluations of model capabilities. (Financial Times)
🚀 CogX 2024 Super Early Bird Tickets Don't miss your chance to secure your spot at the CogX Festival 2024! A limited number of super early bird tickets are now up for grabs at a 75% discount.
AI Safety in 2024
Korea and France lead on next steps
Upcoming Safety Summits have been scheduled, with South Korea hosting a mini-Summit in April 2024, followed by France next November, reflecting a commitment to maintaining momentum on international collaboration. Given the speed of technological change and the urgency of action, attendees agreed to hold a Summit twice a year. (Reuters)
Yoshua Bengio to lead ‘State of Science’ report
One of the difficulties with regulating AI is the lack of shared consensus on its capabilities and technical inputs. To help address that, the UK has commissioned Yoshua Bengio to lead a group of experts that will collate the best existing research and identify new research priorities, to be published ahead of the next Summit. The report will be independent and form a shared basis of understanding for future discussions. (GOV.UK)
US follows UK in setting up an AI Safety Institute
The US unveiled plans to set up an AI Safety Institute to assess the risks associated with frontier AI models. The announcement urged those in academia and industry to join the effort, emphasising the importance of private sector involvement and highlighting a forthcoming partnership with the UK Safety Institute. The timing of the announcement raised some suspicion, however, that the US was seeking to overshadow the UK’s Summit. (Financial Times)
The importance of all AI risks, not just existential ones
US Vice President Kamala Harris gave a speech in London outlining the importance of tackling all AI risks, from misinformation to discrimination. A key debate surrounding the UK Summit is whether its focus on frontier risks has been too narrow. However, many argue that these frontier risks are paramount and that different forums can address different categories of risk. (Reuters)
King Charles calls for urgency and unity
In a taped address, King Charles stressed that the risks of AI must be tackled with “a sense of urgency, unity and collective strength.” He advocated for a collective approach, drawing parallels with the fight against climate change. (Wired)
Rishi Sunak x Elon Musk
The Prime Minister interviewed Elon Musk at Lancaster House following the conclusion of the Summit. Musk predicted that AI will eventually replace all work, meaning people will hold jobs only if they choose to. He also praised the UK’s role on AI safety and advocated for continued international collaboration to solve AI challenges. (BBC)
✍️ Enjoying this newsletter? Subscribe to our free weekly briefings on Preparing for AI, Cinema & AI, The Race to Net Zero, AI & DeepTech and The Future of Work.