

Preparing for AI

Your weekly CogX newsletter on AI Safety and Ethics

OpenAI launches Superalignment

OpenAI has launched a Superalignment team armed with 20% of its compute to solve technical alignment within four years. It will build AI systems to perform alignment research and evaluate other AIs. Will DeepMind and Anthropic follow suit? And can AI really save us from AI?

Meanwhile, China is tightening its AI regulation, requiring companies to obtain licences before releasing generative AI models.

Beijing is trying to balance becoming an AI superpower with retaining control over content - but is this possible?

Read about this and more in the CogX Must Reads - including research from GovAI on frontier model regulation, a UN Security Council meeting on AI, and racial bias in AI images.

Top Stories
OpenAI’s Superalignment plan

OpenAI has created a Superalignment Team led by Ilya Sutskever to develop ways to control and direct superintelligent AI systems.

The team will have access to 20% of OpenAI’s current compute, and aims to solve the technical challenges of alignment in four years. It will train AI systems to help evaluate other AI systems and ultimately build an AI system that can do alignment research itself.

What is the UK’s approach to AI safety?

The UK’s approach to AI policy has changed frequently over the past few months, with a series of new announcements, policies and structures. Given the UK’s position as a world leader on AI safety, what happens in Whitehall really matters.

How does the Foundation Model Taskforce really work? What is the AI Safety Summit? And what can we expect from UK policy in the coming months?

Politics and Regulation
China clamps down on AI

China is planning a shift in its AI regulation as it seeks to censor the technology and retain a tight grip on what content AI models produce.

A new licensing regime being drawn up would require generative AI developers to obtain a licence before releasing their models, though some worry this will slow innovation.

UN Security Council to discuss AI risks

The UK has organised the UN Security Council’s first ever meeting on the threat of AI to global peace, chaired by Foreign Secretary James Cleverly.

Secretary-General António Guterres has signalled support for a new UN agency on AI and plans to appoint an advisory board on AI risks this September.

Latest Research
How should we regulate frontier AI models?

Frontier AI models could possess dangerous capabilities sufficient to pose risks to public safety. While self-regulation is important, government regulation will be needed.

New research by GovAI and others looks at how this should work, covering three building blocks: i) setting clear requirements for frontier AI developers, ii) ensuring registration and reporting of models to regulators, and iii) establishing mechanisms to ensure compliance.

Will AI increase biological risk?

New research from Oxford highlights how AI is increasing biological risk through LLMs and biological design tools.

Chatbots like ChatGPT are increasing access to knowledge and resources for creating dangerous biological weapons, while AI programmes like AlphaFold could make these weapons more lethal than ever.

AI Thinkers
What should the UK’s Foundation Model Taskforce do?

Co-founder of Anthropic, Jack Clark, has proposed that the AI Taskforce should focus on sampling frontier models to check private sector claims and perform pre-deployment analysis.

It should also invest in pioneering ways to evaluate models for risks from both misuse and alignment, and develop broader societal impact evaluations of models.

Dan Hendrycks’ approach to saving humanity from AI catastrophe

Co-founder of the Center for AI Safety, Dan Hendrycks, recently organised a statement signed by Sam Altman, Demis Hassabis and others on mitigating the risk of extinction from AI.

He believes it’s plausible that AI could hack systems or develop weapons within a year, and that within two years the technology could gain enough momentum that pulling back becomes difficult. Dan is pushing for urgent action on AI safety.

Use Cases
Black artists claim AI systems are biased

Black artists such as Stephanie Dinkins have repeatedly found that AI image generators mangle facial features and hair textures of Black people.

They argue that bias is embedded deep in AI systems because of unrepresentative training data. Leading AI labs have pledged to reduce bias and mitigate harmful outputs.

AI cheating detection industry booming

A recent survey found that 30% of college students have used ChatGPT on written assignments, shaking up traditional assessment methods in education.

New AI detection companies such as Winston AI are now seeing huge demand for their services as educators push back on AI plagiarism.

CogX Must Reads of the week

In case you missed it

Geoffrey Hinton outlines how Government and Big Tech should work together to safeguard humanity

We'd love to hear your thoughts on this week’s issue and what you’d like to see more of.
