CogX Festival 2023 | September 12th–14th | The O2, London
Preparing for AI
Your weekly CogX newsletter on AI Safety and Ethics
Will Elon Musk dethrone OpenAI?
Elon Musk launched xAI to challenge OpenAI and “seek to understand the nature of the universe.” Another well-funded LLM developer will intensify AI race dynamics and raise the risk of unaligned AI. Will xAI prioritise safety, or pursue speed to overtake OpenAI?
Meanwhile, Anthropic has red-teamed its latest model, Claude 2, with leading safety, alignment and capabilities evaluations.
While policymakers are grappling with regulatory frameworks, Anthropic is taking the lead on safety. Can we trust AI labs to self-evaluate? Or do governments need to impose standards?
Don’t miss out on Bill Gates' optimistic take on AI risks, China’s regulatory crackdown, and Mustafa Suleyman’s new Turing Test in the CogX Must Reads.
Top Stories
Musk launches xAI
The cost of compute is currently the limiting factor for AI model development. Given Musk’s wealth, xAI could quickly become a leading AI developer.
Dan Hendrycks, Director of the Center for AI Safety, is advising xAI, suggesting that the startup will also focus on AI safety.
Anthropic red-teams Claude 2
Anthropic performed extensive evaluations on Claude 2 and declined to deploy any version that posed national security or significant safety risks.
Anthropic is actively sharing its findings with policymakers and other labs, and is working with the Alignment Research Center, which aims to standardise the auditing of models.
Politics and Regulation
China clamps down on AI
New Chinese rules to retain control of AI content will come into effect on 15 August.
Generative AI developers will need a licence to operate, and models must adhere to the “core values of socialism”. Given how difficult that will be to achieve, there’s a risk the regulations will stymie Chinese AI innovation.
OECD launches AI expert group
CogX Festival 2023 keynote speaker Stuart Russell will co-chair an OECD AI expert group.
It will forecast future trajectories of AI systems and find practical ways of mitigating risks, factoring in timelines and operational challenges. The OECD joins other leading multilateral agencies jostling to lead on AI policy and influence domestic regulation.
Latest Research
AI aces creative thinking test
Creative thinking was long thought to be the hardest human ability to automate.
But new research from the University of Montana found that GPT-4 can match the top 1% of human thinkers on a standard test of creativity, strengthening the case that even creative work is at risk of automation.
Optimal governance model
DeepMind researchers have investigated governance models, including intergovernmental commissions and public-private partnerships, to tackle AI risks.
We need to better understand which risks must be managed, the governance functions required, and which organisations can best provide those functions.
AI Thinkers
Humanity can overcome AI risks
Bill Gates optimistically argues that AI risks are manageable. History is littered with examples of humanity facing profound challenges and collectively overcoming them, he said, and we can learn from what’s worked in the past.
Gates also argues that we can use AI to help manage problems arising from AI.
Mustafa Suleyman’s new Turing Test
The Turing test is now routinely passed in experiments and is no longer an effective measure of AI capability.
Suleyman argues that AI systems should be challenged to make $1m on a retail web platform in a few months with just a $100k investment. It would need to design products, interface with manufacturers and negotiate contracts. If achieved, this revised test would demonstrate that we’ve crossed a threshold.
Use Cases
Generative AI leads to a surge in child abuse images
The Internet Watch Foundation (IWF) has warned the Prime Minister that AI-generated child abuse images are increasing, and as model capabilities improve, the potential for abuse will only grow.
An explosion of fake images would also make it harder for police to rescue real children suffering abuse, diverting time and resources from genuine cases.
US military experiments with LLMs
The Pentagon is experimenting with using LLMs trained on classified operational information to develop military strategy and scenario responses.
It represents a major shift in the military's digital capacity: today, little is digitised or connected, and even simple data requests can take staffers hours to complete.
CogX Must Reads
In case you missed it
Hear Elon Musk outlining his vision for xAI
We'd love to hear your thoughts on this week’s issue and what you’d like to see more of.