Preparing for AI
Your weekly CogX newsletter on AI Safety and Ethics
White House AI Summit, Deepfake detectors, Superforecasting AI risk
Leading AI labs such as OpenAI and Inflection AI joined a White House AI Summit to pledge voluntary security and transparency commitments. Watermarks will be added to AI-generated content to reduce misinformation ahead of elections. But are voluntary commitments enough?
Meanwhile, the EU is lobbying Asian countries to follow its regulatory model, whilst think tanks have called for the UK to address gaps in the AI white paper and bolster regulation.
Countries are openly jostling for AI leadership, with attention turning to the AI Safety Summit where the regulatory trajectory will be shaped.
Read on for more on these topics - as well as superforecasting AI risk, Intel’s deepfake detector and Anthropic’s new corporate structure - in the CogX Must Reads.
Top Stories
Tech titans at the White House
Leading US tech companies agreed to internal and external security testing of AI models before release, something safety campaigners have long called for. The agreement focuses on future, more powerful models such as GPT-5 or Gemini, and is the result of the White House working closely with AI labs.
The UK’s approach to AI safety
Rishi Sunak wants the UK to be an AI superpower and lead the world on ethics and safety. But that ambition needs to be matched with action. The UK should prioritise developing an evaluations framework, balancing regulation with innovation and combating misinformation.
Politics and Regulation
EU looks to export AI Act
The EU is lobbying Asian countries including India, Japan and South Korea to follow its regulatory framework, and help the AI Act become a global benchmark. However, the response has been lukewarm; countries are waiting to see how AI develops and want flexibility to pivot.
Gaps in UK regulatory approach
The Ada Lovelace Institute has called for more clarity over the UK’s AI approach, including better-defined rights and new institutions. If the UK wants to be pro-innovation, it needs to clear up legal grey areas, such as the use of data in training and how bias should be evaluated.
Latest Research
Best practice on risks
Researchers from GovAI analysed risk interventions in safety-critical industries to learn lessons for AI labs. They pinpoint risk identification, analysis and evaluation techniques. They also make best-in-class recommendations for AGI companies to adopt, including scenario analysis and the fishbone method - a visual root cause analysis tool, sketched below.
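For illustration only, here is a minimal sketch of how a fishbone-style root cause analysis might be captured in code: the incident (the "effect") sits at the head of the diagram and candidate causes are grouped into category "bones". The categories and causes are hypothetical examples, not drawn from the GovAI paper.

```python
# Hypothetical fishbone (Ishikawa) analysis of an AI incident.
# The effect sits at the "head"; candidate causes are grouped into "bones".
# All categories and causes below are illustrative, not from the GovAI paper.
from collections import defaultdict

effect = "Model produced harmful instructions despite safety filters"
bones: dict[str, list[str]] = defaultdict(list)

bones["Training data"] += ["unfiltered web scrape", "missing red-team examples"]
bones["Evaluation"] += ["no adversarial prompt suite", "benchmark overfitting"]
bones["Deployment"] += ["filter bypassed via prompt injection"]
bones["Process"] += ["release deadline overrode evaluation sign-off"]

# Walk each bone back towards root causes (e.g. by repeatedly asking "why?").
for category, causes in bones.items():
    print(f"{category}: {', '.join(causes)}")
print(f"Effect: {effect}")
```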
Superforecasting AI risk
A research project saw superforecasters and domain experts analyse and predict AI risk. The median superforecaster predicted a 9% chance of catastrophe and a 1% chance of extinction by 2100, compared with 20% and 6% from experts. Past cycles of AI optimism proving unfounded led superforecasters to lower their probabilities, but experts believe this time is different.
AI Thinkers
How China develops AI regulation
Carnegie Fellow Matt Sheehan explains Chinese AI governance including how regulations are shaped, who has influence in the system, and what is coming next. Chinese regulations are often dismissed as irrelevant but they will be fundamental to shaping the technology, he argues.
What would make a successful AI Safety Summit?
GovAI’s Ben Garfinkel and Lennart Heim outline the deliverables that would constitute success for the Summit. These include securing consensus among states on the level of risk, a commitment to create new international institutions, and agreements from AI labs on tangible safety measures.
Use Cases
Intel’s deepfake detector
Intel has developed a system called “FakeCatcher” which uses photoplethysmography (PPG) to spot deepfakes by analysing subtle changes in blood flow. It claims 96% accuracy, but testing by the BBC found issues, especially with pixelated videos and audio.
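As a rough illustration of the underlying idea - not Intel’s implementation - video-based PPG looks for the faint periodic colour change that blood flow causes in real skin. The sketch below assumes you already have face-cropped frames; both the frame format and the pulse-band threshold are illustrative assumptions.

```python
# Minimal sketch of video-based photoplethysmography (PPG), the idea behind
# FakeCatcher, not Intel's actual pipeline. Assumes a stack of face crops;
# real systems use face tracking, many skin regions and a trained classifier.
import numpy as np

def ppg_signal(frames: np.ndarray) -> np.ndarray:
    """frames: (T, H, W, 3) uint8 array of face crops, one per video frame.
    Returns the mean green-channel intensity per frame; in genuine footage
    this carries a faint periodic component driven by blood flow."""
    green = frames[:, :, :, 1].astype(np.float64)
    signal = green.mean(axis=(1, 2))
    return signal - signal.mean()  # remove the DC component

def pulse_band_energy(signal: np.ndarray, fps: float = 30.0) -> float:
    """Fraction of spectral energy in the typical heart-rate band (0.7-4 Hz).
    Genuine faces tend to score higher than synthetic ones; any threshold
    you pick (e.g. 0.3) is an illustrative guess, not a calibrated value."""
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fps)
    band = (freqs >= 0.7) & (freqs <= 4.0)
    return spectrum[band].sum() / (spectrum.sum() + 1e-12)

# Example: flag a clip as suspicious if little energy falls in the pulse band.
# fake_score = 1.0 - pulse_band_energy(ppg_signal(face_crops), fps=30.0)
```

Intel describes FakeCatcher as building PPG maps from many facial regions and classifying them with deep learning; the single-signal version above only conveys the intuition.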
Anthropic’s new corporate structure
Anthropic’s commitment to AI safety now extends to its corporate structure, with a plan to create a Long Term Benefit Trust. The trust will hold a special class of stock which cannot be sold and pays no dividends, but which carries the right to elect three of Anthropic’s five director positions, giving the trust long-term majority control over the company.
CogX Must Reads
In case you missed it
President Biden outlines the US’s approach to AI risks
We'd love to hear your thoughts on this week’s issue and what you’d like to see more of.