7 OCTOBER | LONDON 2024

SEPTEMBER 12TH - 14TH
The O2, LONDON

Preparing for AI

Your weekly CogX newsletter on AI Safety and Ethics

We’ve loved seeing the tech world come together for London Tech Week - congratulations to Carolyn and the team for putting on an amazing week!

The UK will host an international AI Safety Summit this Autumn to tackle alignment. Tech leaders welcomed the move from Rishi Sunak, which demonstrates the UK's growing global AI leadership.

AI labs also agreed to share their models with the UK Government for safety research. But a new TBI report questions the UK's AI strategy - it calls for a new national AI lab, Sentinel, and the abolition of the Alan Turing Institute. What should the UK do to be a successful AI leader?

Meanwhile, an essay from Marc Andreessen countered recent AI fears with an optimistic take that AI will save the world. He argues that AI risk is overblown and that the real risk is losing the race to China. What do you think? Please let us know.

Read about this and more in the CogX Must Reads - including the EU's pushback on deepfakes, Nvidia's software leaks and the IMF's warning on jobs.

Top Stories
Tech giants back UK AI Safety Summit

Rishi Sunak announced that the UK will host a global AI Safety Summit this Autumn featuring key countries, leading tech companies and safety researchers.

Google DeepMind, Anthropic, Faculty and others all backed the UK’s leadership and welcomed the opportunity to agree on safety measures at the Summit.

Labs to open up models for UK Government

UK AI leadership continued with an agreement from Google DeepMind, OpenAI and Anthropic to open up their models to the UK Government for research and safety purposes.

The priority access will help build better evaluations and inform Government safety regulations.

Politics and Regulation
EU leads the fight against deepfakes

The EU is pushing AI platforms to clearly label AI-generated content in a bid to counter misinformation.

Given the impact of deepfakes on fraud and trust in society, lawmakers are pushing for urgent action by July.

OpenAI calls for China’s involvement in AI safety

China has a crucial role in shaping AI safety guardrails, given the strength of Chinese AI talent and the need for global cooperation, according to Sam Altman.

Given geopolitical rivalry, it is difficult to imagine the US and China working closely together on AI - but China's involvement may ultimately be needed.

Latest Research
Former UK leaders call for radical shift on AI policy

A report led by Sir Tony Blair and William Hague called for ramping up UK AI funding to the scale of HS2, and for replacing the Alan Turing Institute and the AI Council.

They also call for a nationally funded UK AI lab named Sentinel to lead research and explore regulations. The bold recommendations would fundamentally change UK AI policymaking.

Nvidia’s AI software at risk of leaks

Researchers at Robust Intelligence found that Nvidia’s AI software could be manipulated to ignore safety constraints and reveal private information.

Finding the vulnerabilities took only a few hours - security will need far more attention as AI adoption increases.

AI Thinkers
Why AI will save the world

Marc Andreessen's viral essay made the case for why AI will save, rather than end, the world. He argues that AI will not kill us, take our jobs, or ruin society, and that we should invest in the technology to beat China.

He claims AI panic stems from "Baptists and bootleggers" - those who genuinely believe in AI risks, and the self-interested opportunists who have jumped on the bandwagon.

Why the IAEA model doesn’t work for AI

Ian Stewart argues that the IAEA model won't work for AI because the pathways to AI catastrophe are not as clear as those for nuclear weapons.

He explains how nuclear regulation took decades to evolve, but we need to move much quicker on AI.

Use Cases
AI financial scams on the rise

Losses to online scams rose to $8.8bn in the US last year, and that figure is predicted to increase significantly this year with generative AI.

Experts warn that AI could fuel ransom fraud through voice impersonation, romance scams through image generation, and phishing through personalised scripts.

IMF adds to siren calls on job losses

The IMF warned of substantial disruptions to the labour market as generative AI gets adopted across sectors.

We need to learn lessons from the manufacturing decline of the 1990s, which led to a backlash against globalisation.

CogX Must Reads of the week

In case you missed it

Gary Marcus warns that truth and reason may not survive the evolution of AI

We'd love to hear your thoughts on this week's issue and what you'd like to see more of.
