

SEPTEMBER 12TH - 14TH
The O2, LONDON

Preparing for AI

Your weekly CogX newsletter on AI Safety and Ethics

Leading tech entrepreneur Ian Hogarth was appointed Chair of the Prime Minister’s AI Taskforce, modelled on the successful Vaccine Taskforce.

With £100m to deploy and an AI Safety Summit to organise this autumn, he has a busy in-tray.

Meanwhile, Margrethe Vestager, Aidan Gomez and others called for a focus on pressing risks like misinformation, rather than human extinction.

Ultimately, we need to tackle both - but with the media focused on existential threats, the debate is getting skewed. Which risks do you think are most important?

Read about all this and more in the CogX Must Reads - including the next stage of the EU’s AI Act, the US’s concern over Chinese regulation, and a proposal for building models safely.

I hope you enjoy it!

Top Stories
Ian Hogarth to lead AI Taskforce

Tech entrepreneur and co-author of the annual State of AI report Ian Hogarth was appointed Chair of the Foundation Model Taskforce. His previous essays on AI Nationalism signal his view that geopolitics and AI will become intertwined.

He has publicly called for anyone interested in shaping AI safety policy to get in touch.

Focus on immediate AI risks

Margrethe Vestager pushed for more attention on the pressing AI risks of discrimination and bias, rather than human extinction.

She also called for pragmatism - whilst AI regulation should be a global affair, we can't wait for a UN-agreed approach and need to act now.

Politics and Regulation
EU AI regulation gets one step closer

EU parliamentarians voted 499-28 in favour of the proposed text, moving the AI Act to the final stage of the EU’s regulatory process.

The proposed regulation would ban AI for biometric surveillance, emotion recognition and predictive policing, and force generative AI systems like ChatGPT to disclose that their content is AI-generated.

China leading the world on AI rules

Senate Intelligence Chair Mark Warner warned that China is ahead of the game in regulating AI domestically, citing its rules to limit the spread of deepfakes.

His comments followed an intervention by Ted Cruz warning that the US cannot allow China to lead on AI advancement and regulation.

Latest Research
AI Labs and Government partnership

A research proposal from GovAI shows how an information sharing regime between AI Labs and the UK Government could work.

It would focus on model capability evaluations and compute usage for a subset of new foundation models, with information shared throughout the training phase and the Government getting access to the final model before release.

The challenges of AI regulation

Research from the Brookings Institution discusses three key challenges of regulating AI: 1) the speed of change means regulators are playing catch-up, 2) it will impact all sectors, and 3) there’s no obvious regulator who can lead.

Regulators will need to be agile and act swiftly.

AI Thinkers
AI won’t take over the world

Yann LeCun, Chief AI Scientist at Meta, described claims that AI is a threat to humanity as “preposterously ridiculous”.

He forecast that we are decades away from computers being more intelligent than humans - a view not shared by fellow Turing Award winners Geoffrey Hinton and Yoshua Bengio, who have raised the alarm about AI.

Existential risk debate is a distraction

Aidan Gomez, co-founder of Cohere, believes the extinction debate is a dangerous narrative that preys on the public’s fears.

Instead, we should focus on real risks like misinformation and human verification. Given that Cohere recently raised funding at a valuation of over $2bn, Aidan’s views matter for the future of LLM development.

Use Cases
Google cautions staff on using AI

Google advised its staff not to enter confidential material into AI chatbots, and told its engineers to avoid directly using AI-generated code.

The warnings highlight concerns about these tools even at leading AI companies, and the need for responsible use of the technology.

AI abuse will increase fraud

Generative AI could hit Government finances by: i) generating synthetic identities to fraudulently claim social security benefits, ii) exploiting loopholes with complicated tax returns, and iii) fraudulently winning procurement contracts.

Whilst AI will increase fraud, it can also improve fraud detection - and we need to invest in this today.

CogX Must Reads of the week

In case you missed it

Sam Altman discusses the need for ethics in AI

We'd love to hear your thoughts on this week’s Issue and what you’d like to see more of.
