Preparing for AI

Your weekly CogX newsletter on AI Safety and Ethics

Despite Sam Altman’s global tour calling for increased regulation, OpenAI lobbied the EU Commission to water down the AI Act. Several of OpenAI’s proposed amendments made it into the final legal text.

Is Sam Altman’s call for regulation genuine? Should AI labs be more transparent?

Meanwhile, Google DeepMind is using techniques from AlphaGo to build Gemini, an AI model capable of problem solving that is intended to eclipse ChatGPT.

CEO Demis Hassabis cautioned that Gemini's progress means we need urgent safeguards, prioritising evaluations and frontier model access for academics. Do we really need regulation? If so, which regulations are most urgent?

Read about this and more in the CogX Must Reads - including the Senate’s new regulatory framework, the launch of Pause AI and research on AI risks at work.

I hope you enjoy it!

Top Stories
Tech giants back UK AI Safety Summit

OpenAI lobbied for reduced regulation

Sam Altman spent the past month touring the world, meeting with Governments, and imploring them to introduce swift global AI regulation.

But OpenAI was also successfully lobbying the EU Commission to water down the AI Act and reduce its regulatory burden. The company argued that GPT-3 is not inherently high risk and submitted a white paper advocating lighter regulation.

DeepMind calls for more safety efforts as it builds new AI model, Gemini

DeepMind’s new model, Gemini, will use techniques from AlphaGo combined with LLMs to improve problem solving and planning capabilities.

But Demis Hassabis called for research on evaluations and for access to frontier models for academics, hinting that DeepMind may also make its models more accessible. He argued that given exponential progress, safety action is urgently needed.

Politics and Regulation
Schumer unveils new regulatory proposal

Senate Majority Leader Chuck Schumer announced his SAFE Innovation framework as Congress pushes ahead on AI regulation.

It includes four pillars for bipartisan collaboration on AI legislation: security, accountability, explainability and protecting our foundations.

Consumer group calls for urgent EU investigation

The EU’s largest consumer group, BEUC, called on the EU to “launch urgent investigations into the risks of generative AI”.

The group is worried about the harms consumers already face from fraud, misinformation and bias, and believes regulators need to move faster.

Latest Research
Why exactly is AI such a risk?

Research from the Center for AI Safety outlines the precise risks posed by AI and the likelihood of impact.

The paper also discusses the structural dynamics making these problems so difficult to solve, and the technical, social and political responses required to overcome the barriers.

Managers worry about impact of AI on work

Research from the Chartered Management Institute found 75% of managers were concerned about the security and privacy of AI technologies and 43% worried about job losses.

Worryingly, more than half admitted their organisation isn’t keeping up to date with AI advancements and only 4% said they’d received any training in AI technologies like ChatGPT.

AI Thinkers
China evades chip controls

Researchers Fist, Heim and Schneider argue that Biden’s export ban on powerful chips to China isn’t working due to loopholes.

Chinese firms are renting access to controlled Nvidia chips via the cloud, and using intermediaries to smuggle banned technologies. These firms will likely sit outside global regulatory agreements and could pose safety risks.

AI protest group campaigns against human extinction

Existential risk is causing real anxiety for people like Joep Meindertsma, the founder of Pause AI, who is campaigning to halt AI development.

His grassroots campaign has organised demonstrations and already been invited to meet with the EU Commission - and many experts are backing his cause.

Use Cases
Google Cloud launches AI AML tool

We’ve discussed the increased risk of fraud through generative AI in previous briefings and Google is now fighting back with an AI anti-money laundering (AML) tool for banks.

HSBC found the tool reduced the number of AML alerts by 60%, while the accuracy of ‘true positive’ alerts rose two to four times. The tool could lead to significant cost savings for banks.

Stanford Medicine launches Center for Ethics

Stanford has launched the Responsible AI for Safe and Equitable Health center to tackle critical ethical challenges in using AI in healthcare.

The center aims to enhance clinical care outcomes through responsible AI integration, accelerate research, and educate patients and care providers to navigate AI advancements.

CogX Must Reads of the week

In case you missed it

Conjecture’s Connor Leahy discusses the most promising alignment solutions

We'd love to hear your thoughts on this week’s issue and what you’d like to see more of.
