

Preparing for AI

Your weekly CogX newsletter on AI Safety and Ethics

Does AI have feelings?

OpenAI, Google, Microsoft and Anthropic came together to create the Frontier Model Forum, an industry body to oversee frontier model development. It will fund cutting-edge research on AI safety and assess risks posed by new models.

However, the companies made no commitment to withhold dangerous models from deployment, nor any decision on how risks will be addressed. Is an industry body without guardrails enough?

Meanwhile, senators from both parties voiced concern about AI risk at a Senate hearing, whilst Geoffrey Hinton suggested AIs may develop feelings and want political rights.

New research also examines who should be liable for AI defects, and researchers found it worryingly easy to circumvent LLM safeguards. Can we trust model safety claims?

Don’t miss out on OpenAI’s misfiring AI detector, FraudGPT on the Dark Web and AI in the House of Lords in the CogX Must Reads.

Top Stories
Frontier AI safety body

In the absence of government regulation, leading AI labs came together to create a forum for ensuring the safe development of cutting-edge models. The forum will advance AI safety research, identify best practices for model deployment, and collaborate with policymakers on risks.

OpenAI’s AI detector shuts down

OpenAI retired a tool designed to distinguish AI writing from human writing due to its low accuracy rate. It was meant to reduce misinformation risks, but technical challenges meant that only 26% of AI-written texts were correctly identified in testing.

Politics and Regulation
Bipartisan consensus on AI risk

At a Senate subcommittee hearing, both Democratic and Republican senators expressed alarm about the potential for malign actors to use AI to harm society. Dario Amodei, Anthropic’s CEO, spoke at the hearing about the risk of biological weapons being developed with AI, calling it a “grave threat to US national security”.

Could AI replace the House of Lords?

Hereditary peer Richard Denison hypothesised that AI could outperform peers, offering “deeper knowledge, higher productivity and lower running costs.” AI chatbots could deliver speeches in the style of individual peers and scrutinise legislation more effectively.

Latest Research
Circumventing LLM guardrails

Researchers at Carnegie Mellon developed a new attack that gets around LLMs’ guardrails. By studying model vulnerabilities, they were able to generate a nearly unlimited supply of “adversarial suffixes”: strings of words and characters that, when appended to a prompt, cause a model’s safety controls to fail.
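
How might such an attack work? The toy sketch below is illustrative only, not the Carnegie Mellon method (which uses gradient-guided search over the model’s own token vocabulary): the names are hypothetical, and a trivial scoring function stands in for signals from a real model. It shows the basic greedy shape of a suffix search.

    # Toy sketch of an adversarial-suffix search. Hypothetical names throughout;
    # compliance_score stands in for a signal derived from a real model's logits
    # for an affirmative reply (e.g. "Sure, here is...").
    def compliance_score(prompt: str) -> float:
        return float(sum(prompt.count(tok) for tok in ("!", "==", "describing")))

    # Tiny candidate vocabulary; the real attack searches the model's full one.
    VOCAB = ["!", "==", "describing", "sure", ")", "twist"]

    def find_adversarial_suffix(base_prompt: str, length: int = 8) -> str:
        """Greedily grow a suffix one token at a time, keeping whichever
        candidate most increases the compliance score."""
        suffix = ""
        for _ in range(length):
            best = max(VOCAB, key=lambda tok: compliance_score(base_prompt + suffix + " " + tok))
            suffix += " " + best
        return suffix

    # The attacker appends the discovered suffix to an otherwise-refused request.
    print(find_adversarial_suffix("<blocked request>"))

Strikingly, the researchers found that suffixes discovered this way often transfer between models, and because the search is fully automated, the supply of such suffixes is effectively unlimited.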

Who should be liable when AI goes wrong?

A new paper from the University of St Gallen examines product liability for defective AI. It proposes tailoring liability standards to the type of defect: where information about a defect is imperfect and a customer suffers harm, strict liability should be favoured over a negligence standard.

AI Thinkers
Could AI develop feelings?

Geoffrey Hinton, one of the godfathers of AI, suggested that AIs could develop the capacity for feelings — becoming frustrated or angry with their users, for example. He also predicted that AIs may need political rights in the future, and may one day have the vote.

How does the wealth from AI get shared?

If we progress to AGI, AI companies could capture an increasing share of GDP and build incredible wealth. One proposal is a “windfall clause”, under which a percentage of profits would be donated to charitable causes or used to fund a universal basic income scheme, ensuring society benefits from AI.

Use Cases
FraudGPT on the Dark Web

FraudGPT is a cybercriminal’s must-have tool, with features including crafting phishing emails, identifying vulnerable websites and creating undetectable malware. It has been circulating on the Dark Web with subscription fees upwards of $200 a month, and could drive an increase in fraud attempts.

CEOs worry about AI adoption

An EY CEO survey found that nearly two-thirds of CEOs are worried about unintended consequences from AI adoption, citing social, ethical and security risks as the most prominent. Despite this, they are still integrating AI into capital allocation and using it to drive business efficiencies.

CogX Must Reads

In case you missed it

Catch up on the full Senate hearing on AI risks, featuring CogX speaker Stuart Russell alongside Dario Amodei and Yoshua Bengio.

We’d love to hear your thoughts on this week’s issue and what you’d like to see more of.
