

Preparing for AI

Your weekly CogX newsletter on AI Safety and Ethics

Is ChatGPT left-wing?

Following criticism of slow progress, the UK announced that the AI Safety Summit will take place on 1-2 November at Bletchley Park, gathering leading countries and tech companies. But will China get an invite?

Meanwhile, a new report suggests that ChatGPT has a left-wing bias, while the FEC is considering tightening regulations on AI in political ads.

New research has also found that AI detection tools are biased against non-native English speakers, and that AI-generated text is increasingly appearing in academic papers.

Explore these topics and more - from AI cracking passwords to concerns over AI cameras - in the CogX Must Reads.

CogX Must Reads
AI Safety Summit set for 1-2 November

The Summit will take place at Bletchley Park, where Alan Turing worked during WWII. It will focus on AI generally, not just generative AI, covering the ethics of using AI systems and the guardrails we should build around them.

Is too much money going to AI pessimists?

Research and advocacy groups working to mitigate short-term harms from AI receive little funding: the AI Now Institute operates on less than $1m. Existential risk groups, meanwhile, are receiving tens of millions. Have we got the balance right?

Politics and Regulation
ChatGPT is left-wing

Researchers from the University of East Anglia found ChatGPT to have a left-wing bias reflecting the positions of the UK Labour Party and the US Democrats. Given the influence LLMs can have, any systemic bias is cause for concern.

FEC considers rules for AI political ads

The Federal Election Commission is under pressure to ban candidates and political parties from using AI to misrepresent their opponents. It has opened a petition for public comment as a first step toward creating new rules. But with the election just over a year away, time is running out.

Latest Research
AI-generated text increasing in academic papers

Researchers have found numerous examples of AI-generated text creeping into papers published in academic journals, without the chatbots being credited as authors. Journals need to establish clear rules and respond to this growing issue.

AI detection tool biased against non-native English speakers

Research from Stanford found that AI detection tools falsely accuse international students of cheating. In one experiment, detectors flagged writing by non-native English speakers as AI-generated 61% of the time - but almost never made mistakes when assessing native English speakers.

AI Thinkers
Humans and machines must work together

Professor Harrell of MIT argues that closer collaboration between humans and machines is needed to create culturally and ethically positive systems. We can design AI systems with the values and worldviews we want - but is reaching consensus possible?

Ethics Institute for AI Thinkers

The Arthur L. Carter Journalism Institute has launched an ethics initiative to help students and journalists conduct research and provide thought leadership on AI. It aims to promote ethical journalism and ensure that coverage of AI ethics issues is balanced and thoughtful.

Use Cases
AI can crack passwords in minutes

AI can crack 51% of common passwords in under a minute, and 71% in less than a day. This raises the risk of cyber fraud and underscores the importance of using stronger passwords.

Concern over AI cameras

New AI-powered cameras deployed in South West England detected nearly 300 driving offences in their first 72 hours, from seat-belt violations to mobile phone use. But campaigners called the scheme an “invasion of privacy” as worries mount over AI surveillance.

Top Stories

In case you missed it

Check out this fascinating debate between leading AI safety experts.

We'd love to hear your thoughts on this week’s issue and what you’d like to see more of.
