
Preparing for AI

Your weekly CogX newsletter on AI Safety and Ethics
The week's developments on AI and regulation, explained | 15.03.24

If you’re enjoying this briefing, sign up here — and share it with a friend. Reading time: ~3 minutes


This week, in a landmark move, the EU, despite criticism, approved the world’s most comprehensive AI legislation: the EU AI Act — set to impose strict regulations on AI applications. Will it promote safety, or stifle innovation? I guess we’ll find out…

 

Across the Atlantic, a US government-commissioned report sounded the alarm on AI, calling for urgent regulation to avert “extinction-level threats”. And the debate over international AI rights intensifies as the US seeks exemptions for private firms in a global AI rights treaty.

 

Meanwhile, recent findings reveal that efforts to mitigate racism in LLMs are inadvertently magnifying covert racial biases, particularly affecting speakers of African-American English. With LLMs playing a more significant role in decision-making, the time for mitigation is now.

 

We cover these stories, plus the experts’ take on AI vs social media, LLM reasoning, and that royal photo edit.


- Charlie and the Research and Intelligence Team


P.S. Save 50% — limited Earlybird tickets now available for CogX Festival LA, 7 May


Ethics and Governance



🇪🇺 EU approves the world’s most extensive AI legislation — despite criticism. The EU AI Act introduces strict regulations on AI, including bans on emotion detection, emphasising safety in critical uses. While the act aims to make AI safe, critics fear it might hinder the EU's tech progress.

 

🚨 Urgent AI regulation needed to address “extinction-level threats”, a US government-commissioned report warns. The report recommends limiting the computing power used to train AI models and creating a federal AI agency to enforce these restrictions, plus tighter controls on AI chips.

 

🌍 International AI rights hang by a thread, as the US pushes for treaty exemptions. The US aims to exclude private companies from a global AI rights treaty, facing opposition from the EU, which warns that yielding to US pressure could weaken the treaty's impact.

 

🤔 What will the EU AI Act mean for you? Whilst effects will be felt gradually — due to a 3-year roll-out period — the act will mean enhanced safety and trust in the AI tech you use. All AI tools will have to undergo thorough vetting for risks, akin to security measures in banking apps.

 

AI Dilemmas


💰 This AI scam exploits your loved one's voice using advanced voice-cloning tech that convincingly mimics voices from minimal training data. As a result, fraudulent activities from ransom demands to financial scams are on the rise, with regulation lagging behind.


👑 How that Royal photo edit is fuelling AI distrust. A doctored photo of Princess Catherine and her children, released to dispel “disappearance” rumours, instead intensified conspiracy theories — highlighting how AI manipulations can erode public trust in digital content.


🚀 Enjoying our content? We’re also on LinkedIn — follow us to stay in the loop with the week's most impactful AI news and research!

 

Insights & Research



👥 LLMs become more racist with human intervention: efforts to reduce ‘overt racism’ have inadvertently allowed covert racial biases to strengthen. This reveals the need for improved bias-mitigation strategies in AI, given its expanding role in decision-making.

 

📱 Will AI follow in social media’s footsteps? That’s up to us. The unchecked evolution of social media, now criticised for its role in spreading misinformation and polarising discourse, should serve as a cautionary tale for the development of AI, with which it shares remarkable similarities.

 

🧠 LLMs can reason in structured environments. This paper introduces Reasoning-Path-Editing (Readi), a state-of-the-art framework for LLMs to efficiently and faithfully reason over structured environments like knowledge graphs and tables.


🧬 Sora’s potential to change science — and society. The tech could transform scientific discovery by making complex concepts accessible and enhancing data visualisation. Societally, however, while it may democratise content and education, it also risks spreading misinformation.



In case you missed it


“It’s too dangerous to put in the hands of everybody.” Yann LeCun and Lex Fridman talk AI, AGI, open source, and the limits of LLMs.




✍️ Enjoying this newsletter? Subscribe to our free weekly briefings on Preparing for AI, Cinema & AI, The Race to Net Zero, AI & DeepTech, and The Future of Work


