Preparing for AI
Your weekly CogX newsletter on AI Safety and Ethics

The week's developments on AI, safety and ethics, explained | 21.06.24
If you’re enjoying this briefing, sign up here — and share it with a friend. Reading time: ~5 minutes
This week, a Wyoming mayoral candidate's plan to let AI govern the city stirs legal controversy, Apple delays its AI launch in Europe due to EU regulations, and Google outlines its principles for AI regulation, sparking debate over whether a handful of companies could end up dominating the industry.
We cover this, plus how neo-Nazis are using AI to spread hate, and how the tech could radically transform the ‘character’ of warfare.
- Charlie and the Research and Intelligence Team
P.S. You can now sign up for the super early bird offer (75% off) for the CogX AI Summit in London on October 7th here.
Ethics & Governance
🤖 Wyoming mayoral candidate wants AI to run the city: Victor Miller vows to let an AI chatbot make all governing decisions if elected, convinced that advanced AI can govern effectively. However, this would violate Wyoming state law, which requires candidates to be human.
🧠 How Google thinks AI should be regulated: Google has outlined its "7 principles for getting AI regulation right," emphasising effective deployment over invention and supporting targeted regulation. Some critics warn this approach could lead to a few companies dominating AI.
⚖️ Lawsuit against generative-music startups is the bloodbath AI needs: Udio and Suno are facing lawsuits from the RIAA for allegedly training their models on copyrighted music without authorisation. The legal battle might force them to disclose their training data.
🚫 Apple delays AI launch in Europe, blaming EU rules: New features like iPhone Mirroring and Apple Intelligence will launch in the US this fall but won't arrive in Europe until 2025, as Apple claims the EU's Digital Markets Act compromises device security and privacy.
AI Dilemmas
⚡ AI is exhausting the power grid: As AI's power needs surge, big tech firms like Microsoft are betting on ambitious projects such as nuclear fusion to generate clean energy. Microsoft hopes to harness fusion power by 2028, but experts remain sceptical.
💥 Neo-Nazis are all-in on AI: Extremists are developing their own AI to spread hate speech and radicalise followers. A report from MEMRI highlights how neo-Nazis are using AI to produce propaganda and even generate 3D-printed weapon designs.
🚀 Enjoying our content? We’re also on LinkedIn — Follow us to stay in the loop with the week's most impactful AI news and research!
Insights & Research
📋 UK needs system for recording AI misuse and malfunctions, think tank says. It recommends creating a government reporting system to collate all AI-related incidents, identifying gaps in current AI incident reporting, and considering a pilot AI incident database.
😟 Why ‘emotional AI’ is fraught with problems: Emotional AI's ability to interpret complex human emotions is debatable, and it risks being used to manipulate people for commercial or political gain. Its pseudoscientific nature and inherent biases are concerning, especially in sensitive industries.
🛡️ AI should never be able to deceive humans, says Zhang Hongjiang, founder of the Beijing Academy of Artificial Intelligence. He stresses the importance of technical solutions over policy alone, and calls for global collaboration between researchers and governments, despite geopolitical tensions.
⚔️ AI will transform the character of warfare, making war faster and more opaque. Advances, driven by conflicts like the Ukraine war, are leading to intelligent killing machines and enhanced command systems, with data processing changing battlefield dynamics.
In case you missed it
AI is reliant on mass surveillance, and we should be careful - Meredith Whittaker
✍️ Enjoying this newsletter? Subscribe to our free weekly briefings on Preparing for AI, Cinema & AI, The Race to Net Zero, AI & DeepTech and The Future of Work
🚀 You can now sign up for the super early bird offer (75% off) for the CogX AI Summit in London on October 7th here.