Preparing for AI

Your weekly CogX newsletter on AI Safety and Ethics
The week's developments on AI and regulation, explained | 16.02.24

If you’re enjoying this briefing, sign up here — and share it with a friend. Reading time: ~3 minutes

This week the digital battleground intensified as Microsoft’s AI tools — powered by OpenAI — fell prey to sophisticated cyberattacks. The culprits? State-backed hackers from China, Russia, and Iran. In the face of such advanced threats, how can nations and corporations bolster their defences to safeguard the future of digital security?


Meanwhile, across the Atlantic, the landmark EU AI Act is moving into its final stages, following nods of affirmation from key political committees. At the same time, the UK AI Safety Institute has uncovered an unsettling truth about the vulnerability of AI safeguards… 


We cover this, plus an expert's take on UK regulation, the truth about AI's political role, and why researchers think the key to AI safety is regulating hardware.

- Charlie and the Research and Intelligence Team

P.S. You can now register your interest for CogX Festival LA 6-9 May, where we’ll be talking all things AI, Impact, and Transformational Tech!

Ethics and Governance

🛡️ Microsoft’s AI tools are being exploited by hackers from China, Russia, and Iran. The tech giant says state-backed groups are using its OpenAI-powered tools to refine their hacking techniques and deceive targets. Microsoft has banned these groups from its services.


🇪🇺 EU politicians have approved new rules for AI, ahead of a landmark vote on the world’s first AI act. On Tuesday, two European Parliament committees backed the draft legislation, which is intended to ensure AI respects "fundamental rights".


🔓 UK AI Safety Institute finds that AI safeguards can be easily bypassed. With minimal effort, researchers prompted AI systems to create credible fake personas for disinformation, produce biased outputs, and deceive humans in simulations.


🇺🇸 US Deputy Attorney General warned AI could ‘supercharge’ election misinformation and incite violence. She announced intentions to toughen penalties for AI abuses and tackle misinformation; collaboration with tech firms and international allies is already underway.


AI Dilemmas

👩‍🎓 AI plagiarism tools vs students: who should educators believe? When detection software flagged a student's essay as AI-generated, academic Topinka chose to trust the student, citing the software's unreliability and its risk of deepening educational inequality.


🇵🇰 AI and political misinformation in the 2024 Pakistan election: Despite widespread concerns, PTI's transparent use of AI for rallies in Imran Khan's campaign, run while he was imprisoned, showcased the technology's ethical potential amid a landscape of political misinformation.

🚀Enjoying our content? We’re also on LinkedIn — Follow us to stay in the loop with the week's most impactful AI news and research!


Insights & Research

🚫 "AI is too important to be monopolised": antitrust agencies are being urged to prevent dominance by large AI firms, particularly over resources like computing power. Increased public investment is needed to democratise access to compute.


🔧 AI governance policies should focus on regulating hardware, particularly chips and data centres, finds a recent report. Because hardware is tangible and governable, experts argue that targeting it would make AI safety efforts more effective at preventing catastrophic outcomes.


🇬🇧 The UK needs to “set standards, not do testing”. Marc Warner, CEO of Faculty AI, suggests the UK’s AI Safety Institute should set benchmarks that other governments and companies can adopt, aiming for scalability and long-term safety measures.


🤖 Will OpenAI achieve superintelligence before it goes broke? Despite its $100bn valuation, OpenAI faces questions about its long-term revenue model, specifically its ability to balance spending with sales growth. As it pursues AGI, it must seek new funding avenues.

In case you missed it

In a recent — and rather cryptic — statement, Sam Altman claimed that GPT-5 is bigger than you think. Here’s why:

✍️ Enjoying this newsletter? Subscribe to our free weekly briefings on Preparing for AI, Cinema & AI, The Race to Net Zero, AI & DeepTech, and The Future of Work

🚀 Remember to register your interest for CogX Festival LA 6-9 May!
