Preparing for AI
Your weekly CogX newsletter on AI Safety and Ethics
The week's developments on AI, safety and ethics, explained | 31.05.24
If you’re enjoying this briefing, sign up here — and share it with a friend. Reading time: ~5 minutes
This week, African AI workers are calling on President Biden to address their exploitation by US tech giants, labelling their treatment ‘modern-day slavery’. Meanwhile, the FCC is considering new rules for political ads to combat deepfake disruptions. Will governments finally take responsibility for AI? (The Seoul Summit suggests not).
Plus, big tech is under fire after Meta appointed an all-white, all-male AI advisory council and Microsoft’s ‘Recall’ feature came under investigation by the UK ICO over privacy concerns.
We cover this, PLUS how Google’s AI search is ruining the internet, the true cost of AI hype ($25 billion annually) and the study revealing how little people actually use AI tools.
- Charlie and the Research and Intelligence Team
P.S. You can now sign up for the super early bird offer (75% off) to the CogX AI Summit in London on October 7th here.
Ethics and Governance
🛡 OpenAI prioritising ‘shiny products’ over safety, says departing researcher: Jan Leike, co-head of superalignment and one of OpenAI’s key safety researchers, resigned just after the release of GPT-4o. His departure follows that of Ilya Sutskever, another key safety figure.
🆘 AI workers urge Biden to address 'modern-day slavery': African workers who label AI data and screen social posts for US tech giants are urging the President to address their exploitation, calling for dignified, fairly paid work. Kenya's president will soon visit the US.
🛡 FCC considering AI rules for political ads: FCC Chairwoman Jessica Rosenworcel is proposing new rules that would require TV and radio political ads to disclose their use of AI, amid rising fears of deepfakes disrupting elections.
🔍 Microsoft’s new ‘Recall’ feature is under investigation by the UK’s ICO over privacy concerns. The feature, likened to a Black Mirror scenario, uses AI to continuously take screenshots of PC activity, creating a searchable archive of everything users do.
🇰🇷 Seoul summit flags hurdles to regulation: Despite the UK touting the ‘Bletchley effect’, critics argue that safety efforts remain limited to observation. When will governments move from talking about AI regulation to actually delivering it?
📜 California advances measures against deepfakes in elections and pornography. Lawmakers are also pushing for new AI regulations to protect jobs and fight discrimination. Proposals include oversight frameworks and protections against AI clones of performers.
AI Dilemmas
📢 Meta appointed an all-white, all-male AI advisory council. Critics are highlighting the absence of women and people of colour, stressing the importance of diverse perspectives in AI development to prevent bias and harm. Meta has not responded to requests for comment.
⚠️ Google’s malfunctioning AI risks ruining the internet: The new AI search feature, which provides direct answers instead of linking to websites, is giving incorrect and bizarre responses, like advising users to eat rocks or misidentifying Barack Obama as a Muslim.
🚀 Enjoying our content? We’re also on LinkedIn. Follow us to stay in the loop with the week's most impactful AI news and research!
Insights & Research
🌍 The next wave of AI hype will be geopolitical, and you're paying: Governments worldwide are investing heavily in AI, driven by national security and competition. Spending could exceed $25 billion annually, even as the return on investment remains uncertain.
🤖 AI products much hyped but not much used, reveals a 12,000-person study across six countries. Only 2% of British respondents use AI tools like ChatGPT daily, and while young people are more eager adopters, there's a "mismatch" between hype and public interest.
⚖️ AI firms mustn’t govern themselves, say ex-members of OpenAI’s board, arguing that self-governance is insufficient due to profit pressures. They call for governments to actively regulate AI, emphasising that market forces alone cannot safeguard public interests.
📊 What does the public think of gen AI in news? A survey across six countries reveals the public is cautious about AI's benefits and concerned about its potential drawbacks. Most expect AI to significantly impact science and media, but trust in responsible use varies.
In case you missed it
Why The Machine God isn’t the scenario to worry about | Professor Ethan Mollick explains why “co-intelligence” may be the future of AI.
✍️ Enjoying this newsletter? Subscribe to our free weekly briefings on Preparing for AI, Cinema & AI, The Race to Net Zero, AI & DeepTech and The Future of Work
🚀 You can now sign up for the super early bird offer (75% off) to the CogX AI Summit in London on October 7th here.