


Preparing for AI

Your weekly CogX newsletter on AI Safety and Ethics
The week's developments on AI, safety and ethics, explained | 24.05.24

If you’re enjoying this briefing, sign up here — and share it with a friend. Reading time: ~5 minutes

This week OpenAI's safety lead Jan Leike resigned, criticising Altman's focus on 'shiny products' over safety after the GPT-4o release. Meanwhile, AI deepfakes are disrupting India's elections, spreading unchecked misinformation. With the UK's election date now set for 4 July, could AI manipulation pose a similar threat here?


Plus, AI chatbots are invading online communities, mimicking human empathy and pulling people away from genuine connection. And AI godfather Geoffrey Hinton calls for universal basic income to counter job losses from AI.


We cover this, PLUS the 'catastrophic' consequences of Gemini, the boom of the word 'delve' (and what it tells us about AI), and how ethics is taking a backseat in tech jobs…

- Charlie and the Research and Intelligence Team

P.S. You can now sign up for the super early bird offer (75% off) for the CogX AI Summit in London on October 7th here.

Ethics and Governance


🛡 OpenAI is prioritising 'shiny products' over safety, says departing researcher Jan Leike, the company's co-head of superalignment, who resigned just after the release of GPT-4o. His departure follows that of Ilya Sutskever, another key safety figure.


🤖 AI and deepfakes blur reality in India's elections, with manipulated media spreading misinformation. Fact-checkers warn of the dangers, while the lack of comprehensive regulation allows misuse to continue despite government warnings.


🇪🇺 EU Council gives final nod to AI regulation: The EU has approved the AI Act, setting global standards for AI oversight. It bans unacceptable-risk uses like cognitive manipulation and social scoring, and sets risk-based rules for biometrics, facial recognition, and chatbots.


💼 The EU may fine Microsoft up to 1% of its global annual revenue for failing to provide requested information on genAI risks. The Commission's concerns focus on AI tools in Bing, like Copilot and Image Creator, and their potential impact on civic discourse and elections.


📰 Publishers warn of 'catastrophic' consequences of Gemini, Google's new AI-infused search experience, which answers user queries directly, threatening to cut traffic, audience, and ad revenue for news sites.


AI Dilemmas

🔞 Nonconsensual AI p*rn makers profit on Patreon. Despite Patreon's moderation efforts and the removal of several accounts, the issue persists: creators have found ways to evade detection, offering thousands of fake images and videos.

💻 When AI helps you code, who owns the finished product? Using AI tools like GPT-4 for coding raises questions about copyright, as AI-created content isn't legally protected. As AI becomes more integrated into coding, the legal grey area grows with it: expect future legal battles.

🚀Enjoying our content? We’re also on LinkedIn — Follow us to stay in the loop with the week's most impactful AI news and research!


Insights & Research

🤖 AI chatbots are intruding on human spaces, increasingly entering online communities. The bots provide automated responses mimicking human empathy and experience, raising concerns because these groups exist to provide genuine human support and connection.


💡 AI 'godfather' calls for universal basic income. Geoffrey Hinton, the pioneer of neural networks, urges governments to implement UBI to address job losses caused by AI. He warns that the AI boom will concentrate wealth among the rich, exacerbating inequality.


📚 AI writing in academic journals is booming. Research shows as much as 1% of articles published in 2023 contain AI-generated text, and 46% of the 66,158 papers containing the word "delve" (a telltale ChatGPT favourite) were published between January 2023 and March 2024.

🔍 Is ethics taking a backseat in AI jobs? A study reveals that across 14 OECD countries, less than 2% of AI job postings mention ethical decision-making skills. Keywords like "AI ethics" and "responsible AI" are rarely found in job ads: 0.5% in the US and 0.4% in the UK.

🧠 Anthropic's new research sheds light on 'black box' AI. Using dictionary learning to map Claude's neural network, researchers identified why it prioritises certain topics, giving insight into how it 'thinks': e.g. the features for the Golden Gate Bridge also fired for Alcatraz and the film Vertigo.

In case you missed it

When organisms first developed sight, it sparked an explosion of life and progress. AI pioneer Fei-Fei Li says a similar moment is about to happen for machines with spatial intelligence, enabling them to process visual data, make predictions, and interact with the real world — and with us.

✍️ Enjoying this newsletter? Subscribe to our free weekly briefings on Preparing for AI, Cinema & AI, The Race to Net Zero, AI & DeepTech, and The Future of Work

🚀 You can now sign up for the super early bird offer (75% off) for the CogX AI Summit in London on October 7th here.
