

SEPTEMBER 12TH - 14TH
The O2, LONDON

Preparing for AI

Your weekly CogX newsletter on AI Safety and Ethics
The week's developments on AI and regulation, explained | 23.02.24

If you’re enjoying this briefing, sign up here — and share it with a friend. Reading time: ~3 minutes


In the AI-driven job market, soft skills — like communication and empathy — are becoming increasingly vital. A recent study identified over 500 roles at risk due to AI, including jobs in software development and finance. However, roles rooted in social and emotional skills were less likely to be automated. Will the rise of AI spark a soft skills surge in the workplace? 

 

Elsewhere today, AI in recruitment might be inadvertently filtering out top talent. AI tools at work are enhancing job satisfaction for workers, while major tech companies are streamlining operations in favour of AI development — 34,000 jobs have been cut so far this year.


- Charlie and the Research and Intelligence Team


P.S. You can now register your interest for CogX Festival LA 6-9 May, where we’ll be talking all things AI, Impact, and Transformational Tech!


Ethics and Governance



📜 Open letter urges AI deepfake regulation. AI industry leaders, including Yoshua Bengio, one of the "Godfathers of AI", call for regulation of AI deepfakes in an open letter titled "Disrupting the Deepfake Supply Chain".

 

🇮🇳 Could India slip into a “deepfake democracy”? As India's 2024 election nears, experts warn of a disturbing trend. Political parties are increasingly employing AI-generated deepfakes to sway voter opinions, marking a dangerous pivot in campaign tactics.

 

🔓 North Korean hackers use AI for more sophisticated scams. These cyber operatives are leveraging AI to support Pyongyang's illicit nuclear program and steal cutting-edge technologies from other countries, according to a UN panel of experts monitoring the regime.

 

🤖 Big tech commits to fighting 'deceptive' AI in elections. Companies like Amazon, Google, and Microsoft have pledged to deploy technology that will identify and neutralise misleading AI-generated content. Critics question the effectiveness of this agreement.


🦾 Sundar Pichai: AI can bolster cyber defences, not just undermine them. Google’s CEO called for cooperation between the private and public sectors to fully leverage AI's benefits. He said that with the right foundations, “AI has the potential over time to strengthen rather than weaken the world’s cyber defences”.

 

AI Dilemmas


🖼 Google to adjust AI image generator after diversity backlash. The tech giant is working to refine its Gemini bot's output, which faced criticism for producing historically inaccurate images.


🚨 Can AI explicit content be ethical? Developers like Ashley Neale strive for a balance in AI-powered adult content, implementing safeguards against misuse while enabling consensual, virtual interactions.


🚀Enjoying our content? We’re also on LinkedIn — Follow us to stay in the loop with the week's most impactful AI news and research!

 

Insights & Research



🔍 Google DeepMind forms a new org focused on AI safety. Google's AI research arm announced the formation of a new group called “AI Safety and Alignment”— made up of existing teams working on AI safety as well as new specialised groups of GenAI researchers.


🤖 California's AI Safety Initiative: A Model for the Future? A pioneering bill in California seeks to set safety standards for advanced AI models, aiming to mitigate potential dangers and model a path for nationwide regulation on AI safety.


🇬🇧 Microsoft believes the UK can become a "global leader in AI". Clare Barclay, Microsoft UK's CEO, believes that the UK has a prime opportunity to become a worldwide leader in AI, spanning sectors like healthcare, education, and agriculture. However, she emphasised the need for improved regulation and governance.


⚛️ Scientists announce revolutionary AI advancement towards unlimited clean fusion energy. Researchers have developed an AI model to tackle plasma instability in nuclear fusion reactors, offering hope of overcoming a significant barrier to limitless energy from fusion.



In case you missed it


An OpenAI employee made a concerning statement about society's unpreparedness for Sora, a text-to-video AI model — triggering widespread debate and concern over its potential negative impacts.



✍️ Enjoying this newsletter? Subscribe to our free weekly briefings on Preparing for AI, Cinema & AI, The Race to Net Zero, AI & DeepTech and The Future of Work


🚀 Remember to register your interest for CogX Festival LA 6-9 May!
