
Preparing for AI

Your weekly CogX newsletter on AI Safety and Ethics
The week's developments on AI, safety and ethics, explained | 12.04.24

If you’re enjoying this briefing, sign up here — and share it with a friend. Reading time: ~5 minutes


This week, the EU and US solidified their commitment to jointly steer AI development, focusing on safety and establishing shared international standards. Meanwhile, Sam Altman is pushing for global AI infrastructure to meet AI's growing resource demands… With new models released by OpenAI, Google and Mistral, will our race to legislate be fast enough?

 

We cover these stories, plus Musk's 2025 superintelligence prediction, why AI image generators consistently fail to depict Asian men with white women, and whether American AI could kill European culture for good…


- Charlie and the Research and Intelligence Team


PS. The CogX Transatlantic Accelerator Early Bird offer closes on 15th April: if you are (or know) a UK startup, you can apply here for over $20k worth of support to attend, exhibit and network at CogX Festival in LA on 7th May.

 

Share your expertise! Want to be a guest contributor in our next issue? Drop us a line at editors@cogx.live.


Ethics and Governance

 

🤝 EU and US agree to chart a common course for AI development, with a focus on safety and governance. The agreement aims to create interoperable, internationally recognised AI standards, with efforts to harmonise regulatory environments and promote scientific exchange.

 

🌍 Altman is seeking global support for AI infrastructure, engaging with government and industry leaders worldwide to bolster resources like chips, energy, and data centres. He aims to create a "global AI coalition" to tackle the high energy consumption of AI systems.

 

🇨🇳 China is holding off on strict AI legislation for now to boost its domestic industry, following its familiar pattern of initially lax regulation, sudden strict enforcement, then gradual relaxation. The approach serves China's goal of technological dominance, with economic growth taking priority.

 

⚖️ A new bill would require AI companies to disclose the copyrighted art used in their model training data, tackling legal and ethical issues and imposing financial penalties for non-disclosure. The legislation has garnered significant support from the creative industries.


⏳ The speed of AI development is stretching risk assessments to breaking point, rendering traditional evaluation benchmarks inadequate. Companies and governments are struggling to develop standards that can accurately gauge the safety and performance of future AI systems.

 

AI Dilemmas


😵 Will American AI kill European culture? European nations are launching efforts to produce chatbots proficient in the EU's diverse languages, aiming to preserve their cultural and economic sovereignty against the encroachment of American AI advancements.

 

🤖 AI image generators still struggle to depict certain racial pairings, particularly Asian men with white women, often defaulting to other pairings or misrepresenting the races involved. Despite repeated attempts and varied prompts, these models exhibit persistent difficulties and biases.


🚀 Enjoying our content? We're also on LinkedIn: follow us to stay in the loop with the week's most impactful AI news and research!

 

Insights & Research


🤒 A new effort to "vaccinate" US voters against AI misinformation has been launched by a bipartisan coalition. The initiative seeks to educate voters on recognising and resisting AI disinformation through extensive media campaigns.

 

🔍 Here's how to stop your data from being used to train AI: many companies harvest online content for AI training, but some now offer opt-out options, although these are limited and often complex. The full list of companies, and how to opt out of each, can be found here.

 

🎭 Can a future of undetectable deepfakes be avoided? As generative AI improves, distinguishing real content from synthetic becomes much harder. Efforts like watermarking offer some help, but model 'forking' and open-sourcing make legal enforcement difficult.

 

🧠 Musk predicts AI will surpass human intelligence next year, pulling forward his earlier projection of 2029 to a much sooner timeline. However, he admits this advancement may be hindered by rising power requirements and shortages of AI training chips.


➕ AI is advancing rapidly in pure mathematics, solving intricate problems with advanced reasoning and creativity, signalling a move toward human-like intelligence. DeepMind's model has already surpassed human performance in specific mathematical challenges.


In case you missed it


The AI race is heating up as OpenAI, Google and Mistral all release new models — are we heading towards AGI?



✍️ Enjoying this newsletter? Subscribe to our free weekly briefings on Preparing for AI, Cinema & AI, The Race to Net Zero, AI & DeepTech and The Future of Work


🚀 CogX Transatlantic Accelerator Early Bird offer closes on 15th April: if you are (or know) a UK startup, you can apply here for over $20k worth of support to attend, exhibit and network at CogX Festival in LA on 7th May.
