

Preparing for AI

Your weekly CogX newsletter on AI Safety and Ethics
The week's developments on AI and regulation, explained | 6.10.23

If you’re enjoying this briefing, sign up here — and share it with a friend. Reading time: ~5 minutes


This week, the EU emphasised a cautious approach to AI regulation, warning against AI paranoia and stressing the importance of "solid analysis" over dystopian concerns. Meanwhile, Spotify CEO Daniel Ek cautioned that while AI laws are vital, the rapid evolution of the technology means they may soon be outdated, and he urged the UK to leverage its post-Brexit autonomy for more flexible AI regulation.


The AI Safety Summit's exclusivity has raised concerns, as it plans to host just 100 attendees, predominantly politicians and tech giants, sidelining AI startups and civil society. Meanwhile, in the U.S., states like Oklahoma and California are taking the lead on AI regulation, establishing task forces and specific directives and potentially outpacing federal initiatives in the domain.

Explore these topics and more - from LLM ideology to ChatGPT for business - in the CogX Must Reads.


I hope you enjoy it!

 

Charlie and the Research and Intelligence team


P.S. Did you miss some of the AI sessions at the festival? The CogX Festival 2023 is now available on demand! Watch inspiring sessions on AI risks, opportunities and regulation from the likes of Mustafa Suleyman, Tristan Harris, and Reid Hoffman on our YouTube channel now. 


CogX Must Reads


 


Top Stories


Brussels warns against AI paranoia

EU vice-president for values and transparency Věra Jourová has urged against excessive paranoia in regulating AI. Jourová emphasised the need for regulation to be rooted in ‘solid analysis’ rather than dystopian fears, warning that over-regulation could hinder technological and business innovation. (Financial Times)


Microsoft and Amazon face UK regulators over cloud services

The UK's Competition and Markets Authority (CMA) is investigating cloud providers, including Microsoft's Azure and AWS, after Ofcom identified difficulties for customers switching cloud suppliers, such as data transfer costs and technical barriers. The CMA also highlighted concerns about Microsoft's software licensing. (The Verge)


Spotify CEO warns AI laws quickly become obsolete

Spotify boss Daniel Ek is urging the UK to exercise its freedom outside of the EU’s strict AI regulation to reduce big tech dominance. Ek stated that whilst regulation is necessary, AI is developing at such a pace that legislation will soon become obsolete. (Fortune)


 


Politics and Regulation


AI safety summit excludes startups

The upcoming summit will host only 100 attendees, comprising politicians, business leaders, and academics, excluding many prominent AI startups, says Summit sherpa Matt Clifford. Anticipated guests include U.S. Vice President Kamala Harris and top executives from major tech firms like Alphabet. (Reuters)


Could US states lead the way on regulation?

U.S. states are actively exploring AI regulations, setting the pace ahead of federal intervention. Oklahoma and California recently set up AI task forces to guide state-level deployment, and Maine and Washington have issued specific AI directives. At the local level, cities like Amarillo, Texas, are even leveraging AI tools for public use. (Fortune)


 

Latest Research


Bias skin-tone test is one-dimensional

Sony AI researchers are advocating for algorithms to encompass a broader range of skin hues, not just lightness or darkness. In their recent paper, they propose a “multidimensional” skin colour metric, highlighting that existing measures might overlook biases against groups like East Asians and Hispanics. (The Verge)


Ideological AI

The potential ideological threats of LLMs, especially in critical areas like elections, remain underexplored. This study, by Worcester Polytechnic Institute and Indiana University, delves into GPT's soft ideologization using AI self-consciousness. Through self-conversations, AI can "understand" desired ideologies, paving the way for ideological tweaks. Compared to classic manipulation methods like censorship, this ideologization is efficient, affordable, and potent, presenting significant risks. (Arxiv)


 


AI Thinkers


Creativity? AI systems mean business

John Naughton notes that whilst LLMs are typically perceived as "stochastic parrots", their creativity is now being debated after GPT-4 outperformed most humans in creativity tests. A recent experiment showed its creativity may be valuable in business, raising concerns about AI guiding corporate decision-making. (The Guardian)


Battle for the future of AI

Bruce Schneier and Nathan Sanders discuss the current narrative around AI: it is not merely about the technology but also about power dynamics, control, and accountability in society. They argue it is crucial to understand these nuanced perspectives, prioritising transparent and responsible AI development while ensuring strong governance to prevent misuse. (NY Times)


 


Use Cases


AI model that knows how software works

Rabbit OS is a first-of-its-kind AI interface designed to translate human language into machine commands, simplifying how users engage with software. Despite being in a competitive space with players like Google DeepMind, Rabbit differentiates itself by focusing on advanced human-machine interactions. (TechCrunch)


Alignment through gaming?

Oxford-based startup Aligned AI has developed the Algorithm for Concept Extraction (ACE), which improves AI safety by preventing systems from drawing false correlations from their training data. The company tested ACE's efficacy on a game called CoinRun, with the algorithm correctly identifying the game's objective 72% of the time. (Fortune)


 

In case you missed it


Hear from Tristan Harris as he shares a vision for the principles we need to navigate the rocky road ahead - at the CogX Festival 2023.





✍️ Enjoying this newsletter?  Subscribe to our free weekly briefings on Preparing for AI, Cinema & AI, The Race to Net Zero, AI & DeepTech and The Future of Work.
