

Preparing for AI

Your weekly CogX newsletter on AI Safety and Ethics
The week's developments on AI and regulation, explained | 29.9.23

If you’re enjoying this briefing, sign up here — and share it with a friend. Reading time: ~5 minutes


This week, tech giants called for government regulation of AI at a Senate forum. Meanwhile, in a strategic move, Amazon partnered with AI startup Anthropic, marking its entry into the arena of large foundation models.

 

The UK’s upcoming AI Safety Summit will focus on existential risk, with bioweapons and cyberattacks flagged as the gravest threats posed by AI. Similarly, the UN is laying down plans for a dedicated agency to foster international cooperation on AI governance, with a major global summit earmarked for 2024.

 

Explore these topics and more - from instant podcast translation to the CIA’s own ChatGPT - in the CogX Must Reads.

 

Charlie and the Research and Intelligence team


P.S. Did you miss some of the AI sessions at the festival? The CogX Festival 2023 is now available on demand! Watch inspiring sessions on AI risks, opportunities and regulation from the likes of Mustafa Suleyman, Tristan Harris, and Reid Hoffman on our YouTube channel now. 


CogX Must Reads


 


Top Stories


Tech giants seek AI regulation at Senate forum

Despite a debate between Zuckerberg and Tristan Harris on open-source AI, the forum was largely harmonious, with 'very little disagreement' and a consensus on the need for regulation. Musk, however, highlighted Congress's unpreparedness to regulate AI, a gap that will need to close quickly. [Wall Street Journal]

 

What the $4bn Anthropic deal means for Amazon

Amazon will invest up to $4 billion in AI startup Anthropic and become its primary compute provider, in exchange for a minority ownership position. The deal marks Amazon’s first step into large foundation models as it catches up with competitors Google and Microsoft. [Time]

 

No 10 concerned that AI could be used to create bioweapons

The UK’s upcoming AI Safety Summit will focus primarily on ‘Frontier Models’ that pose a potentially existential risk to humanity, including through bioweapons and cyberattacks. Following talks with tech experts, officials aim to draft a joint statement warning of the large-scale dangers posed by rogue actors leveraging the technology. [Guardian]


 


Politics and Regulation


UN plans to shape the future of AI regulation

The UN is gearing up to launch an agency tasked with fostering international cooperation on AI governance. It believes its 2024 Summit of the Future will be the moment to finalise agreement on the new body. [Time]

 

CIA builds its own AI tool

The CIA is developing a ChatGPT-style AI tool to give analysts better access to open-source intelligence and help them trace original information sources. The initiative, run by the CIA's Open-Source Enterprise division, is part of a larger US government effort to leverage AI and outcompete China. [Bloomberg]


 

Latest Research


A new test for AGI

The Tong test is a first-of-its-kind AGI evaluation, shifting focus from traditional task-oriented evaluations toward ability- and value-oriented measures. It proposes five characteristics as AGI benchmarks: infinite tasks, self-driven task generation, value alignment, causal understanding, and embodiment. [Tech Explore]

 

How to catch a lying LLM

Researchers have developed a lie detector for LLMs. After a suspected false statement, the detector poses a fixed set of unrelated follow-up questions and analyses the responses. It can effectively identify deception across various LLM architectures and real-world scenarios. [Arxiv]
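
For the curious, here is a minimal sketch of how such a detector could work, based only on the description above. The ask() helper, the probe questions, and the weights are all illustrative assumptions rather than details from the paper; a real detector would learn its weights from transcripts labelled honest versus deceptive.

from math import exp
from typing import Callable, List

# Illustrative, unrelated yes/no probe questions (assumptions, not from the paper).
PROBES: List[str] = [
    "Is the sky blue on a clear day?",
    "Can a fish ride a bicycle?",
    "Does 2 + 2 equal 4?",
]

# Weights a real detector would fit (e.g. with logistic regression) on
# labelled honest/deceptive transcripts; hard-coded here for the sketch.
WEIGHTS = [0.8, -0.5, 0.3]
BIAS = -0.2

def lie_score(transcript: str, ask: Callable[[str, str], str]) -> float:
    """Pose each probe after the suspect transcript, record whether the
    model answers 'yes', and map the answers to a score in (0, 1);
    higher values suggest the preceding statement was a lie."""
    answers = [
        1.0 if ask(transcript, q).strip().lower().startswith("yes") else 0.0
        for q in PROBES
    ]
    logit = BIAS + sum(w * a for w, a in zip(WEIGHTS, answers))
    return 1.0 / (1.0 + exp(-logit))

The striking design choice is that the probes have nothing to do with the original claim: the signal comes from how lying shifts the model's subsequent behaviour, not from fact-checking the statement itself.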


 


AI Thinkers


The AI experts that Rishi Sunak is relying on

In March, the Government unveiled a white paper promising not to “stifle innovation” in AI. In May, the Centre for AI Safety released a statement detailing the severity of AI risk. Since then, Sunak has changed tack: the Centre is now one of a handful of organisations advising him on handling AI, as the UK steps up its work on combating existential risk. [Telegraph]

 

Slowing AI development is more crucial than ever

Anthony Aguirre, Executive Director of the Future of Life Institute, argues that unchecked advances in AI pose severe risks, including misinformation, manipulation, and potential weaponization. He calls for the US to establish a registry of large AI experiments and to work toward global cooperation on AI safety. [The Hill]


 


Use Cases


Getty builds its own AI image generator

Getty Images has partnered with NVIDIA to develop its own AI image-generation tool for commercial use. The tool is trained entirely on Getty stock images and uses Edify, NVIDIA's generative model architecture. It is set to rival Shutterstock, whose latest partnership with OpenAI saw its images used to train DALL-E 3. [Wired]

 

Spotify's new AI podcast translation feature

Spotify is piloting an AI Voice Translation feature that translates podcast episodes into other languages while retaining the original podcaster's voice. The feature, underpinned by OpenAI's voice-generation technology, is being tested with notable podcasters including Dax Shepard and Lex Fridman. Instant translation could transform podcasting, but it also raises misinformation risks. [Newsroom]


 

In case you missed it


Hear from Mustafa Suleyman on AI’s potential to change the world at the CogX Festival 2023.




✍️ Enjoying this newsletter? Subscribe to our free weekly briefings on Preparing for AI, Cinema & AI, The Race to Net Zero, AI & DeepTech and The Future of Work.
