

Preparing for AI

Your weekly CogX newsletter on AI Safety and Ethics

Could AI lead to a new era of bioweapons?

This week, tech giants called for government regulation of AI at a Senate forum. Meanwhile, in a strategic move, Amazon partnered with AI startup Anthropic, marking its entry into the arena of large foundation models.

The UK’s upcoming AI Safety Summit will focus on existential risk, specifically AI-enabled bioweapons and cyberattacks. Similarly, the UN is laying down plans for a dedicated agency to foster international cooperation on AI governance, with a major global summit earmarked for 2024.

Explore these topics and more - from AI podcast translation to the CIA’s own ChatGPT-style tool - in the CogX Must Reads.

CogX Must Reads
Tech giants seek AI regulation at Senate forum

Despite a debate between Zuckerberg and Tristan Harris on open-source AI, the forum was largely harmonious, with 'very little disagreement' and a consensus on the need for regulation. However, Musk highlighted Congress's unpreparedness to regulate AI, something he argued will need to change quickly.

What the $4bn Anthropic deal means for Amazon

Amazon will invest up to $4 billion in AI startup Anthropic and will become its primary compute provider, in exchange for a minority ownership position. This marks Amazon’s first step into large foundation models as it races to catch up with competitors Google and Microsoft.

Politics & Regulation
UN plans to shape the future of AI regulation

The UN is gearing up to establish an agency tasked with fostering international cooperation on managing AI. It believes its 2024 Summit of the Future will be the moment to finalise agreement on this.

CIA builds its own AI tool

The CIA is developing an AI tool similar to ChatGPT to give analysts better access to open-source intelligence and help them trace original information sources. This initiative by the CIA's Open-Source Enterprise division is part of a larger US government effort to leverage AI and outcompete China.

Latest Research
A new test for AGI

The Tong test is a first-of-its-kind AGI evaluation, shifting focus away from traditional task-oriented evaluations and toward ability- and value-oriented measures. The test proposes five characteristics as AGI benchmarks: infinite tasks, self-driven task generation, value alignment, causal understanding, and embodiment.

How to catch a lying LLM

Researchers have developed a lie detector for LLMs. The detector poses unrelated follow-up questions after a suspected false statement and analyses the responses. It can effectively identify deception across various LLM architectures and real-world scenarios.

AI Thinkers
The AI experts that Rishi Sunak is relying on

In March, the Government unveiled a white paper promising not to “stifle innovation” in AI. In May, the Centre for AI Safety released a paper detailing the severity of AI risk. Since then, Sunak has changed tack: the Centre is now one of a handful of organisations advising him on handling AI, as the UK steps up its work on combating existential risk.

Slowing AI development is more crucial than ever

Anthony Aguirre, Executive Director at the Future of Life Institute, argues that unchecked advances in AI pose severe risks, including misinformation, manipulation, and potential weaponisation. He urges the US to establish a registry for AI experiments and to work toward global cooperation to ensure AI safety.

Use Cases
Getty builds its own AI image generator

Getty Images has partnered with NVIDIA to develop its own AI image generation tool for commercial use. The tool is trained entirely on Getty stock images and uses Edify, NVIDIA's generative AI model architecture. It is set to rival Shutterstock, whose recent partnership with OpenAI saw its images used to train DALL-E 3.

Spotify's new AI podcast translation feature

Spotify is piloting an AI Voice Translation feature, which can translate podcast content into different languages while retaining the original podcaster's voice. The feature, underpinned by OpenAI's voice generation technology, is being tested with notable podcasters like Dax Shepard and Lex Fridman. Instant translation could transform podcasting, but it also raises misinformation risks.


In case you missed it

Hear from Mustafa Suleyman on AI’s potential to change the world, at the CogX Festival 2023.

We'd love to hear your thoughts on this week’s issue and what you’d like to see more of.
