

Preparing for AI

Your weekly CogX newsletter on AI Safety and Ethics
The week's developments on AI and regulation, explained | 20.10.23

If you’re enjoying this briefing, sign up here — and share it with a friend. Reading time: ~5 minutes


Intelligence chiefs from MI5 and the FBI have expressed concerns over the potential misuse of AI by bad actors for terrorist activities such as bomb-making, spreading propaganda, and election interference. Meanwhile, the US has raised concerns about the EU's proposed AI regulation, warning it may stifle innovation by disproportionately benefiting large tech companies and harming smaller firms.

 

Could workers be the ones to regulate AI? Rana Foroohar discusses the role of the Writers Guild of America in shaping AI regulations for the entertainment industry, emphasising a bottom-up approach driven by individuals with practical experience.

 

Explore these topics and more — from LLM reasoning to AI healthcare — in the CogX Must Reads.


I hope you enjoy it!

Charlie and the Research and Intelligence team


P.S. Did you miss some of the AI sessions at the festival? The CogX Festival 2023 is now available on demand! Watch inspiring sessions on AI risks, opportunities and regulation from the likes of Mustafa Suleyman, Tristan Harris, and Reid Hoffman on our YouTube channel now. 


CogX Must Reads


 


Top Stories


Intelligence chiefs warn of hostile actors using AI

The heads of MI5 and the FBI warn that terrorists and hostile states might misuse AI for building bombs, disseminating propaganda, or meddling in elections. At a Five Eyes summit, they discussed the potential for AI to bypass security measures and the risk of AI-generated political misinformation. (Guardian)

 

US warns that EU policy only benefits big tech

The US has raised concerns about the EU’s proposed AI regulation, warning that it could favour large companies, hurt smaller firms, and stifle innovation. The US fears that EU rules will reduce productivity, drive job and investment migration, and limit the competitiveness of European AI firms. (Bloomberg)


 


Politics and Regulation


UK pushes for creation of AI Safety Institute

Rishi Sunak hopes to establish a new AI organisation, which will collaborate with allies on the national security implications of emerging AI technologies. During the UK's two-day AI summit, Sunak plans to convene with representatives from "like-minded countries" and top AI firms to propose the creation of an AI Safety Institute. (Politico)

 

Mustafa Suleyman and Eric Schmidt propose IPCC for AI

DeepMind co-founder Mustafa Suleyman and former Google CEO Eric Schmidt have proposed an independent body similar to the IPCC, named the International Panel on AI Safety (IPAIS). This entity would offer unbiased insights into AI's current state, risks, and future trajectories, and aims to underpin effective and informed AI regulation. (Financial Times)


 

Latest Research


Human-centred evaluation of XAI

Researchers have examined how "black box" AI systems explain their decisions, particularly in image classification. Just as humans point to key image features when explaining their choices, AI systems have methods to do the same. This study tested three of these methods and found that all were similarly effective in helping people understand AI decisions, a notable step for transparency. (Arxiv)

 

Framework for reasoning with LLMs

A new framework called "Hypotheses-to-Theories" (HtT) addresses the hallucination issue in LLMs and enhances the reliability of their reasoning. Experiments on numerical and relational reasoning problems show that HtT outperforms existing prompting methods, achieving an 11-27% accuracy improvement. (Arxiv)

 

 


AI Thinkers


Do we need a humanity defence organisation?

Yoshua Bengio is advocating for a ‘Humanity Defence Organisation’ for AI, recognising the dual potential of AI to benefit or harm humanity. This entity would monitor AI developments, establish ethical standards, ensure transparency, foster global collaboration, and raise public awareness about AI's potential risks and benefits. (Wired)

 

Workers need a voice in regulating AI

Rana Foroohar discusses how the Writers Guild of America has played a pivotal role in setting AI rules for the entertainment industry. The new union deal shows that AI can be effectively regulated bottom-up by those with hands-on experience, and suggests that unions could serve as data stewards that safeguard the interests of workers. (Financial Times)


 


Use Cases


How to use AI safely in health

The World Health Organization has announced its regulatory guidelines for AI in healthcare, emphasising AI systems' safety, effectiveness and accessibility. While AI holds transformative potential for healthcare, including supplementing professionals' skills and filling gaps in specialist availability, its rapid deployment poses risks, such as misinterpretation, data privacy issues, and potential biases. (WHO)

 

UK startup deepfaking without consent

Yepic AI, which claims to use deepfake technology for positive purposes and promised never to recreate someone without their consent, created deepfake videos of a TechCrunch reporter without their permission. While Yepic AI stated that the videos and images had been deleted, the incident highlights the ethical and privacy concerns associated with the growing use of deepfake technology. (TechCrunch)


 

In case you missed it


Hear from the former Chief of the UK's Secret Intelligence Service and the Assistant Secretary General of NATO discussing the existential threats of AI in war and cyber-terrorism, at the CogX Festival 2023.



✍️ Enjoying this newsletter?  Subscribe to our free weekly briefings on Preparing for AI, Cinema & AI, The Race to Net Zero, AI & DeepTech and The Future of Work.
