Preparing for AI
Your weekly CogX newsletter on AI Safety and Ethics
The week's developments on AI and regulation, explained | 10.11.23
If you’re enjoying this briefing, sign up here — and share it with a friend. Reading time: ~5 minutes
President Biden has issued an Executive Order on AI that emphasises bolstering safety measures like mandatory testing and advocates for data privacy legislation. Meanwhile, the UK government has unveiled plans to use AI to reduce government ministers’ workloads.
In the aftermath of last week's AI Safety Summit, the consensus is clear: regulation is a necessity. This prompts pivotal questions: Who will take the reins in shaping this regulation? And what insights are experts bringing to the table?
Explore these topics and more — from handling ChatGPT hallucinations to a recall of self-driving cars — in the CogX Must Reads.
I hope you enjoy it!
Charlie and the Research and Intelligence team
P.S. After an incredible CogX Festival 2023, we're gearing up for another year of cutting-edge innovation, game-changing breakthroughs, and endless inspiration. Don't miss out – grab your super early bird tickets now and secure your spot at CogX Festival 2024 today!
CogX Must Reads
The AI Safety Summit, explained
The UK AI Safety Summit was a landmark event, promoting global collaboration to mitigate risks from frontier AI. What was decided on — and what happens next? Our analysts break it down on the CogX blog. (CogX Blog)
Biden issued an Executive Order on AI
The President’s order prioritises safety, including mandatory sharing of safety test results for powerful AI systems, and calls for data privacy legislation. The order also highlights equity and civil rights considerations in AI, and supports responsible AI use in healthcare and education, while aiming to boost innovation and competitiveness. (White House)
UK government will trial AI to save time on decision making
UK Deputy PM Oliver Dowden has announced trials to use AI to reduce government ministers' workload by streamlining paperwork. Cabinet Office Minister Alex Burghart is among the first to test this technology. Dowden believes AI will boost productivity and cut costs, particularly in addressing government backlogs. (Times)
Safety Summit Responses
Experts react to the Summit
Most attendees shared nuanced optimism about the Summit, while acknowledging the lack of discourse on short-term challenges like disinformation. Marc Andreessen sees slowing AI as harmful; Elon Musk is optimistic about its transformative potential; and Nick Clegg compared AI concerns to previous tech panics. (Guardian)
Who will make the AI rules?
The AI Safety Summit gathered both government and industry leaders, as well as representatives from powerful AI companies. The influence these companies have over events like the Summit raises questions about how power and money will shape AI regulation. (Scientific American)
🚀CogX 2024 Super Early Bird Tickets
Don't miss your chance to secure your spot at the CogX Festival 2024! A limited number of super early bird tickets are now up for grabs at a 75% discount.
Human memory and LLMs
This study explores how LLMs exhibit memory properties similar to human memory, despite lacking a dedicated memory system. The findings suggest that human memory influences how text is constructed, and hence LLMs incorporate these traits through their training data. (Arxiv)
Dealing with hallucinations in ChatGPT
ChatGPT often provides detailed answers, but it can also produce incorrect information about famous people, places, and events. To combat this, researchers have introduced a method to validate ChatGPT statements using RDF Knowledge Graphs. The method aims to improve the accuracy and reliability of ChatGPT responses. (Arxiv)
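The core idea is straightforward: extract a factual claim from a model's answer as a subject–predicate–object triple, then check it against a trusted knowledge graph. The sketch below is a toy illustration of that idea, not the paper's code; it uses plain in-memory Python tuples in place of a real RDF store and SPARQL queries, and the entity names are hypothetical.

```python
# Toy fact-checking sketch: known facts as (subject, predicate, object)
# triples, mimicking an RDF knowledge graph. A real system would load
# triples into an RDF store and query it with SPARQL.
KNOWLEDGE_GRAPH = {
    ("Marie_Curie", "born_in", "Warsaw"),
    ("Marie_Curie", "field", "Physics"),
    ("Marie_Curie", "field", "Chemistry"),
}

def validate_claim(subject: str, predicate: str, obj: str) -> bool:
    """Return True if the claimed triple is supported by the graph."""
    return (subject, predicate, obj) in KNOWLEDGE_GRAPH

# A supported claim is confirmed; an unsupported one is flagged
# as a possible hallucination.
print(validate_claim("Marie_Curie", "born_in", "Warsaw"))  # True
print(validate_claim("Marie_Curie", "born_in", "Paris"))   # False
```

In practice the hard part is the step this sketch skips: reliably turning free-text model output into triples that match the graph's vocabulary.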
We’re doing too much – and yet, too little – with AI regulation
Dr Tim Wu discusses the importance of Biden's executive order on AI, highlighting the need to focus on real problems like deep fakes and fake images, rather than speculative AI risks. While endorsing standardised testing and oversight, he cautions against excessive regulation that could hinder innovation and benefit tech giants. (NYTimes)
Is AI really the problem?
John Naughton explores how the current moral panic concerning AI, driven by media, tech giants, and governments, is more harmful than AI itself. He notes that tech companies want to influence regulation to preserve their dominance, and that governments are enabling this in order to appear to be taking action. (Guardian)
The police love AI — even more than facial recognition
Despite facial recognition software facing public backlash, American police are still embracing AI, including robotics, drones and surveillance tech. President Biden's executive order calls for fairness in AI within the criminal justice system, but state-level AI laws have yet to address policing explicitly. (Fortune)
Autonomous cars recalled
Cruise has recalled 950 autonomous vehicles following an incident in which a robotaxi pulled over after a collision, rather than stopping in place, dragging a pedestrian with it. Cruise is conducting a safety review and has hired experts to examine its response to the incident. (TechCrunch)
In case you missed it
Hear from Emad Mostaque — founder and CEO of Stability AI — on how generative AI will change the future:
✍️ Enjoying this newsletter? Subscribe to our free weekly briefings on Preparing for AI, Cinema & AI, The Race to Net Zero, AI & DeepTech and The Future of Work.