
Preparing for AI

Your weekly CogX newsletter on AI Safety and Ethics
The week's developments on AI and regulation, explained | 02.02.24

If you’re enjoying this briefing, sign up here — and share it with a friend. Reading time: ~3 minutes


This week OpenAI announced a collaboration with Common Sense Media to develop safer AI tools for children. The aim is to build family-friendly chatbots that cultivate trust among parents and educators. On the other side of the tech world, Google made headlines by splitting up its AI ethics team… yet again. Are tech giants truly making progress, or simply taking one step forward and two steps back?

 

Meanwhile, the US government is stepping up to the AI regulation plate following the recent proliferation of Taylor Swift deepfakes across the internet. The proposed 'Defiance Act' would criminalise the distribution of nonconsensual sexual deepfakes and offer victims a path to civil penalties.

 

Across the pond, a survey revealed that over half of UK undergraduates are using ChatGPT for essay writing, and teachers aren’t far behind, with the majority exploring AI for lesson planning and material generation. Should we try to stop AI from taking over education? 

 

We also cover the experts' take on the impending age of disinformation, what AGI actually is (spoiler: no one knows), plus the latest research on LLMs… can they outsmart economists?


- Charlie and the Research and Intelligence Team


P.S. You can now register your interest for CogX Festival LA, 6–9 May, where we'll be talking all things AI, Impact, and Transformational Tech!


Ethics and Governance



👨‍👩‍👧‍👦 OpenAI collaborates with Common Sense Media to create safer AI tools for children, focusing on establishing protective guidelines in family-friendly chatbots. The partnership aims to enhance trust in AI among parents and educators amid growing ethical concerns.

 

🔄 Google splits up primary AI ethics watchdog. The Responsible Innovation team, responsible for reviewing ethical compliance of AI products, faces restructuring following a leadership reshuffle. Most members will move to the trust and safety division.

 

🤖 Taylor Swift AI deepfakes prompt US regulation. Following the widespread circulation of nonconsensual sexual deepfakes of Taylor Swift, the proposed 'Defiance Act' would criminalise their distribution and enable victims to seek civil penalties against perpetrators.


🔒 US seeks AI data disclosure from tech giants amid China tech war. The government is proposing that US cloud service providers, like Amazon, Google, and Microsoft, disclose foreign clients using their platforms for AI development, targeting national security threats.

 

AI Dilemmas


🎓 Over 50% of UK undergraduates are using ChatGPT for essays. A survey found that the majority use AI programs for essay writing, with a smaller number directly copying AI-generated text. Teachers are also exploring AI for lesson planning and material generation.


📱 Google Bard update will read all your private messages. Bard is set to integrate into the Messages app and will analyse users' private messages to enhance communication and provide personalised responses, raising worries about the privacy of sensitive content.


 

Insights & Research



🗣️ Everyone wants to build AGI, but no one can agree on what it is. The pursuit of AGI, often considered the holy grail of AI, is the big topic in Big Tech at the moment. However, it is marked by significant debate, uncertainty, and shaky timelines.

 

📊 Can AI replace economic choice prediction labs? Research reveals that yes, LLMs excel at simulating human behaviour in economic settings. AI can predict decision-making in language-based persuasion games and outperform models trained on human data.

 

🧠 Do LLMs have the same cognitive biases as humans? Researchers have found that LLMs exhibit biases similar to humans' in text comprehension and solution planning, but not in arithmetic execution, highlighting differences in numerical computation processing.


📰 AI threatens the information age, with fake news and disinformation. The gen-AI trend is creating feedback loops, where AI-generated content is fed back into the models, potentially leading to a homogenised and less diverse information landscape.


In case you missed it


Google's AlphaGeometry AI has surpassed International Mathematical Olympiad participants at geometry problem-solving, a huge leap in AI capabilities. Check out the full story here:



✍️ Enjoying this newsletter? Subscribe to our free weekly briefings on Preparing for AI, Cinema & AI, The Race to Net Zero, AI & DeepTech and The Future of Work


🚀 Remember to register your interest for CogX Festival LA 6-9 May!
