
Preparing for AI

Your weekly CogX newsletter on AI Safety and Ethics
The week's developments on AI and regulation, explained | 01.12.23

If you’re enjoying this briefing, sign up here — and share it with a friend. Reading time: ~5 minutes


This week, efforts toward global governance increased: 18 nations – including the US and UK – agreed to ensure AI is “secure by design” to prevent misuse. But, with AI abuse on the rise – like recent incidents of illegal AI-generated content circulating in UK schools – are these commitments enough?

 

Meanwhile, the African Union discussed its commitment to AI safety, acknowledging AI's transformative impact on health, sustainability, and the economy. Plus, the results are in: the American public’s top AI concerns are the rapid pace of development, and its use in defence.

 

We also cover the experts’ take on facial recognition and AI greed, plus the latest research on AI rationality and temporal reasoning.

 

- Charlie and the Research and Intelligence Team

 

🚀 Don't miss your chance to secure your spot at the CogX Festival 2024! A limited number of super early bird tickets are now up for grabs at a 75% discount.


Ethics and Governance



🤝 An agreement to ensure AI is “secure by design” was signed by the US, Britain, and 16 other nations, representing a growing global initiative to regulate AI. The accord encourages AI companies to safeguard against misuse and to prioritise safety from the outset.

 

🧠 What is Project Q*? And why is it important? Q* – OpenAI’s AGI taskforce – reportedly developed an AI with ‘human-like’ cognitive abilities, leading to internal alarm that may have prompted Altman’s dismissal. The reaction underscores how unprepared we are for AGI.

 

🌍 At the African Union's summit, leaders drafted their AI strategy, covering economic impact and ethical governance. The summit highlighted Africa's limited role in global AI policy, and the transformative potential of digitisation for climate and agriculture.


🇺🇸 Americans are overwhelmingly concerned about the pace of AI development, according to the AI Policy Institute’s recent poll. The survey shows broad support for preventing harmful AI content and for international cooperation on AI defence regulation.

 

AI Dilemmas


🪖 The real military risk of AI isn’t killer robots – it’s AI’s integration into nuclear weapons. As AI improves and we become increasingly dependent on it, its integration into warfare grows more likely, and more dangerous. Will human oversight be a strong enough guardrail?


📱 UK school students are using AI to create indecent images of other children. Whilst the content is AI-generated, it is still illegal under UK law. Experts urge schools to implement safeguards, and regulators to address AI-accelerated abuse.

 

✍️ Enjoying this newsletter? Subscribe to our free weekly briefings on Preparing for AI, Cinema & AI, The Race to Net Zero, AI & DeepTech and The Future of Work.


 


Insights & Research



👤 "If you have a face, you have a place in the conversation about AI," says Dr. Joy Buolamwini. In this interview, she emphasises the social consequences of biased facial recognition tech, and advocates for "biometric rights", to guard against AI misuse.

 

💰 Greed overrides ethical concerns in AI, according to Robert Reich. Using OpenAI as a case study, Reich traces the company’s shift from a safety-focused non-profit to a profit-driven entity under investor pressure, and highlights the necessity of government regulation.

 

🧠 What does AI rationality look like? To understand AI rationality we must also understand irrationality – yet there is still no unified definition of either. This paper identifies and explores the open questions posed by AI reasoning, and the societal importance of tackling them.


⏳ AI is great, but it still can’t quite understand time. Researchers conducted extensive experiments on popular LLMs, with a focus on chain-of-thought prompting. The results reveal a significant performance gap between models and humans in temporal reasoning.


In case you missed it


Elon Musk discusses OpenAI ‘lies’, copyright, and digital god in an interview with The New York Times.



✍️ Enjoying this newsletter? Subscribe to our free weekly briefings on Preparing for AI, Cinema & AI, The Race to Net Zero, AI & DeepTech and The Future of Work.
