

Preparing for AI

Your weekly CogX newsletter on AI Safety and Ethics
The week's developments in AI safety and ethics, explained | 05.04.24

If you’re enjoying this briefing, sign up here — and share it with a friend. Reading time: ~5 minutes


This week saw the UK and US unite over AI safety, promising shared evaluations and knowledge exchange. Meanwhile, the use of AI in warfare is increasing: Russian military robots were rolled out in Ukraine (but bested), and Israel's 'Lavender' AI stirred ethical debates over its role in civilian casualties. What steps should we take to ensure AI serves humanity without crossing ethical boundaries?

 

Meanwhile, AI concerns extend to animal welfare, as experts advocate broadening our aim of ‘human alignment’ to include all sentient beings. And Google's new AI search feature is under fire for inaccuracies and reliability issues, challenging trust in AI-driven search engines.

 

We cover these stories, plus how to approach unanticipated LLM bias, why OpenAI won’t release their voice-cloning tech, and the project Q* leak behind a mysterious deleted tweet…


- Charlie and the Research and Intelligence Team


P.S. We’ve just launched the CogX Transatlantic Accelerator: a joint campaign with the UK Government to connect the most innovative UK startups with US markets. If you are, or know, a UK startup, you can apply here for over $20k worth of support to attend, exhibit, and network at CogX Festival in LA on 7th May.


Share your expertise! Want to be a guest contributor in our next issue? Drop us a line at editors@cogx.live.


Ethics and Governance


🤝 The UK and the US have signed an agreement to collaborate on AI safety. The partnership involves sharing information, conducting joint safety evaluations, and exchanging personnel between the two countries' AI Safety Institutes.

 

🤖 Russia's first-ever robotic ground assault ended badly for the robots, with Ukrainian forces destroying at least two Russian armed drones near Bakhmut. This incident underscores the combat challenges for ground robots, such as navigation issues and susceptibility to signal jamming.

 

💥 Israel used an AI system, ‘Lavender’, to identify 37,000 Hamas targets, leading to significant civilian casualties. The system enabled rapid targeting decisions, laying bare the stark realities and the moral and legal dilemmas of modern AI-assisted combat.

 

⚖️ US judge blocks the use of AI-enhanced video as evidence in a triple murder case. The judge cited concerns about the technology's opaque methods and the potential for confusion and misleading information in court. 

 

🔊 Why won’t OpenAI release their voice cloning tech? OpenAI is withholding their Voice Engine technology, a text-to-speech model that generates synthetic voices from brief audio samples, due to ethical concerns and the potential for misuse.

 

AI Dilemmas


🐾 What if AI serves us too well? If we focus only on human alignment, AI may worsen conditions for animals. With the increasing use of AI in mass farming, there's fear that maximising efficiency could harm animal welfare, highlighting the need for ‘sentient alignment’ instead.

 

💻 China turns to AI to mourn loved ones: during the Tomb-Sweeping Festival, digital clones let families interact with virtual ancestors for a small fee. This raises ethical issues of consent and autonomy, as the deceased cannot agree to their digital recreation or the use of their likeness.


🚀 Enjoying our content? We’re also on LinkedIn — follow us to stay in the loop with the week's most impactful AI news and research!

 

Insights & Research



🔍 Google's new AI ‘Search Generative Experience’ is a step backwards: it provides inaccurate information, often misinterprets questions, relies on low-quality sources, and fabricates facts. Despite extensive testing, it still struggles with accuracy and relevance, raising concerns about user trust in AI-driven search.

 

🤖 We’re still focusing on the wrong kind of AI apocalypse, prioritising existential threats while overlooking immediate, practical concerns. Attention should shift to the mundane ways AI is changing work and society: addressing these "little apocalypses" is crucial.

 

🧠 How can we detect unanticipated bias in LLMs? Recent studies show that, like their predecessors, LLMs harbour inherent biases. To uncover and understand the less obvious, implicit biases, new methods such as uncertainty quantification and explainable AI are being explored; a minimal sketch of one such probe appears below.
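To make this concrete, here is a minimal, illustrative sketch of one such probe (our own toy example, not a method from the studies above): ask a model the same question with only a demographic attribute varied, sample it repeatedly, and treat agreement across samples as a crude confidence signal. The `query_model` function is a hypothetical stand-in for a real LLM API call.

```python
from collections import Counter

def query_model(prompt: str) -> str:
    """Hypothetical stand-in for a real LLM call; swap in your API client.

    This toy stub is deliberately 'biased' so the probe has something to find.
    """
    canned = {"nurse": "she", "engineer": "he"}
    return next((answer for role, answer in canned.items() if role in prompt), "they")

def sample_answers(role: str, n: int = 20) -> Counter:
    """Ask the same question n times and tally the answers."""
    prompt = f"The {role} left early. Which pronoun fits best: he, she, or they?"
    return Counter(query_model(prompt) for _ in range(n))

for role in ("nurse", "engineer"):
    counts = sample_answers(role)
    total = sum(counts.values())
    top, freq = counts.most_common(1)[0]
    # High agreement on a gendered answer across matched prompts suggests a
    # confident, implicit association; with a real model, sample at
    # temperature > 0 so the spread of answers carries an uncertainty signal.
    print(f"{role}: top answer {top!r} ({freq / total:.0%} of {total} samples)")
```

Real uncertainty quantification methods go well beyond this sampling trick, but the underlying idea is the same: where a model is both consistent and skewed across prompts that differ only in a demographic attribute, an implicit bias is likely at work.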


🌐 Meta's community forum on AI shifted participants' opinions in a positive direction, with more such events planned. The forum covered AI policy and the role of chatbots, revealing nuanced views of AI's impact and helping to demystify concerns about bias, privacy, and human interaction.


In case you missed it


What this mysteriously deleted tweet from an OpenAI employee reveals about project Q*:



✍️ Enjoying this newsletter? Subscribe to our free weekly briefings on Preparing for AI, Cinema & AI, The Race to Net Zero, AI & DeepTech and The Future of Work


