
Preparing for AI

Your weekly CogX newsletter on AI Safety and Ethics

UK signs the first international treaty to implement AI safeguards


 By the CogX R&I team

September 13, 2024 


The UK government has put pen to paper on the world's first legally binding AI treaty, joining forces with the EU, US, and Israel to tackle the looming threats of AI.


 

The treaty establishes a robust framework built on three key safeguards:


  • Protecting human rights: Ensuring responsible data use, privacy protection, and non-discriminatory AI systems.

  • Safeguarding democracy: Mandating proactive measures to prevent AI from undermining public institutions and democratic processes.

  • Upholding the rule of law: Obligating signatories to craft ironclad AI-specific regulations and shield citizens from potential harm.


It also lays out some ground rules for AI systems: they need to protect personal data, avoid discrimination, be developed safely, and respect human dignity. In practice, this means governments will need to put safeguards in place. Think stopping AI from spreading fake news, or making sure AI systems aren't trained on biased data that could lead to unfair hiring or benefits decisions.

 

If you're using AI systems that fall under this treaty, you'll need to check how they might impact human rights, democracy, and the rule of law – and make that information public. People will have the right to challenge AI decisions and file complaints. Oh, and if you're talking to an AI, you need to be told it's not a human.

 

What's next for the UK? The government's got some homework to do. It needs to check whether existing laws already cover parts of the treaty and where new rules might be needed. There's talk of a new AI bill in the works. Once everything is officially approved, current laws will likely get a bit of an upgrade.


 

 Now read the rest of the CogX Newsletter


US, China and other nations convene in Seoul for summit on military use of AI


Image credits: YK / Unsplash


South Korea is hosting a two-day international summit starting Monday, bringing together representatives from over 90 nations, including the U.S. and China, to hammer out a blueprint for the responsible use of AI in military applications.

 

Setting the AI Guardrails: As reported by Reuters, the summit aims to establish minimum guardrails and principles for AI deployment in defence, building on discussions from last year's gathering in The Hague.

The push for clearer guidelines comes as recent conflicts, like the Russia-Ukraine war, showcase the growing role of AI in military operations. Ukraine's use of AI-enabled drones has been likened to "David's slingshot" by South Korean Defense Minister Kim Yong-hyun.

 

From discussion to action? While the summit's outcome isn't expected to have legal binding power, it represents a significant step in multi-stakeholder discussions on military AI. The event will cover crucial topics such as ensuring AI compliance with international law and preventing autonomous weapons from making life-and-death decisions without human oversight.



 

AI Dilemmas

AI-generated music scam nets millions: The scheme used AI to produce tracks, then employed bots to artificially inflate their stream counts.

 

Confessions of a chatbot helper: As journalists and writers help train chatbots to write like humans, they face the irony of potentially contributing to their own obsolescence.

 

Insights & Research


AI Deepfakes continue to threaten US elections: A recent poll reveals that Republican voters are less confident than Democrats in identifying AI-generated deepfakes. Overall, only 25% of registered US voters feel strongly confident in their ability to distinguish between real and AI-created visual content.

 

AI outperforms humans in generating research ideas: A new study reveals that AI language models can generate more creative and original research ideas than human experts. While human experts may have an edge in feasibility, AI's ability to explore unconventional avenues could lead to groundbreaking discoveries.

 

Taiwan's struggle to maintain semiconductor dominance: The competition for semiconductor supply chains has become a geopolitical battleground, with the US and other countries seeking to limit China's access to these critical components.

 

Peter Thiel warns against AI overregulation: Tech billionaire Peter Thiel has expressed concerns about the growing push for AI regulation. He argues that while AI itself poses risks, excessive government control could lead to a "global totalitarian character".



In case you missed it


Watch Yuval Noah Harari discuss the revolutionary impact of AI in his new book, "Nexus: A Brief History of Information Networks".







✍️ Enjoying this newsletter?  Subscribe to our free weekly briefings on Preparing for AI, Cinema & AI, The Race to Net Zero, AI & DeepTech and The Future of Work


🚀 You can now sign up for an exclusive 25% discount on the CogX Leadership Summit in London on October 7th here.




