Preparing for AI
Your weekly CogX newsletter on AI Safety and Ethics
The week's developments on AI and regulation, explained | 23.11.23
If you’re enjoying this briefing, sign up here — and share it with a friend. Reading time: ~5 minutes
This week in AI has been a rollercoaster: Sam Altman was rehired just days after being ousted by OpenAI's board of directors, calling into question how the company weighs financial motivation against responsible AI. Meanwhile, the EU is potentially softening its stance on AI regulation, and Meta's reshuffle of its AI ethics team towards generative AI ventures is turning heads, sparking discussion on the balance between innovation and responsibility.
Plus, as the world debates the ethics of AI drones in warfare, the healthcare sector is wrestling with the potential chaos deep fakes could unleash, and researchers are making a case for universal basic compute.
- Charlie and the Research and Intelligence Team
P.S. After an incredible CogX Festival 2023, we're gearing up for another year of cutting-edge innovation, game-changing breakthroughs, and endless inspiration. Don't miss out – grab your super early bird tickets now and secure your spot at CogX Festival 2024 today!
Ethics and Governance
🤖 Responsible AI is a myth, if OpenAI is anything to go by. Recent events involving the now-reinstated CEO Sam Altman suggest that financial motivations may have overshadowed OpenAI's commitment to building responsible AI.
🇪🇺 The EU's AI Act faces softening, with major member states like France, Germany, and Italy advocating for self-regulation of foundation models, rather than the European Commission's stricter regulatory approach.
🔄 Meta is disbanding its Responsible AI team, redirecting members to generative AI and AI infrastructure roles. This move, intended to integrate staff into core product development, raises concerns about the company's commitment to AI safety and ethics.
🤝 Despite improved US-China AI relations, tech leaders remain concerned about computer chips, especially US export controls. The ongoing tension continues to strain the tech industry, which relies on China for its workforce and supply chains.
AI Dilemmas
🛸 Should we regulate AI killer drones? Despite growing concerns about the use of AI drones in global conflict, major powers like the US, Russia, and China are resisting new regulations – they argue that existing laws suffice.
🏥 Deepfakes could worsen healthcare misinformation. Experts state that, despite AI’s benefits to healthcare, the tech could undermine public health efforts through widespread misinformation, and jeopardise patient data security.
✍️ Enjoying this newsletter? Subscribe to our free weekly briefings on Preparing for AI, Cinema & AI, The Race to Net Zero, AI & DeepTech and The Future of Work.
Insights & Research
🌏 The case for Universal Basic Computing Power (UBCP): providing global, free access to a set amount of computing power for AI research and development. The initiative calls on large platforms, open-source contributors, and policymakers to prioritise and support UBCP.
🎓 Is AI the best teacher for AI? A new framework proposes training AI models with guidance from LLM ‘teachers’: the approach distills the LLM's knowledge into the student model and enhances its capabilities through continuous feedback (see the sketch after this list).
📜 Nancy Kim critiques Biden's executive order on AI for failing to address two key issues: social media platforms avoiding responsibility for AI-generated content, and companies bypassing regulatory standards through complex terms of service.
💸 Josh Tyrangiel explores how AI is making banking smarter, highlighting the tech's adoption by major banks for tasks like credit decisions, fraud detection, and operational efficiency. The challenge lies in managing the accompanying risks through careful regulation.
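The teacher-student item above refers to knowledge distillation. As a rough illustration only (not the specific framework covered in the research), here is a minimal PyTorch-style sketch of a distillation loss, where a student's predictions are pulled toward a teacher LLM's softened outputs while still learning from ground-truth labels; the temperature, loss weighting, and function name are illustrative assumptions.

```python
# Minimal sketch of knowledge distillation: a student model learns both from
# ground-truth labels and from a teacher model's softened output distribution.
# Hyperparameters and names here are illustrative, not from the cited work.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits: torch.Tensor,
                      teacher_logits: torch.Tensor,
                      labels: torch.Tensor,
                      temperature: float = 2.0,
                      alpha: float = 0.5) -> torch.Tensor:
    # Soften both distributions so the teacher's relative confidence
    # across classes is visible to the student.
    soft_teacher = F.softmax(teacher_logits / temperature, dim=-1)
    log_soft_student = F.log_softmax(student_logits / temperature, dim=-1)

    # KL divergence pulls the student toward the teacher; the T^2 factor
    # keeps gradient magnitudes comparable across temperatures.
    kd_term = F.kl_div(log_soft_student, soft_teacher,
                       reduction="batchmean") * temperature ** 2

    # Standard supervised cross-entropy on the ground-truth labels.
    ce_term = F.cross_entropy(student_logits, labels)

    # Blend the two objectives; alpha controls how much the teacher guides.
    return alpha * kd_term + (1 - alpha) * ce_term
```

In a continuous-feedback setup like the one the bullet describes, the teacher's outputs would typically be refreshed as the student improves rather than computed once up front.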
🚀 CogX 2024 Super Early Bird Tickets
Don't miss your chance to secure your spot at the CogX Festival 2024! A limited number of super early bird tickets are now up for grabs at a 75% discount.
In case you missed it
Bennet Borden, chief data scientist, breaks down Biden’s executive order and what it means for the future of AI regulation: