Preparing for AI
Your weekly CogX newsletter on AI Safety and Ethics
The week's developments on AI and regulation, explained | 17.11.23
If you’re enjoying this briefing, sign up here — and share it with a friend. Reading time: ~5 minutes
The EU's ambitious AI Act is facing significant challenges, with internal disagreement over the inclusion of foundation models like OpenAI's GPT series. Meanwhile, the US is increasing international collaboration, with talks in the works with China on AI risk and safety. Will the EU maintain its role as a global leader in AI regulation?
The Telegraph has introduced a stringent policy for the use of AI in editorial content. The policy demands clear labelling and senior-level approval, reflecting concerns about plagiarism, legal risks, and data protection.
Explore these topics and more — from how to spot AI in election campaigns, to understanding ChatGPT’s political bias — in the CogX Must Reads.
I hope you enjoy it!
Charlie and the Research and Intelligence team
P.S. After an incredible CogX Festival 2023, we're gearing up for another year of cutting-edge innovation, game-changing breakthroughs, and endless inspiration. Don't miss out – grab your super early bird tickets now and secure your spot at CogX Festival 2024 today!
CogX Must Reads
EU AI Act negotiations under strain
The EU's AI Act is at a critical juncture due to internal disagreements, led by France and Germany, over bringing foundation models, like OpenAI's GPT models, into scope. The EU was set to be a world leader in regulating AI - what will be the global impact if it fails to pass legislation? (Bloomberg)
Journalists using ChatGPT to face same sanctions as plagiarism
The Telegraph has implemented a strict policy on generative AI, allowing its use for back-office tasks but heavily restricting it in editorial content. The policy requires clear labelling and approval from senior editors and legal teams when AI is used, reflecting concerns about plagiarism, legal risks, and data protection. (PressGazette)
Politics and Regulation
Biden announces AI safety talks with China
Following a meeting between US President Joe Biden and Chinese President Xi Jinping, a new initiative for US-China dialogue on the risks and safety of AI was announced. The discussions may include the use of AI in nuclear command-and-control systems. (BreakingDefense)
The UK will refrain from regulation - in the short term
The UK has opted not to introduce AI legislation for now, diverging from the regulatory paths of the EU, US, and China. This decision, aimed at preventing the stifling of industry growth, will see AI supervision divided among existing regulatory bodies rather than establishing a new one. (FinancialTimes)
🚀CogX 2024 Super Early Bird Tickets
Don't miss your chance to secure your spot at the CogX Festival 2024! A limited number of super early bird tickets are now up for grabs at a 75% discount.
Understanding political bias in LLMs
This study investigates the decision-making and biases in LLMs, focusing on their handling of political debates. The researchers use Activity Dependency Networks to reveal how LLMs determine "good arguments", and aim to understand, not critique, the interpretative processes of LLMs. (Arxiv)
Proposed architecture for self-motivated systems
This study proposes integrating LLMs with other deep learning systems to create an advanced architecture for cognitive language agents. The new architecture aims to enable agency, self-motivation, and certain aspects of meta-cognition, which are currently beyond the capabilities of LLMs alone. (Arxiv)
AI is cooking but it shouldn’t be overdone - Mustafa Suleyman
CogX Festival 2023 speaker, Mustafa Suleyman, emphasises the rapid development and potential of AI, while cautioning against overhyping its capabilities. He notes significant advancements in NLP but suggests using objective measures for evaluating AI's progress rather than current public perceptions. (WSJ)
Sunak’s AI plan has ‘no teeth’ - Georg Riekeles and Max von Thun
Georg Riekeles and Max von Thun critique the UK's AI strategy, arguing that the approach lacks effective regulation against the dominance of large tech corporations. They criticise the government's focus on existential AI risks over immediate harms, and advocate for robust competition policies and regulatory obligations targeting corporate power. (Guardian)
AI in election season
Meta and YouTube have moved to pre-empt the AI misinformation chaos expected in next year’s elections by crafting disclosure policies on AI usage. For example, Google requires prominent disclosure if an ad contains altered images. But is this enough to combat fake, AI-generated news? (The Hill)
Does AI contribute to wrongful convictions?
Police officers, under strained budgets and pressure to make arrests, can too often rely on facial recognition searches without conducting a wider investigation. There have been cases of police ignoring contradictory evidence once they find a facial recognition match - but these systems aren’t 100% accurate, and should inform investigations rather than serve as the sole evidence. (NewYorker)
In case you missed it
Hear from Dr. Ebtesam Almazrouei on the fusion of generative AI, LLMs, and open-access models for good – at the CogX Festival 2023:
✍️ Enjoying this newsletter? Subscribe to our free weekly briefings on Preparing for AI, Cinema & AI, The Race to Net Zero, AI & DeepTech and The Future of Work.