7 OCTOBER | LONDON 2024
Preparing for AI
Your weekly CogX newsletter on AI Safety and Ethics
The week's developments on AI, safety and ethics, explained | 21.06.24
If you’re enjoying this briefing, sign up here — and share it with a friend. Reading time: ~5 minutes
This week, research finds AI still can’t give a straight answer on the 2020 election, heightening misinformation concerns as election season nears. Meanwhile, Meta pauses its AI training plans in the UK and EU after regulatory backlash, and OpenAI is considering moving away from its non-profit roots toward a ‘for-profit’ structure.
We cover this, plus the death of Silicon Valley and our analysts’ take on Big Tech cutting corners in the race to develop LLMs. Keep reading for a short summary, or read the full piece here on the CogX Blog.
- Charlie and the Research and Intelligence Team
P.S. You can now sign up for the super early bird offer (75% off) to the CogX AI Summit in London on October 7th here.
The Top Story
🛡️ Is Big Tech prioritising speed over AI safety?
Recent weeks have seen a string of botched tech rollouts and an exodus of safety experts, eroding public trust. Last month, key members of OpenAI’s safety team, including Ilya Sutskever and Jan Leike, resigned, accusing the company of prioritising "shiny products" over safety.
The industry-wide pressure for short-term gains over long-term safety is growing. Meta recently sparked outrage with its new data collection policy, while Google and Adobe faced backlash over controversial data practices.
Big Tech’s relentless pursuit of AI supremacy raises unsettling questions about ethical practices and user trust. The need for transparent data practices and robust safety measures has never been more critical as these companies push the boundaries of AI innovation.
… Want to learn more? Read the full piece on the CogX Blog
Ethics & Governance
🗳 AI still doesn’t know who won the 2020 elections: Alexa often fails to state that Biden won, and Microsoft and Google chatbots refuse to answer, directing users to search engines instead. With elections impending and misinformation on the rise, accurate AI is crucial.
🔍 Meta bows to regulatory pressure, pausing AI training plans in the UK and EU. This comes after Meta attempted to update its privacy policy to use public content from Facebook and Instagram for AI training, facing backlash and privacy complaints.
🌐 OpenAI expands lobbying team to influence AI regulation, bolstering its global affairs team from 3 to 35 staff members, as governments scrutinise AI safety and regulatory compliance. OpenAI is trailing behind giants like Meta and Google in lobbying expenditures.
🔧 Labour will commit to ‘binding regulation’ for tech giants if they win the General Election. The party plans to ban sexually explicit deepfakes and establish a Regulatory Innovation Office to help regulators keep up with tech advances.
💼 OpenAI may move to a ‘for-profit’ structure: CEO Sam Altman told shareholders that the company might shift its governance to a for-profit model, allowing greater flexibility and mirroring structures used by rivals like Anthropic.
AI Dilemmas
📝 AI detectors get it wrong, writers are being fired anyway: Freelancers are losing work over false accusations of using AI, despite protesting their innocence. Critics argue the detection tech is unreliable and flawed, leaving professional reputations damaged.
📱 Apple is bringing AI into your personal life, whether you like it or not. The tech will run on iPhones, handling your emails, texts and notifications. Despite its limited scope, Apple Intelligence marks a significant step into daily life, making AI a permanent fixture.
🚀Enjoying our content? We’re also on LinkedIn — Follow us to stay in the loop with the week's most impactful AI news and research!
Insights & Research
🚜 Ukraine is using AI to remove Russian landmines. Clearing them could take 700 years, so scientists have developed a model to identify high-priority de-mining areas, analysing satellite images, agricultural maps, and military logs to determine urgency.
🌐 Chinese firm sought AI for military use via UK university: Emails reveal that China’s Jiangsu Automation Research Institute aimed to use a partnership with Imperial College London to access AI technology for "smart military bases", prompting scrutiny over security risks.
🔧 Big Tech is killing Silicon Valley innovation: Silicon Valley once thrived on disruption, but today's tech giants are co-opting potential competitors. Companies like Google, Microsoft, and Amazon have acquired or heavily invested in AI startups to curb competition.
📚 Education is adopting AI at an exponential rate. 79% of teachers and 48% of students use OpenAI tools weekly, and the majority of students and parents view the tech positively. However, concerns about cheating remain, with 56% of students using AI for assignments and 52% for tests.
In case you missed it
McDonald’s ended its AI drive-thru experiment after customers were left frustrated.
✍️ Enjoying this newsletter? Subscribe to our free weekly briefings on Preparing for AI, Cinema & AI, The Race to Net Zero, AI & DeepTech and The Future of Work
🚀 You can now sign up for the super early bird offer (75% off) to the CogX AI Summit in London on October 7th here.