Preparing for AI
Your weekly CogX newsletter on AI Safety and Ethics
The week's developments on AI and regulation, explained | 19.01.24
If you’re enjoying this briefing, sign up here — and share it with a friend. Reading time: ~3 minutes
This week OpenAI made significant strides in AI governance with the introduction of a 'Collective Alignment' team aimed at incorporating public feedback to align AI models with human values. However, they also raised concerns by quietly removing the ban on military usage; are they heading in the right direction or have they taken one step forward and two steps back?
Meanwhile a flaw in millions of GPUs poses a huge risk for AI privacy while the IMF predicts 40% of jobs globally will be impacted by AI — is yours at risk?
We also cover the experts’ take on deepfakes vs democracy, the upcoming year of ‘AI disappointment’, plus the latest research on LLM personalities.
- Charlie and the Research and Intelligence Team
P.S. You can now register your interest for CogX Festival LA 6-9 May, where we’ll be talking all things AI, Impact, and Transformational Tech!
Ethics and Governance
🤖OpenAI forms ‘Collective Alignment’ team for crowdsourced AI governance. The team will integrate public feedback into OpenAI’s models, to ensure they reflect human values. The initiative includes diverse projects for AI model audits and behaviour fine-tuning.
🌍The IMF predicts 40% of jobs globally will be impacted by AI. In response, Ghana has advanced a national AI strategy to responsibly harness the tech while safeguarding security, privacy and human rights. Will the rest of the world follow suit?
🚫OpenAI quietly removes its explicit ban on military usage from its policy, sparking concerns. The change may open doors for military applications of their technology, potentially influencing the dynamics of AI use in warfare and defence sectors.
🇦🇺Australia is considering a new policy to watermark or label AI-generated content, to regulate "high-risk" AI applications. This move responds to public concerns about AI safety and responsibility, aiming to enhance transparency and trust in AI usage.
🔓Rushed GPU development risks AI data privacy. Researchers have identified a vulnerability in Apple, AMD, and Qualcomm GPUs, allowing data theft from devices like iPhones and Macs. The flaw makes it ‘straightforward’ to eavesdrop on AI interactions.
🗳️Deepfakes threaten democracy: Experts fear that in 2024, sophisticated deepfakes could undermine democratic processes, with viral, undetectable misinformation impacting crucial votes. Will AI-driven manipulations become the new norm in information warfare?
Insights & Research
🎭Exploring AI's ability to mimic human personalities, researchers assess LLMs with the Myers-Briggs Type Indicator. The study uncovers distinct personality traits inherent in each model, and explores how specific prompts influence these AI personalities.
📉Will 2024 be the year of AI disappointment? Generative AI might be about to face a reality check, with its shortcomings in accuracy and false information becoming more evident, potentially recalibrating the overblown expectations around it.
🗺️AI-as-Exploration: This paper explores AI as a tool to uncover diverse forms of intelligence. It focuses on developing and analysing systems that highlight intelligence's building blocks, and its unique presentation in both biological and artificial entities.
⏳What the AI death calculator says about how we live. The viral AI death prediction tool sparks more concern about our current lifestyles than mortality itself. Is this a wake-up call on how we live, or just another gimmicky piece of AI tech?
In case you missed it
Mustafa Suleyman believes we’ve hit the AI hype peak: hear his thoughts on AI-induced job losses, IQ vs EQ, and the future of AI, at Davos 2024.
✍️ Enjoying this newsletter? Subscribe to our free weekly briefings on Preparing for AI, Cinema & AI, The Race to Net Zero, AI & DeepTech and The Future of Work
🚀 Remember to register your interest for CogX Festival LA 6-9 May!