Preparing for AI
Your weekly CogX newsletter on AI Safety and Ethics
Could AI become conscious?
Google DeepMind has developed SynthID, a new AI image watermark that could reduce misinformation on social media. Meanwhile, a parliamentary committee has urged the UK government to introduce urgent AI legislation to tackle twelve key risks. With AI developing at pace, can the government keep up?
New research found that GPT-4 rivals human creativity, scoring in the top 1% for originality of ideas, whilst other researchers have questioned whether AI could develop consciousness.
Explore these topics and more - from the risks of AI monitoring employees to the merits and dangers of open source - in the CogX Must Reads.
CogX Must Reads
AI aces creativity test
Creativity is a distinctly human trait, right? New research may show otherwise. When tested using the Torrance Tests of Creative Thinking (TTCT), GPT-4 scored in the top 1% for the originality of its ideas, rivalling human creativity levels.
Google DeepMind’s new AI image watermark
SynthID will identify images generated by AI by embedding changes to individual pixels that are invisible to the human eye but can be detected by computer systems. The watermark is being trialled, and if successful, it could be a big step in the fight against misinformation.
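SynthID's actual method is a proprietary deep-learning technique and has not been published in detail, but the general idea of pixel changes that are invisible to the eye yet machine-readable can be illustrated with a much simpler, classic approach: least-significant-bit (LSB) watermarking. This toy sketch is an assumption-laden stand-in, not SynthID itself.

```python
# Toy LSB watermark sketch -- NOT SynthID's actual (proprietary, neural)
# method. Flipping a pixel's least significant bit changes its brightness
# by at most 1/255: imperceptible to a viewer, trivial for software to read.

def embed(pixels, bits):
    """Write one watermark bit into the LSB of each of the first pixels."""
    out = list(pixels)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & ~1) | bit  # clear the LSB, then set it to the bit
    return out

def extract(pixels, n_bits):
    """Read the watermark back from the first n_bits pixel values."""
    return [p & 1 for p in pixels[:n_bits]]

image = [200, 113, 57, 244, 90, 18, 160, 75]   # fake 8-pixel greyscale image
mark = [1, 0, 1, 1, 0, 0, 1, 0]                # watermark payload

stamped = embed(image, mark)
print(extract(stamped, 8))                     # recovers the payload
print(max(abs(a - b) for a, b in zip(image, stamped)))  # per-pixel change
```

Unlike this sketch, a production watermark such as SynthID must also survive cropping, compression, and filtering, which is why it relies on learned models rather than raw bit manipulation.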
Politics and Regulation
Schumer invites tech CEOs to summit
U.S. Senator Chuck Schumer invited tech leaders including Elon Musk, Mark Zuckerberg, and Sam Altman to an AI forum on Sept 13. The forum aims to accelerate the development of AI policy amidst concern that the technology is moving faster than the government response.
UK government urged to introduce AI legislation
A parliamentary committee has called for the Government to introduce an AI bill as a priority to legislate against AI risks. The committee's report sets out twelve challenges of AI governance, including bias, privacy, and misrepresentation.
Could the Transformer architecture support consciousness?
A groundbreaking study concluded that whilst no current AI has achieved consciousness, the possibility cannot be ruled out. Researchers evaluated the Transformer architecture against key theories of consciousness, deducing that whilst the two are not yet fully aligned, they are compatible.
Understanding the ethics of artificial agents
Training artificial agents to pursue rewards may inadvertently promote power-seeking and deceptive behaviours, similar to how language models might lean into toxicity. But do these agents naturally adopt Machiavellian tendencies?
Could AI become the boss from hell?
Professor Wooldridge urged society to shift its attention away from existential risk, which is speculative, and focus instead on genuine immediate risks, like workplace oversight. Wooldridge worries that AI could be used to monitor employees' emails, offer constant feedback on work, and potentially even decide who gets fired.
Should AI models have restricted access?
Who should control AI? The open-source vs restricted-access debate continues, as Meta controversially releases its Llama models with no access restrictions. With prominent AI labs recently likening the risk of AI to nuclear war, should we be advocating for restricted access to advanced models to prevent misuse?
The next scam email might be from your boss
AI-driven scams, including voice and video deepfakes, are driving increases in cybercrime - and fake messages from your "boss" may be the next phenomenon. The CEO of CyberArk Security warns of phishing risks after being stunned by chillingly realistic deepfake video footage of himself, created by an employee using publicly available AI tools.
AI’s disruption of the labour market
Sanctuary AI is developing Phoenix, a humanoid robot that will learn human needs and execute tasks based on them. Sanctuary AI's CEO believes the long-term market for such robots stretches across the entire labour market, with disruptive potential on the scale of mobile phones or cars.
In case you missed it
CogX Festival 2023 speaker Mustafa Suleyman discusses the risks of AI progress with Sam Harris
We'd love to hear your thoughts on this week's issue and what you'd like to see more of.