Preparing for AI
Your weekly CogX newsletter on AI Safety and Ethics
This week, a US defence official confirmed that AI-powered algorithms helped identify potential targets in the Middle East. The technology, which can teach itself to identify objects and narrow down targets, was used in over 85 strikes launched on February 2nd.
🚨 Red alert: AI warfare is already here
Issue 119 • March 1, 2024
This week, AI deepfake tensions spiked in India as election strategies turned to “generative trickery”. Various political parties, including the ruling BJP and opposition Congress, were caught employing AI-generated deepfakes to sway voter opinion. With the 2024 general election approaching, could India slip into a deepfake democracy?
Deepfake democracy: Are India’s AI election tactics a prelude to the future of politics?
Issue 118 • February 23, 2024
This week the digital battleground intensified as Microsoft’s AI tools — powered by OpenAI — fell prey to sophisticated cyberattacks. The culprits? State-backed hackers from China, Russia, and Iran. In the face of such advanced threats, how can nations and corporations bolster their defences to safeguard the future of digital security?
Urgent Security Breach: Global Powers Hack Microsoft's AI
Issue 117 • February 16, 2024
This week, ex-Google CEO Eric Schmidt’s secret AI drone project “White Stork” was leaked. The billionaire technologist has based these operations in Ukraine and plans to build military kamikaze drones so fast they’re ‘nearly impossible to shoot down’.
🚨 Leaked: Ex-Google CEO building secret AI “kamikaze” drones
Issue 116 • February 9, 2024
This week, OpenAI announced a collaboration with Common Sense Media to develop safer AI tools for children. The aim is to build family-friendly chatbots that cultivate trust among parents and educators. On the other side of the tech world, Google made headlines by splitting up their AI ethics team yet again. Are tech giants truly making progress, or simply taking one step forward and two steps back?
🔐 Taylor Swift is doing more for AI regulation than Big Tech. Here’s how:
Issue 115 • February 2, 2024
This week deepfakes are taking centre stage: are they 2024’s greatest threat to democracy?
In the UK a significant majority of MPs are raising the alarm to stop AI-generated misinformation from disrupting the democratic process. This concern is not unfounded, as seen in the recent incident in the US where a robocall, eerily imitating President Biden, urged New Hampshire residents not to vote — a suspected tactic of voter suppression.
🤖 Are deepfakes smart enough to deceive voters?
Issue 114 • January 26, 2024
This week, OpenAI made significant strides in AI governance with the introduction of a 'Collective Alignment' team aimed at incorporating public feedback to align AI models with human values. However, they also sparked concern by quietly removing their ban on military usage; are they heading in the right direction, or have they taken one step forward and two steps back?
🚫 Why did OpenAI secretly remove their military usage ban?
Issue 113 • January 19, 2024
Last year, secret Sino-US diplomatic talks took place between OpenAI, Anthropic and leading Chinese AI experts. The discussions, centred on AI risk and global cooperation, were a success. Despite this, the US has introduced regulations restricting exports of high-performance chips to China, raising crucial questions about AI’s role in international diplomacy and geopolitical tensions.
💸 Should we tax Elon Musk over AI-related job losses?
Issue 112 • January 12, 2024
As the US gears up for elections, generative AI’s potential to spread misinformation is putting democracy under threat. Experts are calling for enhanced support and stringent security measures from federal authorities. Across the pond, a UK anti-extremism think tank urged the government to develop laws against AI-enabled terrorism, warning that the pace of development will outpace current legislation by 2025.
🤖 Is AI the biggest threat to democracy?
Issue 111 • January 5, 2024
This week, efforts toward global governance increased: 16 nations – including the US and UK – agreed to ensure AI is “secure by design” to prevent misuse. But with AI abuse on the rise – like recent incidents of illegal AI-generated content circulating in UK schools – are these commitments enough?
🧐 Will efforts toward global AI regulation be enough?
Issue 110 • December 1, 2023
This week in AI has been a rollercoaster: Sam Altman was rehired just days after being ousted by OpenAI’s board of directors, bringing the company’s stance on financial motivation versus responsible AI into question. Meanwhile, the EU is potentially softening its stance on AI regulation, and Meta’s reshuffle of its AI ethics team towards generative AI ventures is turning heads, sparking discussion on the balance between innovation and responsibility.
Is AI safety just a myth? The OpenAI saga unfolds...
Issue 109 • November 23, 2023
The EU’s ambitious AI Act is facing significant challenges, with internal disagreement over the inclusion of foundational AI models like OpenAI’s GPT series. Meanwhile, the US is increasing international collaboration, with talks on AI risk and safety with China in the works. Will the EU maintain their role as a global leader in AI regulation?
Will the EU AI Act survive?
Issue 108 • November 17, 2023
President Biden has issued an Executive Order on AI that emphasises bolstering safety measures like mandatory testing and advocates for data privacy legislation. Meanwhile, the UK government has unveiled plans to use AI to reduce government ministers’ workloads.
What experts say about the AI Safety Summit
Issue 107 • November 10, 2023
The AI world descended on Bletchley Park this week for the first international AI Safety Summit. Across two days, world leaders, CEOs of AI labs, academics and research organisations discussed the risks of frontier AI. The result? A tangible first step toward tackling AI risks, and real optimism going forward.
AI Safety Summit Special
Issue 106 • November 2, 2023
Intelligence chiefs from MI5 and the FBI have expressed concerns over the potential misuse of AI by bad actors for terrorist activities such as bomb-making, spreading propaganda, and election interference. Meanwhile, the US has raised concerns about the EU’s proposed AI regulation, warning it may stifle innovation by disproportionately benefiting large tech companies and harming smaller firms.
The AI Intelligence threat
Issue 105 • October 20, 2023
This week, the EU emphasised a cautious approach to AI regulation, warning against AI paranoia and stressing the importance of “solid analysis” over dystopian concerns. Meanwhile, Spotify CEO Daniel Ek cautioned that while AI laws are vital, the rapid evolution of the technology means they may soon be outdated. He urged the UK to leverage its post-Brexit autonomy for more flexible AI regulations.