
Preparing for AI

Your weekly CogX newsletter on AI Safety and Ethics

This week, the UK cracked down on non-consensual deepfake creation, banning a sex offender from using any AI tools for five years (with much stricter legislation in the works). The move has prompted two prolific explicit AI websites to block UK access. Will other countries follow suit?

🚨 New UK law could jail AI abusers
Issue 127 • Published April 26, 2024

This week in AI: The UK has finally begun tiptoeing toward AI regulation, with DSIT drafting new legislation despite Sunak's previous 'no rush' stance. Meanwhile, as Big Tech pumps the brakes on its push for immediate regulation, UK watchdog the CMA has raised 'real concerns' over the industry's cosy AI alliances and the market dominance they could entrench. Can the UK keep up without tripping over its own feet?

🇬🇧 The UK is *finally* regulating AI
Issue 126 • Published April 19, 2024

This week, the EU and US solidified their commitment to jointly steer AI development, focusing on safety and establishing shared international standards. Meanwhile, Sam Altman is pushing for global AI infrastructure, addressing the urgency of AI's growing resource demands. With new models released from OpenAI, Google, and Mistral, will our race to legislate be fast enough?

😵 Will American AI kill European culture for good?
Issue 125 • Published April 12, 2024

This week saw the UK and US unite over AI safety, promising shared evaluations and knowledge exchange. Meanwhile, the use of AI in warfare is increasing: Russian military robots were rolled out in Ukraine (but were bested), and Israel's 'Lavender' AI stirred ethical debates over its role in civilian casualties. What steps should we take to ensure AI serves humanity without crossing ethical boundaries?

Urgent 🚨 Escalating military-AI tactics from Russia and Israel
Issue 124 • Published April 5, 2024

This week, the UN unanimously passed its first AI resolution, backed by over 120 countries including the US, aiming to safeguard human rights and privacy against the growing risks of AI. Simultaneously, the Pentagon is advancing plans to deploy thousands of AI military drones and to pursue fully automated AI surveillance along the US-Mexico border. I wonder if this technology will adhere to the UN's 'secure by design' principles...

🤖 Is your name Wi-Fi? Because I’m really feeling a connection...
Issue 123 • Published March 29, 2024

Today’s newsletter features guest contributor Dr. Joy Buolamwini, the acclaimed computer scientist, author, artist, and founder of the Algorithmic Justice League, in her quest to ‘unmask the coded gaze’.

🗣 AI innovation shouldn’t demand humiliation
Issue 122 • Published March 22, 2024

This week, in a landmark move and despite criticism, the EU approved the world’s most comprehensive AI legislation: the EU AI Act, set to impose strict regulations on AI applications. Will it promote safety, or stifle innovation? I guess we’ll find out…

Why are LLMs becoming more racist?
Issue 121 • Published March 15, 2024

The UK's economic future may very well hinge on AI adoption. Today's newsletter dives into the crucial insights from our guest contributor Adrian Joseph OBE on why the nation can't afford to fall behind the curve.

🔐 🟡 Unlocking the UK’s AI-powered opportunity
Issue 120 • Published March 8, 2024

This week, a US defence official confirmed AI-powered algorithms helped identify potential targets in the Middle East. This technology, which can teach itself how to identify objects and narrow down targets, was used in over 85 strikes launched on February 2nd.

🚨 Red alert: AI warfare is already here
Issue 119 • Published March 1, 2024

This week, AI deepfake tensions spiked in India as election strategies turned to “generative trickery”. Various political parties, including the ruling BJP and opposition Congress, were caught employing AI-generated deepfakes to sway voter opinion. With the 2024 general election approaching, could India slip into a deepfake democracy?

Deepfake democracy: Are India’s AI election tactics a prelude to the future of politics?
Issue 118 • Published February 23, 2024

This week the digital battleground intensified as Microsoft’s AI tools — powered by OpenAI — fell prey to sophisticated cyberattacks. The culprits? State-backed hackers from China, Russia, and Iran. In the face of such advanced threats, how can nations and corporations bolster their defences to safeguard the future of digital security?

Urgent security breach: Global powers hack Microsoft's AI
Issue 117 • February 16, 2024

This week, ex-Google CEO Eric Schmidt’s secret AI drone project “White Stork” was leaked. The billionaire technologist has based these operations in Ukraine and plans to build military kamikaze drones so fast they’re ‘nearly impossible to shoot down’.

🚨 Leaked: Ex-Google CEO building secret AI “kamikaze” drones… find out more here
Issue 116 • February 9, 2024

This week, OpenAI announced a collaboration with Common Sense Media to develop safer AI tools for children. The aim is to develop family-friendly chatbots that cultivate trust among parents and educators. On the other side of the tech world, Google made headlines by splitting up its AI ethics team… yet again. Are tech giants truly making progress, or simply taking one step forward and two steps back?

🔐 Taylor Swift is doing more for AI regulation than Big Tech. Here’s how:
Issue 115 • Published February 2, 2024

This week, deepfakes are taking centre stage: are they 2024’s greatest threat to democracy?
In the UK, a significant majority of MPs are raising the alarm to stop AI-generated misinformation from disrupting the democratic process. This concern is not unfounded, as seen in a recent incident in the US where a robocall, eerily imitating President Biden, urged New Hampshire residents not to vote, a suspected voter-suppression tactic.

🤖 Are deepfakes smart enough to deceive voters?
Issue 114 • Published January 26, 2024

This week, OpenAI made significant strides in AI governance with the introduction of a 'Collective Alignment' team aimed at incorporating public feedback to align AI models with human values. However, they also raised concerns by quietly removing their ban on military usage; are they heading in the right direction, or have they taken one step forward and two steps back?

🚫 Why did OpenAI secretly remove their military usage ban?
Issue 113 • Published January 19, 2024

Last year, secret Sino-US AI diplomacy talks took place between OpenAI, Anthropic, and leading Chinese AI experts. The discussions, centred on AI risk and global cooperation, were a success. Despite this, the US has introduced regulations restricting high-performance chip exports to China, raising crucial questions about AI’s role in international diplomacy and geopolitical tensions.

💸 Should we tax Elon Musk over AI-related job losses?
Issue 112 • Published January 12, 2024

As the US gears up for elections, the potential for gen-AI to spread misinformation may put democracy under threat. Experts are calling for enhanced support and stringent security measures from federal authorities. Across the pond, a UK anti-extremism think tank urged the government to develop anti-AI terrorism laws, warning that the pace of development will outpace current legislative measures by 2025.

🤖 Is AI the biggest threat to democracy?
Issue 111 • Published January 5, 2024

This week, efforts toward global governance increased: 16 nations – including the US and UK – agreed to ensure AI is “secure by design” to prevent misuse. But with AI abuse on the rise – like recent incidents of illegal AI-generated content circulating in UK schools – are these commitments enough?

🧐 Will efforts toward global AI regulation be enough?
Issue 110 • Published December 1, 2023

This week in AI has been a rollercoaster: Sam Altman was rehired just days after being ousted by OpenAI's board of directors, bringing the company’s stance on financial motivation vs responsible AI into question. Meanwhile, the EU is potentially softening its stance on AI regulation, and Meta's reshuffle of its AI ethics team towards generative AI ventures is turning heads, sparking discussion on the balance between innovation and responsibility.

Is AI safety just a myth? The OpenAI saga unfolds...
Issue 109 • Published November 23, 2023

The EU's ambitious AI Act is facing significant challenges, with internal disagreement over the inclusion of foundational AI models like OpenAI's GPT series. Meanwhile, the US is increasing international collaboration, with talks in the works with China on AI risk and safety. Will the EU maintain its role as a global leader in AI regulation?

Will the EU AI Act survive?
Issue 108 • Published November 17, 2023

President Biden has issued an Executive Order on AI that emphasises bolstering safety measures like mandatory testing and advocates for data privacy legislation. Meanwhile, the UK government has unveiled plans to use AI to reduce government ministers’ workloads.

What experts say about the AI Safety Summit
Issue 107 • Published October 20, 2023

The AI world descended on Bletchley Park this week for the first international AI Safety Summit. Across two days, world leaders, CEOs of AI labs, academics, and research organisations discussed the risks of frontier AI. The result? A tangible first step toward tackling AI risks, and real optimism going forward.

AI Safety Summit Special
Issue 106 • Published November 2, 2023

Intelligence chiefs from MI5 and the FBI have expressed concerns over the potential misuse of AI by bad actors for terrorist activities such as bomb-making, spreading propaganda, and election interference. Meanwhile, the US has raised concerns regarding the EU's proposed AI regulation, warning it may stifle innovation by disproportionately benefiting large tech companies and harming smaller firms.

The AI Intelligence threat
Issue 105 • Published October 20, 2023

This week, the EU emphasised a cautious approach to AI regulation, warning against AI paranoia and stressing the importance of "solid analysis" over dystopian concerns. Meanwhile, Spotify CEO Daniel Ek cautioned that while AI laws are vital, the rapid evolution of the technology means they may soon be outdated. He urged the UK to leverage its post-Brexit autonomy for more flexible AI regulations.

AI safety summit: Big names in, startups out
Issue 104 • Published October 6, 2023

The UK’s upcoming AI Safety Summit will focus on existential risks posed by AI, specifically bioweapons and cyberattacks.

Could AI lead to a new era of bioweapons?
Issue 103 • Published September 29, 2023