

Preparing for AI

Your weekly CogX newsletter on AI Safety and Ethics

The UK government has put pen to paper on the world's first legally binding AI treaty.

UK signs the first international treaty to implement AI safeguards
Issue 136 • Published September 13, 2024

This week, Meta struggles to accurately label AI-generated content and concerns grow over Big Tech’s stealthy changes to their T&Cs.

Zuckerberg has something to say about AI
Issue 135 • Published July 5, 2024

This week, a Wyoming mayoral candidate's plan to let AI govern the city and Google outlines its principles for AI regulation.

Would AI make a good mayor?
Issue 134 • Published June 28, 2024

This week, AI still can’t give a straight answer on the 2020 election, and OpenAI is considering abandoning its non-profit roots as it moves toward a ‘for-profit’ structure.

🕵️‍♂️AI still can’t give a straight answer
Issue 133 • Published June 21, 2024

Tim Estes, Founder and CEO of Angel AI, shares his thoughts in this exclusive op-ed, calling for a fundamental redesign of digital spaces to prioritise children's safety and well-being.

👶How do we protect children from the internet?
Issue 132 • Published June 14, 2024

OpenAI, Anthropic and DeepMind employees warn AI could lead to "human extinction" without stronger safeguards, and Google’s AI search is copying people’s work.

📉Is ‘BigTech’ a concern?
Issue 131 • Published June 7, 2024

This week, African AI workers are calling on President Biden to address their exploitation by US tech giants, labelling their treatment ‘modern day slavery’.

🗣️Are AI workers ‘modern day slaves’?
Issue 130 • Published May 31, 2024

This week, OpenAI's safety lead Jan Leike resigned, criticising Altman’s focus on ‘shiny objects’ over safety after the GPT-4o release.

🔮OpenAI puts ‘shiny objects’ over safety
Issue 129 • Published May 24, 2024

On May 7th, we will be joined by EY at CogX LA. In preparation, we sat down with Jay Persaud to discuss five key questions about EY’s commitment to the global emerging tech ecosystem.

🔓EY: The #1 strategy to scale with AI?
Issue 128 • Published May 3, 2024

This week the UK cracked down on non-consensual deepfake creation, banning a sex offender from using any AI tools for 5 years.

🚨 New UK law could jail AI abusers
Issue 127 • Published April 26, 2024

This week in AI: UK watchdog the CMA raises ‘real concerns’ over Big Tech’s cosy AI alliances — and the ensuing market dominance. Can the UK keep up without tripping over its own feet?

🇬🇧 The UK is *finally* regulating AI
Issue 126 • Published April 19, 2024

This week, the EU and US solidified their commitment to jointly steer AI development, focusing on safety and establishing shared international standards.

😵Will American AI kill European culture for good?
Issue 125 • Published April 12, 2024

This week saw the UK and US unite over AI safety, promising shared evaluations and knowledge exchange.

Urgent 🚨Escalating military-AI tactics from Russia and Israel
Issue 124 • Published April 5, 2024

This week, the UN unanimously passed its first AI resolution, backed by over 120 countries including the US, aiming to safeguard human rights and privacy against the growing risks of AI.

🤖Is your name Wi-Fi? Because I’m really feeling a connection...
Issue 123 • Published March 29, 2024

Today’s newsletter features guest contributor Dr. Joy Buolamwini — acclaimed computer scientist, author, artist, and founder of the Algorithmic Justice League.

🗣 AI innovation shouldn’t demand humiliation
Issue 122 • Published March 22, 2024

This week, in a landmark move, the EU approved the world’s most comprehensive AI legislation: the EU AI Act.

Why are LLMs becoming more racist?
Issue 121 • Published March 15, 2024

The UK's economic future may very well hinge on AI adoption. Today's newsletter dives into the crucial insights from our guest contributor Adrian Joseph OBE on why the nation can't afford to fall behind the curve.

🔐 🟡 Unlocking the UK’s AI-powered opportunity
Issue 120 • Published March 8, 2024

This week, a US defence official confirmed AI-powered algorithms helped identify potential targets in the Middle East. This technology — which can teach itself how to identify objects and narrow down targets — was used in over 85 strikes launched on February 2nd.

🚨 Red alert: AI warfare is already here
Issue 119 • Published March 1, 2024

This week, AI deepfake tensions spiked in India as election strategies turned to “generative trickery”. Various political parties, including the ruling BJP and opposition Congress, were caught employing AI-generated deepfakes to sway voter opinions. With the 2024 general election approaching, could India slip into a deepfake democracy?

Deepfake democracy: Are India’s AI election tactics a prelude to the future of politics?
Issue 118 • February 23, 2024

This week the digital battleground intensified as Microsoft’s AI tools — powered by OpenAI — fell prey to sophisticated cyberattacks. The culprits? State-backed hackers from China, Russia, and Iran. In the face of such advanced threats, how can nations and corporations bolster their defences to safeguard the future of digital security?

Urgent Security Breach: Global Powers Hack Microsoft's AI
Issue 117 • February 16, 2024

This week, ex-Google CEO Eric Schmidt’s secret AI drone project “White Stork” was leaked. The billionaire technologist has based these operations in Ukraine and plans to build military kamikaze drones so fast they’re ‘nearly impossible to shoot down’.

🚨Leaked: Ex-Google CEO building secret AI “kamikaze” drones… find out more here
Issue 116 • February 9, 2024

This week, OpenAI announced a collaboration with Common Sense Media to develop safer AI tools for children. The aim is to build family-friendly chatbots that cultivate trust among parents and educators. On the other side of the tech world, Google made headlines by splitting up its AI ethics team…yet again. Are tech giants truly making progress, or simply taking one step forward and two steps back?

🔐 Taylor Swift is doing more for AI regulation than BigTech, here’s how:
Issue 115 • February 2, 2024

This week, deepfakes are taking centre stage: are they 2024’s greatest threat to democracy?
In the UK, a significant majority of MPs are raising the alarm to stop AI-generated misinformation from disrupting the democratic process. This concern is not unfounded, as seen in a recent incident in the US where a robocall, eerily imitating President Biden, urged New Hampshire residents not to vote — a suspected tactic of voter suppression.

🤖 Are deepfakes smart enough to deceive voters?
Issue 114 • January 26, 2024

This week OpenAI made significant strides in AI governance with the introduction of a 'Collective Alignment' team aimed at incorporating public feedback to align AI models with human values. However, they also raised concerns by quietly removing the ban on military usage; are they heading in the right direction or have they taken one step forward and two steps back?

🚫Why did OpenAI secretly remove their military usage ban?
Issue 113 • January 19, 2024

Last year, secret Sino-US diplomacy talks took place between OpenAI, Anthropic and leading Chinese AI experts. The discussions, centred on AI risk and global cooperation, were a success. Despite this, US regulations restricting high-performance chip exports to China have been introduced, raising crucial questions about AI’s role in international diplomacy and geopolitical tensions.

💸 Should we tax Elon Musk over AI-related job-losses?
Issue 112 • January 12, 2024

As the US gears up for elections, the potential of gen-AI to spread misinformation may put democracy under threat. Experts are calling for enhanced support and stringent security measures from federal authorities. Across the pond, a UK anti-extremism think tank urged the government to develop anti-AI terrorism laws, as the pace of development will outgrow current legislative measures by 2025.

🤖Is AI the biggest threat to democracy?
Issue 111 • January 5, 2024

This week, efforts toward global governance increased: 16 nations – including the US and UK – agreed to ensure AI is “secure by design” to prevent misuse. But, with AI abuse on the rise – like recent incidents of illegal AI-generated content circulating UK schools – are these commitments enough?

🧐 Will efforts toward global AI regulation be enough?
Issue 110 • December 1, 2023

This week in AI has been a rollercoaster: Sam Altman was rehired just days after being ousted by OpenAI's board of directors, bringing the company’s stance on financial motivation vs responsible AI into question. Meanwhile, the EU is potentially softening its stance on AI regulation, and Meta's reshuffle of its AI ethics team towards generative AI ventures is turning heads, sparking discussion on the balance between innovation and responsibility.

Is AI safety just a myth? The OpenAI saga unfolds...
Issue 109 • November 23, 2023

The EU's ambitious AI Act is facing significant challenges, with internal disagreement over the inclusion of foundational AI models like OpenAI's GPT series. Meanwhile, the US is increasing international collaboration, with talks in the works with China on AI risk and safety. Will the EU maintain its role as a global leader in AI regulation?

Will the EU AI Act survive?
Issue 108 • November 17, 2023

President Biden has issued an Executive Order on AI that emphasises bolstering safety measures like mandatory testing and advocates for data privacy legislation. Meanwhile, the UK government has unveiled plans to use AI to reduce government ministers’ workloads.

What experts say about the AI Safety Summit
Issue 107 • Published November 10, 2023

The AI world descended on Bletchley Park this week for the first international AI Safety Summit. Across two days, world leaders, CEOs of AI labs, academics and research organisations discussed the risks of frontier AI. The result? A tangible first step toward tackling AI risks, and real optimism going forward.

AI Safety Summit Special
Issue 106 • Published November 2, 2023

Intelligence chiefs from MI5 and the FBI have expressed concerns over the potential misuse of AI by bad actors for terrorist activities such as bomb-making, spreading propaganda, and election interference. Meanwhile, the US has raised concerns regarding the EU's proposed AI regulation, warning it may stifle innovation by disproportionately benefiting large tech companies and harming smaller firms.

The AI Intelligence threat
Issue 105 • Published October 20, 2023

This week, the EU emphasised a cautious approach to AI regulation, warning against AI paranoia and stressing the importance of "solid analysis" over dystopian concerns. Meanwhile, Spotify CEO Daniel Ek cautioned that while AI laws are vital, the rapid evolution of the technology means they may soon be outdated. He urged the UK to leverage its autonomy post-Brexit for more flexible AI regulations.

AI safety summit: Big names in, startups out
Issue 104 • Published October 6, 2023

The UK’s upcoming AI Safety Summit will focus on existential risk, specifically bioweapons and cyberattacks as potential threats posed by AI.

Could AI lead to a new era of bioweapons?
Issue 103 • Published September 29, 2023