

Preparing for AI

Your weekly CogX newsletter on AI Safety and Ethics
The week's developments on AI, safety and ethics, explained | 08.03.24

If you’re enjoying this briefing, sign up here — and share it with a friend. Reading time: ~3 minutes


The UK's economic future may very well hinge on AI adoption. Today's newsletter dives into the crucial insights from our guest contributor Adrian Joseph OBE on why the nation can't afford to fall behind the curve.

 

As a distinguished former member of the UK AI Council and a speaker at multiple CogX Festivals, Adrian's compelling narrative is a must-read for anyone seeking to understand the pivotal role AI will play in shaping the UK's economic landscape — and the steps the country must take to avoid lagging behind. Find his opinion piece below.

 

In other news, Microsoft's AI image generator sparks safety concerns, over 100 AI experts have signed an open letter calling on tech giants to ease restrictions on independent research, and an ex-Google engineer has been arrested over allegations of leaking AI secrets to Chinese firms.

 

We cover this, and more below.


- Charlie and the Research and Intelligence Team


P.S. You can now register your interest for CogX Festival LA 7 May, where we’ll be talking all things AI, Impact, and Transformational Tech!


The Top Story



AI Adoption: The Overlooked Existential Risk

 

By Adrian Joseph OBE

 

The UK government's recent AI Safety Summit raised the profile of AI in the UK and internationally. The primary focus was on biosecurity and cybersecurity as the existential threats posed by GenAI, particularly the frontier models developed by large technology firms, primarily in the US. 

 

The Summit certainly generated considerable media coverage. David Benigson, CEO of Signal.AI, which uses AI to track media coverage and sentiment, says that “the AI Summit definitely had a big impact on putting the spotlight on AI in the UK. November was the month with the most coverage on AI ever - 2.5 times higher than September.”

 

In the same week, Kamala Harris’s London speech on AI Safety clearly affirmed the US definition of a second, and significantly wider, category of existential risks, explicitly including bias, discrimination, fake news, and misinformation. With over 2 billion people voting in the next 12 months, misinformation poses a significant and urgent threat, particularly for the three largest democracies going to the polls this year.

 

However, there is a third category of AI existential risk that is particularly important for the UK, given its lack of productivity growth over the last decade (ranked 23rd out of 38 OECD countries), which stems in part from a lack of concerted focus on AI adoption. Bart van Ark, head of the UK Productivity Institute, recently expressed similar concerns in an FT article (9/11/23), noting the UK's alarming productivity trend. He stated, "It ultimately implies we are not making any progress on translating technological change and innovation into better results for the economy."

 

While the UK now ranks fourth globally in AI, according to Tortoise Media, this is largely due to the strength of its talent pool, R&D capabilities, and AI start-up community. However, the UK ranks much lower in operating environment, infrastructure, and government strategy.

 

The government has taken some noteworthy steps recently, with renewed emphasis on hiring private sector talent from AI, data, and digital backgrounds, appointing an AI Minister in every Government Department, hiring an AI “hit squad” of 30 people, and holding AI-focussed hackathons. These included one to improve call centre efficiency and another to get better value from public contracts. Some businesses have seen 25%+ reductions in call centre times from AI initiatives.

 

However, these are small steps in the context of the potential opportunity across Government departments and the UK economy. Further actions should be considered, including the appointment of a national Head of AI Adoption to programmatically scale AI, similar to the temporary appointment of the Government’s Head of AI Safety.

 

It is also critical to embrace and capture a broader definition of AI value that extends beyond the current narrow focus on AI start-ups and tech company valuations. There is a much larger prize, for the public and private sectors, in driving faster adoption across the health service, education sector, policing, HMRC, and UK plc as a whole. This is one of the reasons why McKinsey recently ranked GenAI as the number-one agenda item for CEOs in 2024.

 

Some enlightened FTSE Chairs, at Pets at Home for example, have recognised the importance of diversifying their Boards and have specifically targeted NEDs with applied AI expertise. More UK private and public sector boards need to follow suit and pay more attention to AI adoption, in addition to the commendable focus on AI Safety.  As Dr Hayaatun Sillem highlights, “productivity in UK companies already lags behind key comparators and sluggish adoption of today’s innovative technology will further erode our competitiveness – a prospect we can ill afford…”

 

Read the full opinion piece on Cogxfestival.com

 

Share your expertise! Want to be a guest contributor in our next issue? Drop us a line at editors@cogx.live.


Ethics and Governance


🔓 Microsoft AI image generator under fire for safety concerns. Shane Jones, a Microsoft AI engineer, has raised concerns over the company's AI image generator. In a public letter, Jones alleged the tool lacks safeguards against generating violent and sexual content.

 

🤖 AI image generators pose a threat to elections, study reveals. A new study by the Center for Countering Digital Hate found that popular AI image generators, including Midjourney, ChatGPT, DreamStudio, and Microsoft's Image Creator, are producing election disinformation in an alarming 41% of test cases.

 

⛓️ Ex-Google engineer arrested for alleged theft of AI secrets for Chinese firms. Authorities allege that the suspect secretly worked for two Chinese AI companies while employed at Google, using his position to syphon off sensitive information.


🚩 AI-generated images of Trump with Black voters being spread by supporters, a BBC investigation reveals. These fabricated visuals, including one created by a Florida radio host who acknowledged its artificial origin, are circulating widely on social media.

 

AI Dilemmas


🎰 Concerns over the gambling industry's use of AI. The gambling industry is turning to AI, promising a more personalised experience. But will this tech fuel addiction, or can it be used responsibly?


📝 AI tools like ChatGPT are making it easier for applicants to write applications, but should they be allowed to? Big Four firms like KPMG, Deloitte, and PwC are cracking down on AI-generated applications, fearing they give applicants an unfair advantage.


🚀Enjoying our content? We’re also on LinkedIn — Follow us to stay in the loop with the week's most impactful AI news and research!

 

Insights & Research



💰The UK and France are strengthening their research and AI ties. This collaboration will see them invest £800,000 in joint research projects, establish a new partnership focused on responsible AI development — and secure more joint funding opportunities.

 

⚔️ The US Army is testing using AI chatbots as advisors in war game simulations. While the AI showed promise in a simplified scenario, experts warn against using the technology in real-world situations due to ethical, legal, and technical concerns.

 

📜 Over 100 AI experts signed an open letter urging tech giants like OpenAI and Meta to loosen their grip on independent research. They argue that strict company policies, designed to prevent misuse of AI models, are unintentionally hindering safety testing.


In case you missed it


Anthropic's Claude 3 is causing a stir after it seemingly realised it was being tested by one of Anthropic's prompt engineers. Could this be a sign of true awareness? Watch the full story here:



✍️ Enjoying this newsletter?  Subscribe to our free weekly briefings on Preparing for AI, Cinema & AI, The Race to Net Zero, AI & DeepTech and The Future of Work


🚀 Remember to register your interest for CogX Festival LA 7 May!
