top of page

7 OCTOBER | LONDON 2024

SEPTEMBER 12TH - 14TH
The O2, LONDON

The CogX blog

Thought leadership on the most pressing issues of our time

The UK’s approach to AI policy has frequently changed over the past few months with a series of new announcements, policies and structures.


Given that the UK is a world leader on AI safety, what happens in Whitehall really matters, and it can be hard to navigate the current state of play.


This post outlines the UK’s current approach to AI safety, how the thinking has changed, and how the Foundational Model Taskforce works.


The second part of this post will cover what the UK should prioritise on AI Safety and what needs to change.


What is the current landscape of UK AI safety policy?


In July 2022, AI sat within the Department for Culture, Media and Sport (DCMS), with the Office for AI, Alan Turing Institute and Council of AI all feeding into AI policy.


12 months on, everything has changed. The new Department for Science, Innovation and Technology (DSIT) launched in February 2023 and leads on AI policy within government, with the Office for AI housed in the department.


It’s been announced that the AI Council will be shortly disbanded and effectively replaced with a wider group of expert advisors, while the Alan Turing Institute is losing prominence.

And there is a new organisation, the Foundational Model Taskforce, which was created in April 2023 to lead on AI safety and research.


What is the Foundation Model Taskforce?


The Prime Minister and Technology Secretary set up the Foundational Model Taskforce in April 2023, backed with £100m funding. The Task Force's mission is to examine the risks surrounding Foundational AI models, bringing together expertise from industry and academia, and prioritising a safe approach to large-scale models.


Foundational Models are large scale, pre-trained systems which can be fine-tuned to a wide domain of tasks, such as language, vision and human interaction. Popular examples include GPT-4 and DALL-E. These models are susceptible to a diverse risk set, exacerbated by their sheer compute power, scalability and domain adaptability.


The Taskforce will research AI safety risks and inform broader work on the development of international guardrails, such as infrastructure and security standards. Which safety research the foundation prioritises, and how effective it is at developing international solutions will determine its success.


Following a short period where Matt Clifford was interim chair, Ian Hogarth is now leading the taskforce, and comes with a background as a leading entrepreneur, investor and AI specialist. He has real credibility in the AI community, with a history of thought leadership on AI nationalism and the geopolitics of AGI, and was an excellent hire by the Government.


The Taskforce is modelled on the successful Vaccine delivery taskforce, led by Kate Bingham during the pandemic. This means that it operates outside of the usual Whitehall structures and will report directly to the Prime Minister, empowering it to move quickly and not be obstructed by departmental wrangling. Given the speed of developments in AI and the need to move at pace, this is essential.


Ian is looking to build a team with the right mix of technical and policy expertise to help deliver real impact, and has put out an open call for anyone interested in working at the taskforce to get in touch. Technical expertise will be especially important given the recent announcement that leading AI labs will give the UK Government access to their models. The Government will need the technical ability to scrutinise and evaluate the models.


There’s an open question as to how the Taskforce will best use its £100m budget. Options include building sovereign capacity, buying compute for safety researchers or conducting safety research experiments on existing models. Given the spiralling costs of chips and compute, £100m may not be enough in the medium term.

Finally, the Taskforce will also play a leading role in the AI Safety Summit this Autumn.


So what is the AI safety summit?


Launched with much fanfare during the Prime Minister’s visit to Washington D.C. the AI Safety Summit will bring together like minded countries, leading tech companies and researchers to the UK for a Summit later this year.


The Summit will consider the risks of AI, including frontier systems, and discuss how international coordination and solutions can tackle the challenges we face. It will provide an opportunity for countries to come together and share their approaches for domestic regulation in this space and agree how to work together.


A successful summit would see an agreement on safety measures to evaluate and monitor significant risks being signed. Much work will be needed to make that a reality.


What can we expect from the UK in the coming months?


The Prime Minister has identified AI regulation as an opportunity for Global Britain, and for him personally to play a leading international role. He has raised the issue at the G7 in Japan, in his bilateral with President Biden in the White House, and in a speech he gave at London Tech Week. It’s clear that the Prime Minister has a personal interest in the subject and sees the opportunity to make a personal mark on a hugely important area. We should expect to see the Prime Minister continue to make interventions on the subject and potentially give a keynote speech on his vision for an international AI safety approach before the Summit.


We will see continued UK policy action in the lead up to the Safety Summit. Behind the scenes, UK diplomats will need to engage their counterparts in like minded countries to find common ground on principles for AI regulation. They will look to find consensus on what concrete actions the international community could take in the very short timelines until the Summit. Lots of work will also take place on the organisation of the Summit - decisions on which specific countries to invite, how to best integrate diplomacy and technical research, and what role AI Labs should play will be important.


We can also expect the Taskforce to set out their approach to evaluations and safety research in the Autumn and to develop a plan for using their £100m budget. There have been rumours about specific AI legislation that the Government may seek to introduce. More details on this, and the approach the Government will take on regulating against shorter term risks like AI generated misinformation will come out over the next few months.


How does the UK’s approach on AI Safety compare to other countries?


In March 2023 the UK released their white paper on AI regulation. Their pro-innovation approach is guided by 5 core principles, inspired by the OECD, that seeks to prioritise innovation and adaptability under regulation.

Since then it has focussed more on AI Safety and has been leading on international coordination, as described above.


The EU have instead focussed on domestic regulation, primarily the AI Act, as part of their push to set the regulatory model for others to follow. The AI Act imposes strict legislation at all stages of the AI development pipeline, with a sector-wide approach based on risk thresholding. Non Compliance with the legislation is subject to penalties of up to EUR 30 million. The EU has also increased their regulation presence, with a network of member state specific representatives, chaired by the newly implemented Central European AI Board.


Meanwhile, the U.S. has taken a sector-specific, non-centralised approach to AI regulation. Their strategy comprises of federal regulations, parented by the 2022 AI Advisory Committee and the NIST AI Risk Management Framework, upholding Biden’s Bill of Rights - a list of principles to guide companies in their development of AI technologies. Whilst their strategy mirrors the UK’s White Paper in their sector-specific approach, they lack clear state level regulation to enforce adherence to their initiatives.


There is a disparity between the UK, EU and US strategies toward AI regulation, and efforts to bring international coordination may increase at the AI Safety Summit and beyond. Given recent corporate pushback against the EU’s AI Act, it will be particularly interesting to see if they water down the requirements or press ahead with a comprehensive yet stringent regulatory approach.

Previous
Next
bottom of page