7 OCTOBER | LONDON 2024
SEPTEMBER 12TH - 14TH
The O2, LONDON
The CogX Blog
Thought leadership on the most pressing issues of our time
The UK’s approach to AI safety - part II
What should the UK’s Foundation Model Taskforce prioritise in its AI safety strategy? We outline five key points
Rishi Sunak wants the UK to lead the charge on AI governance and safety. With the industry moving at light-speed, it’s essential that the government acts swiftly on regulation and international governance to capitalise on the post-Brexit Global Britain opportunity, and ensure the UK can reap the benefits of AI deployment safely.
In part I of this post we discussed the landscape of AI safety in the UK and how the Foundation Model Taskforce and AI Safety Summit will work. As we wrote, the appointment of serial entrepreneur Ian Hogarth reflects how seriously the UK is taking the task at hand. But with the Summit mere months away, the Taskforce has little time to come up with a plan that addresses the winding complexities of AI ethics. In this instalment, we’ll outline what the Government should prioritise as it formulates its strategy.
1. Find and foster the right AI expertise
AI systems are complex. Designing appropriate evaluations, understanding model capabilities, and testing developers’ claims require robust technical skills. Unfortunately, AI and machine learning experts are in short supply. The government is competing not just with the domestic private sector for talent, but with countries like the US that offer top-tier compensation packages to AI experts. To successfully regulate and manage AI, the government will need to both upskill civil servants and attract technical talent.
Upskilling civil servants
There are few areas of government policy that will be left untouched by AI. It has the potential to revolutionise the way we deliver healthcare and education and transform how citizens interact with the state. But for AI to be used effectively in public services, civil servants need an understanding of how it works and what the risks are. If AI is making recommendations that have a major impact on humans — be it through algorithmic bias in sentencing, or predicted exam results — we need to ensure civil servants have the right skills to understand it and deploy it safely. Understanding AI risks and best practice from deployment should become part of training curricula across Government departments, including for senior civil servants.
Technical expertise for the Taskforce
To properly regulate, audit, monitor and learn from AI systems, regulators and the Taskforce needs the right technical experts to investigate and evaluate models. Foundation models are powerful; evaluating their safety and reliability is crucial. The Taskforce’s technical experts need to understand the complexities, limitations, and risks of these systems, and ensure that they align with safety guidelines.
As aforementioned, compensation is a hurdle: why join the Taskforce when private companies can offer much higher salaries? Government will need to lean in to the prestige of the work — how Taskforce members will contribute to frameworks that dictate the future of AI infrastructure and standards. This may mean developing an AI fellowship as think tanks have called for, or partnering with academia, research institutions and the private sector to promote knowledge exchange — perhaps in the form of secondments to the Taskforce.
It will be crucial that the Taskforce is able to move quickly, and not be shackled by the bureaucracy and siloes that can thwart productivity in the public sector. As the Taskforce is modelled after the Vaccine Taskforce, it operates outside of Whitehall structures and reports directly to the Prime Minister. The hope is that this will allow the Taskforce to keep pace with the speed of developments in AI.
2. Settle the regulation versus innovation debate
In March, the UK released its white paper on AI regulation. It outlines five principles, for a “pro-innovation approach”. This is in contrast to the EU’s stricter, more heavy-handed AI Act, which will require companies to disclose the content that their AI systems are trained on. But while the UK’s strategy may enable innovation, it could also enable abuses of power: as the principles outlined are non-binding (and vague), there’s no way to enforce them. The EU, for example, has set out penalties proportionate to company value if regulations are not adhered to.
Regulatory frameworks are critical to safe and sustainable experimentation; guidelines alone won’t achieve this. At the same time, rigidity should be avoided; regulations should be imbued with flexibility, as we can’t predict how AI will evolve months, let alone years, from now. As the EU was the first to establish comprehensive guidelines for AI, it has set the benchmark for regulation, though it faces pushback from businesses worried that it will stifle innovation. A balance between innovation, flexibility and futureproofing is key for the UK as it develops its regulatory framework — and tries to carve out its space as an AI leader.
3. Ensure that the Summit prioritises collaboration
The Summit will be the first international forum on AI safety. The goal is to bring together “like-minded” countries to discuss and develop actions that ensure AI doesn’t become an existential threat. With the UK not included in key forums like the Trade and Technology Council, this is the government’s chance to prove it can convene stakeholders on AI. However, attempts to achieve consensus may be undermined by the countries that aren’t in attendance: the Guardian reports that China is “likely to be excluded” from the Summit.
Like climate change, AI is a borderless issue that calls for international cooperation. Efforts should be made to make standards as consistent as possible across countries. Without cohesion between countries — particularly those that are leaders in AI, like China, the US and the UK — any agreement made will be less impactful. The key to the UN-brokered Treaty on the Non-Proliferation of Nuclear Weapons was its widespread buy-in: 191 countries are signatories. Like nuclear energy, AI could prove to be a threat to humanity if left unabated in the wrong hands. A framework similar to the nuclear non-proliferation agreement could contain and manage risks that may develop from the ongoing arms race for AI dominance.
At the Summit, the UK should aim to reach consensus on key areas, such as developing an international body to monitor and police inter-country regulation, and a base-level agreement on the minimum evaluations AI companies should run. It’s important that the Summit achieves the right balance between consensus and ambition. Strong diplomacy will be essential behind the scenes in the coming months to make it a success.
4. Develop an evaluations framework
We need to ensure that AI systems are trustworthy and reliable and don’t have unintended consequences, capabilities or flaws. Best practices for evaluating LLMs currently don’t exist. The Taskforce should develop standards for effective safety measurements and evaluations to ensure that AI companies are held accountable. Jack Clark, co-founder of Anthropic, points out that without adequate evaluations, monopolies of power in the industry will develop further, leaving AI labs to dictate how these powerful tools are used. Now is the time for governments to ensure that they are the ones setting standards — rather than AI companies, who are often motivated by profit over public safety.
To achieve this, SaferAI cofounder Simeon Campos and his colleagues suggest that a risk assessment methodology be adopted to evaluate the claims AI developers make about their models. Such a methodology would evaluate current and future threats and risks through system modelling, predicting LLMs’ ability to develop bioweapons, launch cyber attacks, spread misinformation, deceive humans and more. A similar system was used to predict threats in the nuclear industry, Campos points out. Currently, policymakers lack an understanding of AI systems and their potential for misuse. The development of a robust evaluation framework should be one of the most urgent objectives for the UK government.
5. Get to the crux of the misinformation problem
We know that misinformation is effective: studies show that fake news spreads faster and farther than the truth, with the effects most pronounced for false political news. AI-generated disinformation is even cheaper and faster to create. It’s also more effective than misinformation written by humans, new research shows. There are calls to regulate social media platforms — where misinformation is most likely to proliferate — for failing to police nefarious uses of their sites, and for these platforms to develop tech solutions to find and eliminate these fake stories.
But social media platforms were struggling to police fake news before AI tools were made available to the general public. It’s only going to become harder for these platforms to track down and eliminate misinformation as AI proliferates. The EU has urged platforms like Google and Facebook to label AI-generated content, and at the White House last week, tech companies pledged to add watermarks to AI-generated content. With elections on the horizon in 2024, it’s essential that the UK Government also prioritises combatting the dangerous spread of AI-perpetuated disinformation.
But to tackle the crux of this problem, the UK may take a cue from Finland, which overhauled its entire education system in 2016 when it started seeing a rapid rise in fake news coming from Russian propagandists and bots. Now, students are taught to identify fake news from kindergarten through secondary school. In maths classes, they learn how statistics can be manipulated; in art, they study deepfakes; in history, they are taught how propaganda can upend civil discourse. There’s also a core class on literacy across digital platforms, where they study emerging technologies and the psychological impacts of fake news, and learn how to evaluate and fact-check information. Today, the country tops the Media Literacy Index.
According to UK regulator Ofcom, 40% of adults in the UK don’t have the skills to assess whether news is fake or real. And just 2% of children under 15 can distinguish misinformation. If the UK makes AI and digital literacy a core part of school curricula, it could help counteract the negative effects of misinformation.
The Prime Minister has made it clear that he wants the UK to be an AI superpower, but there are risks involved in such an approach. Tying the UK’s brand and reputation to AI — a technology we still know little about — means that the country may also be associated with any future fallout.
If the Summit is successful in developing consensus around transparency, evaluations and accountability in AI, the hope is that this fate will be avoided. By promoting itself as a leader and convener of ethical AI, the UK has an opportunity to shape the future of the technology. Now, it must deliver.