• The UK and the US have signed a co-operation agreement on reducing the risks of AI, under which the two governments will establish a common approach to AI safety testing and share capabilities to respond effectively to risks.
  • The collaboration also aims to create a common scientific basis for AI safety testing that researchers worldwide can adopt.

The UK and US have signed a Memorandum of Understanding to collaborate on rigorous testing of advanced AI systems. The partnership aims to align scientific approaches and develop robust evaluation methods for AI models, systems, and agents. Industry experts see the collaboration as crucial for promoting trust and safety in AI development and adoption across sectors.

A new milestone

Under the pact, the UK’s AI Safety Institute and its US counterpart will emulate the GCHQ-NSA security partnership by exchanging research expertise on reducing AI risks.

The UK and US have signed a landmark pact to tackle the challenges posed by the technology, UK Technology Secretary Michelle Donelan said. The agreement aims to capitalise on AI’s potential to make lives easier and healthier while tackling its risks.

The institutes are partnering to develop a common approach to AI safety testing, share capabilities to effectively tackle risks, conduct joint public testing exercises on an openly accessible AI model, and explore personnel exchanges.

US Secretary of Commerce Gina Raimondo highlighted the importance of the US-UK partnership in addressing emerging AI risks. Both governments acknowledge the rapid pace of AI advancement and the need for a global approach to safety. The agreement deepens the US-UK relationship and lays the groundwork for securing AI now and in the future.

Laying the foundations for AI

In addition to joint testing and capability sharing, the UK and the US will exchange information on AI model capabilities, risks, and technical research, with the aim of establishing a common scientific basis for AI safety testing. Despite the focus on risk, the UK has no plans to regulate AI more broadly, in contrast to US President Joe Biden’s stricter stance on AI models that threaten national security and the stricter provisions of the EU’s Artificial Intelligence Act. Industry experts believe such cooperation is vital to promoting trust and safety in the development and application of AI in industries such as marketing, finance, and customer service.