NIST launches Dioptra to test AI model security

  • NIST has re-released Dioptra, an open-source tool designed to measure the impact of malicious attacks on AI systems, particularly those targeting training data. 
  • The tool aims to help companies and users assess and track AI risks, serving as a platform for benchmarking and testing AI models.

OUR TAKE
The National Institute of Standards and Technology (NIST) has reintroduced Dioptra, an open-source, web-based tool that measures how malicious attacks, especially those that “poison” training data, degrade the performance of AI systems. The tool is intended to help organisations evaluate and manage AI risks, providing a platform for benchmarking and testing AI models against simulated threats.

-Rae Li, BTW reporter

What happened

The National Institute of Standards and Technology (NIST) has re-released Dioptra, an open-source web-based tool that was initially launched in 2022. Dioptra is designed to measure the impact of malicious attacks on the performance of AI systems. This modular tool can help companies and users assess, analyse, and track AI risks, serving as a platform for benchmarking and researching models, as well as exposing them to simulated threats in a “red-teaming” environment. NIST emphasises that Dioptra can provide insights into the types of attacks that might degrade AI system performance and quantify this impact.
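To make the idea concrete, here is a minimal, hypothetical sketch of the kind of data-poisoning experiment Dioptra is built to quantify: flip the labels of a fraction of the training set, retrain, and measure how test accuracy degrades. It is written in Python with scikit-learn on synthetic data; it does not use Dioptra’s actual API, and all names in it are illustrative.

# Hypothetical sketch of a label-flipping poisoning experiment
# (scikit-learn, not Dioptra's own API): train on clean data,
# train on poisoned data, and quantify the accuracy drop.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic binary-classification data standing in for a real training set.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0
)

def poison_labels(labels, fraction, rng):
    """Flip the labels of a random fraction of the training examples."""
    poisoned = labels.copy()
    n_flip = int(fraction * len(labels))
    idx = rng.choice(len(labels), size=n_flip, replace=False)
    poisoned[idx] = 1 - poisoned[idx]
    return poisoned

# Baseline: a model trained on clean labels.
clean_model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
clean_acc = accuracy_score(y_test, clean_model.predict(X_test))

# Measure how accuracy degrades as the poisoning rate grows.
for fraction in (0.05, 0.10, 0.20, 0.40):
    y_poisoned = poison_labels(y_train, fraction, rng)
    model = LogisticRegression(max_iter=1000).fit(X_train, y_poisoned)
    acc = accuracy_score(y_test, model.predict(X_test))
    print(f"poison rate {fraction:.0%}: accuracy {acc:.3f} "
          f"(drop {clean_acc - acc:+.3f} vs clean {clean_acc:.3f})")

Dioptra generalises this pattern: as a modular platform, it packages such experiments so that different models, attacks, and defences can be benchmarked side by side and the resulting performance impact tracked.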

NIST has also published documents from its newly created AI Safety Institute that outline strategies for mitigating the dangers of AI, such as its potential misuse in generating nonconsensual pornography. This effort is part of a broader initiative following President Joe Biden’s executive order on AI. The EO requires companies developing AI models, Apple among them, to notify the federal government and share the results of all safety tests before deploying those models publicly. Dioptra’s development and release also reflect the ongoing collaboration between the U.S. and the U.K. to advance AI model testing and safety.

Also read: NIST launches platform for assessing generative AI

Also read: Singapore minister emphasises the necessity of world AI framework

Why it’s important 

The re-release of Dioptra marks a substantial advancement in AI security and risk management, giving AI system developers and users a significant resource for understanding and assessing how vulnerable AI models are to malicious attacks. Through simulated attacks and “red-teaming” exercises, Dioptra helps identify and quantify potential security threats, thereby facilitating the design and deployment of more secure AI systems. This is critical for protecting user data, maintaining privacy and preventing misuse of AI technologies.

In addition, the launch of Dioptra is a response to US President Joe Biden’s executive order on AI, which emphasises the importance of AI security and transparency and requires companies developing AI models to share the results of security tests with the government. This will not only help boost public trust in AI technology but also help set standards for global AI governance.

Rae Li

Rae Li is an intern reporter at BTW Media covering IT infrastructure and Internet governance. She graduated from the University of Washington in Seattle. Send tips to rae.li@btw.media.
