U.S. supports open AI models and proposes risk oversight

  • The U.S. Department of Commerce released a report supporting open-weight generative AI models as helpful to small companies and individual developers, while recommending that the government develop the ability to monitor the potential risks of these models.
  • The report calls on the government to collect and assess the risks and benefits of open models and, if necessary, impose restrictions to comply with President Biden’s AI Executive Order.

OUR TAKE
The U.S. Department of Commerce has released a report in support of open-weight generative AI models, such as Meta’s Llama 3.1, and recommends that the government develop new capabilities to monitor the potential risks of these models. The report highlights that open models make generative AI more accessible to small companies, researchers, non-profits, and individual developers, and recommends that no restrictions be placed on access to these models until research has assessed whether such restrictions would harm the market. At the same time, the report calls on the government to establish an ongoing process to gather evidence of the risks and benefits of open models, evaluate that evidence, and act on the results of that evaluation, including imposing restrictions on model availability if necessary.

– Rae Li, BTW reporter

What happened

The National Telecommunications and Information Administration (NTIA), a division of the U.S. Department of Commerce, has released a report that supports open-weight generative AI models. The report argues that open-weight models make generative AI technologies more accessible and usable by small companies, researchers, non-profits, and individual developers, thus fostering innovation and competition in the marketplace. It suggests that before considering restrictions on access to these models, the government should examine whether those restrictions would harm the market.

The report recommends that the government develop new capabilities to monitor the potential risks of open models and establish an ongoing process to gather evidence on the risks and benefits of these models. This includes evaluating the evidence, researching the safety of AI models, supporting risk-mitigation research, and developing risk-specific metrics to adjust policy as needed. These measures are designed to balance the innovative potential of AI technologies against their potential risks, and are consistent with the Biden administration’s executive order on AI.

Also read: Apple commits to AI safety in White House Initiative

Also read: Trump pledges national bitcoin stockpile to counter China

Why it’s important

The report provides clear direction and recommendations for U.S. government policymaking on AI. By supporting generative AI models with open weights, it highlights the importance of fostering technological innovation and competition in the marketplace, helping to ensure that small businesses and independent developers can access and utilise advanced AI technologies. This can promote the democratisation of the technology and stimulate further innovation and competition, thereby accelerating the development and adoption of AI.

By establishing a continuous monitoring and assessment mechanism, the government can identify and respond in a timely manner to risks that AI technologies may bring, such as privacy infringement, the spread of misleading information, and employment impacts from automation. This helps ensure the healthy development of AI technology while protecting public interests and social stability.

Rae Li

Rae Li is an intern reporter at BTW Media covering IT infrastructure and Internet governance. She graduated from the University of Washington in Seattle. Send tips to rae.li@btw.media.
