Hiroshima AI Process: AI ‘code of conduct’ to be ready by end of 2023

  • The voluntary code of conduct, drawn up under the Hiroshima AI Process, sets out how major countries should govern Artificial Intelligence models, addressing privacy concerns and security risks
  • The EU has been at the forefront of regulating the emerging technology with its hard-hitting AI Act, while Japan, the United States and countries in Southeast Asia have taken a more hands-off approach

Back in May 2023, the Group of Seven (G7) leading economies agreed to establish a code of conduct for companies developing advanced Artificial Intelligence (AI) systems, under an initiative called ‘the Hiroshima AI Process’. The agreement, which complements governments’ own regulatory efforts, is intended to reduce the risk that Generative AI technology will be misused, and is considered an important milestone in the governance of AI across major economies.

At the recent Internet Governance Forum in Kyoto, Japan, officials confirmed that the guidelines are on track, and will be finalised and released before the end of 2023.

“Generative AI is about to change the history of mankind,” said Fumio Kishida, Prime Minister of Japan, at the forum. “The Hiroshima AI Process will provide guidelines by the end of this year to ensure we can provide the support that SMEs need.”


Towards a ‘safe, secure, and trustworthy’ AI future

The process was launched in May by leaders of the G7 nations – Canada, France, Germany, Italy, Japan, the United Kingdom, and the United States – and the European Union at a ministerial forum called the Hiroshima AI Process.

The 11-point code “aims to promote safe, secure, and trustworthy AI worldwide and will provide voluntary guidance for actions by organisations developing the most advanced AI systems, including the most advanced foundation models and generative AI systems,” according to the G7 document.

In addition, the code “will also help to fully harness the benefits of these technologies and address the risks and challenges that come with them.” It urges companies to take appropriate steps to identify, assess and mitigate risks throughout the AI lifecycle, and to address incidents and patterns of misuse once AI products are on the market.

Companies should also publish public reports on the capabilities, limitations, use and misuse of their AI systems, and invest in robust security controls.


EU sets example for the world

The European Union has taken the lead in regulating the emerging technology with its strict AI Act, while Japan, the United States and countries in Southeast Asia have taken a less interventionist approach in order to boost economic growth. Earlier this month at the Internet Governance Forum in Kyoto, Japan, Vera Jourova, the European Commission’s head of digital affairs, said the code of conduct offers a solid foundation for ensuring AI safety and will act as a bridge until formal regulation is in place.


Ivy Wu

Ivy Wu was a media reporter at BTW Media. She graduated from Korea University with a major in media and communication, and has extensive experience in reporting and news writing.
