OpenAI improves AI safety through US AI Safety Institute

  • OpenAI CEO Sam Altman has announced that the company is working with the AI Safety Institute to provide the organisation with early access to its next major generative AI model for security testing.
  • OpenAI has committed to eliminating restrictive non-disparagement clauses, creating a security committee, and dedicating 20% of its computing resources to security research.

OUR TAKE
OpenAI's decision to give the US-based AI Safety Institute early access to its upcoming advanced generative AI models for in-depth security testing responds to concerns that the company may be neglecting safety in its pursuit of more powerful AI, and signals a proactive stance on AI safety research. Meanwhile, the company's spending on federal lobbying has risen significantly this year, suggesting that OpenAI is striving to play a greater role in national AI policymaking and to ensure that its technology development keeps pace with security standards.

-Rae Li, BTW reporter 

What happened

Sam Altman, CEO of OpenAI, has announced that the company is working with the US-based AI Safety Institute to give the agency early access to its upcoming advanced generative AI models for in-depth security testing. The move responds to concerns that OpenAI may be neglecting security in its pursuit of more powerful AI technologies and reflects the company's stated commitment to AI safety research. At the same time, OpenAI's spending on federal lobbying has increased significantly this year, a sign that the company is seeking a greater role in national AI policymaking.

OpenAI has also committed to eliminating restrictive non-disparagement clauses, establishing a security committee, and dedicating 20% of its computing resources to security research. These measures are designed to strengthen the company's internal security research and development processes and to ensure the security and reliability of its AI technology. Meanwhile, Jason Kwon, OpenAI's chief strategy officer, has responded to questions from five US senators about the company's policies, reiterating OpenAI's commitment to implementing strict security protocols at every stage of development.

Also read: Coinbase adds three board members, including OpenAI executive 

Also read: OpenAI’s SearchGPT: Challenging Google’s search dominance

Why it’s important 

OpenAI's collaboration with the US AI Safety Institute and its focus on safety testing reflect the AI industry's growing attention to safety and ethical issues amid rapid technological development. The partnership could help boost public confidence in the safety of AI technology and set new standards and norms for the healthy development of the field. By working closely with government agencies, OpenAI demonstrates both its leadership in AI safety and its commitment to industry responsibility.

OpenAI's increased investment in federal lobbying and its commitment to AI safety research reveal the company's active participation in, and influence on, the AI policymaking process. Through these initiatives, OpenAI is attempting to ensure that technological advances are accompanied by a positive impact on society.

Rae Li

Rae Li is an intern reporter at BTW Media covering IT infrastructure and Internet governance. She graduated from the University of Washington in Seattle. Send tips to rae.li@btw.media.
