Blue Tech Wave Media

    OpenAI improves AI safety through U.S. AI Safety Institute

By Rae Li | August 1, 2024 | 3 mins read
    • OpenAI CEO Sam Altman has announced that the company is working with the AI Safety Institute to provide the organisation with early access to its next major generative AI model for security testing.
    • OpenAI has committed to eliminating restrictive non-disparagement clauses, creating a security committee, and dedicating 20% of its computing resources to security research.

    OUR TAKE
OpenAI has announced a partnership with the US-based AI Safety Institute to provide early access to its upcoming advanced generative AI models for in-depth security testing. The move is intended to respond to concerns that the company may be neglecting safety in its pursuit of more powerful AI technologies, and it signals a proactive stance on AI safety research. Meanwhile, the company’s spending on federal lobbying has increased significantly this year, suggesting that OpenAI is striving to play a greater role in national AI policymaking and to ensure that its technology development keeps pace with security standards.

– Rae Li, BTW reporter

    What happened

Sam Altman, CEO of OpenAI, says the company is working with the AI Safety Institute in the US to give the body early access to its next major generative AI model for in-depth security testing. The announcement comes amid criticism that OpenAI has deprioritised safety work in its pursuit of more powerful AI technologies. At the same time, the company’s spending on federal lobbying has risen significantly this year, showing that OpenAI is seeking a greater role in national AI policymaking.

OpenAI has also committed to eliminating restrictive non-disparagement clauses, establishing a security committee, and dedicating 20% of its computing resources to security research. These measures are designed to strengthen the company’s internal security research and development processes and to ensure the security and reliability of its AI technology. Meanwhile, Jason Kwon, OpenAI’s chief strategy officer, has responded to five senators’ questions about the company’s policies, detailing OpenAI’s commitment to implementing strict security protocols at every stage of development.

    Also read: Coinbase adds three board members, including OpenAI executive 

    Also read: OpenAI’s SearchGPT: Challenging Google’s search dominance

Why it’s important

OpenAI’s collaboration with the AI Safety Institute in the US, and its focus on safety testing, reflects the AI industry’s growing concern for safety and ethical issues alongside rapid technological development. The collaboration could help boost public confidence in the safety of AI technology and set new standards and norms for its healthy development. By working closely with government agencies, OpenAI is demonstrating leadership in AI safety and a commitment to industry responsibility.

OpenAI’s increased investment in federal lobbying and its commitment to AI safety research reveal the company’s active participation in, and influence on, the AI policymaking process. Through these initiatives, OpenAI is attempting to ensure that technological advancement is accompanied by a positive impact on society.

AI | OpenAI | U.S. AI Safety Institute
    Rae Li

    Rae Li is an intern reporter at BTW Media covering IT infrastructure and Internet governance. She graduated from the University of Washington in Seattle. Send tips to rae.li@btw.media.
