
    OpenAI’s next model to undergo safety checks by the U.S. Government

By Rebecca Xu | August 2, 2024 | Updated: December 20, 2024 | 3 Mins Read
• OpenAI CEO Sam Altman has said that the next model developed by OpenAI will undergo safety checks and evaluations by the U.S. Government before its public release.
    • The move marks a major step toward the careful development and use of advanced AI, addressing concerns over risks and ethical issues.

    OUR TAKE
    OpenAI’s initiative to subject its next model to government safety checks sets a precedent for other tech companies and research institutions to prioritise safety, ethics, and accountability in their AI projects. As AI continues to advance rapidly, ensuring the responsible and beneficial use of these technologies will be essential for building trust and confidence among stakeholders and the public.

    –Rebecca Xu, BTW reporter

    What happened

OpenAI has become a household name in the AI industry, thanks to ChatGPT and the suite of foundation models developed by the company. Under Altman’s leadership, the lab has aggressively pushed new products to market, but this fast-paced approach has also attracted criticism. Critics, including the lab’s former co-head of safety, claim that it has overlooked safety issues in advanced AI research.

In light of these concerns, five U.S. Senators recently wrote to Altman, questioning OpenAI’s commitment to safety and raising cases of potential retaliation against former employees who publicly voiced concerns, enabled by the non-disparagement clauses in their employment contracts.

    In a post on X, Sam Altman revealed that OpenAI has been collaborating with the U.S. AI Safety Institute, a federal entity, to establish an arrangement for granting early access to the upcoming foundation model. This partnership aims to advance the scientific understanding and evaluation of AI technologies.

Altman also highlighted that the organisation has revised its non-disparagement policies, now permitting both current and former staff to openly voice concerns about the company and its projects. OpenAI maintains its commitment to allocating a minimum of 20% of its computational resources to AI safety research.

    Also read: What is OpenAI?

    Also read: OpenAI improves AI safety through U.S. AI Safety Institute

    Also read: OpenAI supports legislation to shape the future of AI

    Why it’s important

    OpenAI, known for its cutting-edge research in AI and its commitment to promoting safe and beneficial AI for society, has partnered with government agencies to conduct rigorous safety assessments of its upcoming model. The collaboration aims to address potential risks such as unintended biases, security vulnerabilities, and ethical considerations that may arise from the model’s usage.

    By subjecting the AI model to thorough safety checks by the U.S. Government, OpenAI seeks to demonstrate its commitment to transparency, accountability, and responsible innovation in the field of artificial intelligence. The evaluation process will involve experts from various disciplines, including AI researchers, ethicists, policymakers, and representatives from civil society.

    The decision to involve government oversight in the development of the AI model reflects the growing recognition of the importance of regulatory frameworks and safety mechanisms for advanced technologies. It also highlights the need for collaborative efforts between industry, academia, and government to address the complex challenges posed by AI development and deployment.


    Rebecca Xu is an intern reporter at Blue Tech Wave specialising in tech trends. She graduated from Changshu Institute of Technology. Send tips to r.xu@btw.media.

