    Blue Tech Wave Media

    Google Play tightens rules on AI apps amid deepfake nude scandal

By Lydia Luo | June 7, 2024 | 3 mins read
    • Google updates its guidelines for AI apps on Google Play to prevent inappropriate and harmful content.
    • AI apps must not generate sexual content, violence, or other restricted material and should provide user reporting features.
    • Developers must rigorously test AI tools for user safety and adhere to new promotional guidelines to avoid misleading claims.

    Our Take
    Google’s stringent new policies on AI apps are a vital response to the misuse of AI for creating deepfake nudes. These measures are crucial for curbing unethical app practices and protecting user privacy and safety on Google Play.

    —Lydia Luo, BTW reporter

    Google is tightening regulations for AI applications available on Google Play. The company announced new measures on Thursday aimed at curbing the distribution of inappropriate content generated by AI apps. Developers must now adhere to stricter rules that ensure AI-generated material remains appropriate, with new provisions for user feedback and content moderation.

    Preventing inappropriate content

    Google’s revised guidelines mandate that AI apps must be designed to prevent the generation of restricted content, including sexual material, violence, and other prohibited outputs. Developers are now required to implement robust content filters and provide users with mechanisms to report any offensive material. This initiative is part of Google’s broader strategy to maintain a safe digital environment and uphold user privacy on its platform. Additionally, apps must be rigorously tested to ensure they comply with these content standards before they can be distributed on Google Play.

    Also read: Unmasking deepfake illusions and guarding against deception

    Crackdown on misleading marketing

    Google is also addressing issues with misleading marketing practices in AI apps. Some applications have been promoted with claims of generating non-consensual deepfake nudes or engaging in other unethical activities. For instance, ads for such apps have appeared on social media, falsely suggesting they could create nude images of public figures or individuals without their consent. Google’s new policies prohibit the promotion of AI apps with such misleading use cases, and any app found violating these marketing guidelines risks removal from the Google Play store, regardless of its actual capabilities.

    Also read: Why Google’s AI overviews often go wrong

    Developer accountability and best practices

    Under the updated regulations, developers are held accountable for ensuring their AI applications cannot be manipulated to generate harmful or offensive content. This includes rigorous testing and feedback processes, where developers can utilise Google’s closed testing feature to gather user input on early versions of their apps. Google recommends that developers document these tests thoroughly, as the company may request to review them. To further support developers, Google is providing resources like the People + AI Guidebook, which offers best practices for creating responsible AI applications that align with these new guidelines.

    Lydia Luo

Lydia Luo is an intern reporter at BTW Media covering IT infrastructure. She graduated from Shanghai University of International Business and Economics. Send tips to j.y.luo@btw.media.
