- Google updates its guidelines for AI apps on Google Play to prevent inappropriate and harmful content.
- AI apps must not generate restricted material such as sexual or violent content, and should offer in-app features for users to report offensive output.
- Developers must rigorously test AI tools for user safety and adhere to new promotional guidelines to avoid misleading claims.
Our Take
Google’s stringent new policies on AI apps are a vital response to the misuse of AI for creating deepfake nudes. These measures are crucial for curbing unethical app practices and protecting user privacy and safety on Google Play.
—Lydia Luo, BTW reporter
Google is tightening regulations for AI applications available on Google Play. The company announced new measures on Thursday aimed at curbing the distribution of inappropriate content generated by AI apps. Developers must now adhere to stricter rules that ensure AI-generated material remains appropriate, with new provisions for user feedback and content moderation.
Preventing inappropriate content
Google’s revised guidelines mandate that AI apps be designed to prevent the generation of restricted content, including sexual material, violence, and other prohibited outputs. Developers are now required to implement robust content filters and give users mechanisms to report any offensive material. This initiative is part of Google’s broader strategy to maintain a safe digital environment and uphold user privacy on its platform. Additionally, apps must be rigorously tested to ensure they comply with these content standards before they can be distributed on Google Play.
Also read: Unmasking deepfake illusions and guarding against deception
Crackdown on misleading marketing
Google is also addressing misleading marketing practices in AI apps. Some applications have been promoted with claims of generating non-consensual deepfake nudes or enabling other unethical activities. For instance, ads for such apps have appeared on social media, falsely suggesting they could create nude images of public figures or private individuals without consent. Google’s new policies prohibit promoting AI apps with such use cases, and any app found violating these marketing guidelines risks removal from the Google Play store, regardless of its actual capabilities.
Also read: Why Google’s AI overviews often go wrong
Developer accountability and best practices
Under the updated regulations, developers are accountable for ensuring their AI applications cannot be manipulated into generating harmful or offensive content. This includes rigorous testing and feedback processes: developers can utilise Google’s closed testing feature to gather user input on early versions of their apps. Google recommends that developers document these tests thoroughly, as the company may request to review them. To further support developers, Google is providing resources such as the People + AI Guidebook, which offers best practices for building responsible AI applications that align with the new guidelines.