Trends
Microsoft’s Worldwide Ban: Facial Recognition Out of Azure OpenAI
Microsoft has banned U.S. police and law enforcement agencies worldwide from using OpenAI models for facial recognition in surveillance cameras through its Azure OpenAI Service. The company's updated code of conduct also prohibits using the models for manipulation, creating romantic chatbots, social scoring, and identifying individuals by their physical or behavioral traits.

Headline
Microsoft bans U.S. police and global law enforcement agencies from using OpenAI models for facial recognition in surveillance cameras through its Azure OpenAI Service. The company's updated policy also prohibits the use of AI for manipulation, romantic chatbots, and social scoring.
Context
Microsoft has prohibited law enforcement agencies worldwide from using its Azure OpenAI Service models for facial recognition. The ban covers all models available through the service, including GPT-4 Turbo and DALL-E, in any application that identifies individuals through facial recognition technology. The policy update aims to address privacy concerns, curtail potential misuse of AI, and underscores Microsoft's commitment to ethical AI practices amid growing scrutiny of AI ethics and governance.
Evidence
Pending intelligence enrichment.
Analysis
Microsoft's Azure OpenAI Service no longer permits its AI models to be used for facial identification by police departments worldwide. This measure highlights the industry's shift towards prioritizing public privacy and enhancing trust in AI technologies. The ban is part of Microsoft's broader initiative to set ethical standards for deploying AI systems in sensitive sectors such as law enforcement and surveillance.
Key Points
- Microsoft bans the use of Azure OpenAI Service models like GPT-4 Turbo and DALL-E for facial recognition by police departments, reflecting concerns over AI misuse.
- The policy affects law enforcement worldwide, including French police who are restricted from using facial recognition at the Paris Olympics.
- The updated code of conduct expands prohibitions to include using models for manipulation, creating romantic chatbots, social scoring, and identifying individuals by their physical or behavioral traits.
Actions
Pending intelligence enrichment.
