Microsoft has removed public access to a number of AI-powered facial analysis tools in its Azure Face service, including one that claims to identify a person’s emotional state from videos and photos.
The move reflects efforts by major cloud providers to rein in sensitive technologies on their own, as lawmakers in the United States and Europe continue to debate comprehensive legal limits.
Experts have criticized emotion recognition tools, arguing that it is unscientific to equate outward displays of emotion with internal feelings. Facial expressions assumed to be universal in fact vary across population groups.
Since at least last year, Microsoft has been reviewing whether its emotion-recognition systems are grounded in science. The decision is part of a broader overhaul of the company’s policies on the ethics of artificial intelligence.
The company’s updated responsible AI standards emphasize accountability for who uses its services and greater human oversight of where these tools are applied.
In practice, Microsoft is limiting access to some features of its facial recognition services and removing others entirely. Users must now apply for access to Azure Face’s facial recognition features and tell Microsoft how and where their systems will be deployed.
Some less harmful use cases, such as automatically blurring faces in photos and videos, remain openly available.
The company is also retiring Azure Face’s ability to infer attributes such as gender, age, smile, facial hair, hair, and makeup.
“Experts highlighted the lack of scientific consensus on defining emotions, challenges in how inferences are generalized across use cases, regions and demographics, and growing privacy concerns about this type of capability,” the company said.
The company will stop offering these features to new customers starting June 21; existing customers will lose access on June 30, 2023.
Microsoft limits face recognition
Microsoft has discontinued public access to these features but continues to use them in its own product, Seeing AI, an application that uses computer vision to describe the world to people with visual impairments.
“Tools like emotion recognition can be valuable when used for a set of controlled accessibility scenarios,” Microsoft said.
The Custom Neural Voice feature, which allows customers to create AI voices based on recordings of real people, is subject to similar limitations. The company explained that the tool has great potential in education, accessibility, and entertainment, but that it is easy to imagine how it could be used to improperly impersonate speakers and deceive listeners.
Microsoft is limiting access to the feature to managed customers and partners, and requires the active participation of the speaker when an artificial voice is created.