A.I. that guesses your emotions could be misused and shouldn’t be available to everyone, Microsoft decides

Microsoft said Tuesday it plans to halt sales of facial recognition technology that predicts a person’s emotions, gender, or age, and to restrict access to other A.I. services, because the technology risks “subjecting people to stereotyping, discrimination, or unfair denial of services.”

The move follows intense criticism of such technology, which companies have used to monitor job applicants during interviews. Facial recognition systems are often trained on predominantly white and male databases, so they can produce biased results when applied to people from other groups.

In a blog post, Microsoft pointed to work by internal and outside researchers to develop a standard for using the technology. The post acknowledged that this work found serious problems with the technology’s reliability.

“These efforts raised important questions about privacy, the lack of consensus on a definition of ‘emotions,’ and the inability to generalize the linkage between facial expression and emotional state across use cases, regions, and demographics,” said Sarah Bird, principal group product manager at Microsoft’s Azure A.I. unit.

Companies like Uber currently use Microsoft’s technology to help ensure that drivers behind the wheel match their accounts on file.

Two years ago, Microsoft undertook a review process to develop a “Responsible A.I. Standard” to guide the building of more equitable and trustworthy artificial intelligence systems. The company released the results of that effort in a 27-page document Tuesday.

“By introducing Limited Access, we add an additional layer of scrutiny to the use and deployment of facial recognition to ensure use of these services aligns with Microsoft’s Responsible A.I. Standard and contributes to high-value end-user and societal benefit,” Bird wrote in the blog post Tuesday.

“We recognize that for A.I. systems to be trustworthy, they need to be appropriate solutions to the problems they are designed to solve,” wrote Natasha Crampton, chief responsible A.I. officer at Microsoft, in another blog post.

Crampton went on to say that, under the new standard, the company would retire A.I. capabilities that “infer emotional states and identity attributes such as gender, age, smile, facial hair, hair, and makeup.” But the tech will still be incorporated into the company’s accessibility tools, such as Seeing A.I., an app that describes objects for people with visual impairments.

The decision comes as U.S. and E.U. legislators debate legal and ethical questions around the use of facial recognition technology. Certain jurisdictions already place limits on the technology’s deployment: beginning next year, New York City employers will face increased regulation on the use of automated tools to screen candidates. In 2020, Microsoft joined other tech giants in pledging not to sell its facial recognition systems to police departments until federal regulation exists.

But academics and experts have for years criticized tools like Microsoft’s Azure Face API that claim to identify emotions from videos and pictures. Their work has shown that even top-performing facial recognition systems disproportionately misidentify women and people with darker skin tones.
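To make concrete what “identifying emotions from pictures” meant for developers, the sketch below shows roughly how the emotion attribute could be requested through the Face API’s Python SDK before Microsoft’s change. It is an illustrative sketch only: the endpoint, key, and image URL are placeholders, and the exact SDK surface may vary by version.

```python
# Illustrative sketch only: a request to Azure's Face API asking for the
# "emotion" attribute that Microsoft is now retiring. The endpoint, key,
# and image URL below are placeholders, not real values.
from azure.cognitiveservices.vision.face import FaceClient
from msrest.authentication import CognitiveServicesCredentials

client = FaceClient(
    "https://<your-resource>.cognitiveservices.azure.com/",  # placeholder endpoint
    CognitiveServicesCredentials("<your-key>"),               # placeholder API key
)

# Detect faces in an image and request per-face emotion scores
# (anger, happiness, sadness, surprise, and so on).
faces = client.face.detect_with_url(
    url="https://example.com/photo.jpg",   # placeholder image URL
    return_face_attributes=["emotion"],
)

for face in faces:
    # Each detected face carries confidence scores for each emotion label.
    print(face.face_attributes.emotion.as_dict())
```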

New customers can no longer use Microsoft’s features to detect emotions and will have to apply for approval to use other services in Azure’s Face API. Returning customers have one year to gain approval if they want to continue using the software.