Microsoft retires facial analysis capability under responsible AI

Brandon How
Reporter

Microsoft is immediately ending access to its facial analysis capabilities for new customers and is restricting access to its facial recognition technology, as the company introduces an upgraded responsible AI standard.

On Tuesday, Microsoft chief responsible AI officer Natasha Crampton unveiled Version Two of the multinational’s Responsible AI Standard. Notably, it includes the retirement of its identify and verify features from the Azure Face API.

General purpose facial analysis is being retired from the Azure Face API for existing customers on June 30 next year and has been made unavailable to new Azure customers since Tuesday. This includes “capabilities that purport to infer emotional states and identity attributes such as gender, age, smile, facial hair, hair, and makeup”.

Ms Crampton said the updated standard was devised over the course of a year by a multidisciplinary group of researchers, engineers, and policy experts.

“Experts inside and outside the company have highlighted the lack of scientific consensus on the definition of ‘emotions’, the challenges in how inferences generalize across use cases, regions, and demographics, and the heightened privacy concerns around this type of capability,” Ms Crampton said.

“We also decided that we need to carefully analyse all AI systems that purport to infer people’s emotional states, whether the systems use facial analysis or any other AI technology.”

Microsoft determined that generalising emotion inference across diverse groups and use cases is too difficult. Concerns around misuse, such as subjecting people to stereotyping, discrimination, or unfair denial of services, also informed the decision.

The features still generally available on Azure Face are facial detection and facial redaction. Detection finds the location and attributes of a face, including blur, exposure, glasses, head pose, landmarks, noise, occlusion, and the facial bounding box. Redaction blurs faces in video recordings for privacy purposes.

Facial analysis will still be available to users through a Limited Access arrangement. Customers will have until June 30, 2023 to apply for approval to continue using the tech.

Limited Access was also introduced for celebrity recognition in Computer Vision and Video Indexer, as well as face identification in Video Indexer. All features of the voice identifier Speaker Recognition, and some features of the synthetic voice generator Custom Neural Voice, are also subject to Limited Access.

Ms Crampton said that Microsoft needs to be proactive in producing beneficial and equitable outcomes through development and deployment of its AI systems.

“That means keeping people and their goals at the center of system design decisions and respecting enduring values like fairness, reliability and safety, privacy and security, inclusiveness, transparency, and accountability,” Ms Crampton said.

Similarly, general access to facial recognition was removed from Azure Face API, Computer Vision, and Video Indexer on Tuesday. New customers must apply for access to use facial recognition capabilities, while existing users will lose access on June 30, 2023 if their application has not been approved.

Only customers working directly with the Microsoft accounts team are eligible for Limited Access. The success of an application is based on the proposed use case, which may include facial identification to detect duplicate or blocked users or to personalise shared devices. Microsoft publishes a full list of approved Limited Access use cases.

Microsoft’s Responsible AI Standard is based on six overarching goals: accountability, transparency, fairness, reliability and safety, privacy and security, and inclusiveness. The second version builds upon the original, launched in 2019.

In 2019, Microsoft began its participation in the federal government’s Artificial Intelligence Ethics Principles pilot alongside five other organisations. The pilot tasked firms with implementing the government’s eight AI ethics principles. In Microsoft’s case, the principles were applied to the development and deployment of conversational AI, which includes chatbots.

The announcement on Tuesday comes as the Office of the Australian Information Commissioner considers taking action against Bunnings, Kmart and The Good Guys for using facial recognition technology on their customers.

IBM announced its exit from facial recognition development in mid-2020 amid criticism that the technology had embedded racial and gender bias.

Do you know more? Contact James Riley via Email.
