Algorithms have a racial bias


James Riley
Editorial Director

Kriti Sharma is not a man and she is not white. She’s experienced artificial intelligence bias as a result and is campaigning to stamp it out.

“Facial recognition systems often don’t work very well for non-white skin tones. The solution is training and feeding the machine – giving AI diverse experience. If you are building facial recognition, give it data – not just data about a bunch of white dudes building software, but what the real world looks like,” she said.
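Her point about representative data can be made concrete by auditing a dataset’s demographic make-up before any training happens. Below is a minimal sketch in Python, assuming a hypothetical labelled manifest (the file names, fields and label values are invented for illustration, not drawn from any system Ms Sharma describes):

# Audit the demographic make-up of a face-recognition training set.
# The manifest format and label values here are hypothetical.
from collections import Counter

training_manifest = [
    {"image": "img_0001.jpg", "skin_tone": "lighter", "gender": "male"},
    {"image": "img_0002.jpg", "skin_tone": "darker",  "gender": "female"},
    {"image": "img_0003.jpg", "skin_tone": "lighter", "gender": "male"},
    # ... thousands more entries in a real dataset
]

counts = Counter((r["skin_tone"], r["gender"]) for r in training_manifest)
total = sum(counts.values())

for group, n in counts.most_common():
    # A group far below its real-world share is a warning sign that the
    # model will see too few examples of it to learn reliable features.
    print(f"{group}: {n} images ({n / total:.1%} of training data)")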

Ms Sharma’s day job is vice president of AI at software company Sage. She advised the UK House of Lords ahead of the 2017 release of its AI report and recommendations, and this weekend will speak at Sydney’s Vivid festival on the impact AI is having on society.

Kriti Sharma: Facial recognition doesn’t work well for non-white skin tones

As artificial intelligence becomes embedded in everyday life, Ms Sharma says it is critical that ethics are built into solutions from the ground up, and that bias is stamped out by ensuring machine learning is fuelled by rich and inclusive data sets.

The risk of bias in facial recognition systems is significant, according to Australia’s Human Rights Law Centre.

In its submission to the inquiry into the Government’s proposed Identity Matching Bill, which would see the Department of Home Affairs make far more extensive use of facial recognition, the HRLC cited a 2018 study which found a misidentification rate of 34.7 per cent for “darker-skinned” women, compared to 0.8 per cent for “lighter-skinned” men.
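Numbers like these come from breaking a benchmark’s error rate down by demographic subgroup rather than reporting a single overall accuracy. Here is a hedged sketch of that kind of audit (the records and group labels are invented; the 2018 study’s benchmark and methodology are more involved):

# Per-group error-rate audit: the kind of breakdown behind the
# figures quoted above. All records here are invented examples.

def misidentification_rate(records):
    """Fraction of misclassified examples within each demographic group."""
    errors, totals = {}, {}
    for r in records:
        g = r["group"]
        totals[g] = totals.get(g, 0) + 1
        if r["predicted"] != r["actual"]:
            errors[g] = errors.get(g, 0) + 1
    return {g: errors.get(g, 0) / totals[g] for g in totals}

results = [
    {"group": "darker-skinned female", "predicted": "male",   "actual": "female"},
    {"group": "darker-skinned female", "predicted": "female", "actual": "female"},
    {"group": "lighter-skinned male",  "predicted": "male",   "actual": "male"},
]

for group, rate in misidentification_rate(results).items():
    print(f"{group}: {rate:.1%} misidentified")

An overall accuracy figure would hide the gap entirely, which is why per-group reporting matters.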

Governments and large enterprises need to be more engaged in AI policy, according to Ms Sharma.

“Policy makers, lawmakers, governments, large institutions, CEOs have a big responsibility to make sure the AI we are building today is designed in the right way.

“Often the ethics of AI are an afterthought. Once you have built the AI system, to fix it later on is harder because the AI learns on its own – you need to have the right design and principles in the algorithms.”

Australia’s chief scientist, Alan Finkel, recently proposed that ethical standards be imposed on companies developing AI solutions, and the Ethics Centre is also developing a framework to help steer organisations’ AI innovations.

According to Ms Sharma: “Policy makers have a very important role because this is not just another incremental technology, it’s an exponential leap and will have huge impacts on the economy and society and we need to think about it at a macro level.

“For example, what we need to do at a curriculum level, what we need to do to re-skill workers whose roles might change, and what frameworks we need to embed so that designers are designing AI in a responsible way.”

Speaking recently at CeBIT, Kate Carruthers, chief data and analytics officer at the University of NSW, said organisations must take an ethical approach to any technology deployment and carefully assess the ethics of their data, algorithms and practices.

She said that without proper controls, bias begins creeping in from the moment data is first collected.

Ms Carruthers noted the growing array of organisations now emerging to impose controls on the way AI is developed and applied, including Australia’s 3A Institute, led by Dr Genevieve Bell; the FAT/ML organisation, which calls for fairness, accountability and transparency in machine learning; and the Algorithmic Justice League, which fights bias in algorithms.

According to Ms Sharma there is already evidence of algorithmic bias “impacting the most vulnerable communities in a disproportionate way.”

In the US, ProPublica’s investigation of racial bias in criminal sentencing algorithms offered one example; closer to home, Centrelink’s “robodebt” debacle demonstrated how people’s lives can be impacted by poorly designed systems.

She also worries about the gendered signals sent by AI platforms such as Amazon Alexa or IBM’s Watson.

“We see a lot of stereotypes in AI systems – for example, Alexa and voice assistants.

“They all have feminine personalities and are doing mundane tasks like switching your lights on and off, booking your appointments, creating shopping lists.

“You tend to get male AI too, like IBM Watson and Salesforce’s Einstein, but they tend to make high-powered business decisions. That stereotype is reinforced by how these systems are designed.”

Ms Sharma said governments needed to pay heed to the societal impact of AI and automation, including the impact on workers and changing education needs.

“This is a big enough issue for government and policy makers to take it very seriously. We do absolutely need to experiment with things like universal basic income. We need to think about how we generate enough tax money to do that.

“I’m not suggesting taxing robots – a robot is just a process. Should we tax smartphones if they make you faster?

“But we need to think about value creation and the benefits the technology has to offer,” she said, adding that investment should be focused on areas of high impact such as healthcare and education.

Do you know more? Contact James Riley via email.
