AI ethics guide is a PR exercise
Paul Shetler: New AI ethics guidelines are poorly defined, and full of buzzwords
The federal government’s unveiling of artificial intelligence ethical guidelines for Australian business is a PR exercise that won’t change the actions of companies, according to former Digital Transformation Office boss Paul Shetler.
Industry Minister Karen Andrews launched the guidelines on Thursday morning and announced that NAB, Commonwealth Bank, Telstra and Microsoft would trial the principles to test whether they “deliver practical benefits and translate into real world solutions”.
The guidelines boil down to eight ethics principles:
- Human, social and environmental well-being
- Human-centred values
- Fairness
- Privacy protection and security
- Reliability and safety
- Transparency and explainability
- Contestability
- Accountability
An accompanying document provides more detail on the principles, explaining they aim to “achieve better outcomes, reduce the risk of negative impact, practice the highest standards of ethical business and good governance”.
They outline how AI should be used to benefit society and individuals, and state that the objectives behind its development should be identified and justified: “machines should serve humans, and not the other way around.”
“AI systems should enable an equitable and democratic society by respecting, protecting and promoting human rights, enabling diversity, respecting human freedom and the autonomy of individuals, and protecting the environment,” the guidelines state.
The principles follow the release of a discussion paper in April and are the result of over 130 submissions and a series of stakeholder roundtables.
The government has made it clear that the “aspirational” guidelines are entirely voluntary.
“We need to make sure we’re working with the business community as AI becomes more prevalent and these principles encourage organisations to strive for the best outcomes for Australia and to practice the highest standards of ethical business,” Ms Andrews said.
“This is essential, as we build Australians’ trust that AI systems are safe, secure, reliable and will have a positive effect on their lives. Agreeing on these principles with business, academia and the community is a big step forward in setting our shared expectations of each other in Australia’s AI future.”
But according to Mr Shetler, now a partner at AccelerateHQ, the principles are full of vague “buzzwords” that won’t have any impact on how a company develops AI technologies.
“A lot of the words they’re using are completely undefined and need to be interrogated. It seems to be full of jargon and buzzwords, and I hate that kind of stuff because it’s not clear – and if you’re dealing with bureaucrats that’s basically an open door for them to drive through,” Mr Shetler told InnovationAus.com.
“It seems like a waste and it’s hard to take seriously. It’s a PR move and I don’t think it’ll have an impact on businesses. No-one is going to say that what they’re doing is bad for society or for humans. They need to be specific about what they want people to do.”
The principles are filled with “glittering generalities” that “sound very good but have no meaning around them”, he said.
“Clarity is the essence and it doesn’t seem very clear at all. That’s the biggest problem with it, it opens the door to a lot of politicisation because of the lack of clarity,” Mr Shetler said.
“These are words used by upper middle-class white people to feel good about the things they do when they’re wrecking other people’s jobs.”
The Coalition allocated $30 million in last year’s federal budget for the development of AI ethical frameworks and an AI roadmap.