Australia is “behind the curve” on artificial intelligence and must tackle the surrounding ethical issues in order to fully capitalise on its “life-changing” impacts, according to CSIRO chief executive Larry Marshall.
Speaking to InnovationAus.com on the day CSIRO’s data arm Data61 released a draft ethical framework for artificial intelligence, Dr Marshall said Australia needs to focus on AI projects that deliver three key benefits: economic, environmental and societal.
“AI has the potential to literally change life itself, so you really do want to understand the ethics before you start. Other countries are throwing enormous amounts of money at AI, so where should we play given we’re not going to throw that kind of financial firepower at it?” Dr Marshall told InnovationAus.com.
“If you look at what the rest of the world is spending on AI, we’re definitely behind the curve.”
The draft ethical guidelines are meant to be a “call to arms” for Australia, he said, with consultation now open on the framework, which outlines a set of principles and practical measures that organisations and individuals in Australia can use to design, develop and use AI in an ethical way.
“We’ve put this out as a first go at a framework with the main purpose of getting the conversation going nationally. All parts of industry, academia and politics need to think about the problem and talk about it,” Dr Marshall said.
The CSIRO framework outlines the ethical opportunities and risks associated with AI, with much of this depending on how the technology is developed and implemented. It aims to strike the right balance between using AI to its full potential and keeping the interests of Australians as the priority.
“In a very real sense, AI is like a child because it’s in infancy and we’re still trying to figure out what it can and can’t do and it feeds on data the way a child’s mind feeds on knowledge. It’s up to us to train and nurture it,” Dr Marshall said.
“If we’re good parents we’ll teach it all the good parts of ourselves so it has a proper understanding of things like diversity and taking out the biases. We want it to get all of our good traits and avoid all of our bad ones as humans.”
The CSIRO has listed eight core principles surrounding an ethical AI approach: generating net-benefits, doing no harm, regulatory and legal compliance, privacy protection, fairness, transparency and explainability, contestability and accountability.
It also provides a draft ‘toolkit’ for ethical AI, including impact assessments, internal and external review, risk assessments, best practice guidelines, industry standards, collaboration, mechanisms for monitoring and improvements and consultations.
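The toolkit idea — checking a proposed system against the eight principles before deployment — can be sketched in a few lines of Python. This is purely illustrative: the field names below are my own shorthand, not terminology or tooling from the draft framework.

```python
# An illustrative sketch of the draft framework's eight core principles
# expressed as a pre-deployment review checklist. The wording is my own
# paraphrase of the principles listed in the discussion paper.
PRINCIPLES = [
    "generates net benefits",
    "does no harm",
    "complies with regulation and law",
    "protects privacy",
    "is fair",
    "is transparent and explainable",
    "is contestable",
    "has accountable owners",
]

def review(answers):
    """Return the principles a proposed AI system has not yet satisfied."""
    return [p for p in PRINCIPLES if not answers.get(p, False)]

# Example: a system that has addressed everything except explainability
# and contestability would be flagged on exactly those two principles.
answers = {p: True for p in PRINCIPLES}
answers["is transparent and explainable"] = False
answers["is contestable"] = False
gaps = review(answers)
print(gaps)
```

In practice the paper's toolkit pairs this kind of checklist with heavier mechanisms — impact and risk assessments, external review, and ongoing monitoring — rather than a one-off yes/no gate.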
CSIRO is focusing on ‘narrow AI’, which performs a specific function, rather than general AI comparable to human intelligence.
“The development and adoption of advanced forms of narrow AI will not wait for government or society to catch up – these technologies are already here and developing quickly. Blocking all of these technologies is not an option, any more than cutting off access to the internet would be, but there may be scope to ban particularly harmful technologies if they emerge,” the consultation paper said.
While AI comes with a range of privacy, transparency, data security, accountability and equity issues, if it is developed in an ethical way, it can “secure a competitive advantage as well as safeguard the rights of Australians”.
Data governance is “crucial” to ethical AI, the paper said, and organisations working with the technology need to ensure they have strong data governance foundations or risk their applications being fed “inappropriate data and breaching privacy laws”.
The use of AI to guide decision-making in government, banking and finance also comes with a huge number of ethical concerns, it said.
“The number of decisions driven by AI will likely grow dramatically with the development and uptake of new technology. When used appropriately, automated decisions can protect privacy, reduce bias, improve replicability and expedite bureaucratic processes. Australia’s challenge lies in developing a framework and accompanying resources to aid responsible development and use of automated decision technologies,” the paper said.
This also comes with the issue of who is ultimately responsible for these decisions made by AI, and how they can be contested.
Dr Marshall said AI “can’t be a black box”.
“One of the big dangers is that you unleash AI and it teaches itself and you can very quickly get to a point when you can’t be certain of what it’s doing and how it’s making decisions. An important principle is that the decision be contestable – how do you know if it’s making the right decision?” he said.
AI is also “susceptible” to the biases of its developers and to incorrect or incomplete data that it is fed.
“Are we teaching it our own inherent or unconscious bias? AI learns from us, and it might look at society and conclude that somehow not having very many women in leadership positions is normal, but we would like to teach it that’s not normal. We’re not perfect but we’d like to aspire to more diversity and more equality,” Dr Marshall said.
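The mechanism Dr Marshall describes — a model absorbing a skew in its training data and treating it as the norm — can be shown with a toy sketch. The numbers here are invented for illustration and are not from the paper: a naive model that learns only base rates will faithfully reproduce whatever imbalance it is fed.

```python
from collections import Counter

# Hypothetical historical data: the labels encode an existing 80/20
# imbalance, not any ground truth about suitability for leadership.
training_labels = ["leader_man"] * 80 + ["leader_woman"] * 20

# A naive "model" that learns nothing but the base rates of its
# training data will internalise the historical skew as normal.
counts = Counter(training_labels)
total = sum(counts.values())
learned_norm = {label: n / total for label, n in counts.items()}

print(learned_norm)  # the 80/20 skew is learned as 'normal'
```

Real systems are far more complex, but the failure mode is the same: without deliberate correction, the model's notion of "normal" is simply the bias already present in its inputs.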
Australia should be focusing on developing AI in industries and projects that address societal, economic and environmental concerns, the draft paper says, and the framework will help politicians and industry to make these decisions.
Using AI to address drought in Australia is an example of an area that “ticks all the boxes”, Dr Marshall said.
“There’s a clear need for that for the country, with strong societal and environmental benefits and an equally strong financial benefit. Other applications where you can’t necessarily see a societal or environmental benefit and maybe it’s just the financial, then it’s okay to go forward on that but not backwards on the others. We hope the framework will help us and others make the right decisions on where we focus our efforts,” he said.
This would include using AI to develop “digital twins” of Australia’s water system or energy market, Dr Marshall said.
The framework also needs to address community concerns that the growth of AI will lead to disruption and job losses, with Dr Marshall pointing to CSIRO’s agriculture department as evidence that this is not necessarily the case.
“Over the last three years we’ve shifted about 40 per cent of CSIRO Ag to become digital and AI-enabled, and the number of people employed in Ag has not decreased, it’s gone up slightly. On this fear of digital eating jobs, what we’ve found is that while there was initial resistance because of that fear, we pretty quickly figured out that the key to freeing people up to do more and higher-value things was to embrace these types of technologies,” he said.
“If you use AI just to save money and cut costs, that’s a short-term perspective. The real value is in AI partnering with humans to actually create more value. A cost-cutting strategy will only last for a bit, whereas a growth strategy can grow you forever.”
Consultations on the 78-page discussion paper will close at the end of May and the final framework will be a “work in progress for a while”, Dr Marshall said, with no set date yet on when it will be released.