Toby Walsh on the impact of AI
Toby Walsh: The UNSW AI researcher says government needs to get involved
Emerging technologies have the potential to greatly impact basic human rights and governments must step up to regulate their use, Australian artificial intelligence expert Toby Walsh said.
It comes as the Australian Human Rights Commission has launched a major three-year project that aims to build a blueprint for how Australia can ensure that new technologies such as artificial intelligence, big data and machine learning uphold human rights and don’t damage them.
An expert reference group has been formed to guide the project, with members including Professor Walsh, NACLC policy director Amanda Alford, NSW Court of Appeal president Justice Beazley and chief scientist Alan Finkel.
Professor Walsh is a key participant at InnovationAus.com's Civic Nation 2018 forum in Sydney on September 27, discussing the future of AI.
He said it’s important to investigate new technologies through a human rights lens.
“Technology is throwing up interesting ethical challenges and potentially affecting things as basic as our fundamental human rights. I hope that out of it will come further human rights, and we will ensure that technology does make our lives better,” Prof Walsh told InnovationAus.com.
“We’re starting to see that technology has significant human rights implications in terms of algorithms not being fair, privacy, surveillance and all the other concerns people are starting to have. We’re having the conversation just in time, or possibly a little late.”
Human Rights Commissioner Edward Santow said these new technologies have the potential to be either greatly beneficial or greatly damaging to the general public.
“These developments promise enormous economic and social benefits. But the scope and pace of change also pose profound challenges. Technology should exist to serve humanity. Whether it does will depend on how it is deployed, by whom and to what end,” Mr Santow said.
“As new technology reshapes our world, we must seize the opportunities this presents to advance human rights by making Australia fairer and more inclusive,” he said. “However, we must also be alive to, and guard against, the threat that new technology could worsen inequality and disadvantage.
“The project will explore the rapid rise of new technology and what it means for our human rights.”
Professor Walsh said he has a “glass half full” approach to the potential of these new technologies to improve people’s lives, if they are implemented and regulated properly.
“They can improve the flow of information, they can give a voice to people who have not previously had one,” he said.
“We’ve seen fine examples around the world of citizens who have benefitted from the use of technology – there’s a lot we can get from technology as well as dealing with the challenges it raises,” he said.
The Human Rights Commission’s report is likely to recommend a range of new regulations for the federal government to investigate, including efforts to reduce discrimination in the use of artificial intelligence in decision-making, and an emphasis on ethics and privacy.
“It is clear from the discussions that government has a role to play in greater regulation. We can’t just let the tech companies play. They’ve been given a long piece of rope and are discovering that not all disruption is good, and some of it will make lives worse,” Professor Walsh said.
The three-year investigation will identify the practical issues involved with how these technologies interplay with human rights and how best to respond to these issues, and will then develop a “practical and innovative roadmap for reform”.
“Human rights must shape the future these incredible innovations have made possible. We must seize the opportunities technology presents but also guard against threats to our rights and the potential for entrenched inequality and disadvantage,” Mr Santow said.
The project will look at the potential for new technologies to further entrench social injustices and how things like artificial intelligence can discriminate against people in society.
The commission has this week released an issues paper for the project, and is now accepting public submissions on the matters involved.
A discussion paper will be released early next year, with further consultations then taking place.
A final report with a range of recommendations will be delivered by the end of next year, with implementation beginning in 2020.
“Like any tool, technology can be used for good or ill. However, modern technology carries unprecedented potential on an individual and global scale,” the issues paper said.
“New technologies are already radically disrupting our social, governmental and economic systems. Often the same, or similar, technologies can be used to help and to harm,” it said.
The Human Rights Commission said the project will specifically look at the impact of new technologies on the rights to privacy, security and safety, the right to life, and the right to non-discrimination.
It would also look into the use of artificial intelligence in making important decisions without human oversight.
Professor Walsh has been a global voice on this issue, recently leading a pledge signed by thousands of AI researchers against killer robots.
With other signatories including SpaceX founder Elon Musk and the co-founder of Google DeepMind, the group declared that they would refuse to participate in the development or manufacture of robots that could identify and attack people without any human oversight.
Chief Scientist Alan Finkel spoke at a conference in Sydney this week to launch the human rights project, saying that Australia has an important role to play in ensuring new technologies are rolled out ethically.
“The way we integrate AI into our societies will be determined by the choices we make at home. Governments decide how companies are allowed to use data. Governments decide how to invest public funds in AI development,” Dr Finkel said.
“Governments decide how they want to harness AI, for policing and healthcare and education and social security – systems that touch us all. And that means nations like Australia have choices,” he said.
“We are capable technology innovators, but we have always imported more technology than we develop. However, that doesn’t mean that we have to accept the future we’re handed by companies in China, or Europe or the United States.
“We can define our own future by being leaders in the field of ethics. And that is my aspiration for Australia: to be human custodians.”
It comes as the regulation of artificial intelligence has become a “live political issue” with many leading experts calling on the Australian government to get on the front foot.
The federal government allocated $30 million in this year’s budget for the development of an AI Ethics Roadmap, while a number of unions have been pushing for further regulation of driverless vehicles.
The Opposition and Greens have also backed a call from Microsoft for governments to regulate the use of facial recognition technology in order to address the “broader social ramifications and potential for abuse”.
The European Union’s General Data Protection Regulation is an example of how effective regulation in one area of the world can impact other countries, Professor Walsh said.
“Australia now has better data privacy because of the European legislation. It’s interesting how things happening around the world can impact in a positive way. People are now much more aware of data privacy and we have much better data privacy now,” he said.