Australian AI policy is badly adrift


James Riley

Artificial intelligence, turbocharged by machine learning techniques, gets smarter every second of the day, and the Australian government is being prodded to stir its own grey matter on the implications of ubiquitous AI, especially weaponised systems.

Prominent figures such as Elon Musk and Stephen Hawking have been piling into the AI debate of late, a debate goosed along by the potential advent of so-called ‘killer robots’: gun platforms with some degree of autonomy over whom they terminate with extreme prejudice.

Dr Hawking told a tech conference in Lisbon earlier this month that unless we mitigate the risks of ever-advancing AI, it ‘could be the worst event in the history of civilisation’.

Alan Finkel: Australian policy must come to grips with AI implications

“It brings dangers, like powerful autonomous weapons, or new ways for the few to oppress the many,” said Dr Hawking.

Back in July, Mr Musk described AI as the “biggest risk we face as a civilisation”, warning that AI needed regulation before “people see robots go down the street killing people”.

A strongly credentialed group of Australian AI researchers sent an open letter to Prime Minister Malcolm Turnbull earlier this month urging the PM to make a stand against weaponising AI.

“Lethal autonomous weapons systems that remove meaningful human control from determining the legitimacy of targets and deploying lethal force sit on the wrong side of a clear moral line,” the letter said.

“To this end, we ask Australia to announce its support for the call to ban lethal autonomous weapons systems at the upcoming United Nations Conference on the Convention on Certain Conventional Weapons.

“Australia should also commit to working with other states to conclude a new international agreement that achieves this objective,” the AI researchers’ letter said.

Australian Chief Scientist Alan Finkel has also taken a strong stand on the need for legislators to get AI on the agenda now and help make AI ‘safe and beneficial for humans’.

In a speech last week to the Creative Innovation 2017 conference in Melbourne, Dr Finkel said that as ‘parents’ to the current wave of AI applications, we have a responsibility to teach the systems to ‘play nice.’

Dr Finkel called for ethical development standards for AI research and reiterated a previous call for a global accord on weaponised AI.

“On the right-hand extreme, the equivalent of knives and guns: things that we agree as a global community are simply so dangerous that they need to be managed by international treaties,” Dr Finkel said.

“In that category we might put autonomous weaponised drones that can select and destroy without any human decision-maker in the loop beyond establishing the rules of engagement,” he said.

Labor shadow minister for the digital economy Ed Husic put out a presser supporting Dr Finkel’s call for the development of a regulatory framework around AI.

“His voice builds on calls made by Labor back in September for the Turnbull government to champion this issue on the world stage,” said Mr Husic in a statement.

“While AI has begun to be applied in a wide variety of beneficial ways, very few people are thinking about boundary-setting for the tech that thinks for itself,” he said.

Dave Heiner, Microsoft’s vice-president and deputy general counsel for regulatory affairs, told the Microsoft Summit event in Sydney last week that the tech industry and society needed to come together and work through the policy issues that AI raises.

People needed “to be comfortable with the technology as it deploys,” Mr Heiner said.

Microsoft was working with governments around the world on three levels, Mr Heiner said.

“One is that governments are a great source of data. They have data about citizens and about the environment; making all that data publicly accessible means bright people can go off and develop AI-based solutions on the basis of that data,” he said.

“A second thing is to really invest in basic R&D relating to AI. There’s a lot of good work being done at Microsoft and in the product teams, but again there’s so much more that needs to be done at research institutions [in relation to AI].

“The third thing is working constructively with industry and civil society groups to think through the range of policy issues that AI raises.”

Do you know more? Contact James Riley via email.
