Gov’t ponders AI’s governing ethics


James Riley
Editorial Director

The growing influence of AI-powered systems on our daily lives has governments keen to be on top of the AI wave rather than paddling furiously to catch up.

In the US, former Google CEO and current Alphabet board member Eric Schmidt appeared before the US House Armed Services Committee last week, suggesting that key AI industry players would need to come up with governing principles for the use of the technology.

His testimony followed revelations in March that Google had been working with the US Department of Defense on Project Maven, an initiative to analyse drone imagery. After learning about Maven, more than 3,000 Google employees signed a letter of protest about the search giant’s involvement.

Genevieve Bell: The challenge is to agree on an AI definition

In his written testimony to the committee, Mr Schmidt urged the US government to come up with an ethical framework for AI projects.

“The world’s most prominent AI companies focus on gathering the data on which to train AI and the human capital to support and execute AI operations,” wrote Mr Schmidt.

“If DoD is to become ‘AI‑ready,’ it must continue down the pathway that Project Maven paved and create a foundation for similar projects to flourish… It is imperative the Department focus energy and attention on taking action now to ensure these technologies are developed by the US military in an appropriate, ethical, and responsible framework.”

As creepy data outfits such as Cambridge Analytica hack our voting intentions, and decisions in everything from transport to health and finance are increasingly made autonomously by machines, the UK House of Lords Artificial Intelligence Committee has published a report called “AI in the UK: Ready, Willing and Able?”

The UK sees itself as a centre for AI development and wants to maintain that edge as it drops out of the embrace of the EU, without frightening the horses too much among a citizenry facing ever more AI technology in daily life.

The Lords’ report proposes five main principles for an AI code.

These are:

  • Artificial intelligence should be developed for the common good and benefit of humanity
  • Artificial intelligence should operate on principles of intelligibility and fairness
  • Artificial intelligence should not be used to diminish the data rights or privacy of individuals, families or communities
  • All citizens have the right to be educated to enable them to flourish mentally, emotionally and economically alongside artificial intelligence
  • The autonomous power to hurt, destroy or deceive human beings should never be vested in artificial intelligence

The Lords committee report also acknowledges the jobs disruption to come as AI systems take over from humans, and advocates a growth fund to boost business involvement in the field.

As well as the Lords committee, the UK has an All-Party Parliamentary Group on Artificial Intelligence, and in March Australia scored its first AI-focused parliamentary vehicle in the form of the Victorian Government’s All-Party Parliamentary Group on Artificial Intelligence.

The group was launched, with dancing robots in the foreground, by its co-convenors, Innovation Minister Philip Dalidakis and shadow minister David Southwick.

While the Victorian government effort is the first of its kind in Australia, we have had a high-powered think tank dedicated to furthering AI policy and understanding since last year.

The Autonomy, Agency and Assurance Innovation Institute (3A Institute) will bring together the best researchers from around the world and across a range of disciplines to build a new applied science around the management of artificial intelligence, data and technology, and their impact on humanity.

The Australian National University (ANU) launched the 3A Institute last September in collaboration with CSIRO’s Data61.

It is charged with creating a curriculum for training certified AI practitioners by 2022 as well as researching and informing policy and understanding around AI technologies.

But first, we need a good working definition of what AI is, says the 3A Institute Director, Professor Genevieve Bell, who returned to Australia to take up the post after a career in Silicon Valley with Intel Corporation.

“One of the challenges with talking about AI is that it has become a topic where everyone nods sagely, as if they know what everyone else is talking about,” Prof Bell said.

“I suspect that there’s a different definition in everyone’s head,” she says, adding that one of the things the UK Lords report does well is to spend several pages wrestling with a definition of AI.

“One of the first challenges we have is that everyone is carrying around a different version of what AI is – practitioners have a different understanding to regulators and citizens have a different understanding again.

“One of the first pieces of work we have to do is establish, when we say AI, what are we actually talking about?

“Step one, define terms. I know that can be exquisitely tedious but I think it’s pretty important,” says Professor Bell, adding that AI is made up of a “constellation of technologies” and the data that fuels it.

“Plus we need to think about ethics,” she says, with AI set to become a pervasive “steam engine.”

“We are moving into a world where (the AI) steam engine will power everything from trains to machinery.”

As working AI systems become liberated from mere rule-based decision-making and move into independent thought, we will need a raft of ethical and regulatory links into these machines to ensure everything from physical safety to legal compliance.

“The 3A Institute is about building a new applied science which manages the future of cyber-physical machines,” says Professor Bell.

Do you know more? Contact James Riley via email.

