Whether it’s driving a car, making a medical diagnosis by referencing a database of historical cases, finding potential new drugs, or playing chess, artificial intelligence is increasingly performing tasks as well as – and in some cases better than – humans.
Humans are subject to the rule of law. Kill or injure someone while driving a car and you might find yourself charged with negligence, or worse.
But what happens when an autonomous vehicle kills someone? A robot is not subject to the law. So is the car manufacturer liable, or the developer of the software? And how do you pinpoint the cause of such an accident?
Ryan Abbott, Professor of Law and Health Sciences at the University of Surrey (UK), argues that the law should not discriminate between AI and human behaviour, and proposes a new legal principle of equal treatment that he claims will ultimately improve human wellbeing.
Meet the Reasonable Robot
Professor Abbott makes his case in his book The Reasonable Robot: Artificial Intelligence and the Law, and he discussed his proposal with Professor Jeannie Marie Paterson from the University of Melbourne’s Centre for AI and Digital Ethics (CAIDE) in a webinar co-hosted by CAIDE and the Australian Society for Computers and Law (AUSCL).
It followed an AUSCL webinar in March in which Abbott discussed a more specific issue at the intersection of AI and the law: the inability to cite either an AI or its developer in a patent filing as the creator of an invention produced by the AI.
In the most recent webinar, Professor Abbott made the point that the law’s differential treatment of AI and humans has implications beyond the deterrence and punishment of wrongdoing. He argues it has negative and socially harmful consequences, and there should be less discrimination between the two.
You can’t tax a robot worker
“If my employer, the University of Surrey, could get a robot to automate my job – which they will someday – they would save money doing that, even if we were both equally good, because they have to make National Insurance contributions [a UK earnings tax that funds state pensions and other benefits] for my work, among a host of other reasons, but the machine is taxed more favourably.
“So, we have different tax rules applied to human behaviour and AI behaviour, and this results in some perverse outcomes.”
Professor Abbott argues that, as a general principle, laws applying to humans and AI should be drafted to best achieve their underlying goals, rather than making any distinction between humans and robots.
“In distributional systems, the law could aim to promote fair distribution of resources or promote commerce. In intellectual property – whether patent law or copyright law – it might be promoting innovation. In tort law, it is primarily incentivising safety. In a broader sense, it really is helping the law better do what that area of law is intended to do.”
In the case of traffic law, one of the main aims is to minimise death and injury on the roads. To this end, Professor Abbott argues that, when autonomous cars become a practical proposition, the law should hold humans and AI to the same standard.
Treat AI and human drivers equally
Under current law, if an autonomous vehicle caused a fatality, legal redress would be sought against the manufacturer through product liability law. Professor Abbott argues such a claim could be very difficult and costly to prove; instead, the AI should be held to the same standard as a human driver.
“Did [the AI] run a red light without a good reason? If so, and a reasonable person wouldn’t have done that, the AI is liable – [what matters is the behaviour,] not what was going on in an AI that caused it to run a red light.”
Under this scenario, human drivers – who today, Abbott said, cause 94 per cent of accidents and 1.3 million fatalities every year – would, in the eyes of the law, be judged by the standards of AI, and likely found wanting.
“If it’s practical to automate the actor’s behaviour, that should set the standard to apply to all actors. If you cause an accident, we would say, ‘would a reasonable self-driving car have caused this accident?’ If so, you won’t be liable. If not, you would be.”
In practice, he concedes, this would mean that a human driver would almost always be liable, “because a self-driving car could stop any accident through superhuman sensors and ultrafast decision making.”
This example illustrates the general principle of The Reasonable Robot. “The argument of the book is essentially that, as machines step into the shoes of people and do these same sorts of activities, we want them held to similar sorts of standards – regardless of how they are programmed, and regardless of why machines execute certain programming – because the focus is on behaviour, as opposed to design issues,” Professor Abbott said.