The intersection of the law and technology has long provided a fountain of issues for society’s most fearsome and fascinating debates. This is especially the case right now through the rapid emergence of powerful general technologies like artificial intelligence and 5G communications.
More powerful and ubiquitous computing leads to ever more urgent debate about how the tech should be applied, and what rules should be wrapped around it.
Artificial intelligence in relation to law enforcement is an especially vexed issue. With the simultaneous arrival of 5G mobile technology, the issue is generating heat.
Lyria Bennett Moses is a professor of law at the University of NSW and a founder/director of the Allens Hub for Technology, Law and Innovation. She has given more thought to these issues of law and tech than most.
In this episode of the Commercial Disco, I spoke to Professor Bennett Moses about ‘predictive policing’, whereby law enforcement agencies use advanced technology like AI to predict – probabilistically – where certain crime might take place in the future.
The arrival of AI, in particular, has added urgency to discussions about the law and technology.
But Professor Bennett Moses does not subscribe to the old meme that the law continually fails to keep up with technology. “I’ve always thought that was an unfair criticism,” she says, arguing it is more a part of tech marketing, as companies try to project a future-looking buzz.
“Law is all about how you regulate, run, manage a society. And it is always written in the present,” Prof Bennett Moses said.
“You can only think about how people should behave, what people should be allowed to do or not do in the context of what kinds of activities are possible [in the present].
You cannot write laws for future technology, no matter how good you think your crystal ball is.
Which brings us to predictive policing.
“Predictive policing is effectively police asking a different kind of question. Instead of saying ‘how do we solve this crime that has already happened’, it is reorienting the question to say ‘can we predict – probabilistically – where crime might take place in future’,” Prof Bennett Moses said.
“Is crime more likely to take place in this location in the coming week [for example]. And the answer is yes, you can – up to a point.”
Different kinds of predictive policing have already been applied, for many years. The tools range from literally using a spreadsheet to track crime at a basic level, all the way to sophisticated machine learning and applied data techniques.
Indeed, the notion of predictive policing is nothing new, even if it has only been used to allocate resources – like extra patrols – on given days of the week.
And this is where things get complicated with predictive policing, and where the professor has a lot to say in this podcast. The application of powerful AI adds significant juice to existing challenges.
These issues relate to data-driven inferencing, not just bias. The use of existing crime databases is problematic, because data is itself an imperfect representation of reality. Some neighbourhoods report more crime than others, for example, and domestic violence is a notoriously under-reported crime.
And besides, crime databases are a compendium of crimes that have been reported, not crimes that have actually taken place.
And using smart data techniques to send more patrol cars to a particular location can create a feedback loop: The more patrol cars, the more reported crime – and therefore the need for even more patrol cars.
So predictive policing is already problematic at a location level. But it gets more so when it’s applied to profiling individuals and making predictions about the likelihood that an individual will commit a crime.
In NSW, a program called the Suspect Targeting Management Plan (STMP) is being used to generate lists of people – young people in particular – considered at high risk of offending. It has been used for outreach programs.
This becomes problematic, if only because a lot of the time these interventions, although not punitive, are not positive either.
And once you start using advanced data tools, “part of the story becomes ‘black-boxed’”, and that leads to accountability problems both internally and externally, if unknown data points are being used to identify the characteristics of a person and draw up lists of “high-risk” people.
“If you have been arrested before and end up in a list of ‘the usual suspects’, that’s one thing,” Prof Bennett Moses said.
“But you can end up on these kinds of lists without ever having been arrested, and not even know you’re on it. That’s where it gets problematic.”
“Do you really want to start treating [these people] from a very young age as potential felons, with the door-knocking [interventions]?” she said. “I’m not sure that’s productive.”
It is a fascinating discussion about data and the application of general AI technologies to law enforcement. You can listen to the Commercial Disco podcast here.
Do you know more? Contact James Riley via Email.