A new research centre focused on AI-powered law enforcement initiatives has been set up by the Australian Federal Police and Monash University.
The AI for Law Enforcement and Community Safety Lab (AiLECS) was launched on Tuesday as a formal research centre within the University’s Faculty of Information Technology.
Last year, the AFP committed $4.4 million over four years to the lab through the Commonwealth Confiscated Assets Account. Monash University is also committing funding but would not publicly disclose a figure.
When the lab, co-directed by Associate Professor Campbell Wilson and AFP Leading Senior Constable Dr Janis Dalins, was first founded in 2019 as part of Monash’s Data Futures initiative, it received funding commitments totalling $2.5 million.
The additional funding will expand the research currently being undertaken, increase AFP participation, and bring on more multidisciplinary researchers, including Professor Jon Rouse, the former head of the Queensland Police Service’s online child exploitation unit, Taskforce Argos.
Ongoing projects at the centre include research into technical and socio-technical responses to deepfakes, the development of deep learning models for detecting child sexual abuse material, and ‘Data Airlock’, a tool that enables researchers to utilise sensitive data sets such as child sexual abuse material without being exposed to the material and without risk of breaching legal restrictions.
Another ongoing AiLECS Lab project has been dubbed Project Metior Telum. Researchers at the lab, with an industry partner, are using photogrammetry and 3D scanning to create a digital library of firearms to improve detection and work against firearms trafficking.
Dr Dalins said Project Metior Telum is an “important illustration of where collaboration can take us. We can trace every element of our library, from ownership to specific model”.
“Through [AiLECS] we are able to combine global first research initiatives in AI and machine learning with law enforcement expertise and principles. We aim to be a voice for ethics and accountability in AI,” Dr Dalins said.
Associate Professor Wilson said that emerging technologies that have made information and content creation more accessible have as much potential to be used for negative social outcomes as for positive ones.
“We are currently witnessing the age of AI, we have already seen how newer technologies are being misused, leading to increased cyber attacks, identity theft, exploitation and misinformation,” Associate Professor Wilson said.
“Our research at AiLECS harnesses machine learning, natural language processing, network analysis and other techniques to support law enforcement in countering child abuse material, detecting and classifying illegal firearms, recognising misinformation and analysing large online criminal networks.
“While building these AI support systems we are focused on ensuring the datasets used to train our algorithms are ethically sourced because it is important to ensure AI used for the greater good is also produced responsibly.”
Do you know more? Contact James Riley via email.