Sovereign capability in AI is a national security issue

The AI revolution arrived later than I expected, but its impact has been more immediate than I had imagined.

I remember the moment I realised just how transformative AI was going to be. It was the mid-1990s, and I was about to complete an undergraduate computer science degree.

While deciding whether to pursue my law degree or my computer science degree, I saw a pitch for a research project identifying pairs of human chromosomes using computer vision – a type of AI that allows computers to interpret and understand information about the world through images.

That it was possible to automate this complex task made it clear to me then that this technology was tackling one of the core problems in science, and that, when it worked, it would have a huge impact on how we live.

I underestimated both how long it would take for the technology to mature, and the impact it would have when it did.

Of course, AI hasn’t come out of nowhere; it’s built upon much of the technology humans have developed previously: the internet, ICT, electricity and even the steam engine.

But AI’s flexibility, its generality and its power to do things that would traditionally take a lot of human labour make it truly revolutionary.

In the years since, my research team at the University of Adelaide built a reputation for its specialist computer vision capability. What began as a research centre with four members grew into one of the best groups in the world, with more than 130 members by the time I stepped down as director.

It was hard work initially convincing people that AI was going to be transformative. Grants and partnerships with a few international companies that did understand AI’s value were critical to our growth, and I was eventually able to expand that centre into an institute in 2018.

Today, the Australian Institute for Machine Learning (AIML) is home to around 180 researchers, students and engineers.

Our people publish new research at the forefront of machine learning science and work with local and multinational companies to build innovative AI solutions for all kinds of problems.

The institute has enjoyed strong financial and political support from the South Australian government, but if it weren’t for our foundational partnership with Lockheed Martin, we might never have got it off the ground.

Computer vision is one field of AI that is increasingly being woven into the full spectrum of defence and strategic intelligence capability, and Lockheed Martin understood the value of investing in fundamental research.

They invested to grow an AI capability. They wanted access to skilled staff, a talent pool of smart students, new ideas and the latest developments, and they wanted their people in our building so they could work with the best group in Australia.

That an American defence company enabled the creation of the Australian Institute for Machine Learning is an irony not lost on me. But no Australian company was sophisticated enough to engage with that kind of industry-academic research and development model.

Russia’s invasion of Ukraine has given us a glimpse of what future warfare will look like. With significant help from Western allies, Ukraine has created its own AI geospatial intelligence platform to fend off a much larger adversary.

This includes computer vision to analyse vast quantities of satellite, aerial and ground-based imagery to understand troop and vehicle movements, and natural language processing to autonomously mine huge volumes of unencrypted radio communications for actionable intelligence.

As Russia is now learning, no amount of firepower or boots on the ground can beat data-driven agility.

Just over two years ago, the Australian government announced the $360 billion AUKUS trilateral security pact, which might see the first nuclear-powered submarine arrive in Australian waters by the end of the next decade.

But we need to scale up our domestic AI and autonomy capability urgently. China is miles ahead of us in AI, and it is deploying that capability flexibly across its entire military system.

In April, the Australian government released its Defence Strategic Review; the 112-page public report mentions artificial intelligence only once, noting that AUKUS Pillar II will “develop and deliver advanced capabilities in areas such as artificial intelligence, hypersonics and maritime domain awareness”.

AI isn’t something we can buy off the shelf from our international allies and hope that it works when we plug it in. Technology that can be adopted from elsewhere is derivative, and provides no competitive advantage.

The world is spending big on AI, and our allies and adversaries alike are making strategic decisions to build core domestic AI capability by investing in their smartest people.

The US and China are obviously the global leaders, but the UK, France, Germany and South Korea have all made billion-dollar – or multibillion-dollar – investments.

Singapore, with a population not much greater than Sydney’s, has a national AI strategy backed by approximately AU$740 million for fundamental research, translational research and industry-research collaboration.

How do we address our national AI deficit? In many ways it’s simple: we need to start funding it properly.

But Australia also needs a strategy to engage with the leading minds in AI so it can build a leading AI defence capability. One way we can do that is by conducting AI defence research in an unclassified environment.

Defence can’t just recruit the best people in AI, because the best people are found in our leading universities, and they won’t go and work on the wrong side of a 10-foot classified fence under a publication embargo.

Scientists need the academic environment of open peer collaboration and competition to keep them at the global forefront of their fields. It’s critical to their careers, and their performance.

Defence technology leaders must understand that great technological innovation thrives in the open, and not in a classified lab disconnected from the wider world. A top-secret research environment is often a recipe for second-rate outcomes. Entropy increases in a closed system.

So how do we conduct leading AI research in a way that meets Australia’s defence needs?

It goes something like this: someone in defence takes one of their most challenging technology priorities from the secret, fully classified domain and creates an unclassified euphemism for that problem.

University researchers set to work solving that euphemism using the best AI technology in the world. And when the researchers have solved the problem, defence can then take the technology back over the 10-foot fence into the classified lab.

This idea isn’t radical and it’s not new. The Defense Advanced Research Projects Agency (DARPA) has driven high-risk, high-gain research in the US for almost 65 years and is partly responsible for some of the greatest watershed inventions of the past half century: the internet, personal computers, GPS, stealth technology and even Covid-19 vaccines.

DARPA understands the value of fundamental research and it “fully supports free scientific exchanges and dissemination of research results to the maximum extent possible.”

One of my staff recently asked me what concerned me the most about Australia falling behind in AI and what the next five to 10 years might bring to defence technology.

It’s an important question that’s nearly impossible to answer. To borrow one of NASA’s phrases, AI’s future impact is one of the great unknown unknowns. But I do know that sovereign capability in AI is absolutely critical for Australia’s future.

Professor Anton van den Hengel is an AI researcher at the University of Adelaide and the founding director of the Australian Institute for Machine Learning (AIML). Currently, he serves as the Director of AIML’s Centre for Augmented Reasoning and the Director of Amazon’s machine learning research team at Adelaide’s Lot Fourteen innovation district.
