Scott Farquhar is wrong: Sovereignty demands home-grown AI


Simon Kriss
Contributor

In his National Press Club address last week, Atlassian co-founder and Tech Council of Australia Chair Scott Farquhar outlined an ambitious future for artificial intelligence in Australia, grounded in sovereign capability and economic opportunity.

He mapped out the AI ecosystem in layers: infrastructure, chips, models, and applications. He made a compelling case for Australia’s competitive strengths in green-powered data centres and rightly noted that chip manufacturing may be out of reach.

But when it came to the critical layer of foundational models (the engines that drive today’s most powerful AI systems), his argument gave me pause.

Farquhar suggested we don’t need to build our own. We can, he said, simply make use of the models developed in the US, Europe or China – all open and available.

Yes, we can. Until we can’t.

Scott Farquhar at the National Press Club last week. Image: NPC/Fernanda Pedroso

AI sovereignty isn’t about downloading someone else’s model and hosting it safely in an Australian data centre in case a cable is cut or a tariff war erupts. It’s about knowing what we’re working with and being able to shape it ourselves.

Imagine building a house, but instead of laying your own foundation, you ask your neighbour to extend theirs. It works for a while, but cracks start to appear. When you try to inspect or repair the damage, you realise you don’t know how that foundation was built or what went into it.

You’re locked out of your own structure.

This is what’s at stake if we base our AI future on models built offshore. We don’t know what data was used, what was excluded or what biases are embedded deep in the architecture. We don’t have visibility, let alone control.

Consider President Trump’s recently released American AI Action Plan. One of its proposals would require US foundational models to be stripped of all references to climate change and diversity. If that becomes reality, do we really want Australian defence, healthcare or education systems running on models cleansed of these concepts?

And building a sovereign model isn’t just about throwing money at the problem. A foreign-owned firm could easily train a model on Australian data, fine-tune it to our vernacular and still lock us out of the most critical layers of control. That’s not good enough either.

At the heart of this issue is trust. Studies show Australians are more sceptical of AI than people in most other countries. We see its power. We understand its potential. But we don’t trust it, especially when it’s a black box built elsewhere, trained on data we didn’t choose.

That’s why foundational models built in Australia, by Australian teams, under Australian law and with Australian data, are essential.

Yes, we need strong governance and the right experts at the table, but we need to go beyond that. We need full transparency on how models are trained. What data was used? Was it ethically sourced? Were copyright and consent respected? Is there a clear, auditable opt-out process?

Our defence and intelligence agencies will need access to model weights, not just outputs. The Australian Federal Police won’t get that from OpenAI. Nor will Services Australia or Treasury.

Government agencies like the National AI Centre and the Digital Transformation Agency, and regulators like the Privacy Commissioner and eSafety Commissioner, must be embedded in the process, not as afterthoughts but as essential pillars of public trust.

And finally, we must ensure that critical data, especially that generated by Australian citizens and institutions, never leaves our borders for processing, storage or “resting” in foreign clouds.

None of this is radical. It’s measured, pragmatic, and entirely achievable.

A sovereign Australian AI model is not a science project. It’s national infrastructure. Built properly, it will reflect our values, protect our data and earn public trust.

Simon Kriss is an AI strategist and consultant.
