At the Paris AI Summit, leaders came together to commit to artificial intelligence that empowers everyone and is sustainable and inclusive. The Summit’s delegates announced “more than a hundred concrete actions … in favour of trusted AI accessible to all”.
But is this model of global cooperation under threat, as we enter a geopolitical landscape in which the world leader in AI, the US, becomes increasingly volatile?
The Paris AI Summit followed on from two previous global summits on AI, at Bletchley Park and Seoul. While there was still wide consensus on the main aims of the summit, the most notable difference this time around was the United States’ hostile position.
The US (and the UK) chose not to sign the AI Declaration. It’s worth noting that the declaration is not binding – it is merely a statement of intent, albeit a significant one, to keep developing AI in line with the agreed principles.

And while there is some emphasis on concrete actions and deliverables, it is hard to ignore the fact that the largest AI player in the world right now is not a signatory.
Perhaps we can look to another significant global agreement – the Paris climate accords – for how the AI Declaration might play out.
While some progress has been made, time is running out to meet the Paris targets. Since the agreement was signed, greenhouse gas emissions have only accelerated, and 2024 was the first year to breach the 1.5°C target that was set.
President Donald Trump, true to his word, has pulled the US out of the agreement in his first week in office.
The US is the world’s largest historical emitter, and has a vast and leading impact across research, trade, global aid and other climate-related initiatives. It is in a similarly leading position in AI, with most of the world’s AI models, research, commercial products and patents coming from the US.
You might argue that AI is not the same as the climate. But what matters here is not the specific topic so much as our ability to come together as a global society and agree on the best approaches to critical issues that affect all of us; to gather for a common good despite differences in our economies, priorities and capabilities; and to maintain a respectful, rules-based system governed by civility and conciliation. The question is whether that system is still viable.
Since coming into office, Trump has swiftly overturned previous executive orders on AI that emphasised safety and sensible regulation, preferring instead a laissez-faire approach that favours the private sector and tech CEOs while ignoring risks around bias, discrimination and harmful applications.
This is significant. As the US and the EU choose diverging paths on AI, it is unclear which approach will succeed. Emboldened by Trump, tech CEOs blast the Europeans’ strict approach to regulation, as their companies face restrictions and potential penalties under the EU’s AI Act.
Already there are signs that Europe is capitulating on some of its earlier AI principles, most notably by withdrawing the proposed AI Liability Directive, which was aimed at helping consumers sue for AI-related harms.
Even the narrative around Europe’s approach is changing. The EU’s Technology Commissioner, Henna Virkkunen, claimed the decision was due to a new focus on competitiveness and reduced bureaucracy, while President Macron, host of the Paris meeting, said the AI Summit was a “wake-up call” for Europe “because it’s about economic growth”.
Even Australia’s approach to AI, more conservative than that of other nations, now appears slow and overly cautious as the world reacts to the US’s bullish policies and investments.

China’s DeepSeek has also thrown turmoil into the mix, showing China to be a true player and forcing a renewed emphasis on security and sovereignty concerns.
Into this bubbling pot, the Paris AI Summit appeared to be an exercise in calm and reason, and for the time being most of the attending representatives – those nearly 100 signatories – seem to be aligned on what is needed for a human-centred, ethical AI.
But this was also the case for the Paris climate accords when they were first signed in 2015, with a clear, quantifiable mission to keep the world’s climate stable.
Now, nearly a decade later, there are questions about how successfully we have met those targets, and how effective our actions have been as we hurtle towards a more dangerous environment.
Can we turn the same goodwill and good faith promises for AI into meaningful action, or will they be as fragile and vulnerable as the Paris climate agreement?
More importantly, how effective are these types of global agreements in today’s new, more unstable geopolitical landscape?
Can Australia keep relying on the policy innovations of larger nations to emulate, or do we need to invest much more in our own products, policy approaches and sovereign capabilities?
Jordan Guiao is Director of Responsible Technology at Per Capita’s Centre of the Public Square and author of Disconnect: Why we get pushed to extremes online and how to stop it.