Australian enterprises are broadly aligned with global peers on the opportunities and risks of enterprise AI but none have yet reached the highest maturity tier for responsible AI, according to new research from Infosys.
The global Responsible enterprise AI in the agentic era report, which surveyed 1,500 senior executives across seven countries, including 200 in ANZ, found that 95 per cent of ANZ respondents had experienced at least one AI-related incident in the past two years — identical to their global counterparts.
System failures (35 per cent) and inaccurate or harmful predictions (33 per cent) were the most frequently reported incidents locally, with prevalence closely mirroring global averages.

Around 40 per cent of ANZ companies rated the resulting damage as “severe” or “extremely severe”, with reputational damage cited more often here than elsewhere.
Infosys’ ANZ executive vice president Andrew Groth told InnovationAus.com the findings underscore a critical gap in enterprise readiness, showing there’s “still a way to go to get that right responsible AI framework in place.”
“We’ve seen a shift from AI projects happening in pockets of the organisation to a more enterprise view, but maturity is not yet where it needs to be — particularly when incidents are still occurring at the rates we’re seeing.”
The average financial loss from AI incidents in ANZ is statistically in line with the global average of around US$800,000 over two years, according to the report.
Local organisations also spend about the same proportion of their AI budgets on responsible AI (25 per cent), with AI-related incidents accounting for nine per cent of budgets (versus eight per cent globally). Both ANZ and global respondents reported a similar underinvestment gap of around 25 to 28 per cent.
Most ANZ organisations have responsible AI teams with fewer than 25 members (80 per cent compared to 78 per cent globally). Unlike the global trend, where larger RAI teams are associated with more deployments, ANZ’s bigger teams don’t necessarily deliver more projects.
Teams with more than 25 members average 110 deployments (21 successful), compared to 113 deployments (28 successful) for teams with fewer than five members.
Mr Groth said local industry leaders were increasingly vocal about the need for clear guardrails, and that they welcome having regulations in place, “because it helps build trust with consumers, with government, and across the technology ecosystem.”
“Alignment between technology companies, our clients in banking and telco, and government is key to delivering the promise of AI.”
The ANZ data shows 86 per cent of respondents expect regulation to increase the number of AI initiatives over the next two years, in line with the global view that rules can accelerate adoption by providing clarity and confidence.
Eighty-two per cent of ANZ decision makers believe responsible AI will support business growth — slightly above the global average of 78 per cent — with just 12 per cent saying it will have no impact (compared with 15 per cent globally).
Mr Groth emphasised that this mindset shift is crucial: “It has to be seen as a strategic imperative, not just a compliance or technology function. This is a board-level conversation that touches legal, operations, customer service — the whole enterprise.”
The report identifies risk mitigation and trust as the most underdeveloped RAI capabilities in ANZ, consistent with global gaps. While all ANZ respondents met at least the “beginner” standard according to the report, none reached “leader” status.
Globally, 86 per cent of executives aware of agentic AI said it will pose additional compliance challenges, with ANZ respondents showing no statistical difference. Mr Groth said Australian firms are already experimenting with building their own AI stacks, but sustainability and efficiency must be built in from the outset.
Infosys’ recommended approach combines a product-led operating model — empowering teams to build and deploy AI — with a platform-driven model that embeds RAI guardrails, supported by a dedicated RAI office. This, Mr Groth said, is essential to “innovate within guardrails” and operationalise best practices across the enterprise.
While the report focuses on enterprise strategy, Mr Groth sees an urgent role for policymakers in setting the pace, saying “the sooner we can get clarity on guardrails, the sooner companies can deploy in a way that builds trust”.
“It’s not about how much policy they can consume, but how well they embed responsible AI into operations,” he said.