QChat AI tool deployed without proper safeguards


Trish Everingham
Contributor

The Queensland Audit Office has criticised the rollout of the state’s generative AI assistant, QChat, after finding the Department of Transport and Main Roads (TMR) failed to implement basic safeguards and monitoring for its use.

QChat has been available to public servants since February 2024, logging more than 383,000 conversations across nearly 19,000 users, including more than 3,000 within TMR.

However, the audit, released on Wednesday, found the department had not completed an ethical risk assessment, set up monitoring controls or tracked activity using available dashboards.

Queensland Parliament. Image: Shutterstock.com/Sony Herdiana

The report warns that without proper governance, training and oversight, generative AI systems risk eroding public trust and could expose the state to privacy breaches, misinformation and inappropriate use.

QChat was developed by the Department of Customer Services, Open Data and Small and Family Business (CDSB) over the past two years and was reportedly upgraded to GPT-4o in April.

It includes system-wide safeguards like Microsoft filters to block harmful content and prompts to enforce government tone, but the audit found that TMR relies only on these default protections.

TMR has not configured its own “entity prompts” to guide outputs, has not set up monitoring arrangements, and has not rolled out a complete training program for staff, the report found.

Uptake of an optional AI training course was also described as “low”. The report said a 2024 internal security assessment recommended policies and processes for managing breaches, monitoring usage and educating staff, but these have not been delivered.

While CDSB holds extra content safety data, this has not been shared with agencies such as TMR, further limiting oversight.

The Audit Office warned that generative AI systems are unpredictable and safeguards cannot always prevent inappropriate use, offering examples like staff entering sensitive information into prompts or relying on inaccurate content in decision-making.

The report concluded that without entity-level governance, the risks cannot be managed effectively, and recommended TMR set up monitoring activities and a structured education plan, and that CDSB share content safety information with agencies.

Both departments accepted the recommendations, with TMR saying it would introduce monitoring procedures by the end of 2025 and run a department-wide AI literacy campaign by September 2026.

Queensland introduced a whole-of-government AI governance policy in late 2024, requiring departments to follow international risk assessment standards, but the audit found these rules are being applied inconsistently, especially for systems already in use.

The report comes as federal and state agencies roll out their own generative AI tools, including Microsoft Copilot. The Commonwealth released its AI assurance framework last year, but governance remains patchy across jurisdictions.

