The Digital Transformation Agency is not encouraging the use of ChatGPT in the public service, although it supports experimentation with generative AI technologies to explore use-cases while the digital agency continues developing formal AI guidelines.
The agency’s chief executive Chris Fechner noted that there was “no formal Commonwealth policy related to generative AI technologies like ChatGPT at this time”.
He also acknowledged that public servants would want to experiment with the technology, and advised that they follow the DTA’s “initial (beta) AI guidance” published last year through its Australian Government Architecture (AGA).
The voluntary guidance is intended to support government agencies exploring the use of AI and associated technologies, and to advise the private sector on expectations for public-sector use of AI technologies.
“The DTA’s view is that without an evaluation of the algorithms, training data and validations that produce such AI results, using generative AI technologies for delivery of services or making decisions should not be supported. However, experimentation using generative AI technologies within the APS to help generate use-cases should not be discouraged,” Mr Fechner said.
“Use of generative AI technologies or other similar AI capabilities in government for policy, decisions or service delivery must be assessed and evaluated. This includes establishing the appropriate governance, consultation, transparency, contestability, validation and other controls.”
In an initial response to InnovationAus.com that was subsequently retracted, Mr Fechner said that “the DTA would not support the use of ChatGPT in the execution of public-sector business or in generating public-sector white papers, as the algorithms, training data and validations have not been tested for this use”.
The development of guidelines aimed to “ensure appropriate risk management and governance mechanisms are in place to support safe and secure AI solutions, provide a structure for consistent application of the technologies, and provide the guardrails to achieve efficiencies and reduce duplication”.
The DTA developed the AI guidance by “working closely and taking guidance from the Department of Industry, Science and Resources; Data61; CSIRO; and the National AI Centre”. This included “several roundtable discussions with Australian Government agencies and industry stakeholders”.
Work is ongoing at the DTA to formally include AI guidance in the Australian Government Architecture, with Mr Fechner describing it as a “rapidly changing digital and ICT landscape. The DTA actively engages with industry and academia, and welcomes their contributions and feedback”.
The beta guidelines acknowledge the opportunity to optimise business processes, reduce the risks of legacy impacts, increase the transparency of decision making, and process “significantly larger scales of data than current capacities” allow, in order to deliver better outcomes for citizens.
Australian AI regulation is being led by the New South Wales government through its AI framework and AI committee. At the behest of the state’s Digital Government minister Victor Dominello, a bipartisan inquiry into the use of AI across the state government is expected to take place this year. “Policy decisions must be well informed and evidence based,” Mr Dominello told InnovationAus.com last month.
“The inquiry will examine the breadth of issues this evolving space presents. While AI and associated tech presents us with a wealth of opportunities, it can be used inappropriately, which is why we need expert advice on how to get a proper framework in place,” he said.
“Regardless of any regulatory or other changes that may be considered as part of the Inquiry, education will remain a powerful mechanism to inform people on how AI can be used responsibly. I foresee industry and academia playing a pivotal role here.”