‘Unacceptable risk’: Public servants warned on generative AI use


Brandon How

Public servants have been warned that using generative artificial intelligence tools for service delivery and decision-making poses “an unacceptable risk to government” in new advice from the Digital Transformation Agency.

But the advice does not rule out use of the technology, stating only that it should be “deployed responsibly” and “used in cases where the risk of negative impact is low”.

Developed in collaboration with the Department of Industry, Science, and Resources (DISR), the interim guidance is the first time the government has offered direction to Australian Public Service (APS) staff on the use of generative AI.

The guidance is intended to supplement policies developed by individual agencies while a whole-of-government position is developed.

The DTA has also recommended that APS agencies introduce a system to register and approve public servants’ use of generative AI platforms such as OpenAI’s ChatGPT, Google’s Bard AI, or Microsoft’s Bing AI.

The approval process should be overseen by agencies’ respective chief information security officers (CISOs) or chief information officers (CIOs). Agencies are also encouraged to enter into commercial arrangements for the use of AI solutions “as soon as it is possible to do so”.

Other unacceptable use cases, in addition to service delivery and decision-making, include those that require large amounts of government data, classified, sensitive or confidential information, or where coding inputs will be used in government systems.

“Classified or sensitive information must not be entered into these tools under any circumstances. You should also not enter information that would allow AI platforms to extrapolate classified or sensitive information based on the aggregation of content you have entered over time,” the guidance reads.

The guidance, which came into effect on June 29 and will continue to be iterated on, is based on four principles for the ethical use of artificial intelligence in government: responsible deployment, transparency and explainability, privacy protection and security, and accountability and human-centred decision-making.

It also includes ‘tactical guidance’ to help APS staff implement the four principles outlined and provides three use case examples, on top of the eight ethics principles outlined in the whole-of-economy ‘Australia’s AI Ethics Framework’ launched in 2019.

DTA chief executive Chris Fechner said the guidance had been developed through “broad consultation with Commonwealth agencies” to respond to generative AI.

“Due to the rapid evolution of technology, there is a growing demand for guidance when government staff members assess potential risks involved in its use,” Mr Fechner said.

APS staff are expected to report instances of non-compliance with the guidance to their respective CISOs and CIOs. This information should be reported to the DTA periodically.

Staff should also consider “including markings in briefings and official communications” if information was generated using AI.

Development of the guidance began last year, with ‘beta guidelines’ on the adoption of AI in the public sector released earlier this year, although these did not include recommended actions.

The DTA and DISR are still undertaking work to consider the “need for whole-of-government policies and standards relating to the responsible use of AI in government” and other regulatory responses through the DISR-led consultation on the safe and responsible use of AI.

DISR will produce a risk classification for various AI use cases at the conclusion of the consultation. Each classification will be accompanied by appropriate actions to be taken, which may include blanket bans.

The guidance comes ahead of the release of the final report from the Royal Commission into the Robodebt Scheme on Friday.

Government services and NDIS minister Bill Shorten previously accused the robodebt scheme of taking the “human element out” of the process. The generative AI guidance advises that “humans should remain as the final decision maker in government processes”.
