Government departments needed to consider what could go wrong before allowing AI tools to make decisions, and how much it could cost to put mistakes right, an inquiry has heard.
Representatives from government departments, agencies and research organisations appeared before an inquiry into the use of AI in the public sector on Friday, where the acting Commonwealth Ombudsman issued the warning.
But the committee also heard government experiments with generative AI technology were expected to skyrocket in coming months, and those overseeing the technology were preparing to scrutinise new projects carefully.
The inquiry, called in September, is expected to investigate efforts to regulate and oversee the use of AI technology in government entities and identify potential harms.
But harm had already been done by relying on computer models to estimate debts and incomes, Acting Commonwealth Ombudsman Sarah Bendall told the committee, and all government entities should consider the case of Services Australia before deploying the technology.
AI tools should only be used in accordance with the laws governing an agency, she said, should be actively monitored by a human, and should only be deployed after considering the potential cost of a widespread error.
“Agencies need to be live to the risk of incorrect use of AI and be ready in advance to remediate errors,” Ms Bendall said.
“It’s a simple notion that when an agency makes a wrong decision they need to remediate an individual for the impact of that wrong decision. However, in the context of AI, where we’re seeking to create efficiencies in government services, we are potentially talking about wrong decisions being made on a very large scale.”
While the government had already trialled Microsoft Copilot with 7500 users, many more departments and agencies were expected to launch their own AI projects shortly, Digital Transformation Agency chief executive Chris Fechner said.
The proliferation would be a challenge, he said, and would require greater oversight and investigations into AI models and their data partners.
“With what has actually gone live now, we’ve got a degree of comfort, but there will be a very large number of things coming through in a very short timeframe,” Mr Fechner said.
The Digital Transformation Agency would require government departments to report their use of AI tools, he said, and it would monitor for any changing terms and conditions that would allow wider sharing of information.
But some departments had introduced strict controls on access to AI tools and rules for their use, Home Affairs chief data officer Pia Andrews said.
The department tested the technology in a controlled environment, she said, and found AI tools were useful for basic tasks such as summarising meetings and improving grammar.
But AI technology would not be used to produce policy documents, she said, as experiments in which it was asked to perform advanced computation “got really weird really quickly”.
“Generative AI is a bit like a spellchecker on speed: you’d never ask it to write a policy or do anything super clever but it is a support mechanism that can help you with a little bit of efficiency, synthesis, these kinds of things,” Ms Andrews said.
The inquiry also heard the government’s public consultation on mandatory AI guardrails had received more than 300 submissions, which were still being considered.