The Australian government is set to pick another battle with big tech after an inquiry recommended high-risk AI uses be governed by dedicated legislation and popular chatbots be subject to mandatory transparency rules.
AI tools from Meta, Google and OpenAI would be subject to regulation and classified as “high-risk” under the changes, which would also seek to ensure Australian copyright holders, such as authors, artists and musicians, were compensated for the use of their material.
The Adopting Artificial Intelligence inquiry called for the changes, among 13 recommendations, in its final report released late on Tuesday.
However, some committee members issued additional proposals: independent senator David Pocock called for the government to “go further, faster” on AI restrictions, while Coalition senators expressed concern that extra rules would curtail business productivity.
The inquiry’s final report comes after 245 submissions and six public hearings featuring testimony from academics, scientists, tech companies, social media firms and creative workers.
The recommendations include a dedicated, standalone law to regulate high-risk AI use in Australia, in addition to changes to existing privacy and copyright laws.
High-risk AI uses should be defined under the law, the committee said, and should include “general purpose AI models” such as OpenAI’s ChatGPT, Google’s Gemini and Meta’s Llama.
Tech firms should also be forced to reveal what copyright works they used to train their AI models, the recommendations said, and to prove the use of this material had been “appropriately licensed and paid for”.
Executives from companies including Meta, Amazon and Google had evaded questions about their use of copyright material and consumers’ personal information, inquiry chairman and Labor senator Tony Sheldon said, and the companies should not be allowed to use this information without consent.
“These tech giants aren’t pioneers, they’re pirates, pillaging our culture, data and creativity for their gain while leaving Australians empty-handed,” he said.
“We need new standalone AI laws to rein in big tech and put strong protections in place for high-risk AI uses, while existing laws should be amended as necessary.”
The use of AI in workplaces should also be classified as a high-risk activity and workers and their representatives should be consulted about its potential impact, the inquiry recommended. It also said the federal government should financially support investments in sovereign AI models.
Coalition senators James McGrath and Linda Reynolds issued a dissenting report, however, arguing a dedicated AI law should be introduced only if “absolutely necessary”, as it could stifle businesses at a time when “productivity growth is near-stagnant”.
Greens senators also called for a plan to address AI’s environmental impact, while Senator Pocock urged the government to introduce AI restrictions at the next parliamentary opportunity and to ban AI-generated election material before the next election.
“I support all of the chair’s recommendations but believe some need to go further, faster,” he said.
The federal government has been considering mandatory guardrails for high-risk AI uses following a public consultation that closed in October.
The consultation paper set out three regulatory options: a standalone Australian AI Act applying across the economy, amendments to existing laws, or the adaptation of existing regulatory frameworks to include AI restrictions.