Australia urgently needs to restrict the use of AI to protect workers, an inquiry has been told amid warnings the technology has already been deployed to cut creative jobs and replace humans with digital clones.
The call comes after the Senate inquiry heard one voice actor had his contract cancelled and his voice replicated using AI, and another had her digital voice used in a pornographic ad without her consent.
The Adopting Artificial Intelligence inquiry was also told on Tuesday that book authors and Indigenous storytellers had been unable to address the theft of their work, and that Australia should create sovereign AI models to ensure ethical behaviour.
The inquiry’s third public hearing drew representatives from legal, academic and media groups including the Media, Entertainment and Arts Alliance (MEAA), with the union calling for restrictions on AI technology to protect Australian jobs and ensure workers were compensated.
MEAA member and voice actor Cooper Mortlock told the inquiry his work on an animated series was cut short in 2022 when producers used an AI tool to clone his voice without his knowledge and without compensation.
"When we reached about episode 30 of the promised 52 episodes, our producer cancelled the contract, saying 'we've decided to discontinue making the series'," he said.
"A year later, after the contract had finished, they released another episode of this series using what was obviously an AI copy of my voice and the other actors' voices."
The company initially denied using AI technology but later claimed his employment contract allowed it to do so, Mr Mortlock said.
Australian Association of Voice Actors vice-president Teresa Lim provided another example of AI misuse, sharing the story of an Australian actor who signed a contract and recorded clips to create a digital voice.
The contract stipulated that the recordings could not be used for "profane or inappropriate content", she said.
"The next year, six people alerted her that an explicit pornography video ad was playing repeatedly on PornHub which had her distinct voice," Ms Lim said.
"Without proper regulation in place, any voice could be used for any agenda and that’s what we're really concerned about and we need safeguards in place to protect the ownership of our voices."
The misuse showed why Australians needed greater protection for their personal information, Digital Rights Watch founder Lizzie O’Shea said.
Protections should cover voice, likeness and biometric data, she said, alongside rules requiring individual consent.
"It is clear that our laws are decades out of date," Ms O'Shea said.
"There has to be structural interventions that limit the use of personal information and seek to put limits on data-extracted business models."
Many local authors and First Nations storytellers had also had their work ingested by tech giants to train AI models, Australian Writers' Guild chief executive Claire Pullen told the inquiry.
In addition to rules around consent and compensation, she said regulations were needed to force AI firms to let copyright holders check if their work had been used and request its removal.
"Establishing an obligation for the platforms that operate in Australia to make their so-called training data discoverable, at a minimum, would be really helpful," she said.
But local restrictions on AI may have limited impact, and Australian firms should instead be encouraged to create their own models, the University of Adelaide's Anton van den Hengel told the inquiry.
"The first step is that we should build our own language model," he said.
"This is critical in order that we might have a language model that captures Australian values and that leverages that trust."
The Senate inquiry is expected to deliver its findings on the opportunities and impacts of AI in September.