Australia’s court battle with Elon Musk’s social network X over a terrorist video cost taxpayers $60,000, the Senate has heard, but could have cost significantly more had the case not been dropped.
eSafety Commissioner Julie Inman Grant revealed the figure at a Senate estimates hearing on Tuesday night, following legal action in the Federal Court and the Administrative Appeals Tribunal earlier this year.
The online safety agency also issued warnings about growing threats to young Australians, with more school students facing harm from sexualised AI chatbots and rising reports of cyberbullying.
Representatives from the eSafety Commission appeared before the environment and communications legislation committee, where Ms Inman Grant was grilled about the agency’s legal action against X Corp.
The commission issued a formal removal notice to the social media company in April over a video on its network that showed a stabbing attack on Sydney Bishop Mar Mari Emmanuel.
Unsatisfied with X Corp’s response, the eSafety Commission took the company to Federal Court to seek an injunction, which Mr Musk’s firm challenged in the Administrative Appeals Tribunal.
The legal battle cost $60,000 to prosecute and defend, Ms Inman Grant told the committee, but would have cost significantly more had the case continued.
“Through mutual agreement of both parties we decided to dissolve the (tribunal) case and there’s a very good reason for that,” she said.
“In fact, saving taxpayer dollars was part of that equation.”
The eSafety Commission was also fighting growing online harms against young internet users, Ms Inman Grant told senators, with reports to its youth cyberbullying scheme up by more than a third and a rise in sexual extortion scams targeting young men.
Discussions with Australian educators had also revealed a disturbing trend to the commission: tweens and young teenagers using AI chatbots to talk about sex and forming attachments to the programs.
“One of the things we learned two months ago was that upper primary school kids and lower secondary school children were starting to spend four to five hours a day on sexualised chatbots and felt addicted to them,” Ms Inman Grant said.
“AI-programmed sexual chatbots are not going to have the same reactions - they’re not humans - so we need to start thinking about how we talk to kids about their interactions with AI.”
The AI programs could “manipulate” children if challenged, she said, and parents and educators needed to help children develop critical reasoning and media literacy skills to address the issue.
The warning comes one month after the mother of a 14-year-old boy filed a lawsuit in the US against the makers of one AI chatbot that she claimed contributed to his suicide.
The eSafety Commission also revealed it had received about 41,000 complaints during the year, with an almost 90 per cent success rate in addressing harm and a 98 per cent success rate in having image-based abuse removed from social media.
Lifeline 13 11 14
beyondblue 1300 22 4636