Warning of AI Services Ban in Australia

- Australia threatens to ban AI platforms that don’t enforce age restrictions
- More than half of AI services haven’t announced compliance measures ahead of the deadline
Australia’s internet regulator has warned it may take strict action against search engines and app stores that fail to verify users’ ages, following a review revealing that over half of AI services have not yet outlined steps to comply with new rules coming into effect next week.
These warnings form part of Australia's efforts, among the strictest globally, to curb AI-related risks. They follow a rise in legal cases against platforms that failed to restrict harmful content, or even encouraged self-harm or violence, and come amid researchers' concerns that these platforms damage young people's mental health, reportedly more than traditional social media does.
In December, Australia became the first country to bar users under 16 from social media platforms, citing mental health risks, a move that prompted leaders elsewhere to announce similar plans.
Now, the country seeks to apply a comparable policy to AI, imposing age limits on the content users can access.
From March 9, AI services operating in Australia, including chatbots such as ChatGPT and companion chat platforms, must restrict access for users under 18 to inappropriate or violent content and to material related to self-harm or eating disorders, or face fines of up to AUD 49.5 million (about USD 35 million).
A regulator spokesperson said: “We will use our full powers if compliance is not met, including actions against major services such as search engines and app stores that serve as primary access points for these tools.”
OpenAI and Character.AI have faced wrongful-death lawsuits over minors who died after interacting with their platforms, while OpenAI confirmed this week that, months earlier, it had suspended the account of a teenager involved in a mass shooting in Canada without notifying authorities.
Although no such incidents have been reported in Australia so far, the regulator said children as young as 10 are using these tools for up to six hours a day.
The spokesperson added: “We are concerned that AI companies use advanced emotional manipulation and human-like profiling to attract young users and increase excessive engagement.”
Apple did not respond to a request for comment, though it said last week it would use "reasonable methods" to prevent minors from downloading apps intended for users over 18; Google declined to comment.
A review of 50 popular AI products found that only nine had any plan or system to verify age, while 11 had implemented full filters or blocked access in Australia entirely, leaving 30 with no clear compliance measures.
The study noted that three-quarters of companion chat platforms have no filtering or age-verification systems, and one in six lacks even an email contact for reporting violations, which is a legal requirement.
Researcher Lisa Given of RMIT University said the findings did not surprise her, observing that "most of these tools are designed without regard for potential harms or the need for safety measures."
She added: “It seems we are being used as a testing ground by these companies to see how far society can be pushed.”
