Australia’s internet regulator has warned that search engines and app stores could be required to block artificial intelligence services that fail to implement age verification systems by an upcoming compliance deadline. The move marks one of the world’s toughest regulatory crackdowns on AI platforms, particularly those offering chatbot and generative AI services.
Under new online safety rules taking effect March 9, digital services operating in Australia — including AI chatbots such as OpenAI’s ChatGPT and other companion bots — must prevent users under 18 from accessing harmful content. This includes pornography, extreme violence, self-harm material, and eating disorder-related content. Companies that fail to comply face fines of up to A$49.5 million (US$35 million). The eSafety Commissioner has said it is prepared to use its full enforcement powers, including taking action against “gatekeeper services” such as search engines and major app stores that provide access to non-compliant AI tools.
The regulation follows growing global concern over AI and youth mental health. Several AI companies, including OpenAI and Character.AI, have faced lawsuits over allegedly harmful interactions with young users. Although Australia has not yet reported chatbot-related violence, regulators say children as young as 10 are spending hours each day interacting with AI-powered tools. Officials have also expressed concern that some platforms use emotional manipulation and human-like design features to increase engagement among minors.
A recent Reuters review found that only nine of the 50 most popular text-based AI products have introduced or announced age assurance systems. Eleven others use blanket content filters or plan to block Australian users entirely. However, the majority — including many companion chatbot providers — have yet to demonstrate clear compliance measures.
Major tech companies such as Apple and Google have not provided detailed public responses. As global scrutiny of AI safety intensifies, Australia’s new age verification rules may set a precedent for stronger AI regulation worldwide.