Australia’s internet regulator has warned that search engines and app stores could be required to block artificial intelligence services that fail to implement age verification systems by an upcoming compliance deadline. The move marks one of the world’s toughest regulatory crackdowns on AI platforms, particularly those offering chatbot and generative AI services.
Under new online safety rules taking effect March 9, digital services operating in Australia — including AI chatbots such as OpenAI’s ChatGPT and other companion bots — must prevent users under 18 from accessing harmful content. This includes pornography, extreme violence, self-harm material, and eating disorder-related content. Companies that fail to comply face fines of up to A$49.5 million ($35 million). The eSafety Commissioner has stated it is prepared to use its full enforcement powers, including action against “gatekeeper services” like search engines and major app stores that provide access to non-compliant AI tools.
The regulation follows growing global concern over AI and youth mental health. Several AI companies, including OpenAI and Character.AI, have faced lawsuits linked to alleged harmful interactions with young users. Although no chatbot-related incidents have yet been reported in Australia, regulators say children as young as 10 are spending hours each day interacting with AI-powered tools. Officials also expressed concern that some platforms use emotional manipulation and human-like design features to increase user engagement among minors.
A recent Reuters review found that only nine of the 50 most popular text-based AI products have introduced or announced age assurance systems. Eleven others use blanket content filters or plan to block Australian users entirely. However, the majority — including many companion chatbot providers — have yet to demonstrate clear compliance measures.
Major tech companies such as Apple and Google have not provided detailed public responses. As global scrutiny of AI safety intensifies, Australia’s new age verification rules may set a precedent for stronger AI regulation worldwide.