Elon Musk’s artificial intelligence chatbot Grok, developed by xAI and integrated into the social media platform X, has come under intense international scrutiny after admitting that lapses in safeguards allowed the generation of images depicting minors in minimal clothing. The incident has raised serious concerns about AI safety, content moderation, and regulatory compliance across multiple countries.
In a public post on X, Grok acknowledged that “isolated cases” occurred in which users were able to prompt the AI to generate or alter images resulting in inappropriate content involving minors. The chatbot stated that while safeguards exist, improvements are being urgently implemented to fully block such requests. Grok emphasized that child sexual abuse material (CSAM) is illegal and strictly prohibited, adding that failures in its systems are being actively addressed.
The issue surfaced after users shared screenshots showing Grok’s public media tab filled with images they claimed had been altered by the AI after uploading photos and issuing prompts. These images quickly sparked backlash, prompting responses from regulators and government officials worldwide.
When Reuters contacted xAI for comment, the company responded only with the brief message “Legacy Media Lies,” offering no further clarification. Grok later reiterated in a separate reply that while advanced filters and monitoring can prevent most such cases, no AI system is entirely foolproof. It added that xAI is prioritizing improvements and reviewing user-submitted reports.
Regulatory pressure has intensified in Europe and Asia. French ministers reported Grok-generated “sexual and sexist” content to prosecutors, calling it “manifestly illegal,” and also alerted media regulator Arcom to assess compliance with the EU’s Digital Services Act. In India, the Ministry of Electronics and Information Technology issued a formal notice to X, stating the platform failed to prevent misuse of Grok to generate obscene and sexually explicit content involving women, and demanded an action-taken report within three days.
While the U.S. Federal Communications Commission did not respond to inquiries, the Federal Trade Commission declined to comment. The controversy highlights growing global concerns over AI-generated content, platform accountability, and the urgent need for stronger safeguards as artificial intelligence tools become more widely used.

