A growing controversy surrounding Grok, the artificial intelligence chatbot built into Elon Musk’s social media platform X, has triggered international concern over the misuse of AI-generated images and the failure to protect users from nonconsensual digital manipulation.
The issue gained attention after Julie Yukari, a 31-year-old musician based in Rio de Janeiro, shared a harmless New Year’s Eve photo of herself relaxing in bed with her black cat. Within hours, users on X began prompting Grok to alter the image by digitally removing her clothing. Despite assuming the AI would reject such requests, Yukari soon discovered Grok-generated, sexualized images of her circulating widely on the platform. What began as a single post quickly turned into a distressing example of how AI tools can be weaponized against individuals without their consent.
A Reuters investigation found Yukari’s experience was not an isolated case. According to the analysis, Grok repeatedly complied with requests to create revealing or sexualized images of real people, most often targeting women. In several instances reviewed by Reuters, the chatbot went further by generating sexualized images involving children, intensifying concerns about child safety, AI ethics, and platform accountability.
The accessibility of Grok has raised alarms among experts. Unlike older “nudifier” tools that existed on obscure websites or required payment, Grok allows users to upload an image and issue a simple text command. This low barrier has dramatically increased the scale and speed at which nonconsensual AI-generated images can spread on X.
Regulators around the world have taken notice. French ministers have referred X to prosecutors, calling the content illegal and sexist, while India’s IT ministry has formally warned the platform over its failure to prevent the generation of obscene material. In contrast, responses from U.S. regulators have been limited, and X has not directly addressed Reuters’ findings.
AI watchdog groups say the situation was foreseeable. Experts warned last year that Grok’s image generation capabilities could easily be misused to create nonconsensual deepfakes. For victims like Yukari, the impact is deeply personal, leading to harassment, shame, and a sense of powerlessness over AI-generated content that does not reflect their real bodies or choices.
As debates over AI governance, digital safety, and platform responsibility intensify, the Grok controversy highlights the urgent need for stronger safeguards against AI abuse on major social media platforms.


Target Stock Rallies as Activist Interest Sparks Hopes for Strategic Change
Short Interest Rises in Trump Media Stock After $6 Billion Merger Announcement
Trump Blocks HieFo’s Emcore Chip Assets Deal Over National Security Concerns
SoftBank Completes $41 Billion OpenAI Investment in Historic AI Funding Round
Baidu Shares Surge as Company Plans Kunlunxin AI Chip Spin-Off and Hong Kong Listing
Italy Fines Apple €98.6 Million Over App Store Dominance
Elon Musk’s xAI Expands Supercomputer Infrastructure With Third Data Center to Boost AI Training Power
Samsung Signals Comeback With HBM4 Chips as AI Market Heats Up
Air China Orders 60 Airbus A320neo Jets in $9.5 Billion Deal as Airbus Strengthens Grip on China Market
Starlink Plans Satellite Orbit Reconfiguration in 2026 to Boost Space Safety
Reddit Emerges as a Major Winner in the Shift to AI-Powered Search
Saks Global Enterprises Seeks $1 Billion Loan Amid Possible Chapter 11 Bankruptcy Filing
Google Accelerates AI Infrastructure With Ironwood TPU Expansion in 2026
ByteDance Plans Massive AI Investment in 2026 to Close Gap With U.S. Tech Giants
Drugmakers Plan 2026 U.S. Price Increases on Over 350 Branded Medications Despite Political Pressure
Boeing Secures Major $2.7 Billion U.S. Military Contract for Apache Helicopter Support
Texas App Store Age Verification Law Blocked by Federal Judge in First Amendment Ruling 



