As artificial intelligence platforms become more deeply embedded in daily life, concerns about their role in enabling dangerous behavior continue to grow. A New Zealand startup is now developing a groundbreaking tool that could redirect users displaying violent extremist tendencies toward professional deradicalization support — marking a significant step forward in AI safety innovation.
ThroughLine, a crisis intervention company already contracted by OpenAI, Anthropic, and Google, currently routes at-risk users to mental health helplines when signs of self-harm, domestic violence, or eating disorders are detected. Founder Elliot Taylor, a former youth worker, is now exploring how that same infrastructure can be expanded to address online radicalization before it escalates into real-world violence.
The proposed system would use a hybrid model combining a specialized deradicalization chatbot with referrals to vetted, human-run mental health services. Unlike standard AI platforms, the tool would be trained using guidance from subject-matter experts rather than generic large language model datasets. ThroughLine is currently in active discussions with The Christchurch Call, an international initiative launched after New Zealand's 2019 terrorist attack, to develop and validate the technology.
This initiative comes as AI companies face mounting legal pressure over their failure to prevent platform-enabled violence. Canada's government threatened OpenAI with regulatory intervention after it emerged that a school shooter had been quietly banned from the platform without law enforcement being notified.
Research has consistently shown that aggressive content moderation can drive extremist sympathizers to less regulated platforms like Telegram, making early, compassionate intervention all the more critical. Taylor argues that cutting off vulnerable users mid-conversation leaves them without support and may ultimately make them more dangerous.
With over 1,600 helplines across 180 countries in its network, ThroughLine is uniquely positioned to bridge the gap between AI detection and real-world crisis response — potentially reshaping how tech platforms handle radicalization online.

