Twitch has updated its policy to include specific criteria for identifying “harmful misinformation actors,” and anyone who meets these criteria could be banned from the streaming platform.
Social media and other online platforms were once hailed for disseminating breaking news far faster than traditional media. But, as most people learned over the last couple of years, that also means misinformation can spread faster and reach a much wider audience than ever before.
The Amazon-owned streaming giant is the latest platform to introduce new policies that seek to curb the spread of misinformation on its services. “Our goal is to prohibit individuals whose online presence is dedicated to spreading harmful, false information from using Twitch,” the company said in the announcement post. That means not everyone who makes “one-off statements containing misinformation” will be punished.
As mentioned, the policy updates are focused on what Twitch calls “harmful misinformation actors.” The company said it consulted with experts to determine how a Twitch user could be characterized as one.
Twitch will consider users as “harmful misinformation actors” if their channel and other online pages outside Twitch are “dedicated to (1) persistently sharing (2) widely disproven and broadly shared (3) harmful misinformation topics, such as conspiracies that promote violence.” The company said these characteristics were chosen because combining them creates the “highest risk” that could result in real-life dangers. The company also encourages users to report creators who may fit these descriptions to the dedicated email address [email protected].
Twitch’s community guidelines now include a specific section for harmful misinformation actors. The company has also identified types of misinformation content that are likely being persistently peddled by creators.
Twitch noted several times in the announcement that misinformation is “not currently prevalent” on its platform but recognized the harm it could cause. The shortlist includes misinformation targeting protected groups; conspiracy theories about “dangerous treatments” and COVID-19/vaccine misinformation; content tied to or promoting violence; and content that perpetuates “verifiably false claims” about political processes, such as election fraud allegations. The new rules also cover misleading content about public emergencies, such as natural catastrophes and active shootings.
Photo by Marco Verch from Flickr (CC BY 2.0)