OpenAI has developed a watermarking system to make ChatGPT-generated text detectable, but internal disagreements and concerns about user backlash have delayed its release.
OpenAI's Year-Old Watermarking Tool Faces Internal Disagreements and Potential Financial Impact
The Wall Street Journal reported that OpenAI has had a system for watermarking ChatGPT-created text, along with a tool to detect the watermark, ready for approximately a year. However, the company is internally divided over whether to release it: doing so would be the responsible course of action, but it could hurt its bottom line.
OpenAI's watermarking works by adjusting how the model selects the most probable words and phrases to follow the preceding ones, embedding a detectable statistical pattern in the output. (That's a simplification; see Google's more in-depth explanation of Gemini's text watermarking for details.)
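OpenAI has not published the details of its scheme, but the general "green-list" idea used in public text-watermarking research can be sketched as follows: at each step, a pseudorandom half of the vocabulary is derived from the previous token, sampling is biased toward that half, and a detector later counts how often generated tokens fall in it. The toy "model" below (uniform sampling over a made-up vocabulary) is purely illustrative, not OpenAI's method.

```python
import hashlib
import math
import random

# Toy vocabulary standing in for a real model's token set.
VOCAB = [f"tok{i}" for i in range(1000)]

def green_set(prev_token, frac=0.5):
    # Seed an RNG on the previous token so the detector can
    # recompute exactly the same vocabulary split later.
    seed = int(hashlib.sha256(prev_token.encode()).hexdigest(), 16) % 2**32
    rng = random.Random(seed)
    toks = VOCAB[:]
    rng.shuffle(toks)
    return set(toks[: int(len(toks) * frac)])

def generate(n_tokens, bias=0.8, seed=0):
    # Stand-in "language model": uniform over the vocabulary,
    # nudged toward the green half with probability `bias`.
    rng = random.Random(seed)
    prev, out = VOCAB[0], []
    for _ in range(n_tokens):
        pool = sorted(green_set(prev)) if rng.random() < bias else VOCAB
        prev = rng.choice(pool)
        out.append(prev)
    return out

def z_score(tokens, frac=0.5):
    # How far the green-token count deviates from chance:
    # large positive z => text is very likely watermarked.
    hits = sum(t in green_set(p) for p, t in zip([VOCAB[0]] + tokens, tokens))
    n = len(tokens)
    return (hits - frac * n) / math.sqrt(n * frac * (1 - frac))

watermarked = generate(200)             # biased sampling
unmarked = generate(200, bias=0.0, seed=1)  # no bias at all
print(f"watermarked z = {z_score(watermarked):.1f}")
print(f"unmarked    z = {z_score(unmarked):.1f}")
```

This also illustrates why the scheme is "99.9% effective" only "when there's enough of it": the detector's z-score grows with the square root of the text length, so short snippets stay statistically ambiguous.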
The company has found this to be “99.9% effective” at making AI text detectable when there’s enough of it — a potential boon for teachers trying to deter students from handing writing assignments off to AI — and says it does not affect the quality of the chatbot’s output. In a survey the company commissioned, “people worldwide supported the idea of an AI detection tool by a margin of four to one,” the Journal wrote.
OpenAI Weighs User Backlash and Circumvention Risks in Watermarking ChatGPT Texts
However, OpenAI is concerned that watermarking could drive users away. According to The Verge, nearly 30% of surveyed ChatGPT users told the company they would use ChatGPT less if watermarking were implemented.
The Journal reported that some employees had further concerns, including that the watermark could be easily circumvented — for example, by bouncing the text between languages with Google Translate, or by asking ChatGPT to add emojis and then deleting them.
Despite these concerns, employees still consider the method effective. Given persistent user sentiment, however, the article suggests OpenAI may instead try approaches that are "potentially less controversial among users but unproven."