OpenAI has developed a watermarking system for ChatGPT that can make AI-generated text detectable, but internal disagreements and concerns about user backlash are delaying its release.
OpenAI's Year-Old Watermarking Tool Faces Internal Disagreements and Potential Financial Impact
The Wall Street Journal reported that OpenAI has had a system for watermarking ChatGPT-generated text, along with a tool to detect that watermark, ready for roughly a year. The company is internally divided over whether to release it: doing so may be the responsible course of action, but it could hurt its bottom line.
OpenAI's watermarking works by adjusting how the model picks the most probable words and phrases to follow the ones that came before, creating a detectable pattern. (That's a simplification, but you can check out Google's more in-depth explanation of Gemini's text watermarking for more.)
The company has found the watermark to be "99.9% effective" at making AI-generated text detectable when there is enough of it, a potential boon for teachers trying to deter students from handing writing assignments over to AI, and it does not affect the quality of the chatbot's output. In a survey the company commissioned, "people worldwide supported the idea of an AI detection tool by a margin of four to one," the Journal wrote.
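OpenAI has not published the details of its method, but the behavior described above matches the "green list" token-biasing approach discussed in public watermarking research. Below is a minimal, purely illustrative Python sketch of that general idea, using a made-up vocabulary and a toy stand-in for the model's word scores; it is not OpenAI's implementation.

```python
import hashlib
import random

# Toy vocabulary standing in for a real model's token set (illustrative only).
VOCAB = ["the", "cat", "sat", "on", "a", "mat", "dog", "ran", "fast", "slowly"]

def green_list(prev_token: str, fraction: float = 0.5) -> set[str]:
    """Derive a pseudo-random 'green' subset of the vocabulary,
    seeded by the previous token so the split is reproducible."""
    seed = int(hashlib.sha256(prev_token.encode()).hexdigest(), 16)
    rng = random.Random(seed)
    return set(rng.sample(VOCAB, int(len(VOCAB) * fraction)))

def watermarked_choice(prev_token: str, scores: dict[str, float], bias: float = 2.0) -> str:
    """Pick the next token after nudging 'green' tokens upward.
    Over many tokens, green words appear far more often than chance."""
    greens = green_list(prev_token)
    boosted = {tok: s + (bias if tok in greens else 0.0) for tok, s in scores.items()}
    return max(boosted, key=boosted.get)

def green_fraction(tokens: list[str]) -> float:
    """Detector: count how often each token falls in the green list derived
    from its predecessor. Unwatermarked text hovers near the green-list
    fraction (0.5 here); watermarked text scores much higher."""
    hits = sum(1 for prev, tok in zip(tokens, tokens[1:]) if tok in green_list(prev))
    return hits / max(len(tokens) - 1, 1)

if __name__ == "__main__":
    # Toy "model": random preferences, so the watermark bias decides most picks.
    fake_scores = {tok: random.uniform(0.0, 1.0) for tok in VOCAB}
    text = ["the"]
    for _ in range(30):
        text.append(watermarked_choice(text[-1], fake_scores))
    print("green fraction of watermarked text:", round(green_fraction(text), 2))
```

In a scheme like this, the detector only needs the hashing rule, not the model itself, which is one reason a standalone detection tool is feasible, and also why heavy rewriting or translation (as the tricks mentioned below suggest) can wash the pattern out.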
OpenAI Weighs User Backlash and Circumvention Risks in Watermarking ChatGPT Texts
However, OpenAI is concerned that watermarking could alienate ChatGPT users. According to The Verge, nearly 30% of surveyed users told the company they would use the software less if watermarking were implemented.
The Journal reported that some employees had additional concerns, including that the watermark could be easily circumvented by tricks such as running the text back and forth between languages with Google Translate, or asking ChatGPT to add emojis and then deleting them.
Even so, employees still consider the approach effective. Given persistent user concerns, however, the article says the company feels it should explore alternatives that are "potentially less controversial among users but unproven."

