OpenAI has developed a technology capable of detecting text written by ChatGPT with 99.9% accuracy, according to The Wall Street Journal. However, the company has not released the tool publicly, fearing it could reduce user engagement with its AI products.
The Debate Within OpenAI
For about two years, OpenAI employees have been debating the release of this detection tool. The discussion revolves around balancing the company’s commitment to transparency with the desire to attract and retain users. There are also concerns that the tool could disproportionately affect non-native English speakers.
A survey of loyal ChatGPT users found that nearly a third would use the AI bot less, or stop using it entirely, if such a detection tool were released. The proposed technology embeds invisible watermarks in generated text, which OpenAI's detection system can then recognize.
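OpenAI has not disclosed how its watermarking works. As a purely illustrative sketch, one well-known approach from the research literature ("green list" watermarking) seeds a pseudo-random split of the vocabulary with the previous token: a watermarked generator prefers "green" tokens, and a detector holding the same key measures how often the text lands on them. All names and parameters below are hypothetical, not OpenAI's implementation:

```python
import hashlib

def green_set(prev_token, vocab, fraction=0.5):
    """Key-seeded pseudo-random half of the vocabulary for this position."""
    k = max(1, int(len(vocab) * fraction))
    # Rank words by a hash seeded with the previous token; the top half is "green".
    ranked = sorted(vocab, key=lambda w: hashlib.sha256(f"{prev_token}|{w}".encode()).hexdigest())
    return set(ranked[:k])

def generate_watermarked(start, vocab, length):
    """Toy 'generator' that always emits a green-listed token."""
    out = [start]
    for _ in range(length):
        out.append(min(green_set(out[-1], vocab)))  # deterministic green pick
    return out

def green_fraction(tokens, vocab):
    """Detector statistic: share of transitions landing on a green token."""
    hits = sum(cur in green_set(prev, vocab) for prev, cur in zip(tokens, tokens[1:]))
    return hits / max(1, len(tokens) - 1)
```

Watermarked output scores a green fraction near 1.0, while ordinary text hovers around 0.5; a real detector would turn this gap into a statistical confidence score. The sketch also hints at the fragility the article describes: translating or rewording the text changes the tokens and erases the signal.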
Challenges and Ongoing Discussions
Employees have raised concerns that these watermarks could be erased by simple means, such as translating the text or adding and removing emoji. OpenAI executives, including CEO Sam Altman and CTO Mira Murati, have participated in discussions about using the tool to combat AI fraud, but no final decision has been made.
In early June, OpenAI executives and researchers met to discuss the project. While they acknowledged the effectiveness of watermarking technology, the survey results showing potential user loss have caused hesitation, notes NIX Solutions.
We’ll keep you updated on any developments regarding this technology and its potential release.
It’s worth noting that Google has developed a similar watermarking technology, SynthID, for detecting text created by its Gemini AI. That technology is currently in beta testing and not yet publicly available.