An old technique may help combat the biggest problem with AI-generated text: fraud.
How the "Watermarking Language Models" technique and detectors like GPTZero could help identify text written with ChatGPT
AI-written text can be useful, but it is also open to malicious abuse: faked or misattributed political statements and cheating in school, for example. And detecting machine-written text is tricky. Even OpenAI, the company that built DALL-E and ChatGPT, can identify AI-written text only 26% of the time, and that's where watermarking comes in.
“In my opinion, the biggest danger with AI-generated text is the potential for it to be misused or exploited by unscrupulous actors. Watermarking can help mitigate this risk, as it makes it easier to trace the source of a text and take appropriate action if necessary,” Ataur Rahman, CEO of AI assistant company GetGenie, told Lifewire via email.
Unlike image watermarks, which are designed to be obvious and shame anyone who tries to use the image without permission, AI text watermarks use hidden patterns encoded into the text itself. These are undetectable by human readers, but can be detected by software.
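One proposed way to hide such a pattern, described in watermarking research on language models, is a "green list" scheme: at each step, a hash of the previous word pseudo-randomly marks half the vocabulary as green, and the generator prefers green words. A human reader sees ordinary text, but a detector with the secret hashing rule can count how often green words appear; unwatermarked text hits the green list about half the time, watermarked text far more often. The sketch below is a deliberately simplified toy, assuming a made-up thousand-word vocabulary and a uniform "language model" rather than a real one.

```python
import hashlib
import random

VOCAB = [f"w{i}" for i in range(1000)]  # toy stand-in for a real vocabulary

def green_list(prev_token, vocab, fraction=0.5):
    # Seed an RNG with a hash of the previous token, then mark a
    # pseudo-random half of the vocabulary as "green". The detector
    # can recompute this list because the hash rule is deterministic.
    seed = int(hashlib.sha256(prev_token.encode()).hexdigest(), 16)
    rng = random.Random(seed)
    return set(rng.sample(vocab, int(len(vocab) * fraction)))

def watermark_generate(length, vocab, seed=0):
    # Toy "language model": picks words uniformly, but restricted to
    # the green list derived from the previous word.
    rng = random.Random(seed)
    tokens = [rng.choice(vocab)]
    for _ in range(length - 1):
        greens = green_list(tokens[-1], vocab)
        tokens.append(rng.choice(sorted(greens)))
    return tokens

def green_fraction(tokens, vocab):
    # Detector: recompute each green list and count how many words
    # landed in it. ~0.5 suggests ordinary text; near 1.0 suggests
    # a watermark.
    hits = sum(
        1 for prev, tok in zip(tokens, tokens[1:])
        if tok in green_list(prev, vocab)
    )
    return hits / (len(tokens) - 1)
```

In this toy, watermarked output scores a green fraction near 1.0 while unwatermarked random text scores near 0.5, which is the statistical gap real detectors exploit; production schemes soften the bias so text quality does not suffer, and use a proper significance test rather than an eyeballed threshold.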