Ever since ChatGPT launched in November 2022, techies and hackers have been trying to circumvent its limitations and figure out what makes it tick. But jailbreaks like DAN are a moving target, and cracking an AI chatbot isn’t child’s play. Unless, of course, ChatGPT gives it all up without even being asked.
ChatGPT Reveals OpenAI’s Secret Rules: What We Discovered!
In a surprising turn of events, ChatGPT recently exposed its instruction set to a user by accident. After greeting the chatbot with a simple “Hi,” Reddit user F0XMaster was presented, right in the chat, with the full set of instructions OpenAI embeds in ChatGPT. The unsolicited instruction set contained several safety and practical guidelines for the chatbot.
Fortunately, the user was able to post the whole thing on Reddit before OpenAI fixed the leak and the instructions stopped appearing. Here are some key takeaways from what ChatGPT revealed, and what they tell us about how the chatbot handles user requests.
The information ChatGPT let slip includes basic instructions and guidelines for various tools, such as DALL-E, a browser, Python, and, oddly enough, a set of ChatGPT personalities. For the sake of brevity, we’ll only highlight the most notable bits here; you can read the full set of instructions in F0XMaster’s Reddit post.
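For readers curious about the mechanics: instructions embedded this way are what developers call a system prompt. In OpenAI’s public Chat Completions API, the equivalent text is passed as a “system” message that sits ahead of the user’s conversation and steers the model’s behavior. The minimal sketch below illustrates the idea; the placeholder instruction text and model name are ours for illustration and are not the leaked prompt.

```python
# Minimal sketch of how system-level instructions work in OpenAI's
# Chat Completions API (openai Python SDK v1). The instruction text
# below is an illustrative stand-in, not ChatGPT's actual prompt.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SYSTEM_INSTRUCTIONS = (
    "You are ChatGPT, a large language model trained by OpenAI. "
    "Follow these tool guidelines: ..."  # placeholder, not the real text
)

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        # The "system" message plays the same role as the embedded
        # instructions the Reddit user saw: it is prepended to the
        # conversation before the user ever types anything.
        {"role": "system", "content": SYSTEM_INSTRUCTIONS},
        {"role": "user", "content": "Hi"},
    ],
)
print(response.choices[0].message.content)
```

In the consumer ChatGPT product, OpenAI injects this kind of prompt server-side, which is why the user never sees it under normal circumstances, and why having the model recite it back after a bare “Hi” was such an unexpected slip.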