The March 2023 ChatGPT Bug: A Platform‑Level Data Leak
On 20 March 2023, a bug in an open-source Redis library caused ChatGPT to leak user chat histories and payment information, exposing how a single dependency vulnerability can compromise data at scale.
Key Facts
- Bug in redis‑py open-source library corrupted connection pools
- Users saw other users' chat titles and first messages
- Payment data of ~1.2% of ChatGPT Plus subscribers was exposed
- Exposed data included names, email addresses, payment addresses, credit‑card type, expiry date and the last four digits of card numbers
- Led to Italy temporarily banning ChatGPT in March 2023 and, in December 2024, a €15 million fine for OpenAI
Background
On 20 March 2023 OpenAI briefly shut down ChatGPT after discovering that a bug in the redis‑py open‑source library corrupted connection pools and returned data to the wrong users. During the outage, some users saw other users' chat titles and first messages in their history. More seriously, OpenAI's post‑mortem revealed that the bug also exposed payment details for about 1.2% of ChatGPT Plus subscribers.
What Happened
The bug was triggered when a server‑side change caused a spike in canceled Redis requests. If a request was canceled after being sent but before its response was read, the redis‑py client could return the connection to the pool in a corrupted state; the next request on that connection then received the previous, unread response, making it possible for a user to see another active user's chat title or first message.
If a ChatGPT Plus subscriber opened their subscription management page during a specific nine‑hour window on 20 March 2023, or received a subscription confirmation email sent during that window, they could also see another user's first and last name, email address, payment address and credit‑card type, along with the last four digits and expiration date of the card. Full card numbers were not exposed.
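The failure mode can be illustrated with a toy model. The sketch below is not redis‑py's actual code: the `Connection` and `Pool` classes and the in‑process "server" are assumptions made for illustration. It shows how a request canceled after sending but before reading leaves its response pending, so the next user who borrows that connection reads someone else's data:

```python
from collections import deque

class Connection:
    """Toy stand-in for a pooled client connection: the server answers
    requests in FIFO order, and each response must be read exactly once."""
    def __init__(self):
        self._responses = deque()

    def send(self, key):
        # Simulate the server echoing back the value stored under `key`.
        self._responses.append(f"data for {key}")

    def read(self):
        # Reads the oldest pending response, whoever it was meant for.
        return self._responses.popleft()

class Pool:
    """Minimal connection pool with a single shared connection."""
    def __init__(self):
        self._free = [Connection()]

    def acquire(self):
        return self._free.pop()

    def release(self, conn):
        # Bug: the connection is recycled even if a response is unread.
        self._free.append(conn)

pool = Pool()

# User A's request is canceled after sending but before reading.
conn = pool.acquire()
conn.send("user_a:chat_titles")
pool.release(conn)  # returned "dirty", with A's response still pending

# User B reuses the same connection for an unrelated request.
conn = pool.acquire()
conn.send("user_b:chat_titles")
leaked = conn.read()
print(leaked)  # -> "data for user_a:chat_titles"
```

A hardened client avoids this by discarding any connection whose request was canceled mid‑flight, or by draining and validating pending responses before the connection is reused.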
A seemingly minor open-source library bug caused a high-impact data leak affecting millions of users. The incident demonstrated that AI platform security is only as strong as its weakest dependency.
Response and Consequences
OpenAI took ChatGPT offline while patching the bug, fixed the underlying Redis client issue and added redundant checks to ensure cached data is returned to the correct user. The company notified affected users and apologized publicly.
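One way such a redundant check can work (a hypothetical sketch; `fetch_cached`, the cache layout and the `owner` field are assumptions, not OpenAI's implementation) is to stamp every cache entry with its owner and verify it against the authenticated user on every read:

```python
def fetch_cached(cache: dict, key: str, authenticated_user: str):
    """Return a cached payload only if it belongs to the requesting user."""
    entry = cache.get(key)
    if entry is None:
        return None
    # Redundant, last-line-of-defence check: even if the caching layer
    # misroutes a response, never serve data stamped with another owner.
    if entry["owner"] != authenticated_user:
        raise PermissionError("cached data does not belong to this user")
    return entry["payload"]

cache = {"chat:42": {"owner": "alice", "payload": "Trip planning notes"}}

print(fetch_cached(cache, "chat:42", "alice"))  # -> Trip planning notes
# fetch_cached(cache, "chat:42", "bob") would raise PermissionError
```

The point of the design is defence in depth: the ownership check assumes the layer below it can fail, so a cache bug degrades into a dropped response rather than a data leak.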
Nevertheless, the incident had broader implications. Italy's data‑protection authority cited this bug, along with concerns about lawful data processing and age verification, when it temporarily banned ChatGPT in March 2023, and in December 2024 it fined OpenAI €15 million.
Lessons for Enterprise AI Governance
The event shows that AI providers must rigorously test third‑party dependencies and their failure modes, especially under load and request cancellation.
- Test dependencies rigorously. Third-party libraries are attack surfaces. AI platforms must stress-test every dependency under failure conditions.
- Implement redundant validation. Providers should cross-check cache data against authenticated user IDs before sending responses.
- Prepare incident-response and disclosure plans. OpenAI's rapid shutdown, patch, and public post-mortem limited the damage; companies should establish notification and transparency practices before launching AI products, not after an incident.
- Monitor regulatory implications. Italy's temporary ban and subsequent €15 million fine show that data breaches in AI services can draw regulatory consequences well beyond those of a typical software bug.
Prunex provides continuous monitoring of AI system outputs for PII and sensitive data exposure. With policy-based enforcement, automated redaction, and comprehensive audit logging of every data flow, organisations can detect and prevent data leakage before it reaches end users—regardless of the underlying platform.
Prevent AI data leakage in your organisation
See how Prunex helps enterprises enforce policy and maintain compliance across every AI interaction.
Request a Demo →