Samsung's Employee Data Leaks to ChatGPT
In April 2023, Samsung employees inadvertently leaked confidential corporate information into ChatGPT in three separate incidents, sending proprietary semiconductor source code and an internal meeting recording to a third-party AI system with no way to retrieve or delete them.
Key Facts
- Three separate data leakage incidents within weeks of allowing ChatGPT use
- Proprietary semiconductor source code sent to ChatGPT for debugging
- Internal meeting recording transcribed by ChatGPT
- Leaked data could be incorporated into ChatGPT's training pipeline, with no mechanism to retrieve or delete it
- Samsung subsequently restricted AI tool usage and limited prompts to 1024 bytes
Background
In April 2023, multiple media outlets reported that Samsung Electronics employees inadvertently leaked confidential corporate information into ChatGPT. The Economist Korea and other sources described three separate incidents that occurred within weeks of the company permitting employees to use generative AI tools.
The incidents triggered widespread discussion about the risks of enterprise employees using public AI tools without proper governance controls.
What Happened
Faulty source code pasted for debugging
An engineer copied faulty semiconductor database source code into ChatGPT and asked it to fix the errors. The proprietary code, part of Samsung's core semiconductor intellectual property, was transmitted to OpenAI's servers.
Equipment test code shared for optimisation
A second employee pasted code used to identify defects in Samsung manufacturing equipment, asking ChatGPT to optimise the test sequence. Again, confidential engineering data left Samsung's control.
Internal meeting recording transcribed
A third employee asked ChatGPT to generate meeting minutes from a smartphone recording of an internal meeting. The content of the confidential meeting was transmitted to a third-party AI system.
In each case, proprietary code or confidential meeting content was transmitted to OpenAI's servers, where it could be incorporated into the model's training data and potentially surface in responses to other users.
Employees used generative AI tools for productivity without understanding the privacy implications.
Response and Consequences
After the incidents, Samsung reportedly limited ChatGPT prompts to 1024 bytes and considered disciplinary measures. The company also imposed new policies restricting employees from entering sensitive information into public AI tools.
The episodes underscored warnings from security analysts: when users share data with consumer AI assistants, that information may be retained and used for model training, and there is no reliable way to delete specific prompts after the fact.
Lessons for Enterprise AI Governance
Samsung's experience illustrates the human factor in AI data leaks. This is not a niche risk—a 2025 enterprise security report found that 77% of employees paste sensitive company data into generative AI tools, typically from personal, unmanaged accounts.
- Provide clear guidelines. Organisations must define what can and cannot be shared with AI systems—and communicate this clearly to every employee.
- Implement technical controls. Deploy data-loss-prevention tools that block or redact sensitive information before it is sent to AI systems; a minimal sketch of this pattern follows the list.
- Monitor prompt activity. Security teams should have visibility into what data is being sent to AI tools across the organisation.
- Offer secure alternatives. Outright bans tend to push usage onto personal devices and unmanaged accounts. Enterprise AI deployments with data-retention controls and governance layers are more sustainable.
- Train employees. Regular training on AI data privacy should be mandatory for anyone using generative AI tools in their work.
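As a rough illustration of the block-or-redact control above, the sketch below gates prompts through a regex filter before they leave the corporate boundary. Everything here is assumed for illustration: the patterns, the rule names, and the guarded_send entry point are hypothetical, and a production DLP tool would rely on trained classifiers and document fingerprinting rather than a few regexes.

```python
import logging
import re

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("ai-dlp")

# Illustrative patterns only: real DLP products combine trained classifiers,
# document fingerprinting, and context rules, not a handful of regexes.
SENSITIVE_PATTERNS = {
    "api_key": re.compile(r"\b(?:sk|key)-[A-Za-z0-9]{16,}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.\w[\w.]*\b"),
    "code_like": re.compile(r"\b(?:def|class|import)\s|#include\b"),
}

def inspect_prompt(prompt: str) -> tuple[str, list[str]]:
    """Redact any matches and return the names of the rules that fired."""
    fired = []
    for rule, pattern in SENSITIVE_PATTERNS.items():
        if pattern.search(prompt):
            fired.append(rule)
            prompt = pattern.sub(f"[REDACTED:{rule}]", prompt)
    return prompt, fired

def guarded_send(prompt: str) -> str | None:
    """Gate a prompt before it crosses the network boundary to an AI tool."""
    clean, fired = inspect_prompt(prompt)
    if fired:
        log.warning("prompt blocked, rules fired: %s", fired)  # audit trail
        return None  # block outright; redact-and-forward is the other option
    return clean  # nothing detected: safe to forward

if __name__ == "__main__":
    print(guarded_send("Please fix this: def check_wafer(lot): ..."))  # None
    print(guarded_send("Summarise the key points of our public blog post"))
```

Blocking outright, as here, is the conservative choice; forwarding the redacted text instead keeps employees productive at the cost of weaker guarantees.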
Prunex acts as a governance layer between enterprise users and AI systems. Policy-based controls detect and block sensitive data—source code, PII, confidential documents—before it reaches external AI systems. Every interaction is inspected, evaluated against your policies, and logged for audit. Employees can still use AI tools productively, but within enforced boundaries that protect your organisation's data.
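For readers who want to picture the inspect-evaluate-log loop such a governance layer runs, here is a deliberately generic sketch. It is not Prunex's API; the Decision type, the keyword rule, and the JSONL audit file are hypothetical stand-ins for a real policy engine.

```python
import json
import time
from dataclasses import asdict, dataclass

# Hypothetical policy outcomes: allow the prompt through, or block it.
ALLOW, BLOCK = "allow", "block"

@dataclass
class Decision:
    action: str      # ALLOW or BLOCK
    rule: str        # which policy rule produced the outcome
    user: str
    timestamp: float

def evaluate(prompt: str, user: str) -> Decision:
    """Stand-in policy engine; a real one runs classifiers, not keywords."""
    if "confidential" in prompt.lower():
        return Decision(BLOCK, "keyword:confidential", user, time.time())
    return Decision(ALLOW, "default-allow", user, time.time())

def gateway(prompt: str, user: str) -> str | None:
    """Inspect, evaluate, log, then act: every request takes this path."""
    decision = evaluate(prompt, user)
    with open("ai_audit.jsonl", "a") as f:       # append-only audit log
        f.write(json.dumps(asdict(decision)) + "\n")
    return None if decision.action == BLOCK else prompt

if __name__ == "__main__":
    print(gateway("Summarise these confidential meeting notes", "jlee"))
    print(gateway("Draft a polite out-of-office reply", "jlee"))
```

The important property is that logging happens before the allow/block branch, so even permitted interactions leave an audit record.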
Prevent AI data leakage in your organisation
See how Prunex helps enterprises enforce policy and maintain compliance across every AI interaction.
Request a Demo →