
Summary

  • Generative AI tools like ChatGPT and Bard are rapidly gaining popularity, with 78% of knowledge workers using them.
  • Only 1 in 3 organizations have defined AI usage policies, leaving significant security gaps.
  • Shadow AI is growing—employees use generative AI tools outside of IT oversight, increasing data leakage risks.
  • Accidental insiders may unknowingly input sensitive data, feeding AI platforms with risks for future breaches.
  • Malicious insiders and compromised accounts can intentionally expose data, even learning to deploy malware using AI.
  • Banning generative AI is not the solution; organizations should use specialist DSPM tools for AI to mitigate data leakage risks.

Generative AI tools like ChatGPT and Bard are taking the world by storm—faster than security teams can keep up. Today, 78% of knowledge workers use third-party GenAI tools, yet just a third of organizations say they have defined guidelines for AI usage in place.

As more and more corporate data flows into third-party generative AI platforms, the security risks are becoming impossible to ignore. These tools are expanding the very fabric of the insider threat surface, opening up new avenues for data leakage and theft.

Generative AI: A new type of insider threat 

When security professionals think of generative AI security risks, their first thoughts typically center around threat actor manipulation: jailbreaking techniques, next-level phishing scams and other generative AI attack vectors. However, less thought is given to how these tools represent a new kind of insider threat.

Just take Samsung. In 2023, the company was forced to ban generative AI tools internally after employees input sensitive source code into conversations with ChatGPT.

Why is this a problem? Because of how large language models (LLMs) work. Prompts, conversations and files that employees share with a generative AI tool cannot be considered private: these models use ingested data points for training and refinement.

In essence, this means any sensitive data shared with an LLM could later be regurgitated in a response to another user. And it’s not just a theoretical problem, either. In December, security researchers managed to manipulate ChatGPT into leaking sensitive information through targeted prompts—highlighting that, if an employee has shared company data with ChatGPT, a data breach could truly be just a prompt away.

Compounding matters is the fact that generative AI usage is often unsanctioned—out of the IT department’s line of sight and control. It’s a phenomenon known as shadow AI: employees increasingly use tools like ChatGPT and DeepSeek through personal accounts, uploading company data to boost their productivity, but with none of the security protections and controls that come with sanctioned IT applications.
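Shadow AI is hard to eliminate outright, but the leakage risk it creates can be reduced by scrubbing sensitive values out of prompts before they ever reach a third-party model. The snippet below is a minimal, hypothetical sketch of that idea; the regex patterns are illustrative assumptions only, and real detection engines rely on much richer techniques (NLP, context, trained classifiers) than a handful of regular expressions.

```python
import re

# Illustrative patterns only, chosen for this sketch; production DLP tooling
# combines NLP, context and trained classifiers rather than bare regexes.
SENSITIVE_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "api_key": re.compile(r"\bsk_[A-Za-z0-9_]{16,}\b"),
}

def scrub_prompt(prompt: str) -> str:
    """Redact sensitive matches before the prompt leaves the organization."""
    for label, pattern in SENSITIVE_PATTERNS.items():
        prompt = pattern.sub(f"[REDACTED_{label.upper()}]", prompt)
    return prompt

print(scrub_prompt("Summarize: contact jane@acme.example, key sk_live_abcdef1234567890"))
# Summarize: contact [REDACTED_EMAIL], key [REDACTED_API_KEY]
```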

Generative AI can exacerbate the human factor

Generative AI is its own kind of novel insider threat. But it also creates more opportunities for human insiders to compromise data security. 

As we’ve covered, accidental insiders may unwittingly input sensitive data—like customer records, source code, and intellectual property—into these tools. When they do, they feed into the risk of the generative AI insider threat, giving these platforms more potential data to resurface down the line. 

On top of that, there’s also the risk of malicious insiders and compromised accounts. Disgruntled insiders, for example, may deliberately share sensitive information with third-party generative AI tools to trigger compliance violations. Using tools like GhostGPT or FraudGPT, they could even learn how to conduct complex attacks on your organization, using their foothold to deploy malware or exfiltrate sensitive data. While these kinds of attacks once required considerable knowledge and skill, generative AI tools have democratized malware creation.

Then, of course, there is the threat of malicious actors compromising employee-owned generative AI accounts. Tools like ChatGPT and Bard often require only a password to access. With so many employees reusing passwords (and millions of credentials available on the dark web), it is all too likely that, at some point, a threat actor will break into an employee account. From there, they can harvest any sensitive data shared during interactions—data that could be used to fuel further attacks or open doors to new avenues of intrusion within the organization.

Mitigating the generative AI insider threat

While banning generative AI might seem like the simplest solution to addressing insider threats, it’s far from the most effective. History has shown us, especially with the rise of SaaS apps, that when organizations ban useful tools in the workplace, employees often find ways to bypass those restrictions.

The smarter, more practical approach is to implement tools that minimize the risk of data exposure within platforms like ChatGPT and Bard. That’s where Polymer comes in.

Polymer’s data security posture management (DSPM) solution is designed to protect data privacy and prevent sensitive information from being exposed in generative AI tools.

Here’s how Polymer helps you combat the insider threat in the age of generative AI applications:

  • Bidirectional monitoring: Leveraging natural language processing and automation, Polymer scans both user prompts and AI-generated responses in real time. When it detects sensitive data, Polymer automatically takes action based on the contextual use policies set by your security team, ensuring immediate remediation (a simplified sketch of this pattern follows this list).
  • E-discovery for GenAI interactions: Polymer streamlines compliance and auditing processes by enabling your team to quickly search and retrieve generative AI interactions when facing e-discovery requests, audits, or compliance reviews.
  • Human risk management: Polymer prevents accidental data leakage by offering real-time human risk management. If a user violates a compliance or security policy, Polymer notifies them with a point-of-violation message, providing additional context about the violation to reinforce secure behavior.
  • Insider visibility: With robust logging and audit features, Polymer gives your security team granular visibility into employee activity in generative AI applications. This enables you to detect repeat offenders, compromised accounts, or malicious insiders before a data breach occurs.
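To make the bidirectional monitoring idea concrete, here is a simplified, hypothetical sketch of policy-driven scanning of both prompts and responses. None of the names below come from Polymer’s product; the policies, detectors and actions are assumptions for illustration only.

```python
import re
from dataclasses import dataclass

@dataclass
class Policy:
    name: str
    pattern: re.Pattern   # detector (illustrative regexes here)
    action: str           # "block", "redact", or "log"

# Hypothetical policies a security team might define.
POLICIES = [
    Policy("credit_card", re.compile(r"\b(?:\d[ -]?){13,16}\b"), "block"),
    Policy("email", re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"), "redact"),
]

def enforce(text: str, direction: str) -> str | None:
    """Scan an outbound prompt or inbound model response against policies.

    Returns the (possibly redacted) text, or None if the message is blocked.
    Every violation is logged so it can be surfaced for audits or e-discovery.
    """
    for policy in POLICIES:
        if policy.pattern.search(text):
            print(f"[audit] {direction}: {policy.name} violation -> {policy.action}")
            if policy.action == "block":
                return None
            if policy.action == "redact":
                text = policy.pattern.sub("[REDACTED]", text)
    return text

# Usage: wrap every exchange with the GenAI tool in both directions.
prompt = enforce("Draft a renewal reply to sam@corp.example", "outbound")
if prompt is not None:
    response = "..."  # the call to the GenAI provider would go here
    response = enforce(response, "inbound")
```

In practice, the same enforcement hook would also feed a central audit log and trigger point-of-violation coaching messages, as described in the bullets above.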

Secure AI usage in your organization today. Request a demo now.

Polymer is a human-centric data loss prevention (DLP) platform that holistically reduces the risk of data exposure in your SaaS apps and AI tools. In addition to automatically detecting and remediating violations, Polymer coaches your employees to become better data stewards. Try Polymer for free.
