The Chinese firm DeepSeek has officially entered the AI arena. When the company released its latest model in January of this year, its app reached 16 million downloads in just 18 days, nearly double the 9 million downloads OpenAI’s ChatGPT achieved at launch.
Chances are, DeepSeek is already being used in your workplace, quietly boosting employee productivity and efficiency, even though it’s not sanctioned by your IT department. That raises the question: is the application secure?
Let’s find out.
What is DeepSeek?
DeepSeek is an AI development firm headquartered in China. It rapidly distinguished itself in the landscape of large language models (LLMs) by embracing an open-source philosophy. The company released its first model in November 2023 and has since introduced a series of enhancements at a lightning-fast pace. It was the release of its R1 reasoning model in January 2025 that catapulted DeepSeek into the international spotlight.
The R1 model is a notable advancement in AI deduction capabilities. Trained through reinforcement learning, it employs a technique known as chain-of-thought (CoT) reasoning. This lets the model pause, reflect, and self-correct before providing a response—leading to answers that are often more logical, reasonable, and accurate.
While ChatGPT can engage in CoT reasoning when prompted, DeepSeek’s R1 was specifically optimized to reason in this way, making it a pioneering program. On top of that, the company’s founders say DeepSeek cost a fraction of what OpenAI spent to build its models: just $6m, compared to the “over $100m” cited by OpenAI CEO Sam Altman.
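For readers who want to see what this looks like in practice, here is a minimal sketch of querying an R1-style reasoning model through the OpenAI-compatible Python client. The endpoint URL and model name ("deepseek-reasoner") are assumptions based on DeepSeek’s public developer documentation and may change, so treat this as illustrative rather than definitive.

```python
# Minimal sketch: calling a chain-of-thought (CoT) reasoning model.
# Assumptions: DeepSeek exposes an OpenAI-compatible endpoint at api.deepseek.com
# and names its R1 reasoning model "deepseek-reasoner" -- verify against current docs.
from openai import OpenAI

client = OpenAI(
    api_key="YOUR_API_KEY",            # placeholder; never hard-code real keys
    base_url="https://api.deepseek.com",
)

response = client.chat.completions.create(
    model="deepseek-reasoner",         # R1-style model that reasons step by step
    messages=[
        {
            "role": "user",
            "content": "A train departs at 3:40pm and the journey takes 95 minutes. When does it arrive?",
        }
    ],
)

# Unlike a standard chat model, a reasoning model works through intermediate steps
# internally before committing to the final answer it returns here.
print(response.choices[0].message.content)
```

The call itself looks like any other chat completion; the difference happens server-side, where the model deliberates through intermediate steps before answering.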
DeepSeek: The security risks
DeepSeek is certainly making waves around the world. However, like all digital tools, it comes with its fair share of security risks that employees and security teams must be aware of. Here are the most significant ones.
Bi-directional data leakage
Generative AI systems like DeepSeek rely on massive amounts of input data to refine and improve their performance. Every query and piece of information entered into these systems contributes to their ongoing training. This process is essential for these tools to get smarter over time. But it also leads to security risks. Namely, sensitive data entered into the system can inadvertently become accessible to others.
This issue is particularly worrying in the era of shadow AI, where employees adopt generative tools on their own, often off IT’s radar. Without clear organizational policies or proper training, they may not realize that the information they input into DeepSeek could be shared or exposed to others.
But the problem isn’t just sensitive data entering these systems; it’s that the data can be regurgitated. Tools like DeepSeek don’t truly understand context or compliance, meaning they can surface sensitive data in completely unrelated responses. Regulated information could end up in the hands of unauthorized users, not just inside your own organization but in entirely unrelated ones.
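To make the mitigation side concrete, here is a minimal, hypothetical sketch of the kind of pre-submission check that can catch obvious sensitive data before a prompt ever leaves the organization. The patterns and function names are illustrative only; real DLP and DSPM tooling relies on far broader detection (classifiers, exact-data matching, context analysis) than a couple of regular expressions.

```python
import re

# Hypothetical, illustrative patterns only -- not a production detection set.
SENSITIVE_PATTERNS = {
    "email address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "card-like number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def check_prompt(prompt: str) -> list[str]:
    """Return the types of sensitive data detected in a prompt."""
    return [label for label, pattern in SENSITIVE_PATTERNS.items() if pattern.search(prompt)]

prompt = "Summarize this: Jane Doe (jane.doe@acme.com) paid with card 4111 1111 1111 1111."
findings = check_prompt(prompt)
if findings:
    # Block, redact, or warn before the prompt reaches an external AI tool.
    print("Blocked: prompt contains", ", ".join(findings))
else:
    print("Prompt is clear to send.")
```

In practice this kind of check needs to sit inline with the tools employees already use, which is exactly the gap the human risk management and DSPM controls discussed later in this article are designed to fill.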
Harmful content
A recent study from the University of Bristol found that CoT models like DeepSeek are more likely to generate harmful content than standard LLMs. And it’s not just that they respond poorly; they respond more effectively, in the worst possible way. Their step-by-step reasoning makes their answers more detailed and accurate, and in the wrong hands, more dangerous.
In one case, DeepSeek offered detailed instructions on how to commit a crime and avoid getting caught. Not because it was hacked or broken, but because that’s what it was asked—and it’s designed to help.
Of course, the main concern isn’t that your employees will use DeepSeek maliciously. It’s that AI tools can be hijacked. If an attacker gains access—through a cleverly crafted prompt or a compromised account—they can weaponize the model to reveal sensitive or regulated data.
Platform vulnerabilities
No software-as-a-service platform is bulletproof. And the bigger the tech company, the bigger the target. Malicious actors are always looking for weak spots—and sometimes, tech firms unintentionally leave the door open.
That’s exactly what happened with DeepSeek earlier this year. In January 2025, security researchers discovered a publicly accessible back-end database leaking sensitive data onto the web. The exposed database included DeepSeek chat histories, API keys, log streams, and operational details.
To its credit, DeepSeek took the database offline quickly after being notified. But it’s not clear how long it was exposed, or who might have accessed it in the meantime. More troublingly, it’s entirely possible a similar incident could happen again.
Open-source structure
DeepSeek’s open-source foundation is one of its most talked-about features. It offers flexibility and accessibility—qualities that appeal to developers, researchers, and enterprise teams alike. But that openness comes with significant security trade-offs.
Because the model is publicly available, anyone can download, modify, and redeploy it. That includes not just customizing how it performs, but altering—or outright removing—its built-in safety mechanisms. This means modified DeepSeek variants could be used to create harmful outputs—malware, phishing emails, and so forth.
While other AI platforms automatically prohibit this kind of content, DeepSeek doesn’t seem to. As a study by Cisco shows, DeepSeek failed to block a single high-risk prompt during testing. For comparison, OpenAI blocked 86% of the same prompts, while Google’s Gemini blocked 64%. Additional research shows DeepSeek is 11 times more likely to be successfully exploited by malicious actors than other AI models.
Securing generative AI usage in the enterprise
Banning DeepSeek might seem like the simplest solution, but it’s not an effective one. Employees will almost always find workarounds, especially if they believe a tool helps them work more efficiently.
Instead, organizations need to acknowledge that AI tools like DeepSeek are already part of the workflow—and focus on enabling their safe, secure use.
Here’s how:
- Start by establishing a clear, acceptable use policy. Set straightforward guidelines on what kinds of data can and can’t be entered into AI systems. For example, make it explicit that sensitive information—such as customer data, financials, or internal documentation—should never be shared with these platforms.
- Embrace human risk management for AI. Equip your teams with real-time, in-the-moment training that reinforces safe AI practices. Look for tools that embed nudges and reminders directly into employees’ workflows, helping them make better decisions as they interact with AI.
- Combine education with enforcement by implementing bi-directional data security posture management (DSPM). These tools monitor for data leakage both into and out of applications like DeepSeek and ChatGPT, providing an added layer of protection against human error.
Incorporate AI into your workflows and maintain data security with Polymer. Our DSPM solution combines next-gen data loss prevention (DLP) with human risk management, empowering organizations to accelerate AI readiness securely.