DeepSeek: Risk or Opportunity?
DeepSeek is fueling the AI debate. While proponents emphasize its performance and cost advantages, critics warn of opaque data protection practices.
The hype surrounding AI has reached new heights with DeepSeek. In January 2025, the company introduced its AI model, "DeepSeek R1," which competes with well-known AI systems such as ChatGPT and Claude. DeepSeek is financed by the Chinese hedge fund High-Flyer.
The media is abuzz with news about this AI from China. "German data protectionists want to scrutinize DeepSeek," headlines Tagesschau.de. "Data leak discovered at Chinese AI start-up," warns Handelsblatt. "Did the new Chinese AI steal en masse from ChatGPT?" asks Bild.de. Meanwhile, SR.de reports, "DFKI warns of risks associated with Chinese AI DeepSeek." Sascha Lobo writes in his Spiegel column, "DeepSeek is the best AI news for us since ChatGPT."
The performance of DeepSeek's AI model, combined with its lower costs, has even led to a decline in well-known technology stocks. Market values for companies such as Nvidia and Broadcom fell by up to 30% at times. In the U.S., DeepSeek's app climbed to the top of the most downloaded iPhone apps in January 2025.
How do IT companies assess DeepSeek's new AI offering?
Dominic Rizzo, Portfolio Manager at T. Rowe Price
"At first glance, R1 may seem like a negative development for AI infrastructure due to spending optimizations, but its medium-term impact remains uncertain. U.S. AI labs could theoretically incorporate many of DeepSeek's computational efficiencies to improve performance while maintaining current spending levels. DeepSeek V3's $5.6 million cost excludes various other expenses, and there is uncertainty regarding the number of GPUs Chinese labs actually possess. On CNBC, a U.S. AI company CEO mentioned that DeepSeek had around 50,000 H100 GPUs, equating to an estimated expenditure of $1.5 billion. Additionally, these GPUs would include a 'sourcing premium.'"
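The $1.5 billion figure quoted above is consistent with a simple unit-price calculation. A minimal sketch, assuming a per-H100 price of roughly $30,000 (the quote does not state one, and the mentioned "sourcing premium" is ignored here):

```python
# Back-of-the-envelope check of the GPU expenditure estimate.
NUM_GPUS = 50_000              # H100 count cited on CNBC
PRICE_PER_H100_USD = 30_000    # assumed unit price, excluding any sourcing premium

total_usd = NUM_GPUS * PRICE_PER_H100_USD
print(f"Estimated hardware spend: ${total_usd / 1e9:.1f} billion")  # → $1.5 billion
```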
Satnam Narang, Senior Staff Research Engineer, Tenable
"The release of DeepSeek-V3 and the more powerful DeepSeek-R1 as open source makes LLMs accessible to virtually everyone. The issue, however, is that unlike closed-source models, which operate within defined guardrails, open-source LLMs are more vulnerable to abuse. We don't yet know how quickly cybercriminals will exploit DeepSeek's models. If past trends are any indication, the race is already underway."
Chester Wisniewski, Global Field CTO, Sophos
"More pressing for organizations is that DeepSeek will likely be adopted by various products and companies due to its cost-effectiveness, potentially posing significant privacy risks. As with any AI model, organizations must conduct thorough risk assessments, extending to products and suppliers that may incorporate DeepSeek or similar LLMs. Companies must be able to make informed decisions based on expert analysis."
Ismet Koyun, CEO and Founder, KOBIL Group
"DeepSeek proves that computing power is not the sole determinant of AI capabilities. This Chinese startup has upended conventional wisdom. DeepSeek is not only openly available to everyone but also delivers performance comparable to established AI models like ChatGPT while requiring significantly less computing power. For years, European companies have cited lower budgets as an excuse for lagging behind U.S. AI giants. DeepSeek proves that narrative wrong. China is demonstrating its ability to develop cutting-edge technology, and we must take it seriously."
Darren Guccione, CEO and Co-Founder of Keeper Security
"While AI platforms like DeepSeek represent a step forward, organizations must carefully consider the risks, particularly concerning platforms with less transparent data access and monitoring. Entering sensitive company data into such systems could expose critical information to government-controlled surveillance or misuse, effectively creating a Trojan horse for an organization and its employees. This has major implications for BYOD (Bring Your Own Device) policies and user-level security. Employees could download DeepSeek on personal devices, which might then interact with company websites, applications, and systems. If not managed properly, the risk to data privacy and security could be exponential."
Adrianus Warmenhoven, Cybersecurity Expert at NordVPN
"DeepSeek's privacy policy, available in English, explicitly states that user data, including conversations and generated responses, is stored on servers in China. This raises significant concerns, not only due to data collection practices—which include user-shared information and external sources—but also because of the risks associated with storing such data in a country with different privacy and security standards. Users must be aware that any data shared with DeepSeek may be subject to government access under China's cybersecurity laws, which require companies to provide authorities with data upon request."
Dan Schiappa, Chief Product and Services Officer at Arctic Wolf
"Many people are already worried about the amount of data accessible to social media platforms, as seen in recent rulings regarding TikTok. The concerns around DeepSeek go even further, particularly regarding its training data. Chinese foundation models, trained on vast datasets, are being used for a wide range of applications. Given that DeepSeek has already restricted new registrations following a cyberattack, questions arise about whether the company has adequate security measures in place to protect user privacy. Furthermore, China has a history of intellectual property theft and imitating U.S. and European technologies."
Marc Benioff, CEO, Salesforce
"DeepSeek is now the number one app in the App Store, surpassing ChatGPT—without NVIDIA supercomputers or $100 million in funding. The real treasure in AI is no longer the user interface or the model itself; these have become commodities. The true value lies in data and metadata—the oxygen fueling AI's potential. The wealth of the future? It lies in our data. Deep gold."
Gavin Baker, CIO & Managing Partner, Atreides Management
"DeepSeek R1 is real, with significant nuances. Most importantly, R1 is far cheaper and more efficient than previous models, and not just because of the $6 million training cost. R1 is 93% cheaper to use than o1 per API call, can run locally on a high-end workstation, and doesn't appear to have hit any rate limits, which is remarkable. Simple math suggests that each billion active parameters in FP8 requires 1GB of RAM, meaning R1 needs about 37GB. Batching significantly reduces costs, and increased processing power improves token generation speeds, giving cloud inference an edge.
I would also highlight the geopolitical dynamics at play here. It’s no coincidence that this launch happened right after 'Stargate.' RIP, $500 billion—we hardly knew you..."
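Baker's "simple math" follows from FP8 storing one byte per parameter. A short sketch of that arithmetic, assuming R1 activates roughly 37 billion parameters per token (its mixture-of-experts design keeps the remaining parameters inactive):

```python
# Memory arithmetic from the quote: in FP8, each parameter occupies one byte,
# so each billion active parameters needs about 1 GB of RAM for the weights.
BYTES_PER_PARAM_FP8 = 1
ACTIVE_PARAMS = 37e9  # assumed active parameters per token (MoE routing)

ram_gb = ACTIVE_PARAMS * BYTES_PER_PARAM_FP8 / 1e9
print(f"Approximate weight memory: {ram_gb:.0f} GB")  # → 37 GB
```

This is why a single high-end workstation with enough RAM can plausibly host the model, even though its total parameter count is far larger.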