This guy is very, very scared of DeepSeek and all the malicious things it could potentially do, seemingly because it's Chinese. As soon as the comments point out that ChatGPT is probably worse, he disagrees without offering any reasoning.
Transcription:
DeepSeek as a Trojan Horse Threat.
DeepSeek, a Chinese-developed AI model, is rapidly being installed into production software systems worldwide. Its capabilities are impressive: hyper-advanced data analysis, seamless integration, and an almost laughably low price. But here's the problem: nothing this cheap comes without a hidden agenda.
What’s the real cost of DeepSeek?
- Suspiciously Cheap: Advanced models like DeepSeek aren't "side projects." They take massive investments, resources, and expertise to develop. If it's being offered at a fraction of its value, ask yourself: who's really paying for it?
- Backdoors Everywhere: DeepSeek's origin raises alarm bells. The more systems it infiltrates, the more it becomes a potential vector for mass compromise. Think backdoors, data exfiltration, and remote access at scale: hidden vulnerabilities deliberately built in.
- Wide Adoption = Global Risk: From finance to healthcare, DeepSeek is being installed across critical systems at an alarming rate. If adoption continues unchecked, 80% of our systems could soon be compromised.
- The Trojan Horse Effect: DeepSeek is a textbook example of the Trojan horse strategy: lure organizations with a cheap, powerful tool, infiltrate their systems, and quietly map or control them. Once embedded, reversing the damage will be nearly impossible.
The Fairytale Isn't Real
The story of DeepSeek being a "low-cost side project" is just that: a fairytale. Technology like this isn't developed without strategic motives. In the world of cyber warfare, cheap tools often come at the highest cost.
What Can We Do?
Audit your systems: Is DeepSeek already embedded in your critical infrastructure?
Ask the hard questions: Why is this so cheap? Where’s the transparency?
Take immediate action: Limit adoption before it’s too late. The price may look attractive, but the real cost could be our collective security.
Don’t fall for the fairytale.
ROFL. He's sweating so much because DeepSeek is proving their little money-making scam shouldn't be as expensive and resource-intensive as it is! So he's out here trying to shame DeepSeek, because it's going to make investors ask a lot of hard questions and retract funding for their AI lie. If DeepSeek could burst the bubble of American-made LLMs, I'd be tickled pink. I'd naturally never use it, as LLMs are really only good for spellchecking and grammar in my opinion (they never should've strayed further than that without proper research and a true code of ethics that wouldn't be constantly overstepped).
I love how much this C-suite shitbag is malding at the moment!
Nah, LLMs have uses. As a chef, I can plug in ingredients and it will generate good combinations that help inspire me. For D&D, it can help stitch together a few spaces I didn't think of. It's good as a sounding board for my creativity.
I like the idea of it being used to help me find documents or web articles, like how Perplexity does it. Even if the AI is wrong, the article it points to is real and tangible. Something like that to help find articles in a local knowledge base for IT teams, D&D campaigns, etc. would be awesome!
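Something in that spirit doesn't need much machinery. Here's a rough, hypothetical sketch in Python (the `notes/` folder, the `.txt` layout, and the naive overlap scoring are all assumptions for illustration, not how Perplexity or any real product works): it ranks local text files against a query, so whatever it surfaces is an actual file you can open and verify yourself.

```python
# Toy local knowledge-base search: rank plain-text files by query-term overlap.
# The notes/ folder and the scoring scheme are illustrative assumptions only.
from pathlib import Path
import re


def tokenize(text: str) -> set[str]:
    """Lowercase word tokens, so matching is case-insensitive."""
    return set(re.findall(r"[a-z0-9']+", text.lower()))


def search(query: str, root: str = "notes", top_k: int = 5) -> list[tuple[float, Path]]:
    """Return the top_k *.txt files under `root` sharing the most terms with the query."""
    query_terms = tokenize(query)
    scored = []
    for path in Path(root).rglob("*.txt"):
        doc_terms = tokenize(path.read_text(errors="ignore"))
        overlap = len(query_terms & doc_terms)
        if overlap:
            # Dampen by document size so huge files don't always win.
            scored.append((overlap / (len(doc_terms) ** 0.5), path))
    return sorted(scored, reverse=True)[:top_k]


if __name__ == "__main__":
    # Example query; the results are just file paths you can open yourself.
    for score, path in search("vpn password reset procedure"):
        print(f"{score:.3f}  {path}")
```

Point it at a folder of IT runbooks or campaign notes and the worst case is an irrelevant but real document, which is exactly the "even if the AI is wrong, the article is real" property described above.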