Alibaba’s new AI coding model, Qwen3-Coder, is earning praise for its advanced performance – but also raising serious concerns about cybersecurity and national security.
Promoted as Alibaba’s most capable code-generation model to date, Qwen3-Coder uses a Mixture of Experts (MoE) architecture with 35 billion active parameters out of 480 billion total. It supports a 256,000-token context window, extendable to 1 million, and reportedly outperforms rival open models from DeepSeek and Moonshot AI.
But some experts are urging caution. Jurgita Lapienyė, Chief Editor at Cybernews, warns the tool could act as a Trojan horse in open-source clothing, enabling hidden vulnerabilities or data leaks in systems that adopt it.
“We may be sleepwalking into a future where critical systems are built with compromised code,” she said.
Why It Matters
The rise of AI in software development is already reshaping how code is written, fixed, and deployed. Qwen3-Coder, like other agentic tools, can complete complex tasks with minimal input – automating not just suggestions, but entire workflows.
However, this convenience introduces new risks:
- Subtle vulnerabilities could be intentionally introduced during code generation.
- Sensitive company data might be exposed during AI-assisted debugging or code review.
- Autonomous agents could act without clear oversight, making unauthorized changes or scanning internal systems.
Compounding these concerns is China’s National Intelligence Law, which obligates companies like Alibaba to cooperate with state requests – including requests for access to AI models or user data.