Chinese tech circles are debating a central question: whether DeepSeek V4 will be released soon, or whether the project will become China’s most significant AI misstep. Over the past year, global large language model (LLM) makers have maintained rapid update cycles, while DeepSeek’s pace has slowed for roughly 15 months, with repeated delays and growing uncertainty among developers and investors.
In the past year, major global players—including OpenAI, Anthropic and Google—have generally released new model versions every 2–3 months, sometimes monthly. These updates are used to test, verify and refine performance. By contrast, DeepSeek has slowed its large-model update cadence for about 15 months, turning what was once a leading position into a lagging one.
In early April, some Chinese media reported that DeepSeek V4 could be released within weeks. However, expectations have since been tempered, with commentators noting that the company's history of delays warrants caution.
A key datapoint cited by 36Kr traces back to January 2025, when Reuters reported that relevant authorities “encouraged” DeepSeek to use Huawei’s Ascend processors rather than continuing to rely on Nvidia after DeepSeek R1 launched. The report’s wording—“encouraged”—was described as carrying more weight than a simple suggestion.
DeepSeek is widely viewed as China’s first successful AI model to break the US technology blockade. That symbolic role elevated DeepSeek from a purely technical company into a strategic actor tied to China’s push for technological self-sufficiency.
According to 36Kr, in early 2025 DeepSeek worked to train its next-generation model using Huawei Ascend 910C chips. The effort faced major technical hurdles, including insufficient training stability, frequent system crashes in large-scale distributed scenarios, and chip-to-chip communication speeds that did not meet expectations.
Huawei engineers reportedly provided direct support at DeepSeek’s headquarters, but compatibility issues during training could not be resolved. The outcome was a compromise: DeepSeek returned to Nvidia GPUs for training, while Ascend chips were used only for inference. As described in the report, the training phase—identified as the core bottleneck—absorbed nearly a year of trial and error.
In 2026, 36Kr reported new signals around V4 development. Sources cited by the outlet say DeepSeek did not grant early access to an Nvidia-based version of the model. Instead, it prioritized giving Huawei early pre-release access so the model could be adapted to the next-generation Ascend 950PR chips and compatibility ensured in advance.
To distribute risk, the adaptation effort was reportedly synchronized with Cambricon Technologies, described as the “Nvidia of China.” Even so, the report says the technical challenges remained substantial.
36Kr attributes the main adaptation challenge to “alignment of accuracy,” meaning the model must deliver consistent results across different hardware ecosystems. Achieving that required substantial low-level code adjustments.
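The "alignment of accuracy" problem can be illustrated with a toy numerical sketch (hypothetical, not DeepSeek's code): floating-point addition is not associative, so the same reduction computed in different orders, as kernels on different chips do it, drifts slightly, and cross-hardware parity is therefore checked within a tolerance rather than bit-for-bit.

```python
import numpy as np

def max_abs_diff(logits_a, logits_b):
    """Largest element-wise deviation between two backends' outputs."""
    return float(np.max(np.abs(np.asarray(logits_a) - np.asarray(logits_b))))

# Floating-point addition is not associative, so reductions computed in
# different orders (as kernels on different hardware do) drift slightly.
rng = np.random.default_rng(0)
x = rng.standard_normal(1000).astype(np.float32)

acc = np.float32(0.0)
for v in x:                      # naive left-to-right accumulation
    acc = np.float32(acc + v)
pairwise = np.float32(x.sum())   # NumPy uses pairwise reduction internally

# Cross-hardware parity is checked within a tolerance, not bit-for-bit.
drift = abs(float(acc) - float(pairwise))
print(f"accumulation-order drift: {drift:.2e}")
```

Scaled up to billions of operations per training step across thousands of accelerators, keeping such drift from compounding is what demands the low-level kernel adjustments the report describes.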
While global leaders typically upgrade models every 2–3 months, DeepSeek’s resources were heavily directed toward adapting to domestic chips during this period. The report notes that domestic Chinese chips and Nvidia’s US chips still differ in generation-level performance, ecosystem maturity and toolchain completeness—making adaptation time-consuming and creating divergence from an earlier plan focused primarily on model performance.
In early 2026, rumors circulated that Alibaba, ByteDance and Tencent had placed orders for hundreds of thousands of Ascend 950PR chips from Huawei. One theory presented in the report is that cloud service providers may be waiting for DeepSeek V4’s test results to assess whether domestic chips can support large-scale AI training.
If DeepSeek V4 performs successfully, Huawei’s 950PR chips could move toward commercial upgrades. If results fall short, the outcome could help the industry define the current limits of domestic Chinese chips.
In line with DeepSeek CEO Liang Wenfeng's stated principle—"if expectations aren't met, there is no disclosure"—the preparation for V4 suggests the model may have passed inference-efficiency tests. If confirmed, 36Kr says it could become a milestone for China's AI industry and for progress toward national technological self-reliance.
However, independent evaluations cited by the report for March–April 2026 indicate that DeepSeek's code-generation ability was notably surpassed by Anthropic's Claude models (Opus 4.6 / Sonnet 4.6) in standard third-party benchmarks. The report also says DeepSeek's multimodal capabilities remain largely limited to text and images, lagging behind Claude and GPT in image analysis, computer use and video understanding.
In 2026, DeepSeek shifted product focus to an Agent system, described as technically more challenging. Community feedback cited by 36Kr indicates DeepSeek has engaged with leading Chinese Code Agent and Search Agent groups, but there remains a systemic gap compared with top OpenAI or Google models in multi-tool orchestration, long-chain task execution and resilience in real-world environments.
The report frames this gap as stemming from the tension between market competition and national strategy, rather than from a decline in underlying technical capability.
From late 2025, 36Kr reports that core DeepSeek personnel began to waver. Named departures include Wang Bingting (core author of the early LLM), Qu Da (core author of R1), Wei Hao-nian (OCR lead), and Nguyễn Trưng (multi-modal lead).
Beyond the personnel changes themselves, the report notes that these researchers helped build DeepSeek's technical foundation from V1 through R1, and links the departures to incentive structure. According to senior Chinese recruitment firms cited by 36Kr, competitors offered compensation 2–3 times higher, and some firms proposed total compensation in eight-figure sums.
DeepSeek, described as a startup without external funding (its parent being High-Flyer Quant), is said to be unable to match the stock-based incentives and high valuations offered by major firms such as ByteDance, Alibaba and Tencent. Liang Wenfeng is reported to have begun formalizing the company's valuation and clarifying the value of stock options to give staff more certainty. Even so, with competitors like Zhipu AI and MiniMax already listed and their stock prices rising, retention pressure remains high.
36Kr describes DeepSeek’s current position as ambiguous: commercialization is still needed, talent retention is crucial, and there is also an expectation to localize the model domestically. The report suggests the tension between these roles may explain why DeepSeek slowed over the past year.
As a result, market expectations for DeepSeek V4’s performance are being revised downward. The model may not be a “blockbuster” that shocks the global tech community immediately, but 36Kr argues it could represent a meaningful industrial milestone—showing that China’s advanced models can achieve practical usability on domestic hardware.
In that framing, the “test” of DeepSeek V4 may be especially important for the long-term direction of China’s AI sector.
