DeepSeek V4 on Huawei Ascend Is Evidence China's Chip Gap Is Closing — But Not Proof
TexTak holds [china-domestic-chip-parity] at 48%, and today's DeepSeek V4 announcement is the most significant data point we've received on this forecast in months — and also the most ambiguous. DeepSeek's release of frontier-competitive open-source models with 1M token context windows, explicitly integrated with Huawei's Ascend chips, is real evidence that China's domestic silicon can support serious AI workloads at scale. GLM-5's prior training run on 100K Ascend chips already told us scale was possible. V4's performance claims on Ascend tell us something more: that the software stack is maturing alongside the hardware. But 'supports frontier-level training' and 'reaches 80% of H100 benchmark performance' are not the same claim, and today's news demonstrates the first while leaving the second genuinely contested.
Here's the evidentiary problem we're working through in real time. DeepSeek V4's performance is being measured by DeepSeek, on benchmarks of their choosing, running on Huawei hardware with a software stack optimized for that hardware. That's not an independent benchmark. The forecast resolves on a Chinese-made AI chip reaching 80% of Nvidia H100 performance — which implies independent, apples-to-apples measurement of hardware throughput, not model-level benchmark scores that can be influenced by training choices, quantization, and inference optimization. A model that achieves strong benchmark scores on Ascend chips doesn't tell us the Ascend chip is within 80% of H100 on raw compute tasks. It tells us that with the right model architecture and software optimization, you can get competitive outputs from different hardware. That's meaningful, but it's proximate evidence, not direct evidence for our forecast target.
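To make the distinction concrete, here is a minimal sketch of what the resolution criterion actually asks for: a hardware-throughput ratio on an identical, independently measured workload. All numbers below are invented for illustration — they are not real H100 or Ascend figures — and `parity_ratio` is a hypothetical helper, not anything from MLPerf or the forecast's official resolution code.

```python
# Illustrative only: invented throughput numbers, not real measurements.
# Real resolution would use independently run MLPerf-style results on an
# identical task, not vendor-reported model benchmark scores.
H100_THROUGHPUT = 1000.0    # e.g. samples/sec on a fixed training workload
ASCEND_THROUGHPUT = 790.0   # same workload, same measurement protocol

PARITY_THRESHOLD = 0.80     # the forecast's resolution bar

def parity_ratio(candidate: float, baseline: float) -> float:
    """Hardware-level throughput ratio on an identical workload."""
    return candidate / baseline

ratio = parity_ratio(ASCEND_THROUGHPUT, H100_THROUGHPUT)
print(f"ratio = {ratio:.2f}, resolves = {ratio >= PARITY_THRESHOLD}")
# prints: ratio = 0.79, resolves = False
```

The point of the sketch is what it excludes: a model's benchmark score never appears in the calculation. Architecture choices and software optimization can lift model scores on weaker silicon without moving this ratio at all, which is exactly why V4's results are proximate rather than direct evidence.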
The Tencent and Alibaba investment reporting at a $20B+ valuation is separately significant and we're not dismissing it. That valuation, if it closes, signals that Chinese institutional capital believes DeepSeek's technical claims are credible enough to anchor a major bet. Smart money has access to more than press releases. But investor confidence in a company's capabilities is still circumstantial evidence for chip parity specifically — it could reflect confidence in DeepSeek's software and training efficiency rather than Huawei's hardware closing the gap with TSMC-manufactured silicon.
The counterargument we genuinely can't dismiss: SMIC's 7nm process constraint is a fundamental physics limitation, not a software problem. Transistor density directly affects power efficiency and compute-per-watt, and no amount of architectural optimization fully compensates for a fabrication node gap. Huawei's Ascend 910C reportedly approaches H100-class performance on some workloads, but 'approaches' in Chinese government-adjacent reporting and 'reaches 80%' in independent benchmark testing are different thresholds. EUV lithography access remains blocked, and that constraint doesn't yield to engineering creativity alone. Our 48% reflects genuine uncertainty about whether Huawei can cross the 80% threshold while SMIC remains two or more nodes behind TSMC.
What moves this forecast: independent benchmarking from a credible third party — an academic institution, a non-Chinese cloud provider, or a Western research lab — that puts Ascend 910C or its successor within 80% of H100 on standard MLPerf or equivalent metrics. That single data point would push us above 60% immediately. If instead the V4 release cycle reveals that performance is heavily dependent on DeepSeek-specific software optimizations that don't generalize to other model families, we'd take that as evidence the hardware gap is real and pull back toward 40%. The 1M token context window capability is genuinely impressive and shouldn't be dismissed — but it's the software telling us something, not the silicon speaking for itself.
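The update rule above can be written down explicitly. The three numbers (48% prior, a move above 60% on independent benchmark confirmation, a pull back toward 40% if the gains turn out to be DeepSeek-specific) come from the text; the function itself is just an illustrative encoding of that rule, not TexTak's actual scoring machinery.

```python
# Sketch of the update rule described above. The probabilities come from
# the post; the signal names are illustrative, not a real API.
PRIOR = 0.48

def updated_forecast(independent_benchmark_passes, optimizations_generalize):
    """Map the two observable signals to the forecast moves in the text.

    Each argument is True, False, or None (signal not yet observed).
    """
    if independent_benchmark_passes:        # credible third party: >=80% of H100
        return 0.60                         # "push us above 60%" floor
    if optimizations_generalize is False:   # gains are DeepSeek-specific
        return 0.40                         # "pull back toward 40%"
    return PRIOR                            # no decisive signal yet

print(updated_forecast(None, None))
# prints: 0.48  (today's position: strong but ambiguous evidence)
```

Writing it this way makes the asymmetry visible: the upward move is gated on a single crisp, external measurement, while the downward move depends on a slower, messier observation about whether V4's performance travels to other model families.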