China's AI Price War Is Reshaping the Industry: MiniMax Matches Claude at 1/20th the Cost

The 1/20th Price Tag That Changed Everything
When MiniMax released its M2.5 model on February 11, the AI industry had one of those moments where the ground shifts under your feet. Here's an open-source model from a Chinese startup that scores 80.2% on SWE-Bench Verified, trailing Anthropic's Claude Opus 4.6 (80.8%) by just 0.6 percentage points. It actually beats Opus on Multi-SWE-Bench, 51.3% to 50.3%. And on the BFCL Multi-Turn benchmark, M2.5 outscores Opus by a full 13.5 points.
The kicker? It costs one-twentieth of what Claude Opus charges. Running M2.5 continuously for an hour at full speed costs about $1. Running a coding agent 24/7 on M2.5-Lightning would cost roughly $720 per month, compared to tens of thousands for frontier proprietary models. The weights are on Hugging Face under a modified MIT License.
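The monthly figure follows directly from the hourly one. Here's the arithmetic as a quick sanity check (a sketch using the article's rough $1/hour estimate, not official MiniMax pricing):

```python
# Sanity-check the 24/7 agent cost, using the article's rough
# estimate of ~$1 per hour of continuous generation on M2.5-Lightning.
COST_PER_HOUR = 1.00        # USD, approximate
HOURS_PER_MONTH = 24 * 30   # 720 hours in a 30-day month

monthly_cost = COST_PER_HOUR * HOURS_PER_MONTH
print(f"24/7 coding agent: ~${monthly_cost:,.0f}/month")
# → ~$720/month
```

The same 720 hours at frontier proprietary rates, which run one to two orders of magnitude higher per token, is where the "tens of thousands" comparison comes from.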
This isn't one outlier. This is the new normal coming out of China.
How MiniMax Pulled It Off
The secret is architectural. M2.5 uses a Mixture of Experts (MoE) design with 230 billion total parameters, but only 10 billion are activated for any given token. Think of it as having a massive team of specialists, but only calling on the few you need for each specific task. This dramatically reduces the compute required per inference while maintaining the model's total knowledge base.
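A back-of-envelope calculation shows why the sparse design is so cheap to run (a sketch, using the standard first-order approximation that per-token compute scales with active parameters):

```python
# Per-token compute in an MoE model scales with *active* parameters,
# not total parameters (a standard first-order approximation).
TOTAL_PARAMS = 230e9   # M2.5's total parameter count
ACTIVE_PARAMS = 10e9   # parameters activated per token

compute_fraction = ACTIVE_PARAMS / TOTAL_PARAMS
print(f"Compute per token vs. a dense 230B model: {compute_fraction:.1%}")
# → about 4.3%
```

In other words, each token costs roughly what a 10-billion-parameter dense model would, while the router can still draw on 230 billion parameters' worth of specialized knowledge.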
MiniMax offers two variants: the standard M2.5, optimized for cost at $0.15 per million input tokens and 50 tokens per second, and M2.5-Lightning, optimized for speed at 100 tokens per second and double the input price, at $0.30 per million tokens. For comparison, the median price across frontier models is $0.60 per million input tokens.
The practical result is that companies running autonomous coding agents, customer support bots, or document processing pipelines can now do so at a fraction of what they'd pay for Claude or GPT-5. The economics of AI deployment just changed fundamentally.
The Bigger Picture: Five New Chinese Models in One Month
MiniMax isn't alone. CNBC reported in early March that China has released five major new AI models since the start of the year, with contributions from Tencent, Alibaba, Baidu, ByteDance, and several well-funded startups. UBS singled out MiniMax as the standout, but the whole field is moving fast.
ByteDance's Seedance 2.0 went viral with AI-generated video capabilities. Moonshot AI's Kimi 2.5 focused on coding and agentic task completion. Zhipu AI's GLM 5.0 pushed the frontier on code generation. And Alibaba's Qwen family has quietly overtaken Meta's Llama in cumulative downloads on Hugging Face, a milestone few in the West noticed.
According to a recent MIT study, Chinese open-source models have now surpassed U.S. models in total downloads globally. They've captured roughly 30% of the "working" AI market, meaning models actually deployed in production rather than sitting on leaderboards. Budget-friendly, open-source AI isn't a scrappy underdog strategy anymore. As one analyst put it, it's the new default.
DeepSeek V4: The Next Bombshell
If MiniMax shifted the ground, DeepSeek V4 could crater it. Expected to launch in the first week of March, DeepSeek's upcoming model packs a staggering 1 trillion total parameters with only 32 billion active per token. It's multimodal, handling text, images, and video. And the projected cost is $0.10 to $0.30 per million input tokens, potentially 50 times cheaper than GPT-5.
DeepSeek has already made a provocative move: it denied Nvidia and AMD early access to V4, instead giving Chinese chipmakers like Huawei a several-week optimization head start. This isn't just a technical decision. It's a geopolitical statement about the AI supply chain.
Since DeepSeek released its R1 reasoning model in January 2025, Chinese companies have repeatedly delivered models that match Western performance at a fraction of the cost. A RAND report found that Chinese AI models now cost roughly one-sixth to one-fourth of comparable U.S. systems. DeepSeek's API pricing alone came in 90-95% cheaper than OpenAI and Anthropic equivalents.
What This Means for the AI Industry
The price war has profound implications for every player in the AI ecosystem.
For OpenAI and Anthropic, the pressure is real. OpenAI just raised $110 billion in a colossal funding round, but the valuation depends on maintaining premium pricing. If open-source Chinese models continue to close the performance gap while charging 5-10% of the price, the subscription and API revenue models that justify those valuations come under serious strain.
For enterprises, this is unambiguously good news. The cost of building AI-powered products and services is plummeting. Tasks that were economically infeasible at $15-30 per million tokens become trivial at $0.15-0.30. This doesn't just reduce costs for existing use cases; it opens up entirely new categories of applications that never made financial sense before.
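To make the hundredfold gap concrete, consider a hypothetical batch workload (the document counts, token sizes, and the two per-token price points below are illustrative assumptions, not quotes from any provider's rate card):

```python
# Illustrative workload: processing 100,000 documents of ~2,000
# input tokens each, priced at both ends of the cost gap.
DOCS = 100_000
TOKENS_PER_DOC = 2_000
millions_of_tokens = DOCS * TOKENS_PER_DOC / 1e6   # 200M tokens

for label, price_per_million in [("frontier proprietary, $15/M ", 15.00),
                                 ("open-weight challenger, $0.15/M", 0.15)]:
    print(f"{label}: ${millions_of_tokens * price_per_million:,.0f}")
# → $3,000 vs. $30 for the same job
```

A $3,000 job becoming a $30 job is the difference between a budget line item that needs sign-off and one that doesn't, which is exactly how new application categories get unlocked.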
For the open-source ecosystem, Chinese labs are becoming the dominant force. The combination of technical sophistication, aggressive pricing, and genuine open-weight releases is attracting developers worldwide. The days when Meta's Llama was the unchallenged leader in open-source AI are over.
What to Watch
DeepSeek V4's actual release is the immediate catalyst. If it delivers on the trillion-parameter, $0.10-per-million-token promise, expect another wave of benchmark comparisons and pricing pressure that will ripple through the entire industry.
Longer term, watch the regulatory dimension. DeepSeek blocking Nvidia and AMD from early V4 access signals that the U.S.-China AI competition is entering a new phase where model access, not just chip access, becomes a geopolitical tool. The Secretary of Commerce's March 11 evaluation of AI regulations and the FTC's policy statement on AI could reshape how these Chinese models are deployed in Western markets.
The AI price war is here, and the companies that built their businesses on premium pricing are going to have to find ways to justify the gap. Performance alone may no longer be enough.
References
- MiniMax's new open M2.5 and M2.5 Lightning near state-of-the-art while costing 1/20th of Claude Opus 4.6 - VentureBeat
- Forget DeepSeek. China's already released 5 new AI models and UBS prefers this one - CNBC
- DeepSeek Started a Price War — Now Every Chinese AI Lab Wants In - Technology.org
- What's next for Chinese open-source AI - MIT Technology Review
- MiniMax M2.5: Built for Real-World Productivity - MiniMax