DeepSeek Accelerates R2 AI Model Launch Amid China's Full-Scale AI Commitment

BEIJING/HONG KONG/SINGAPORE, February 25, 2025 – Chinese AI startup DeepSeek is accelerating the release of its next-generation reasoning model, R2, after the success of its R1 model, whose debut triggered a selloff that wiped roughly $1 trillion off global equity markets. Originally slated for early May, R2 is now expected to arrive ahead of schedule, according to sources, and the Hangzhou-based company intends the new model to produce better code and to reason in languages beyond English.
The R1 model distinguished itself by running on less-powerful Nvidia chips while matching the performance of models that major U.S. tech companies built at far greater cost. That achievement has prompted competitors to reassess their strategies and drawn scrutiny from the U.S. government, which treats AI leadership as a national priority. At the same time, Chinese authorities and numerous companies have begun integrating DeepSeek's models into their operations, reflecting Beijing's strong backing of the company's advances.
DeepSeek's founder, Liang Wenfeng, who built his fortune through the quantitative hedge fund High-Flyer, keeps a low public profile and has given no media interviews since July 2024. Under Liang's leadership, DeepSeek runs as a research-driven operation, eschewing the hierarchical management structures common at Chinese tech firms. The company emphasizes a collaborative environment and offers competitive compensation to attract top-tier talent.
High-Flyer's substantial investments in AI research and computing infrastructure have been pivotal to DeepSeek's accomplishments. The firm invested 1.2 billion yuan in supercomputing clusters equipped with approximately 10,000 Nvidia A100 chips, facilitating large-scale AI model training. This strategic foresight proved advantageous, especially after U.S. export restrictions on advanced AI chips to China were implemented.
DeepSeek's use of cost-efficient architectures such as Mixture-of-Experts (MoE) and multi-head latent attention (MLA) has enabled it to build competitive models with far lower computational demands. Analysts estimate that DeepSeek's offerings are priced 20 to 40 times lower than comparable models from competitors such as OpenAI. This disruptive pricing has pushed industry leaders to adjust their strategies, with OpenAI and Google introducing cheaper access tiers in response.
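To illustrate why MoE lowers compute, the sketch below routes each token to only a small subset of expert networks through a top-k gate, so most parameters sit idle on any given forward pass. It is a minimal toy example assuming PyTorch, with arbitrary sizes (`d_model`, `num_experts`, `top_k`); it is not DeepSeek's actual implementation.

```python
# Minimal sketch of sparse Mixture-of-Experts (MoE) routing, for illustration only.
# Layer sizes and the top_k value are assumptions, not DeepSeek's published design.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SparseMoE(nn.Module):
    def __init__(self, d_model=64, d_hidden=256, num_experts=8, top_k=2):
        super().__init__()
        self.top_k = top_k
        # Each expert is a small feed-forward network.
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, d_hidden), nn.GELU(), nn.Linear(d_hidden, d_model))
            for _ in range(num_experts)
        )
        # The gate scores every expert for every token.
        self.gate = nn.Linear(d_model, num_experts)

    def forward(self, x):  # x: (num_tokens, d_model)
        scores = self.gate(x)                                # (num_tokens, num_experts)
        weights, indices = scores.topk(self.top_k, dim=-1)   # keep only the top-k experts per token
        weights = F.softmax(weights, dim=-1)
        out = torch.zeros_like(x)
        # Only the selected experts run for each token, which keeps per-token
        # compute far below a dense layer with the same total parameter count.
        for slot in range(self.top_k):
            for e, expert in enumerate(self.experts):
                mask = indices[:, slot] == e
                if mask.any():
                    out[mask] += weights[mask, slot].unsqueeze(-1) * expert(x[mask])
        return out


tokens = torch.randn(16, 64)        # 16 tokens, model width 64
print(SparseMoE()(tokens).shape)    # torch.Size([16, 64])
```

Because only `top_k` of the experts run per token, total parameter count can grow without a proportional increase in per-token computation, which is the kind of cost advantage such architectures target.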
Despite its growing prominence, DeepSeek has been advised by Chinese authorities to keep a low profile and avoid excessive media attention. Its models have been widely adopted across sectors in China, including government agencies and state-owned enterprises. Privacy concerns, however, have led countries such as South Korea and Italy to remove or block DeepSeek's apps from app stores in their markets.
As DeepSeek prepares to launch R2, the global AI industry is bracing for shifts in competitive dynamics and regulatory responses. The company's trajectory underscores both the pace of AI progress and the strategic weight that AI capability now carries on the international stage.
Written by: Ali Abdullah (Punjab University)