
    NVIDIA Releases OpenReasoning-Nemotron, Open-Source AI Model Distilled from China’s DeepSeek R1

    By AI Logic News | July 21, 2025

    NVIDIA has released OpenReasoning-Nemotron, a new family of powerful and efficient open-source AI models. Announced on July 19, 2025, and available globally via Hugging Face, the models set new performance records for reasoning in math, science, and code.

    NVIDIA developed four models by distilling capabilities from DeepSeek’s massive 671B R1-0528 model, one of China’s top AI systems. This strategy provides developers with free, commercially permissive access to state-of-the-art reasoning AI.

    The release aims to democratize advanced capabilities, making them available in 1.5B, 7B, 14B, and 32B parameter sizes. This avoids the need for frontier-scale computing resources, opening doors for smaller teams and researchers.

    Distilling Power From a Frontier Model

    At the core of OpenReasoning-Nemotron is a sophisticated distillation strategy. NVIDIA leveraged the recently upgraded DeepSeek-R1-0528 model, a 671-billion parameter powerhouse, to teach smaller models its advanced reasoning skills.

    This was achieved by training on a curated dataset of 5 million high-quality reasoning examples generated by the DeepSeek model. The process transfers generalized reasoning ability into more compact architectures based on Alibaba’s Qwen 2.5 model family.
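To make the distillation pipeline concrete, here is a minimal sketch of how teacher-generated reasoning traces are typically packed into supervised fine-tuning records. The chat-style schema below is a common community convention, not NVIDIA's published data format, and the example problem is invented for illustration.

```python
def make_sft_record(problem: str, teacher_solution: str) -> dict:
    """Pack one teacher-generated reasoning trace into a chat-format
    training example for supervised fine-tuning of a student model."""
    return {
        "messages": [
            {"role": "user", "content": problem},
            {"role": "assistant", "content": teacher_solution},
        ]
    }

# A distillation corpus is, at heart, millions of such records:
# problems paired with the large teacher's worked solutions,
# filtered for quality before the student is fine-tuned on them.
dataset = [
    make_sft_record(
        "What is 12 * 13?",
        "12 * 13 = 12 * 10 + 12 * 3 = 120 + 36 = 156.",
    ),
]
```

The student never sees the teacher's weights, only its outputs, which is why the resulting models can be released under a different, permissive license.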

    The results are impressive. According to NVIDIA’s benchmarks, the Nemotron models establish new state-of-the-art pass@1 scores for their size classes. The 32B model, for instance, scores 89.2 on AIME24 and 70.2 on LiveCodeBench without special tuning.
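For readers unfamiliar with the metric, pass@1 is the probability that a single sampled solution is correct. Benchmarks commonly estimate pass@k from n samples per problem using the standard unbiased estimator; the sketch below shows that calculation (the specific numbers are illustrative, not NVIDIA's).

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator: the probability that at least one of
    k samples drawn (without replacement) from n generations, of which
    c are correct, passes. For k=1 this reduces to c / n."""
    if n - c < k:
        return 1.0  # fewer incorrect samples than draws: a hit is guaranteed
    return 1.0 - comb(n - c, k) / comb(n, k)

# Example: 16 samples per problem, 4 of them correct.
print(pass_at_k(16, 4, 1))  # 0.25, i.e. the plain single-attempt success rate
```

Reported scores such as 89.2 on AIME24 are these per-problem probabilities averaged over the whole benchmark, scaled to a percentage.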

    For even more demanding tasks, NVIDIA introduced a “heavy” mode using a technique called Generative Selection (GenSelect). This method generates multiple potential solutions and uses the model to select the best one, significantly boosting accuracy on complex problems.

    With GenSelect, the 32B model’s score on the HMMT Feb 2025 math benchmark jumps from 73.8 to an incredible 96.7, demonstrating powerful emergent reasoning capabilities at scale.
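The article describes GenSelect only at a high level: generate several candidate solutions, then have the model pick the best. A generic best-of-n selection loop in that spirit might look like the sketch below; `generate` and `judge` are hypothetical stand-ins for model calls, not NVIDIA's actual API.

```python
import random

def gen_select(problem, generate, judge, n=8):
    """Best-of-n selection in the spirit of GenSelect: draw n candidate
    solutions, score each with a judge (in practice, the model itself),
    and return the highest-rated candidate."""
    candidates = [generate(problem) for _ in range(n)]
    return max(candidates, key=lambda c: judge(problem, c))

# Toy demo with stub model calls: candidates are random guesses and
# the judge prefers whichever lands closest to the true answer, 42.
random.seed(0)
best = gen_select(
    "x?",
    generate=lambda p: random.randint(0, 100),
    judge=lambda p, c: -abs(c - 42),
)
```

The accuracy gain comes from trading extra inference compute for reliability: a problem is solved if any of the n attempts is correct and the judge can recognize it, which is why heavy mode helps most on the hardest benchmarks.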

    [Figure: NVIDIA OpenReasoning-Nemotron benchmark results]

    The DeepSeek Connection and Geopolitical Context

    The choice of DeepSeek’s model as the source is a major validation of the Chinese firm’s technology. DeepSeek made waves in May 2025 with its R1-0528 update, claiming its performance was “approaching that of leading models, such as O3 and Gemini 2.5 Pro,” from competitors like OpenAI and Google.

    This progress occurs amid intense geopolitical headwinds. In April 2025, a US House Committee labeled DeepSeek a national security risk. Committee Chairman John Moolenaar issued a stark warning about the company.

    He stated, “This report makes it clear: DeepSeek isn’t just another AI app — it’s a weapon in the Chinese Communist Party’s arsenal, designed to spy on Americans, steal our technology, and subvert U.S. law.” NVIDIA’s use of the model highlights the interconnected nature of global AI development.

    NVIDIA has released all four OpenReasoning-Nemotron models under a commercially permissive license. They are designed for easy integration with tools like the NVIDIA NeMo framework, TensorRT-LLM, and Hugging Face Transformers, facilitating rapid deployment.

    By using only Supervised Fine-Tuning (SFT) and avoiding Reinforcement Learning (RL), NVIDIA provides a strong, stable baseline. This allows the research community to build upon these models to explore new RL techniques for reasoning, potentially accelerating the entire field.

