AI, Technology

Together AI Released DeepCoder-14B-Preview: A Fully Open-Source Code Reasoning Model That Rivals o3-Mini With Just 14B Parameters

By capernaum
Last updated: 2025-04-11 08:48

The demand for intelligent code generation and automated programming solutions has intensified, fueled by a rapid rise in software complexity and developer productivity needs. While natural language processing and general reasoning models have surged with significant breakthroughs, the coding domain has experienced slower progress. This lag is primarily attributed to the scarcity of high-quality, verifiable datasets critical for effectively training RL-based systems. Unlike mathematical problems, which benefit from a wealth of structured, verifiable examples online, coding tasks often suffer from noise, insufficient test coverage, and unverifiable outputs. Consequently, advancing LLMs for code generation has remained a formidable challenge until now.

DeepCoder-14B-Preview was released by Together AI in collaboration with the Agentica team. This powerful model was fine-tuned from DeepSeek-R1-Distill-Qwen-14B using distributed reinforcement learning, and it demonstrates substantial progress in code reasoning. With 60.6% Pass@1 accuracy on LiveCodeBench (LCB), DeepCoder-14B-Preview not only closes the gap with leading models such as o3-mini but matches their output, all while using just 14 billion parameters, a notable feat in efficiency and capability.
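For readers who want to experiment, the sketch below shows one way to load and prompt the model with the Hugging Face transformers library. It is a minimal example rather than an official recipe; the repository id agentica-org/DeepCoder-14B-Preview and the sampling settings are assumptions based on the release, not documented defaults.

# Minimal sketch: prompting DeepCoder-14B-Preview via Hugging Face transformers.
# The repo id and sampling settings below are assumptions based on the release.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "agentica-org/DeepCoder-14B-Preview"  # assumed Hugging Face repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [{"role": "user",
             "content": "Write a Python function that checks whether a string is a palindrome."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

# Reasoning models emit long chains of thought; cap new tokens for a quick test.
outputs = model.generate(inputs, max_new_tokens=1024, temperature=0.6, do_sample=True)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))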

The release is especially significant in light of the benchmarks. DeepSeek-R1-Distill-Qwen-14B scores 53.0% on LCB, so DeepCoder-14B-Preview represents a gain of roughly 8 percentage points over its base model. It also competes toe-to-toe with established models such as o3-mini (60.9%) and o1-2024-12-17 (59.5%) in accuracy and coding prowess. On competitive-coding metrics, it reaches a Codeforces rating of 1936, placing it in the 95.3rd percentile, a clear indicator of real-world coding competence.
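For context, Pass@1 on code benchmarks is commonly computed with the unbiased pass@k estimator introduced with HumanEval (Chen et al., 2021): generate n samples per problem, count the c that pass all tests, and estimate the probability that at least one of k draws succeeds. A short Python sketch, assuming this standard definition applies here:

# Unbiased pass@k estimator (Chen et al., 2021), the standard behind Pass@1.
# n = samples generated per problem, c = samples that pass every unit test.
import numpy as np

def pass_at_k(n: int, c: int, k: int) -> float:
    if n - c < k:  # every size-k draw must contain at least one correct sample
        return 1.0
    return 1.0 - float(np.prod(1.0 - k / np.arange(n - c + 1, n + 1)))

# For k=1 the estimator reduces to the fraction of correct samples:
print(pass_at_k(16, 9, 1))  # 0.5625 == 9/16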


The model was trained over 2.5 weeks on 32 H100 GPUs using a curated dataset of 24,000 verifiable coding problems. This dataset was built by rigorously filtering existing resources to ensure quality and diversity. It combines problems from the TACO Verified set, PrimeIntellect’s SYNTHETIC-1, and entries from LiveCodeBench submitted between May 2023 and July 2024. The selection process emphasized programmatic verification of test cases, a minimum of five unit tests per problem, and deduplication to avoid data contamination. This helped maintain training integrity and maximize RL effectiveness.
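To make the curation criteria concrete, here is a simplified sketch of such a filtering pass. The problem schema and the run_tests helper are hypothetical illustrations, not the project's actual pipeline:

# Hypothetical sketch of the curation rules described above: programmatic test
# verification, a minimum of five unit tests, and prompt-level deduplication.
# The problem dict schema and run_tests callable are illustrative assumptions.
def curate(problems: list[dict], run_tests) -> list[dict]:
    seen_prompts: set[str] = set()
    kept = []
    for p in problems:
        if len(p["tests"]) < 5:          # require at least five unit tests
            continue
        if p["prompt"] in seen_prompts:  # deduplicate to avoid contamination
            continue
        if not run_tests(p["reference_solution"], p["tests"]):  # verify programmatically
            continue
        seen_prompts.add(p["prompt"])
        kept.append(p)
    return kept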

To facilitate this level of validation, DeepCoder’s training incorporated a scalable code sandbox environment capable of executing massive parallel evaluations. Over 1,000 coding problems were assessed at each RL step using two robust sandboxes, the Together Code Interpreter and a local sandbox. These environments ensured that every model-generated solution was rigorously tested across multiple unit tests, filtering out reward hacking and encouraging genuine reasoning over memorization.
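Conceptually, the reward signal in this setup is sparse: a rollout earns credit only if it passes every unit test, which is what makes partial-credit reward hacking unprofitable. Below is a toy stand-in for such a test-based reward, using a subprocess with a timeout in place of the real sandboxes; the isolation, resource limits, and massive parallelism of the actual system are omitted, and all details here are assumptions.

# Toy sketch of an all-or-nothing, test-based reward. A subprocess with a
# timeout stands in for the real sandboxes (Together Code Interpreter or the
# local sandbox); proper isolation and parallel evaluation are not modeled.
import os
import subprocess
import tempfile

def reward(solution_code: str, tests: list[str], timeout: float = 6.0) -> float:
    """Return 1.0 only if the solution passes every unit test, else 0.0."""
    source = solution_code + "\n\n" + "\n".join(tests)  # tests as assert lines
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(source)
        path = f.name
    try:
        result = subprocess.run(["python", path], capture_output=True, timeout=timeout)
        return 1.0 if result.returncode == 0 else 0.0
    except subprocess.TimeoutExpired:
        return 0.0  # hung or too-slow solutions earn no reward
    finally:
        os.remove(path)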


Also, the system architecture supporting DeepCoder was optimized through “verl-pipe,” an upgraded extension to the post-training RL pipeline that doubled training speed through systems-level improvements. This enhancement accelerates development cycles and provides a modular framework for others looking to build or iterate on similar LLMs in open-source ecosystems.
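The core idea behind such a speedup is pipelining: rather than strictly alternating between generating rollouts and training on them, the two stages overlap, so sampling for the next step runs while the current step trains. A toy illustration of that overlap follows; the real verl-pipe handles weight synchronization, batching, and GPU placement, none of which is modeled here.

# Toy illustration of pipelined RL post-training: the sampler produces the
# batch for step t+1 while the trainer consumes the batch for step t.
import queue
import threading
import time

rollouts: queue.Queue = queue.Queue(maxsize=2)  # bounded queue keeps stages in step

def sample(step: int) -> str:
    time.sleep(0.1)  # stand-in for rollout generation on inference workers
    return f"rollout batch {step}"

def train(batch: str) -> None:
    time.sleep(0.1)  # stand-in for a policy-gradient update on trainer GPUs

def sampler(num_steps: int) -> None:
    for step in range(num_steps):
        rollouts.put(sample(step))

def trainer(num_steps: int) -> None:
    for _ in range(num_steps):
        train(rollouts.get())

threads = [threading.Thread(target=sampler, args=(8,)),
           threading.Thread(target=trainer, args=(8,))]
for t in threads:
    t.start()
for t in threads:
    t.join()
# With overlap, 8 steps take ~0.9s here instead of ~1.6s run sequentially.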

Key takeaways from the release of DeepCoder-14B-Preview include:

  • DeepCoder-14B-Preview achieves 60.6% Pass@1 accuracy on LiveCodeBench—matching o3-mini’s performance with fewer parameters.  
  • The model’s training leveraged 24K verifiable coding problems, carefully curated to avoid noise and reward hacking.  
  • It was trained on 32 H100 GPUs for 2.5 weeks, emphasizing reproducibility and system efficiency.  
  • A dual-sandbox environment ensured accurate and scalable code verification during training.  
  • System optimization via verl-pipe doubled training speed and provides a reusable pipeline for future models.  
  • DeepCoder is fully open-sourced, including datasets, code, and training logs, paving the way for community-driven development.  

Check out the technical details, the model on Hugging Face, and the GitHub page. All credit for this research goes to the researchers of this project.


