AI · Data Science

Why throwing more AI compute at verification might be a mistake

By capernaum
Last updated: 2025-04-11 15:56

Contents
  • The new math of reasoning at scale
  • Compute budgets change everything
  • The brutal result: SC is still king (unless you’re rich)
  • The smart way to use GenRM (if you must)

Getting large language models (LLMs) to reason better is one thing. Getting them to do it without burning through absurd amounts of compute is another. A new research paper from TU Darmstadt, UCLA, Google DeepMind, and Mila digs deep into this trade-off — and might just change how AI developers think about scaling reasoning at test time.

The core tension? Whether LLMs should spend their compute generating more answers (what’s known as Self-Consistency, or SC), or verifying a few promising answers using Generative Reward Models (GenRMs). Turns out, choosing wrong can make your model waste up to 128 times more compute — for a barely noticeable performance bump.

The new math of reasoning at scale

LLMs like GPT-4, Llama, or Qwen have gotten shockingly good at solving math and science problems by generating multiple chains of thought (CoTs) and picking the most common result. That’s the idea behind SC — brute force wisdom of the crowd. But researchers have also been excited by GenRMs, a newer approach that lets LLMs act like their own judge by verifying answers through further chain-of-thought reasoning.
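Stripped of the sampling machinery, SC reduces to a majority vote over final answers. A minimal sketch (assuming the final answers have already been extracted from each sampled chain of thought):

```python
from collections import Counter

def self_consistency(answers):
    """Pick the most common final answer among sampled chains of thought.

    `answers` is a hypothetical list of final answers already extracted
    from each sampled CoT; ties break toward the first-seen answer.
    """
    if not answers:
        raise ValueError("need at least one sampled answer")
    return Counter(answers).most_common(1)[0][0]

# e.g. five sampled solutions to the same math problem
print(self_consistency(["42", "41", "42", "42", "39"]))  # -> 42
```

Note that nothing here judges *why* an answer is right; the whole bet is that correct reasoning paths converge on the same answer more often than incorrect ones do.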

Previous comparisons made GenRM look wildly efficient: matching SC’s accuracy with 4× fewer solutions. But this paper calls that framing out — hard. Why? Because nobody was counting the true compute cost of all those verification steps.

Compute budgets change everything

This study introduces a clean framework for measuring the real cost of SC and GenRM approaches under a fixed compute budget. It works like this: you can either spend compute generating more answers (SC), or split that budget between a few answers and many verifications (GenRM). Their model for calculating total inference compute is refreshingly straightforward: C(S, V) = S(1 + λV), where S is the number of solutions, V the number of verifications, and λ reflects verification length relative to solutions.
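Under that cost model, comparing the two strategies at a fixed budget is simple bookkeeping. A minimal sketch (variable names are mine):

```python
def inference_compute(S, V, lam):
    """Total inference cost C(S, V) = S * (1 + lam * V).

    S   : number of candidate solutions generated
    V   : verifications run per solution
    lam : average verification length relative to a solution
    """
    return S * (1 + lam * V)

# Pure self-consistency: 32 solutions, no verification
sc_cost = inference_compute(S=32, V=0, lam=0.5)    # -> 32
# GenRM: 8 solutions, each verified 8 times
genrm_cost = inference_compute(S=8, V=8, lam=0.5)  # -> 40
```

The example numbers are illustrative, but they show the trap the paper points at: the GenRM configuration looks like it uses "4× fewer solutions," yet it actually consumes more total compute once verification is billed.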

The brutal result: SC is still king (unless you’re rich)

The experiments left little doubt. Across Llama and Qwen models, from 7B to 70B parameters, and across math and science reasoning tasks, the story repeated: SC outperformed GenRM at lower compute budgets. Only when compute scaled past 8× did GenRM catch up. And getting a modest 3.8% performance boost over SC required an eye-watering 128× more compute.

That result held up even for advanced “thinking models” like QwQ-32B, and on hard math datasets like AIME24. SC wins when compute is tight. GenRM only makes sense when compute is practically free — or when the problems are so difficult that verification pays off dramatically.


The smart way to use GenRM (if you must)

Still, the study doesn’t dismiss GenRM entirely. In fact, it derives inference scaling laws for GenRM — a blueprint for compute-optimal problem solving. The key finding? When scaling GenRM, grow the number of solutions faster than the number of verifications, by a factor of roughly 1.5 to 2. In numbers, their scaling laws found that the optimal solution count scales with compute budget as S ∝ C^0.57, while optimal verifications scale as V ∝ C^0.39.
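Taken at face value, the reported exponents give a simple recipe for rescaling an allocation as the budget grows. A sketch under stated assumptions: the proportionality constants are fit per model and task in the paper, so this version just scales a made-up reference point (S=8, V=2 at budget C=1):

```python
def optimal_allocation(C, C_ref=1.0, S_ref=8, V_ref=2):
    """Rescale a reference (S_ref, V_ref) at budget C_ref to budget C,
    using the paper's fitted exponents S ∝ C^0.57, V ∝ C^0.39.

    The reference point is a hypothetical example; the actual
    constants must be fit per model and task.
    """
    ratio = C / C_ref
    S = S_ref * ratio ** 0.57
    V = V_ref * ratio ** 0.39
    return round(S), round(V)

# Quadrupling the budget roughly doubles solutions, while
# verifications grow more slowly
print(optimal_allocation(C=4.0))  # -> (18, 3)
```

Because 0.57 > 0.39, extra budget flows disproportionately into generating more candidate solutions, which is exactly the "solve more, verify less" balance the authors recommend.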

This research leaves practitioners with a very practical guide: if compute is limited, trust SC and spend it on generating more solutions. If compute is abundant, and especially if you’re dealing with harder reasoning tasks, using GenRM with the right scaling balance might be worth it — but only with serious optimization.

For AI developers facing real-world constraints, the takeaway is almost comically simple: more thinking beats more verifying, unless you have near-infinite resources. And even then, verifying needs to be smart, efficient, and minimal.

The full paper, “When To Solve, When To Verify: Compute-Optimal Problem Solving and Generative Verification for LLM Reasoning,” is available on arXiv. Their codebase is open at GitHub.

