AI | Machine Learning | Technology

RoR-Bench: Revealing Recitation Over Reasoning in Large Language Models Through Subtle Context Shifts

By capernaum
Last updated: 2025-04-11 11:00

In recent years, the rapid progress of large language models (LLMs) has given the impression that we are nearing Artificial General Intelligence (AGI), with models seemingly capable of solving increasingly complex tasks. However, a fundamental question remains: are LLMs genuinely reasoning like humans, or merely repeating patterns learned during training? Since the release of models like GPT-3 and ChatGPT, LLMs have revolutionized the research landscape, pushing boundaries across AI and science. Improvements in data quality, model scaling, and multi-step reasoning have brought LLMs close to passing high-level AGI benchmarks. Yet their true reasoning capabilities are not fully understood. Instances where advanced models fail on apparently simple math problems raise concerns about whether they are truly reasoning or just mimicking familiar solution patterns.

Although various benchmarks exist to evaluate LLMs across domains such as general knowledge, coding, math, and reasoning, many rely on tasks solvable by applying memorized templates. As a result, the actual intelligence and robustness of LLMs remain debatable. Studies show that LLMs struggle with subtle context shifts, simple calculations, symbolic reasoning, and out-of-distribution prompts, and these weaknesses are amplified under perturbed conditions or misleading cues. Similarly, multi-modal LLMs, including vision-language models like GPT-4V and LLaVA, show the same tendency to recite instead of reason when tested with subtly altered visual or textual inputs. This suggests that spurious correlations, memorization, and inefficient decoding may underlie these failures, indicating a gap between observed performance and genuine understanding.

Researchers from ByteDance Seed and the University of Illinois Urbana-Champaign introduce RoR-Bench, a new multi-modal benchmark designed to identify whether LLMs rely on recitation rather than genuine reasoning when solving simple problems with subtly altered conditions. The benchmark includes 158 text and 57 image problem pairs, each featuring a basic reasoning task alongside a slightly modified version. Experiments reveal that leading models like OpenAI-o1 and DeepSeek-R1 suffer drastic performance drops, often over 60%, under minor changes. Alarmingly, most models also struggle to recognize unsolvable problems, and preliminary fixes like prompt engineering offer limited improvement, emphasizing the need for deeper solutions.

RoR-Bench is a Chinese multimodal benchmark created to assess whether LLMs rely on memorized solution patterns rather than true reasoning. It contains 215 problem pairs—158 text-based and 57 image-based—where each pair includes an original and a subtly altered version. The original problems are simple, often from children’s puzzle sets, while the modified ones introduce minor changes that require entirely different reasoning. Annotators ensured minimal wording changes and no ambiguity. Notably, some problems are designed to have no solution or feature unrelated information, testing LLMs’ ability to recognize illogical conditions and resist recitation-based answers.
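
To make this setup concrete, here is a minimal sketch of how such problem pairs and the resulting accuracy gap could be represented and measured. The field names and the recitation_gap function are illustrative assumptions for exposition, not the paper's actual data schema or scoring code.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class ProblemPair:
    """One RoR-Bench-style pair: an original puzzle and a subtly perturbed variant."""
    original: str          # simple problem, likely close to patterns seen in pretraining
    perturbed: str         # minimally reworded version that requires different reasoning
    original_answer: str
    perturbed_answer: str  # may be "no solution" for deliberately unsolvable variants

def recitation_gap(pairs: List[ProblemPair],
                   model: Callable[[str], str],          # prompt -> model answer
                   is_correct: Callable[[str, str], bool] # (prediction, reference) -> bool
                   ) -> float:
    """Accuracy on originals minus accuracy on perturbed variants.

    A large positive gap suggests the model is reciting memorized solutions
    rather than reasoning about the altered conditions.
    """
    acc_orig = sum(is_correct(model(p.original), p.original_answer) for p in pairs) / len(pairs)
    acc_pert = sum(is_correct(model(p.perturbed), p.perturbed_answer) for p in pairs) / len(pairs)
    return acc_orig - acc_pert
```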

The study empirically evaluates leading LLMs and VLMs on the RoR-Bench benchmark, focusing on their ability to reason through subtle problem changes rather than merely recalling learned patterns. Results reveal that most models suffer a significant performance drop, often exceeding 50%, when tested on slightly modified problems, suggesting a reliance on memorization rather than genuine reasoning. Even techniques like Chain-of-Thought prompting or "Forced Correct" instructions provide limited improvement. Few-shot in-context learning shows some gains, especially with more examples or added instructions, but still fails to close the gap. Overall, these findings highlight the limitations of current models in adaptive reasoning.
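
As a rough illustration of what these mitigations look like in practice, the sketch below assembles a prompt from an optional cautionary instruction and few-shot demonstrations. The hint text and the build_prompt helper are hypothetical stand-ins; the exact prompts and instruction wording used in the study are not reproduced here.

```python
# Illustrative sketch of the mitigation styles discussed above (cautionary
# "Forced Correct"-style instructions and few-shot examples); the wording is
# an assumption, not the paper's actual prompt.

FORCED_CORRECT_HINT = (
    "Read the problem carefully. It may differ from a familiar version, "
    "or it may have no valid solution; do not assume a memorized answer applies."
)

def build_prompt(question: str,
                 few_shot_pairs=None,     # optional list of (question, answer) demonstrations
                 add_hint: bool = False) -> str:
    """Assemble a prompt with optional few-shot examples and a cautionary instruction."""
    parts = []
    if add_hint:
        parts.append(FORCED_CORRECT_HINT)
    for q, a in (few_shot_pairs or []):
        parts.append(f"Q: {q}\nA: {a}")
    parts.append(f"Q: {question}\nA:")
    return "\n\n".join(parts)

# Example usage:
# prompt = build_prompt("A perturbed puzzle goes here...", add_hint=True)
```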

In conclusion, the study introduces RoR-Bench, a Chinese multimodal benchmark designed to uncover a critical flaw in current large language models: their inability to handle simple reasoning tasks when problem conditions are slightly altered. The significant performance drop, often exceeding 50%, suggests that these models rely on memorization rather than true reasoning. Even with added prompts or few-shot examples, the issue remains largely unresolved. While the benchmark is limited to Chinese, initial English results indicate similar weaknesses. The findings challenge assumptions about LLM intelligence and call for future research into models that reason genuinely rather than reciting patterns learned from training data.


Check out the Paper. All credit for this research goes to the researchers of this project.

The post RoR-Bench: Revealing Recitation Over Reasoning in Large Language Models Through Subtle Context Shifts appeared first on MarkTechPost.
