Can Language Models Falsify?
Evaluating Algorithmic Reasoning with Counterexample Creation

Shiven Sinha1 Shashwat Goel2,3 Ponnurangam Kumaraguru1
Jonas Geiping2,3,4 Matthias Bethge4,5○ Ameya Prabhu4,5○

1 IIIT Hyderabad, 2 ELLIS Institute Tübingen, 3 Max Planck Institute for Intelligent Systems, 4 Tübingen AI Center, 5 University of Tübingen. ○ denotes equal supervision.

arXiv Paper · GitHub Code · Results · HuggingFace Data
Figure 1: Reasoning about correctness.   While standard benchmarks for algorithmic reasoning require models to generate solutions, we propose an inverse benchmark that evaluates reasoning about correctness by falsifying incorrect solutions. For expressivity, the model outputs a program that generates the counterexample input; we validate the counterexample by comparing the incorrect solution's output with that of a held-out correct solution.
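
Concretely, validating a candidate counterexample only requires code execution. Below is a minimal sketch of such a check, assuming the generator and both solutions are standalone programs that read stdin and write stdout; all function names and paths are illustrative, not the benchmark's actual harness.

```python
import subprocess

def run_program(cmd: list[str], stdin_text: str = "", timeout: float = 5.0) -> str:
    """Run a program, feed it stdin_text, and return its stdout (stripped)."""
    result = subprocess.run(
        cmd, input=stdin_text, capture_output=True, text=True, timeout=timeout
    )
    result.check_returncode()
    return result.stdout.strip()

def is_valid_counterexample(generator_cmd, incorrect_cmd, correct_cmd) -> bool:
    """True if the generated input makes the incorrect solution disagree
    with the held-out correct solution."""
    # 1. The model outputs a *generator* program; running it yields the candidate input.
    candidate_input = run_program(generator_cmd)
    # 2. Run both solutions on that input and compare their outputs.
    #    (Problems with multiple accepted outputs would need a checker
    #    instead of exact string comparison.)
    incorrect_out = run_program(incorrect_cmd, candidate_input)
    correct_out = run_program(correct_cmd, candidate_input)
    return incorrect_out != correct_out

# Hypothetical usage: all three are programs reading stdin / writing stdout.
# print(is_valid_counterexample(
#     ["python", "generator.py"],
#     ["./incorrect_solution"],
#     ["./correct_solution"],
# ))
```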

Abstract

There is growing excitement about the potential of Language Models (LMs) to accelerate scientific discovery. Falsifying hypotheses is key to scientific progress, as it allows claims to be iteratively refined over time. This process requires significant researcher effort, reasoning, and ingenuity. Yet current benchmarks for LMs predominantly assess their ability to generate solutions rather than challenge them. We advocate for developing benchmarks that evaluate this inverse capability — creating counterexamples for subtly incorrect solutions. To demonstrate this approach, we start with the domain of algorithmic problem solving, where counterexamples can be evaluated automatically using code execution. Specifically, we introduce REFUTE, a dynamically updating benchmark that includes recent problems and incorrect submissions from programming competitions, where human experts successfully identified counterexamples. Our analysis finds that the best reasoning agents, even OpenAI o3-mini (high) with code execution feedback, can create counterexamples for only < 9% of incorrect solutions in REFUTE, even though its rating indicates it can solve up to 48% of these problems from scratch. We hope our work spurs progress in evaluating and enhancing LMs' ability to falsify incorrect solutions — a capability that is crucial both for accelerating research and for enabling models to self-improve through reliable reflective reasoning.

REFUTE Benchmark

REFUTE (Refuting Erroneous Findings Using Targeted Examples) evaluates whether language models can reason about when code might fail. Specifically, given a problem statement, the input format, and an incorrect code submission, the task is to find a valid input on which the code fails to produce the desired output. We source these samples from the popular programming competition platform Codeforces, which has been the target of several recent solution-generation benchmarks.
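
For illustration, one instance of the task can be pictured as follows; the field names and the example generator are ours, not necessarily the dataset's actual schema.

```python
from dataclasses import dataclass

@dataclass
class RefuteTask:
    """Illustrative view of one benchmark instance."""
    problem_statement: str   # natural-language task from the contest
    input_format: str        # constraints a counterexample input must satisfy
    incorrect_code: str      # a submission that passes some tests but is wrong

# The model's job: emit a small program whose stdout is a *valid* input
# (respecting input_format) on which incorrect_code produces a wrong answer.
EXAMPLE_MODEL_OUTPUT = """
# hypothetical counterexample generator produced by the model
n = 3
print(n)
print(" ".join(["1000000000"] * n))  # e.g., probe for overflow on large values
"""
```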

Dataset construction

Problems and incorrect submissions are sourced from recent Codeforces contests in which human experts successfully identified counterexamples, and the benchmark is updated dynamically as new contests conclude; see the paper for the full construction pipeline.

Results

We test prompting and agentic (ReAct) strategies with code-execution feedback across frontier models. We also estimate the fraction of these problems each model could solve from scratch, based on its officially reported Codeforces rating (the Solution Generation column). The remaining columns report counterexample-creation rates; w/ Correct denotes the setting where the correct, ground-truth code is also revealed to the model.

| Model | Solution Generation (%) | Prompting (%) | ReAct Agent (%) | w/ Correct (%) |
|---|---|---|---|---|
| DeepSeek-V3 | 10.8 | 2.7 | 3.7 | 3.7 |
| Sonnet 3.5 | 6.6 | 4.6 | 3.0 | 2.2 |
| Gemini Flash 2.0 (Thinking) | – | 2.1 | 2.5 | 2.5 |
| DeepSeek-R1 | 44.0 | 5.8 | 8.6 | 4.6 |
| o3-mini (high) | 48.7 | 8.9 | 8.6 | 9.3 |
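
The agentic setting additionally lets the model revise its attempt after seeing execution feedback. A rough sketch of such a loop follows; this is a conceptual outline, not the paper's exact implementation, and `query_model`, `check`, and the turn budget are placeholders.

```python
def react_counterexample_loop(task, query_model, check, max_turns: int = 10):
    """Conceptual ReAct-style loop: the model proposes a counterexample
    generator, we execute and validate it, and feed the outcome back as
    an observation for the next attempt."""
    transcript = [f"Problem:\n{task.problem_statement}\n"
                  f"Input format:\n{task.input_format}\n"
                  f"Incorrect code:\n{task.incorrect_code}"]
    for _ in range(max_turns):
        generator_code = query_model(transcript)    # model "action": a generator program
        ok, feedback = check(task, generator_code)  # run it and validate the produced input
        if ok:
            return generator_code                   # counterexample found
        transcript.append(f"Execution feedback:\n{feedback}")  # "observation" for next turn
    return None                                     # budget exhausted
```

Here `check` would run the proposed generator, verify that the produced input respects the stated input format, and compare the incorrect solution's output against the held-out correct one, as in the validation sketch above.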

Findings

What are common failure modes? When can models create counterexamples? Are successes across different models and methods correlated? Find more insights in our paper.