Refined thresholds for inconsistency: The effect of the graph associated with incomplete pairwise comparisons

This paper refines inconsistency thresholds for incomplete pairwise comparison matrices by demonstrating that they depend not only on matrix size and missing entries but also critically on the underlying graph structure of known comparisons, enabling more accurate error detection and real-time monitoring.

Original authors: Kolos Csaba Ágoston, László Csató

Published 2026-04-14

This is an AI-generated explanation of the paper below. It is not written or endorsed by the authors. For technical accuracy, refer to the original paper.

Imagine you are a chef trying to create the perfect menu for a new restaurant. You have a list of dishes (let's call them Alternatives), and you need to decide which ones are better than others. To do this, you ask a group of food critics to compare the dishes two at a time.

  • "Is the Steak better than the Salad?"
  • "Is the Salad better than the Soup?"
  • "Is the Steak better than the Soup?"

If the critics are perfectly logical, their answers should follow a simple rule: If Steak > Salad and Salad > Soup, then Steak must be > Soup. This is called consistency.

However, humans aren't robots. Sometimes a critic says, "Steak is twice as good as Salad," and "Salad is three times as good as Soup," but then claims, "Steak is only five times better than Soup." Since 2 × 3 = 6, not 5, the answers don't line up. This is inconsistency: a logical glitch.
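To make the glitch measurable, analysts typically use the eigenvalue test: for a perfectly consistent n × n reciprocal comparison matrix, the largest eigenvalue equals exactly n, and any excess over n quantifies inconsistency. Here is a minimal sketch in Python with NumPy (my illustration, not code from the paper) using the critic's answers above:

```python
import numpy as np

def consistency_index(A):
    """Saaty-style consistency index: (lambda_max - n) / (n - 1).
    Zero for a perfectly consistent reciprocal matrix."""
    n = A.shape[0]
    lambda_max = max(np.linalg.eigvals(A).real)
    return (lambda_max - n) / (n - 1)

# Consistent critic: Steak = 2 x Salad, Salad = 3 x Soup, so Steak = 6 x Soup
consistent = np.array([[1,   2,   6],
                       [1/2, 1,   3],
                       [1/6, 1/3, 1]])

# Inconsistent critic: same first two answers, but claims Steak = 5 x Soup
inconsistent = np.array([[1,   2,   5],
                         [1/2, 1,   3],
                         [1/5, 1/3, 1]])

print(consistency_index(consistent))    # ~0: the answers multiply out exactly
print(consistency_index(inconsistent))  # > 0: the 5 should have been a 6
```

The index is zero only when every chain of comparisons multiplies out exactly; the larger it gets, the worse the logical glitches.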

The Problem: The "Missing Pieces" Puzzle

In the real world, you can't always get every critic to compare every single dish. Maybe the Steak and the Soup were never tasted together because they were served on different nights. This leaves you with a Partial Puzzle (an incomplete matrix).

For decades, experts used a "10% Rule" to decide if a puzzle was good enough. If the logical errors were less than 10%, they said, "Good enough, let's use this menu." If it was over 10%, they said, "Go back and ask the critics again."
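The "10% Rule" is Saaty's consistency ratio test: divide the consistency index by a random index (the average index of randomly filled matrices of the same size) and accept the matrix if the ratio stays below 0.10. A sketch of that rule, assuming the standard published random-index values (this is the classic complete-matrix version, not the paper's refinement):

```python
import numpy as np

# Saaty's random index: average consistency index of random
# reciprocal matrices, tabulated by matrix size n
RANDOM_INDEX = {3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24, 7: 1.32}

def consistency_ratio(A):
    """CR = CI / RI; the classic rule accepts the matrix if CR < 0.10."""
    n = A.shape[0]
    lambda_max = max(np.linalg.eigvals(A).real)
    ci = (lambda_max - n) / (n - 1)
    return ci / RANDOM_INDEX[n]

# The slightly illogical critic from earlier (5 instead of 6)
critic = np.array([[1,   2,   5],
                   [1/2, 1,   3],
                   [1/5, 1/3, 1]])

cr = consistency_ratio(critic)
print(f"CR = {cr:.4f}; acceptable under the 10% rule: {cr < 0.10}")
```

This small slip passes comfortably; the paper's point is that the 0.10 cutoff itself should move once comparisons are missing, depending on which ones are missing.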

But here's the catch: The shape of the missing pieces matters.

The Big Discovery: It's About the Map, Not Just the Count

The authors of this paper realized that the old "10% Rule" was too simple. It only counted how many comparisons were missing. It didn't care which ones were missing.

Think of your dishes as cities on a map, and the comparisons as roads connecting them.

  • Scenario A: You have roads connecting City 1 to 2, 2 to 3, and 3 to 4. The missing roads are 1-3, 1-4, and 2-4. The roads form a long, winding chain.
  • Scenario B: You have roads connecting 1-2, 1-3, and 1-4, with 2-3, 2-4, and 3-4 missing. The roads form a different shape: a star centered on City 1.

The paper proves that Scenario A and Scenario B have different "tolerance levels" for errors, even if they both have the same number of missing roads.

The Secret Ingredient: The "Spectral Radius" (The Map's Vibe)

The authors found a mathematical way to measure the "shape" or "vibe" of this road map. They call it the Spectral Radius.

  • Simple Analogy: Imagine the road map is a musical instrument. The Spectral Radius is like the instrument's "resonance" or how easily sound travels through it.
    • If the roads are spread out evenly (like a regular grid), the sound travels smoothly. This is a "low resonance" map.
    • If the roads are clustered in a way that creates bottlenecks or loops, the sound gets stuck or bounces around wildly. This is a "high resonance" map.

The paper shows that maps with higher resonance (Spectral Radius) are naturally more prone to logical errors. Therefore, they need a stricter rule. You can't accept as many errors in a "bumpy" map as you can in a "smooth" map.
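The two four-city maps from earlier make this concrete. As a stand-in illustration (my construction; the paper derives its thresholds from a matrix associated with the comparison graph, and the exact matrix used there may differ), take the plain adjacency matrix of each map and compute its spectral radius, i.e. its largest eigenvalue in absolute value:

```python
import numpy as np

def spectral_radius(adj):
    """Largest absolute eigenvalue of a graph's adjacency matrix."""
    return max(abs(np.linalg.eigvals(adj)))

def graph(n, edges):
    """Symmetric 0/1 adjacency matrix for an undirected graph on n nodes."""
    A = np.zeros((n, n))
    for i, j in edges:
        A[i, j] = A[j, i] = 1
    return A

chain = graph(4, [(0, 1), (1, 2), (2, 3)])  # Scenario A: roads 1-2, 2-3, 3-4
star = graph(4, [(0, 1), (0, 2), (0, 3)])   # Scenario B: roads 1-2, 1-3, 1-4

print(spectral_radius(chain))  # 2*cos(pi/5), about 1.618
print(spectral_radius(star))   # sqrt(3), about 1.732
```

Both maps have exactly three roads, yet the star "resonates" more than the chain, so under the paper's logic the two maps would warrant different inconsistency thresholds.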

Why This Matters in Real Life

  1. Better Software: Imagine an app that helps you make decisions (like choosing a university or a house). As you fill in your answers, the app can now look at the shape of your missing questions. If your answers form a "bumpy" map, the app will say, "Hey, your logical errors are a bit high, but that's expected for this shape. Let's keep going." If they form a "smooth" map, it might say, "Stop! You have too many errors; please rethink your answers."
  2. Saving Time: Experts often get tired. If they stop too early because they think they are "too inconsistent," they might miss a great solution. If they keep going when they are actually fine, they waste time. This new method tells them exactly when to stop based on the specific pattern of their missing data.
  3. The "10% Rule" is Outdated: The famous "10% rule" is like saying "All cars are safe if they go under 60 mph." But a Formula 1 car and a family minivan have different safety needs. Similarly, a "smooth" decision map and a "bumpy" one need different error limits.

The Bottom Line

This paper is like upgrading the GPS for decision-making. Instead of just counting how many roads are missing, it looks at the topology (the shape) of the road network. By understanding the "Spectral Radius" (the map's resonance), we can set smarter, fairer rules for when a decision is good enough to be trusted.

In short: Don't just count the missing pieces of your puzzle; look at how they fit together. The shape of the hole changes the rules of the game.
