An Explainable and Interpretable Composite Indicator Based on Decision Rules

This paper proposes a novel framework for constructing explainable and interpretable composite indicators by applying the Dominance-based Rough Set Approach to induce transparent if-then decision rules. These rules clarify the rationale behind classifications or scores across various multiple criteria decision aiding (MCDA) scenarios, while also handling missing values and efficiently generating minimal rules for continuous indicators.

Salvatore Corrente, Salvatore Greco, Roman Słowiński, Silvano Zappalà

Published 2026-03-04

Imagine you are trying to grade a student's performance. In the old way, you might take their math score, add it to their science score, multiply by a "weight" for how important science is, and then add a "bonus" for art. You get a final number, say 87.5.

But here's the problem: Why 87.5?
If you ask the teacher, they might say, "Well, the math was weighted at 0.4 and science at 0.6, and the formula is complex." To a parent or the student, this feels like a black box. You see the input (grades) and the output (score), but the magic happening inside is hidden.

This paper proposes a new way to build these "grades" (called Composite Indicators) that turns the black box into a glass box. Instead of a mysterious math formula, the grade is explained by simple, clear rules, like a recipe or a flowchart.
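
The traditional approach can be sketched in a few lines of Python. The scores, weights, and bonus below are invented just to land on the article's 87.5; the point is that the number alone carries no explanation:

```python
# Traditional composite indicator: a weighted sum.
# All numbers here are hypothetical, chosen to produce 87.5.

def weighted_grade(math, science, art_bonus, w_math=0.4, w_science=0.6):
    """Classic 'black box' aggregation: a single number, no explanation."""
    return w_math * math + w_science * science + art_bonus

score = weighted_grade(math=80, science=90, art_bonus=1.5)
print(score)  # about 87.5 -- but the formula alone doesn't say *why*
```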

The Core Idea: "If This, Then That"

The authors suggest replacing complex math formulas with Decision Rules. Think of these rules as traffic signs or simple logic gates.

Instead of saying, "Your score is 87.5 because of this weighted average," the new system says:

"IF your Math score is above 80 AND your Science score is above 70, THEN you get a 'Good' grade."

This is much easier to understand. It's like a doctor diagnosing a patient. They don't just give a number; they say, "Because your fever is high and your cough is dry, you likely have the flu."
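
Such a rule is both human-readable and machine-checkable. A minimal sketch, using the thresholds from the example above (the data structure is an illustration, not the paper's formalism):

```python
# A decision rule as plain data: readable by humans, executable by machines.
# Thresholds come from the grading example above; "above" means strictly greater.

rule = {"conditions": [("math", 80), ("science", 70)],
        "conclusion": "Good"}

def rule_applies(rule, student):
    """True if every 'strictly above the threshold' condition is satisfied."""
    return all(student[attr] > threshold
               for attr, threshold in rule["conditions"])

student = {"math": 85, "science": 75}
if rule_applies(rule, student):
    print(rule["conclusion"])  # prints "Good"
```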

The Four Scenarios: How It Works in Real Life

The paper shows how this works in four different situations, using some great real-world examples:

1. The Medical Scale (The Glasgow Coma Scale)

  • The Old Way: Doctors add up points for eye opening, verbal response, and movement. If the total is 7, the patient is "Severe." But why is 7 severe? Is it because the eyes didn't open? Or because they can't speak? The total score hides the specific reason.
  • The New Way: The system generates rules like: "If the patient cannot open their eyes AND cannot speak, THEN they are Severe."
  • The Analogy: It's like a detective solving a crime. Instead of just saying "The suspect is guilty because the math adds up," the detective says, "The suspect is guilty because they were at the scene AND had the weapon."
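
A sketch of what rule-style GCS classification could look like. The component encodings (1 = no response) and cut-offs are illustrative, not clinical guidance; the point is that each branch names its reason:

```python
# Hypothetical rule-based reading of Glasgow Coma Scale components.
# Encodings and cut-offs are illustrative only.

def classify_gcs(eye, verbal, motor):
    # Rule-style classification: each branch states the reason directly.
    if eye == 1 and verbal == 1:      # no eye opening AND no verbal response
        return "Severe"
    if eye + verbal + motor >= 13:    # high responsiveness overall
        return "Mild"
    return "Moderate"

print(classify_gcs(eye=1, verbal=1, motor=4))  # "Severe" -- and we know why
```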

2. The "Obscure" National Index (Human Development Index)

  • The Old Way: Countries are ranked by a complex formula involving life expectancy, schooling, and income. It's hard to tell exactly which factor pushed a country into "High Development" or "Low Development."
  • The New Way: The system creates rules like: "If a country's life expectancy is over 73 years AND schooling is over 12 years, THEN it is 'Very High Development'."
  • The Analogy: Imagine a club with a secret handshake. The old way just tells you if you're in or out. The new way tells you exactly what the handshake is: "You get in if you can say the password AND do the wave."
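
The HDI rule from the text translates almost verbatim into code. The thresholds (73 years, 12 years) come from the article; the country values below are invented for illustration:

```python
# One "at least" rule for the HDI example; thresholds are from the article.

def hdi_rule(life_expectancy, schooling_years):
    if life_expectancy > 73 and schooling_years > 12:
        return "Very High Development"
    return "Not covered by this rule"  # other rules would handle the rest

print(hdi_rule(life_expectancy=80.1, schooling_years=13.4))
# prints "Very High Development"
```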

3. Learning from the Boss (Decision Maker Preferences)

  • The Scenario: A boss looks at 50 stocks and says, "I like Stock A, I hate Stock B, and Stock C is okay." They don't give a formula; they just give their gut feeling.
  • The New Way: The computer looks at the boss's choices and reverse-engineers the rules. It figures out: "Ah, the boss likes stocks where the profit margin is high AND the risk is low."
  • The Analogy: It's like a cooking apprentice watching a master chef. The apprentice doesn't know the recipe, but by watching the master add salt only when the soup is sour, the apprentice learns the rule: "If soup is sour, add salt."
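
The reverse-engineering step can be sketched in the spirit of dominance-based rule induction (an illustrative simplification, not the paper's exact algorithm): take each liked stock's own values as candidate "at least this profit, at most this risk" thresholds, and keep only the rules that cover no disliked stock:

```python
# Learning "at least" rules from a boss's labeled examples (invented data).

examples = [  # (profit_margin, risk) -> verdict
    ((0.30, 0.2), "like"),
    ((0.25, 0.3), "like"),
    ((0.10, 0.2), "dislike"),
    ((0.30, 0.8), "dislike"),
]

def covers(rule, stock):
    min_profit, max_risk = rule
    profit, risk = stock
    return profit >= min_profit and risk <= max_risk

# Candidate rules: each liked example's own values as thresholds.
candidates = [x for x, verdict in examples if verdict == "like"]
# Keep only rules consistent with the boss: they cover no disliked stock.
rules = [r for r in candidates
         if not any(covers(r, x) for x, v in examples if v == "dislike")]

for min_profit, max_risk in rules:
    print(f"IF profit >= {min_profit} AND risk <= {max_risk} THEN like")
```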

4. Explaining a Complex Computer Model

  • The Scenario: Sometimes, a super-complex AI model (like a "black box" algorithm) gives a score.
  • The New Way: The system takes that complex score and translates it back into simple rules for humans to understand.
  • The Analogy: It's like a translator. The computer speaks "Math," and the human speaks "English." This method translates the computer's math into a story the human can understand.

Handling the Messy Stuff: Missing Data and Contradictions

Real life is messy. Sometimes data is missing (like a student who didn't take the math test), or the rules might seem to contradict each other.

  • Missing Data: Imagine a student is missing a math score. The old way might throw the student out of the calculation. The new way says: "We don't know the math score, but if their Science score is high enough, they still qualify for the 'Good' category, regardless of what the math score turns out to be." It's like saying, "Even if we don't know the full story, we have enough evidence to make a safe decision."
  • Contradictions: Sometimes a student might fit a rule for "Good" and a rule for "Bad" at the same time. The paper introduces a smart "referee" (an algorithm) that picks the best set of rules that don't fight each other, ensuring the final decision is fair and consistent.
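
The missing-data behavior can be sketched with a three-valued rule check (again an illustrative simplification, not the paper's formal treatment): a condition on a missing attribute neither confirms nor refutes the rule, so a rule that never mentions the missing attribute can still decide safely:

```python
# Evaluating a rule when some attribute values are missing.

def check(conditions, obj):
    """Return 'holds', 'fails', or 'unknown' for one object."""
    unknown = False
    for attr, threshold in conditions:
        value = obj.get(attr)          # None / absent means missing
        if value is None:
            unknown = True             # can't confirm this condition
        elif value < threshold:
            return "fails"             # a known value violates the rule
    return "unknown" if unknown else "holds"

student = {"science": 85}              # math score is missing entirely

full_rule = [("math", 80), ("science", 70)]
science_only = [("science", 70)]

print(check(full_rule, student))       # "unknown": math condition undecidable
print(check(science_only, student))    # "holds": science alone is enough
```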

Why This Matters

The authors are essentially saying: "Stop hiding behind complex math."

In a world where AI and data drive decisions (from who gets a loan to which country gets aid), people have a right to know why a decision was made.

  • Transparency: You can see the logic.
  • Fairness: You can check if the rules are biased.
  • Trust: You can trust the result because you understand the reasoning.

The Big Takeaway

Think of this paper as a translator between the cold, hard logic of data and the warm, fuzzy logic of human understanding. It turns a mysterious "Score of 87.5" into a clear, honest story: "You got this score because you did X, Y, and Z."

It's not just about getting a grade; it's about understanding the story behind the grade.
