Linear Programming for Multi-Criteria Assessment with Cardinal and Ordinal Data: A Pessimistic Virtual Gap Analysis

This paper introduces a novel, scalable linear programming framework called Virtual Gap Analysis (VGA) that integrates cardinal and ordinal data to perform pessimistic, two-step multi-criteria assessments, thereby mitigating subjective biases and enabling efficient ranking and elimination of the least favorable alternatives in decision support systems.

Fuh-Hwa Franklin Liu, Su-Chuan Shih

Published 2026-04-14

Imagine you are the manager of a fleet of delivery trucks. You want to know which truck is the best and which one is the worst so you can decide which one to keep and which one to retire.

But here's the catch: You can't just look at one thing. You have to look at:

  • Fuel efficiency (a number, like miles per gallon).
  • Driver safety record (a number, like accidents per year).
  • Customer satisfaction (a rating from 1 to 5 stars).
  • Brand reputation (a subjective score from a survey).
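
To make the setup concrete, here is a tiny hypothetical data set for such a fleet. Every number below is invented for illustration; the paper's own data sets differ.

```python
# Hypothetical fleet data mixing cardinal and ordinal criteria.
trucks = {
    #      fuel (mpg)  accidents/yr  satisfaction (1-5)  reputation (0-100)
    "T1": {"mpg": 9.5, "accidents": 2, "stars": 3, "reputation": 62},
    "T2": {"mpg": 7.1, "accidents": 5, "stars": 2, "reputation": 48},
    "T3": {"mpg": 11.0, "accidents": 1, "stars": 5, "reputation": 81},
}
# mpg, stars, and reputation behave like outputs (more is better);
# accidents behaves like an input (less is better).
# stars is ordinal Likert data: only its ordering is meaningful.
```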

This is what experts call Multi-Criteria Assessment (MCA). It's like trying to grade a student not just on math scores, but also on their handwriting, their attitude, and their participation in class.

The problem with most existing methods is that they are like a biased teacher. They might say, "I think safety is twice as important as fuel," or they might struggle to compare a "5-star rating" with "gallons of gas." This leads to unfair rankings.

The Solution: The "Virtual Gap" Detective

This paper introduces a new, smarter way to rank these alternatives called Virtual Gap Analysis (VGA). Think of it as a high-tech, unbiased detective that uses math (specifically Linear Programming) to find the truth.

Here is how the new method works, broken down into simple steps:

1. The Two-Step "Worst-First" Strategy

Instead of trying to find the "Superstar" immediately, this method starts by finding the "Strugglers." It works in two stages:

  • Stage 1: The "Worst Practice" Filter.
    Imagine a room full of trucks. The method asks: "Who is closest to the worst practice?" It doesn't just look at the raw numbers; it measures how much "virtual money" (a made-up currency used for comparison) separates each truck from the worst-performing benchmark its peers define.

    • If a truck performs well, its distance from the worst practice is large, say $100.
    • If a truck sits right on the worst-practice benchmark, its distance is $0: it cannot be shown to be any worse.
    • The method groups every truck whose distance is $0 into a "Worst Group." These are the trucks that are already as bad as they can possibly be compared to their peers.
  • Stage 2: The "Who's the Worst of the Worst?" Showdown.
    Now, we have a small group of the worst-performing trucks. We need to know which one is the absolute bottom.
    The method runs a second, more intense test on just this group, comparing each member against the others. The truck that would need the most "virtual money" to catch up to its group-mates in this final round is declared the Worst Alternative. You remove it from the fleet, then repeat the whole two-stage process to find the next worst, and so on, until everyone is ranked from best to worst.
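
The two-stage loop can be sketched in Python. This is a toy stand-in, not the paper's method: the real Stage-1 and Stage-2 scores each come from solving a linear program, whereas `gap_to_worst` and `catch_up_cost` below are simple hand-rolled distances, and every criterion is assumed to already be "more is better" (inputs flipped in advance).

```python
def gap_to_worst(scores, pool):
    # Toy stand-in for the Stage-1 LP: distance above the per-criterion
    # minimum across the pool (0 = as bad as the worst on everything).
    mins = [min(s[i] for s in pool) for i in range(len(scores))]
    return sum(s - m for s, m in zip(scores, mins))

def catch_up_cost(scores, pool):
    # Toy stand-in for the Stage-2 LP: virtual spending needed to reach
    # the per-criterion maximum within the Worst Group.
    maxs = [max(s[i] for s in pool) for i in range(len(scores))]
    return sum(m - s for s, m in zip(scores, maxs))

def rank_worst_first(alternatives):
    """Repeatedly find and remove the worst alternative (worst-first order)."""
    remaining = dict(alternatives)
    order = []
    while remaining:
        pool = list(remaining.values())
        # Stage 1: the Worst Group = alternatives closest to worst practice.
        gaps = {k: gap_to_worst(v, pool) for k, v in remaining.items()}
        floor = min(gaps.values())
        worst_group = [k for k, g in gaps.items() if g <= floor + 1e-9]
        # Stage 2: within the group, the costliest-to-fix member is the worst.
        group_pool = [remaining[k] for k in worst_group]
        worst = max(worst_group,
                    key=lambda k: catch_up_cost(remaining[k], group_pool))
        order.append(worst)
        del remaining[worst]
    return order  # first element is the very worst
```

With three alternatives where each strictly dominates the last, `rank_worst_first({"A": (1, 1), "B": (2, 2), "C": (3, 3)})` eliminates `A` first, then `B`, then `C`.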

2. The Magic of "Virtual Currency"

How does it compare apples (fuel) to oranges (customer ratings)?
The method invents a Virtual Currency.

  • It assigns a "price" to every input and output.
  • It calculates a "Virtual Gap": The difference between what the truck costs (inputs) and what it earns (outputs) in this virtual world.
  • The Analogy: Imagine you are comparing a heavy truck and a light bike. The method says, "Okay, let's pretend 1 gallon of gas costs $10, and 1 star of satisfaction is worth $50." It balances the scales so you can compare them fairly without needing to convert everything into the same physical unit.
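
Under fixed prices, the virtual gap is just a weighted difference between inputs and outputs. The $10 and $50 prices below mirror the analogy above and are pure assumptions; in the actual method, the linear program searches over admissible prices for each alternative rather than fixing them in advance.

```python
# Illustrative virtual prices (assumptions, not values from the paper).
PRICE_PER_GALLON = 10.0   # virtual cost of one gallon consumed (input)
PRICE_PER_STAR = 50.0     # virtual value of one satisfaction star (output)

def virtual_gap(gallons_used, stars):
    """Virtual cost of inputs minus virtual value of outputs.

    A positive gap means the alternative 'spends' more than it 'earns'
    in the virtual currency; different physical units become comparable.
    """
    virtual_cost = PRICE_PER_GALLON * gallons_used
    virtual_value = PRICE_PER_STAR * stars
    return virtual_cost - virtual_value

heavy_truck = virtual_gap(gallons_used=30, stars=4)  # 300 - 200 = 100
light_bike = virtual_gap(gallons_used=2, stars=3)    # 20 - 150 = -130
```

Despite gallons and stars being incommensurable units, the two alternatives land on a single comparable scale.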

3. Handling "Subjective" Data (The Likert Scale)

What about the "Customer Satisfaction" rating (1 to 5 stars)?
The method treats these ratings like a ruler. It knows that a "5" is the top of the ruler and a "1" is the bottom. It doesn't guess what a "5" means; it simply knows it's the best possible position. It uses this to calculate how much "virtual money" is lost when a truck gets a "3" instead of a "5."
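
A minimal sketch of that gap idea for a 5-point scale. For simplicity this sketch assumes a fixed virtual price per level (i.e., equal spacing between levels); that spacing is an illustrative assumption of mine, since the method itself relies only on the ordering of the levels, not on any cardinal meaning attached to them.

```python
def likert_gap(rating, scale_max=5, price_per_level=50.0):
    """Virtual money 'lost' for sitting below the top of an ordinal scale.

    price_per_level is an illustrative assumption, not a value from the
    paper; only the ordering of the levels comes from the data.
    """
    if not 1 <= rating <= scale_max:
        raise ValueError("rating outside the Likert scale")
    return (scale_max - rating) * price_per_level
```

A truck rated "3" sits two levels below the top, so `likert_gap(3)` returns a gap of two levels' worth of virtual money, while a "5" loses nothing.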

Why is this a Big Deal?

  1. No Bias: Traditional methods often ask a human to decide, "Is safety more important than speed?" This new method lets the math decide the weights automatically based on the data. It's like a judge who has no personal feelings about the drivers.
  2. Handles Mixed Data: It can easily mix hard numbers (dollars, tons) with soft numbers (ratings, surveys) without getting confused.
  3. The "Pessimistic" View: Most methods try to be optimistic ("Look how good this truck is!"). This method is pessimistic ("Look how much this truck needs to improve"). This is actually more useful for decision-makers because it highlights the risks and the worst-case scenarios.
  4. Scalable: It can handle a fleet of 10 trucks or 10,000 trucks without breaking a sweat.

The Bottom Line

This paper presents a new, robust tool for making tough decisions. Whether you are choosing the best university, the most efficient factory, or the safest airline, this method acts like a fair, mathematical referee.

It doesn't just tell you who won; it tells you exactly how far the losers are from winning and what specific changes they need to make to get there. It turns a messy, subjective debate into a clear, data-driven ranking.
