On the Monotonicity of Information Costs

This paper establishes simple necessary and sufficient conditions for information cost functions to be monotone with respect to Blackwell and Lehmann informativeness orders, grounded in their garbling characterizations, and applies these criteria to evaluate several well-known cost functions from the literature.

Xiaoyu Cheng, Yonggyun Kim

Published Mon, 09 Ma

Imagine you are a detective trying to solve a mystery. You have a choice: you can buy a cheap, blurry snapshot of the crime scene, or you can pay a fortune for a high-definition, 3D video with audio.

In economics, we assume that better information should cost more. This seems obvious, right? But in the world of complex math and decision theory, "better" is a tricky word. Does a blurry photo count as "better" if it helps you catch a thief in a specific situation but fails in another?

This paper, titled "On the Monotonicity of Information Costs," by Xiaoyu Cheng and Yonggyun Kim, tackles a fundamental question: How do we mathematically guarantee that a cost function (a price tag for information) actually charges more for better info?

Here is the breakdown using simple analogies.

1. The Two Rules of "Better"

The authors look at two different ways to define what "better information" means. Think of these as two different rulebooks for a game.

  • The Blackwell Rulebook (The Universal Judge):
    Imagine a judge who says, "Experiment A is better than Experiment B only if A helps you win every single possible game you could play."

    • The Problem: This rule is incredibly strict. It's like saying a Ferrari is only "better" than a bicycle if it's faster at driving on the highway, and faster at climbing a mountain, and better at swimming, and better at cooking dinner. Because no car can do everything better than a bike in every scenario, the judge often says, "I can't compare them." They are "incomparable."
    • The Cost: If you follow this rulebook, your price tag must charge more whenever one experiment beats another in every possible scenario — and it says nothing about the many pairs the judge calls incomparable.
  • The Lehmann Rulebook (The Specialized Judge):
    This judge is more practical. They say, "Experiment A is better than B if it helps you win in specific, logical situations where things move in one direction."

    • The Scenario: Think of an auction. If you know the item is worth more, you bid higher. If you know it's worth less, you bid lower. This is a "monotone" situation (things move in one direction).
    • The Advantage: The Lehmann judge can compare experiments that the Blackwell judge couldn't. It's like saying, "For the specific game of 'Auction,' this high-def video is definitely better than the blurry photo, even if the photo is better for 'Swimming.'"
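
The Blackwell idea has a clean matrix picture: an experiment is a table of signal probabilities per state, and "worse" means "equal to the original multiplied by a garbling matrix" (a stochastic matrix that adds noise). A minimal numpy sketch, with all numbers made up for illustration:

```python
import numpy as np

# An experiment maps each state of the world to a distribution over signals.
# Rows = states, columns = signals; each row sums to 1.
sharp = np.array([
    [0.9, 0.1],   # state "low":  mostly emits signal 0
    [0.1, 0.9],   # state "high": mostly emits signal 1
])

# Blackwell: B is "worse" than A iff B = A @ M for some stochastic matrix M
# (a "garbling" that scrambles A's signals after the fact).
garbling = np.array([
    [0.8, 0.2],
    [0.2, 0.8],
])
blurry = sharp @ garbling  # [[0.74, 0.26], [0.26, 0.74]] — less diagnostic
```

Every Blackwell comparison reduces to asking whether such an `M` exists; when it doesn't in either direction, the two experiments are incomparable.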

2. The Big Discovery: The "Price Tag" Test

The authors wanted to know: What rules must a price tag follow to be fair under these two rulebooks?

They discovered that you don't need to check every single possible experiment to see if the price is fair. You only need to check tiny, local changes. They call this "Decreasing in Signal Replacement."

The Analogy: The "Signal Swap"
Imagine your information is a set of clues.

  • Blackwell Test: If you take a clue that is "High Quality" and swap it for a "Low Quality" clue (making the whole set worse), the price must go down. If the price stays the same or goes up, your pricing model is broken.
  • Lehmann Test: This is trickier. It's not just about swapping any clue. It's about swapping clues in a specific order.
    • Imagine you have a list of clues ranked from "Low State" (bad news) to "High State" (good news).
    • The Lehmann rule says: If you take a "High State" clue and accidentally swap it with a "Low State" clue only when the news is bad, the price must drop.
    • Why? Because in monotone problems (like auctions), mixing up the order of clues confuses the decision-maker specifically in the way that matters most.
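
A toy version of the Blackwell-side local test, using mutual information as a stand-in for a cost function (the experiment and the merge are invented for illustration): replacing one signal with another is a small garbling, so any monotone price must fall.

```python
import numpy as np

def mutual_information(prior, experiment):
    """Mutual information between state and signal (in nats) --
    a stand-in for an 'entropy'-style cost of the experiment."""
    joint = prior[:, None] * experiment          # P(state, signal)
    p_signal = joint.sum(axis=0)                 # signal marginal
    denom = prior[:, None] * p_signal
    mask = joint > 0
    return float((joint[mask] * np.log(joint[mask] / denom[mask])).sum())

prior = np.array([0.5, 0.5])
original = np.array([
    [0.7, 0.2, 0.1],
    [0.1, 0.2, 0.7],
])

# "Signal replacement": whenever signal 2 would have fired, emit signal 1
# instead -- a local, one-signal garbling of the experiment.
replaced = original.copy()
replaced[:, 1] += replaced[:, 2]
replaced[:, 2] = 0.0

# The local test: the price of the worse experiment must be lower.
assert mutual_information(prior, replaced) < mutual_information(prior, original)
```

The point of the paper's criterion is that checking swaps like this one, signal by signal, is enough; you never have to enumerate all garblings.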

3. The "Path" Problem (The Hard Part)

The authors had to prove that if your price tag passes these tiny, local tests, it will pass the big, global test.

  • The Blackwell Path: Imagine you have a very informative map. You want to turn it into a less informative map by slowly adding fog. The authors proved you can do this by adding fog in small, step-by-step chunks. If your price drops every time you add a little fog, the total price will be correct.
  • The Lehmann Path (The Mountain Climb): This was the hard part. The set of "good" experiments (those that follow the Lehmann rules) is not a smooth hill; it's a jagged, non-convex mountain. You can't just walk in a straight line from a good map to a bad map without falling off the mountain (violating the rules).
    • The Solution: The authors built a special "rope bridge" (a mathematical path) that stays strictly on the mountain ridge. They showed that you can transform a complex experiment into a simpler one by making tiny, specific swaps that keep the "logic" of the experiment intact while lowering the quality. If the price drops on every step of this bridge, the pricing model is valid.
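
The Blackwell "fog" idea can be sketched as a one-parameter path of garblings. This is only an illustration of the step-by-step logic, not the authors' actual construction (their Lehmann path is the delicate part):

```python
import numpy as np

def mutual_information(prior, experiment):
    joint = prior[:, None] * experiment
    p_sig = joint.sum(axis=0)
    mask = joint > 0
    return float((joint[mask] * np.log(joint[mask] / (prior[:, None] * p_sig)[mask])).sum())

prior = np.array([0.5, 0.5])
sharp = np.array([[0.9, 0.1], [0.1, 0.9]])

# A one-parameter "fog" path: t=0 keeps the experiment, t=1 replaces every
# signal with pure noise. Moving from any t to a larger t is itself a
# garbling, so a monotone cost must fall at every step along the path.
def fogged(experiment, t):
    n = experiment.shape[1]
    noise = np.full((n, n), 1.0 / n)
    return experiment @ ((1 - t) * np.eye(n) + t * noise)

costs = [mutual_information(prior, fogged(sharp, t)) for t in np.linspace(0, 1, 6)]
assert all(a >= b for a, b in zip(costs, costs[1:]))  # monotone along the path
```

If local decreases hold at each small step, they chain into the global comparison — which is exactly the local-to-global argument the authors had to secure on the "jagged" Lehmann side.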

4. Testing Famous Price Tags

Finally, the authors took famous "price tags" used by economists and tested them against their new rules.

  • The "Entropy" Cost (The Standard): This is the most common way to price information, based on how much the information is expected to reduce uncertainty. They confirmed it works for both the Blackwell and Lehmann judges. It's a fair price tag.
  • The "Bregman" Cost (The New Kid): This is a newer, more flexible pricing model used in machine learning and advanced economics.
    • The Verdict: The authors found that this model fails the test. It sometimes charges more for information that is actually worse (or charges the same for different levels of quality).
    • Why? It's like a store whose pricing formula can end up charging more for a slightly damaged apple than for a perfect one. The authors showed exactly why it fails: it doesn't respect the "Signal Swap" rule.
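
As a sanity check on the Blackwell side, one can randomly garble random experiments and confirm that an entropy-style cost (here, mutual information under a uniform prior) never rises — a toy property test, not the paper's proof:

```python
import numpy as np

rng = np.random.default_rng(0)

def mutual_information(prior, experiment):
    joint = prior[:, None] * experiment
    p_sig = joint.sum(axis=0)
    mask = joint > 0
    return float((joint[mask] * np.log(joint[mask] / (prior[:, None] * p_sig)[mask])).sum())

def random_stochastic(rows, cols, rng):
    """A random matrix whose rows each sum to 1."""
    m = rng.random((rows, cols))
    return m / m.sum(axis=1, keepdims=True)

# Property check: garbling an experiment never raises the entropy-style cost
# (the data-processing inequality, which is what makes this price tag "fair").
prior = np.full(3, 1.0 / 3)
for _ in range(200):
    experiment = random_stochastic(3, 4, rng)
    garbled = experiment @ random_stochastic(4, 4, rng)
    assert mutual_information(prior, garbled) <= mutual_information(prior, experiment) + 1e-12
```

A price tag that fails this kind of check — as the blog says the Bregman-style costs can — will sometimes charge more for strictly worse information.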

Summary: Why Does This Matter?

In the real world, economists and AI researchers build models where agents (people or computers) "buy" information to make decisions.

If the "price tag" they use is flawed—meaning it doesn't strictly charge more for better info—the whole model breaks. The agents might make irrational choices, or the math might predict things that never happen in reality.

The Takeaway:
This paper gives economists a simple checklist (the "Signal Swap" test) to verify if their information pricing models are fair.

  1. If you want a universal price tag: Make sure the price drops whenever you swap a good clue for a bad one.
  2. If you want a specialized price tag (for auctions, markets, etc.): Make sure the price drops when you swap clues in a specific, ordered way.

By providing these simple "local" tests, the authors made it much easier to build robust economic models that accurately reflect how we value information.