Causal Meta-Analysis: Rethinking the Foundations of Evidence-Based Medicine

This paper proposes a causal framework for meta-analysis, introducing novel aggregation formulas for nonlinear effect measures that address the limitations of conventional methods. It reveals that standard approaches can sometimes misleadingly suggest treatment benefits where the causal effect is actually harmful.

Clément Berenfeld, Ahmed Boughdiri, Bénédicte Colnet, Wouter A. C. van Amsterdam, Aurélien Bellet, Rémi Khellaf, Erwan Scornet, Julie Josse

Published Thu, 12 Ma

Imagine you are a chef trying to create the perfect soup recipe for a city of 1 million people. You don't have time to cook for everyone, so you ask five different local restaurants (studies) to send you their best soup recipes and tell you how much people liked them.

The Old Way: The "Average Taste" Approach

In traditional medicine (and traditional meta-analysis), the experts act like a taste-test committee. They take the "rating" from Restaurant A, the "rating" from Restaurant B, and so on. They then calculate a weighted average.

  • The Problem: This committee assumes that "liking soup" is a simple, straight line. If Restaurant A says 10% of people liked it, and Restaurant B says 20%, they just average those numbers.
  • The Trap: Sometimes, the math gets tricky. If you are measuring something complex (like "how much better" the soup is compared to water), the way you average the numbers matters.
    • If you average the percentages directly, you might get one result.
    • If you average the logarithms of the percentages (a common statistical trick to handle big differences), you might get a completely different result.
    • The Danger: In the real world, this means the committee might say, "This soup is amazing! Everyone should eat it!" while the actual city population, when you mix all the ingredients together, actually finds it terrible. The math trick made the soup look good, but the reality is bad.
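A toy calculation (with made-up numbers, not taken from the paper) shows how the choice of averaging scale alone changes the verdict:

```python
import math

# Hypothetical effect ratios from two studies (made-up numbers):
# study A halves the outcome rate, study B triples it.
ratios = [0.5, 3.0]

# Averaging the ratios directly gives one answer...
arithmetic = sum(ratios) / len(ratios)  # (0.5 + 3.0) / 2 = 1.75

# ...averaging on the log scale (the common statistical trick,
# i.e. the geometric mean) gives a different one.
log_mean = math.exp(sum(math.log(r) for r in ratios) / len(ratios))
# exp((ln 0.5 + ln 3.0) / 2) = sqrt(1.5) ≈ 1.22

print(arithmetic, log_mean)  # 1.75 vs ~1.22: same data, different verdicts
```

Neither number is wrong arithmetic; they are simply answers to different questions, which is exactly the ambiguity the paper targets.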

The New Way: The "Causal" Approach

The authors of this paper propose a new way to cook: Causal Meta-Analysis.

Instead of just averaging the ratings, they ask: "What would happen if we actually served this soup to the entire city?"

They treat the problem like a mixture of ingredients.

  1. The Ingredients (The Studies): Each restaurant has a different mix of customers (some are kids, some are elderly, some are spicy-lovers).
  2. The Goal: We want to know the effect of the soup on the whole city, not just the average of the restaurants.
  3. The Method: Instead of averaging the final scores, they take the raw data (how many people in Restaurant A liked it, how many in Restaurant B liked it) and re-mix them according to the size of the city's population.
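The re-mixing step above can be sketched in a few lines of Python. The counts and the city composition below are hypothetical, and the paper's actual causal estimators are considerably more sophisticated; this only illustrates "pool the raw data, then weight by the target population":

```python
# Each "restaurant" (study) reports raw subgroup counts, not just a score:
# subgroup -> (number who liked the soup, number asked). Hypothetical data.
studies = {
    "A": {"kids": (30, 100), "adults": (10, 100)},
    "B": {"kids": (25, 50),  "adults": (40, 200)},
}

# The city's actual mix of people (assumed known for this sketch).
city = {"kids": 0.2, "adults": 0.8}

# Pool the raw counts per subgroup across studies...
pooled = {}
for group in city:
    liked = sum(counts[group][0] for counts in studies.values())
    total = sum(counts[group][1] for counts in studies.values())
    pooled[group] = liked / total

# ...then re-weight by the city's composition instead of each study's.
city_rate = sum(city[g] * pooled[g] for g in city)
print(round(city_rate, 3))  # → 0.207
```

The key design choice is that the study-level "scores" are never averaged at all: the aggregation happens on the raw data, weighted by the population we actually care about.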

The Analogy of the "Non-Linear" Trap:
Imagine you are measuring how much a car speeds up.

  • Risk Difference (Linear): If Car A goes 10 mph faster and Car B goes 20 mph faster, the average is 15 mph. Simple.
  • Risk Ratio (Non-Linear): If Car A is twice as fast as a bike, and Car B is three times as fast, simply averaging "2" and "3" doesn't tell you how fast the combined fleet is compared to the bikes. The math gets wobbly.
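The risk-ratio version of this trap can be made concrete with hypothetical numbers: averaging the per-study ratios is not the same as taking the ratio of the pooled risks.

```python
# Two hypothetical studies: (risk in treated, risk in control, study size).
studies = [
    (0.02, 0.01, 1000),  # RR = 2 in a low-risk study
    (0.30, 0.10, 1000),  # RR = 3 in a high-risk study
]

# "Old way": average the per-study risk ratios.
avg_rr = sum(rt / rc for rt, rc, _ in studies) / len(studies)  # (2 + 3) / 2 = 2.5

# Population-level view: pool the risks first, then take one ratio.
n = sum(size for _, _, size in studies)
pooled_rt = sum(rt * size for rt, _, size in studies) / n  # 0.16
pooled_rc = sum(rc * size for _, rc, size in studies) / n  # 0.055
pooled_rr = pooled_rt / pooled_rc                           # ≈ 2.91

print(avg_rr, round(pooled_rr, 2))  # 2.5 vs 2.91 -- two different quantities
```

Here the two answers merely disagree in size; the paper shows that for some measures and some data they can disagree in direction, which is the dangerous case.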

The paper shows that for these "wobbly" math problems (like Risk Ratios and Odds Ratios), the old "average the scores" method often lies. It might say the treatment is a miracle cure, while the new "re-mix the ingredients" method reveals it's actually harmful.

Why This Matters

The authors tested this on 500 real medical studies.

  • Most of the time: The old way and the new way agreed. The soup was good in both cases.
  • The scary part: In a few critical cases, the old method said, "This drug saves lives!" while the new causal method said, "Actually, this drug hurts people."

The Takeaway

Think of Meta-Analysis as trying to predict the future based on past reports.

  • The Old Way is like taking a poll of five different towns and averaging their answers. It's easy, but it can be misleading if the towns are very different.
  • The New Way is like building a simulated city using the data from those five towns. It asks, "If we put all these people together, what actually happens?"

The paper argues that for medical decisions that affect millions of people, we need to stop just averaging numbers and start simulating the real population. This ensures that when doctors say a treatment works, it actually works for the real patients, not just for the math on a spreadsheet.

In short: Don't just average the reviews; mix the ingredients to see what the final dish actually tastes like.