Don't Disregard the Data for Lack of a Likelihood: Bayesian Synthetic Likelihood for Enhanced Multilevel Network Meta-Regression

This paper proposes a Bayesian Synthetic Likelihood (BSL) framework integrated with Hamiltonian Monte Carlo to enhance Multilevel Network Meta-Regression by leveraging available subgroup-level summary statistics to impute missing individual-level covariates, thereby improving population-adjusted treatment comparisons in the presence of incomplete data.

Harlan Campbell, Charles C. Margossian, Jeroen P. Jansen, Paul Gustafson

Published Thu, 12 Ma

Imagine you are a detective trying to solve a mystery: Which medicine works best for which type of patient?

In the world of medicine, researchers run clinical trials. Sometimes, they have the "gold standard" data: a list of every single patient, what they took, and exactly how they reacted, including their age, weight, and medical history. This is like having a complete, high-definition video of the crime scene.

But often, companies only release a blurry photo or a summary report. They tell you, "50% of people got better," but they hide the details because of privacy laws or trade secrets. They might say, "It worked well for heavy people, but not for light people," but they won't give you the names or weights of the individuals.

This paper introduces a clever new detective tool called Bayesian Synthetic Likelihood (BSL) to solve this mystery even when the "video" is missing.

Here is the story of how it works, broken down into simple concepts:

1. The Problem: The Missing Puzzle Pieces

Standard methods, known as Multilevel Network Meta-Regression (ML-NMR), try to guess the answer by looking at the blurry photo and the summary report. They try to "fill in the blanks" by averaging out the missing details.

  • The Flaw: It's like trying to guess the flavor of a soup by only tasting the broth, ignoring the fact that the chef wrote a note saying, "I added extra salt for the spicy section." Standard methods often throw away these helpful notes (subgroup summaries) because they don't know how to fit them into the math without the full list of ingredients.

2. The Solution: The "Imagination Engine" (BSL)

The authors propose a new method that acts like a super-powered imagination engine. Instead of giving up on the missing details, the engine pretends to have them.

Here is the step-by-step process, using a Bakery Analogy:

  • The Setup: You are a baker trying to figure out the perfect recipe for a cake. You have the recipe for 10 cakes (the data you have), but you are missing the ingredient lists for 100 other cakes. However, you do have a summary note from the head baker: "In the big batch, 80% of the cakes were too sweet."
  • The Old Way: You ignore the note about sweetness and just guess based on your 10 cakes. Your guess might be way off.
  • The New Way (BSL):
    1. Make a Guess: You guess a recipe (e.g., "Maybe I used 2 cups of sugar?").
    2. Simulate the Missing Cakes: You use your computer to "bake" 100 imaginary cakes based on that guess.
    3. Check the Result: You look at your imaginary cakes. "Oh, with 2 cups of sugar, 90% of my imaginary cakes were too sweet."
    4. Compare: The real note said 80%. Your guess (90%) is too high.
    5. Adjust: You tweak your recipe (try 1.5 cups of sugar) and repeat the simulation.
    6. Repeat: You keep doing this thousands of times until your imaginary cakes match the real summary note as closely as possible.

By constantly simulating the missing data and checking if it matches the real summary notes, the method finds the true recipe that fits both the data you have and the notes you were given.
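The simulate-compare-adjust loop above can be sketched in a few lines of Python. This is an illustrative toy in the bakery setting, not the paper's implementation: the sweetness model, the 1.6 threshold, and the grid search are all invented stand-ins (the paper explores the posterior with HMC rather than a grid).

```python
import numpy as np

rng = np.random.default_rng(0)

observed_summary = 0.80   # the head baker's note: 80% of cakes too sweet
n_cakes = 100             # size of the unseen batch

def simulate_summary(sugar_cups):
    """Bake one imaginary batch and report the fraction 'too sweet'.
    The sweetness model and the 1.6 threshold are made up for illustration."""
    sweetness = rng.normal(loc=sugar_cups, scale=0.5, size=n_cakes)
    return np.mean(sweetness > 1.6)

def synthetic_log_lik(sugar_cups, n_sims=200):
    """Gaussian ('synthetic') likelihood of the observed summary:
    simulate many batches, then score how plausible the real note is
    under the simulated mean and spread."""
    sims = np.array([simulate_summary(sugar_cups) for _ in range(n_sims)])
    mu, sd = sims.mean(), sims.std(ddof=1) + 1e-6
    return -0.5 * ((observed_summary - mu) / sd) ** 2 - np.log(sd)

# "Adjust and repeat": here a simple grid search over candidate recipes.
grid = np.linspace(1.0, 3.0, 41)
best = max(grid, key=synthetic_log_lik)
```

The key idea is that `synthetic_log_lik` never needs a formula for the probability of the summary note; it only needs the ability to simulate it, which is exactly what makes the method work when the individual-level likelihood is unavailable.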

3. The Technical Hurdle: The "Smoothie" Problem

There was a big problem with this idea. The sampling engines that do this kind of math efficiently (called Hamiltonian Monte Carlo, or HMC) are like smoothie makers: they need everything to be perfectly smooth and continuous to work.

But the "summary notes" in real life are often discrete (counting whole numbers: 1 cake, 2 cakes, 3 cakes). You can't have 2.5 cakes.

  • The Glitch: If you try to put a "bumpy" discrete number into a "smooth" smoothie maker, the machine breaks or gets confused. It stops working efficiently.

The Fix: The "Continuous Relaxation"
The authors invented a way to turn the "bumpy" numbers into "smooth" numbers without losing the meaning.

  • Imagine instead of counting whole cakes, you measure the volume of batter. It's still the same amount of cake, but now it's a smooth liquid that the smoothie maker can handle easily.
  • They also use a correction step (called PSIS, Pareto Smoothed Importance Sampling) at the end. It's like tasting the smoothie after you make it and adding a tiny pinch of salt or sugar to fix any flavor errors caused by turning the cake into batter. This corrects the final answer for any distortion introduced by the smoothing.

4. The Result: Recovering Lost Treasure

The authors tested this on real data about Psoriasis (a skin condition).

  • The Test: They took a study where they pretended to lose the individual patient data, keeping only the summary notes.
  • The Outcome:
    • The Old Method (ignoring the notes) gave a blurry, uncertain answer. It couldn't tell if the medicine worked better for heavy people or light people.
    • The New Method (BSL) used the summary notes to "reconstruct" the missing details. It recovered almost all the information, giving an answer that was nearly as good as if they had the full, private data list.

Why This Matters

In the real world, companies often hide individual patient data due to privacy or profit. This paper shows that we don't need to throw away the valuable "summary notes" that are published.

By using this Imagination Engine, doctors and policymakers can make much better decisions about which medicine to prescribe to which patient, even when the full data is locked away. It turns "partial information" into "near-complete knowledge," saving us from discarding valuable clues.

In short: Don't throw away the clues just because you don't have the whole puzzle. This new method lets you build the puzzle by imagining the missing pieces and checking if they fit the picture you already have.