Constructing Evidence-Based Tailoring Variables for Adaptive Interventions

This paper proposes a systematic framework for empirically developing evidence-based tailoring variables for adaptive interventions. It argues that while secondary data analysis can inform these choices, purpose-built optimization experiments (such as SMARTs or factorial designs) provide the most direct causal evidence for choosing measurement times, decision points, and cutoffs.

John J. Dziak, Inbal Nahum-Shani


Imagine you are a chef trying to create the perfect recipe for a soup that helps people recover from a bad habit, like smoking or overeating. You know that one size doesn't fit all. Some people need a little extra spice (more support), while others are fine with the basic broth.

This paper is about how to figure out exactly who needs the extra spice, when to add it, and how much to add.

In the scientific world, this is called an Adaptive Intervention. The "rules" you use to decide who gets the extra help are called Tailoring Variables. The authors, John Dziak and Inbal Nahum-Shani, are asking: How do we scientifically design these rules so they actually work, rather than just guessing?

Here is the breakdown of their ideas using simple analogies.

1. The Four Ingredients of a Rule

To make a good decision about who needs extra help, you need to define four things. Think of this like setting up a traffic light system for a patient's progress:

  • The Sensor (Observed Variable): What are we measuring? (e.g., "Did they use the app 5 times this week?" or "Did they smoke a cigarette?")
  • The Check-in Time (Assessment Time): When do we look at the data? (e.g., "Every Monday morning.")
  • The Decision Moment (Decision Time): When do we actually make the call? (e.g., "If they haven't used the app by Wednesday, we intervene.")
  • The Red Line (Cutoff): What is the specific number that triggers the alarm? (e.g., "If they used the app less than 3 times, they are 'non-responders' and need help.")

The paper argues that getting these four ingredients right is just as important as the treatment itself. If your red line is too high, you miss people who need help. If it's too low, you waste resources helping people who didn't need it.
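
To make the four ingredients concrete, here is a minimal sketch in Python. The class, field names, and example values are ours for illustration; they are not from the paper:

```python
from dataclasses import dataclass

@dataclass
class TailoringRule:
    """One decision rule: what to measure, when, and where the red line sits."""
    observed_variable: str  # the Sensor, e.g. weekly app-use count
    assessment_day: str     # the Check-in Time, when the data is pulled
    decision_day: str       # the Decision Moment, when the call is made
    cutoff: float           # the Red Line that triggers extra support

    def needs_extra_support(self, measured_value: float) -> bool:
        """Flag a participant as a non-responder if they fall below the cutoff."""
        return measured_value < self.cutoff

# Example: check app use on Monday, decide on Wednesday, and flag
# anyone who used the app fewer than 3 times that week.
rule = TailoringRule("weekly_app_uses", "Monday", "Wednesday", cutoff=3)
print(rule.needs_extra_support(2))  # True -> this person gets the extra spice
```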

2. The Two Ways to Find the Perfect Rule

The authors say there are two main ways to figure out the best settings for your traffic light: Looking in the Rearview Mirror (Secondary Data) or Building a Test Track (Optimization Trials).

Method A: Looking in the Rearview Mirror (Secondary Data Analysis)

Imagine you have a giant notebook of old records from people who already tried the basic soup recipe. You look at the data to see who got better and who didn't.

  • The Problem: This is like trying to predict tomorrow's weather from yesterday's clouds. You can see who relapsed and who recovered, but you never observe what would have happened if you had added the extra spice at that specific moment.
  • The Risk: You might find a pattern, but you can't be sure it's the cause. Maybe the people who failed would have failed anyway, or maybe they would have succeeded if you had intervened earlier. It's full of "what ifs."
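
A tiny simulation shows why the rearview mirror misleads. This is a toy model we made up, with "motivation" as a hidden confounder; nothing here comes from the paper's data:

```python
import random

random.seed(7)

# Motivation (unobserved) drives both early app use and final success,
# so old records make app use look like the cause of success.
records = []
for _ in range(5000):
    motivation = random.random()               # hidden confounder
    app_uses = int(motivation * 6 + random.random() * 2)
    succeeded = random.random() < motivation   # success driven by motivation,
    records.append((app_uses, succeeded))      # not by app use itself

high = [s for u, s in records if u >= 3]
low = [s for u, s in records if u < 3]
print(f"success if app_uses >= 3: {sum(high)/len(high):.2f}")
print(f"success if app_uses < 3:  {sum(low)/len(low):.2f}")
# The gap looks like an effect of app use, but nudging people to use the
# app more would not close it; only randomization separates the two.
```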

Method B: Building a Test Track (Optimization Randomized Controlled Trials)

This is the "Gold Standard." Instead of guessing, you build a controlled experiment where you test different rules against each other.

  • The Analogy: Imagine you have four identical cars. You put them on a test track.
    • Car 1 stops for help if the driver misses 1 turn.
    • Car 2 stops for help if the driver misses 2 turns.
    • Car 3 stops for help if the driver misses 3 turns.
    • Car 4 never stops for help.
  • You drive them all the same distance and see which car finishes fastest and safest. Because you randomized (randomly assigned) the cars to these rules, you know for a fact that the difference in speed was caused by the rule, not by the driver's skill.
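
Here is a toy version of the test track, assuming a made-up outcome model in which struggling drivers benefit from a rescue. The arms and numbers are illustrative only:

```python
import random

random.seed(42)

CUTOFF_ARMS = [1, 2, 3, None]  # None = Car 4: never stop for help

def simulate_participant(cutoff):
    """Toy outcome model (assumed): struggling drivers gain from a rescue."""
    missed_turns = random.randint(0, 4)           # early warning signal
    rescued = cutoff is not None and missed_turns >= cutoff
    outcome = 10 - 2 * missed_turns               # worse if struggling
    if rescued and missed_turns >= 2:             # rescue helps the strugglers
        outcome += 4
    return outcome

# Randomization is what lets us say the RULE caused any difference.
results = {arm: [] for arm in CUTOFF_ARMS}
for _ in range(4000):
    arm = random.choice(CUTOFF_ARMS)
    results[arm].append(simulate_participant(arm))

for arm, scores in results.items():
    label = "never rescue" if arm is None else f"rescue at {arm}+ missed turns"
    print(f"{label}: mean outcome {sum(scores)/len(scores):.2f}")
```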

3. The Tricky Parts: Timing and Trade-offs

The paper highlights that choosing these rules isn't just about math; it's about timing and trade-offs.

  • The "Wait vs. Act" Dilemma:
    Imagine you are waiting for a friend to show up.

    • If you call them after 5 minutes, you might be annoying them (they were just running late).
    • If you call them after 2 hours, they might have already left town.
    • The Science: The authors explain that waiting longer often gives you better data (you know for sure they aren't coming), but waiting too long means you miss the chance to help. You have to find the "elbow" in the curve: the point where waiting longer doesn't add much more information, but acting now is still effective (see the first sketch after this list).
  • The "False Alarm" vs. "Missed Opportunity" Trade-off:

    • Sensitivity (Catching everyone): If you set the cutoff very low (e.g., "If they miss any app use, send help"), you catch everyone who needs help, but you also waste money helping people who were fine.
    • Specificity (Only the needy): If you set the cutoff very high (e.g., "Only send help if they miss 5 days"), you save money, but you might miss people who were struggling yet fell just short of the trigger.
    • The Solution: You have to decide: Is the "rescue" treatment cheap and easy (like a text message)? Then be generous with the cutoff. Is it expensive and invasive (like a hospital visit)? Then be strict with it. The second sketch below works through this cost calculation.
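
To see the "elbow" idea in code, here is a small simulation under an assumed noise model; the decay rate and probabilities are invented for illustration:

```python
import math
import random

random.seed(0)

def classification_accuracy(wait_days: int, n: int = 20000) -> float:
    """Toy model (assumed): the signal gets less noisy the longer we wait,
    so early non-response predicts final non-response more accurately."""
    noise = 0.5 * math.exp(-0.6 * wait_days)  # chance the early signal misleads
    correct = 0
    for _ in range(n):
        truly_nonresponder = random.random() < 0.4
        flipped = random.random() < noise
        observed = (not truly_nonresponder) if flipped else truly_nonresponder
        correct += observed == truly_nonresponder
    return correct / n

prev = None
for days in range(1, 8):
    acc = classification_accuracy(days)
    line = f"wait {days} days: accuracy={acc:.3f}"
    if prev is not None:
        line += f", marginal gain={acc - prev:+.3f}"
    print(line)
    prev = acc
# The elbow is where the marginal gain flattens while there is
# still time left to intervene effectively.
```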
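
And here is the false-alarm vs. missed-opportunity calculation as a sketch, using a hypothetical ten-person cohort and invented costs; with a rescue this cheap, generous cutoffs win:

```python
# Hypothetical cohort: (weekly app uses, did they truly need help?)
cohort = [(0, True), (1, True), (2, True), (3, True), (1, False),
          (2, False), (3, False), (4, False), (5, False), (6, False)]

COST_PER_RESCUE = 5  # cheap, e.g. a text message (raise this for a hospital visit)
COST_PER_MISS = 50   # harm of overlooking someone who needed help

for cutoff in range(1, 7):  # rule: flag anyone with app uses below the cutoff
    flagged = [(u, n) for u, n in cohort if u < cutoff]
    tp = sum(1 for u, n in flagged if n)   # true positives (help that was needed)
    fp = len(flagged) - tp                 # false alarms
    needed = sum(1 for u, n in cohort if n)
    healthy = len(cohort) - needed
    sensitivity = tp / needed
    specificity = (healthy - fp) / healthy
    cost = len(flagged) * COST_PER_RESCUE + (needed - tp) * COST_PER_MISS
    print(f"flag if uses < {cutoff}: sens={sensitivity:.2f}, "
          f"spec={specificity:.2f}, total cost={cost}")
```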

4. The Big Picture: Why This Matters

The authors conclude that while looking at old data is cheaper and easier, it often leads to guesswork. To build truly effective, scalable health programs (like apps for addiction or weight loss), we need to run smart experiments.

They suggest using fancy experimental designs, like SMARTs (Sequential Multiple Assignment Randomized Trials) or factorial designs, which are like complex board games where you test multiple rules at once.

  • Example: Instead of testing "When to help?" and "Who to help?" in two separate studies, you can test them together in one big experiment to see how they interact.
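
A factorial sketch of that idea, with an assumed toy outcome model in which early decisions pair better with lenient cutoffs; the interaction is fabricated purely to show what such a design can detect:

```python
import random
from itertools import product

random.seed(1)

DECISION_WEEKS = [2, 4]  # factor 1: when to check for non-response
CUTOFFS = [1, 3]         # factor 2: app uses below this -> extra support
CELLS = list(product(DECISION_WEEKS, CUTOFFS))  # 2x2 factorial: 4 combinations

def toy_outcome(week, cutoff):
    """Assumed model: checking early (week 2) with the lenient cutoff (3)
    works best, i.e. the two factors interact."""
    bonus = 1.5 if (week == 2 and cutoff == 3) else 0.0
    return random.gauss(10, 2) + bonus

# Randomize each participant to one (timing, cutoff) cell, then compare cells.
results = {cell: [] for cell in CELLS}
for _ in range(2000):
    cell = random.choice(CELLS)
    results[cell].append(toy_outcome(*cell))

for (week, cutoff), ys in results.items():
    print(f"decide at week {week}, cutoff {cutoff}: "
          f"mean outcome {sum(ys)/len(ys):.2f}")
```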

The Takeaway

Building a successful health intervention is like tuning a high-performance engine. You can't just guess the settings. You have to test them.

  • Don't just guess the cutoff: Test different thresholds.
  • Don't just guess the timing: Test different decision points.
  • Don't just guess the variable: Test different ways of measuring progress.

By using Optimization Trials (the Test Track), scientists can move from "We think this rule works" to "We have proof this rule works best for this specific person at this specific time." This saves money, reduces patient burden, and ultimately saves lives.