Optimising supervised machine learning algorithms predicting cigarette cravings and lapses for a smoking cessation just-in-time adaptive intervention (JITAI)

This study found that while machine learning models can detect smoking lapse risks, their overall performance is modest and varies widely between individuals. Reducing assessment frequency or simplifying predictors did not consistently improve outcomes, suggesting that such algorithms are best used in combination with rules-based approaches rather than as standalone solutions for just-in-time adaptive interventions.

Leppin, C., Brown, J., Garnett, C., Kale, D., Okpako, T., Simons, D., Perski, O.

Published 2026-02-27

This is an AI-generated explanation of a preprint that has not been peer-reviewed. It is not medical advice; do not make health decisions based on this content.

Imagine you are trying to help a friend quit smoking. You know that the hardest part isn't the physical addiction, but the sudden, overwhelming urge to smoke that hits at random moments—like when they are stressed, bored, or hanging out with friends.

This study is like a digital safety net experiment. The researchers wanted to build a smartphone app that could predict exactly when these "danger zones" (cravings or slips) would happen, so it could send a helpful message just in time to stop the person from lighting up.

However, there's a catch: if the app asks your friend too many questions (like "How are you feeling?" every hour), they will get annoyed and stop using it. But if it asks too few questions, the app might be too dumb to predict the danger.

The researchers tested different ways to build this "smart predictor" to find the perfect balance between being helpful and not being annoying. Here is how they did it, using some simple analogies:

1. The "Weather Forecast" Analogy (Predicting the Outcome)

The researchers tried to predict two things:

  • The "Lapse": Did your friend actually smoke a cigarette? (This is like predicting if it will rain).
  • The "Craving": Did your friend feel a strong urge to smoke, even if they didn't? (This is like predicting if the sky looks stormy).

The Finding: It was much easier to predict if it would rain (a lapse) than to predict if the sky looked stormy (a craving). Cravings are like mood swings—they change too fast and are too personal for a computer to guess perfectly.
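
For the technically curious: "how well it predicts rain" is usually scored with the area under the ROC curve (AUROC) — the chance that a randomly chosen risky moment gets a higher risk score than a randomly chosen safe one (1.0 is perfect, 0.5 is a coin flip). Here is a minimal, self-contained sketch; the risk scores below are made up to illustrate the lapse-vs-craving gap, not taken from the study:

```python
def auroc(labels, scores):
    """Probability that a random positive case outranks a random negative one."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical risk scores: the lapse scores separate the classes cleanly,
# while the craving scores are only a little better than chance.
lapse_auc = auroc([0, 0, 0, 1, 1], [0.1, 0.2, 0.4, 0.7, 0.9])
craving_auc = auroc([0, 0, 0, 1, 1], [0.3, 0.6, 0.5, 0.4, 0.7])
```

The same ranking logic underlies library implementations such as scikit-learn's `roc_auc_score`; the hand-rolled version just makes the "who outranks whom" intuition visible.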

2. The "Camera Frequency" Analogy (How often to ask questions)

The team tested asking questions at different speeds:

  • High Speed: Asking 16 times a day (every waking hour).
  • Low Speed: Asking only 3 times a day.

The Surprise: You might think that asking more questions makes the computer smarter. But for predicting actual slips (lapses), asking fewer questions actually worked better!

  • Why? Think of time-lapse photography. A photo every second buries the story in thousands of near-identical frames, while a photo every few minutes makes the real changes stand out. In the same way, less frequent answers seemed to give the algorithm a clearer signal of what actually led up to a slip.
  • For cravings, however, asking fewer questions made the app dumber. You need frequent updates to catch a fleeting feeling like a craving.
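
The frequency trade-off can be mimicked by downsampling: collapsing many hourly answers into a few coarser daily summaries. A toy sketch — the equal-window averaging here is an assumption for illustration, not the preprint's actual resampling method:

```python
def downsample(prompts, per_day=3):
    """Collapse frequent EMA answers into `per_day` daily summaries by
    averaging equal-sized windows (any trailing remainder is dropped)."""
    size = len(prompts) // per_day
    return [sum(prompts[i * size:(i + 1) * size]) / size
            for i in range(per_day)]

# 16 hypothetical hourly yes/no answers (1 = "craving right now")...
hourly = [0, 0, 1, 0, 1, 1, 0, 0, 0, 1, 1, 1, 0, 0, 1, 1]
# ...reduced to 3 coarser summaries per day.
daily3 = downsample(hourly)
```

The coarse summaries lose the minute-to-minute wobble, which is exactly why they can help for slow-building outcomes (lapses) and hurt for fleeting ones (cravings).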

3. The "Personal Trainer" Analogy (Using Personal Data)

The researchers tested two types of "trainers":

  • The Group Trainer: A coach who knows what works for everyone based on data from 37 people.
  • The Personal Trainer: A coach who spends the first few days watching just your friend to learn their specific habits before making predictions.

The Finding:

  • For predicting lapses, the "Personal Trainer" (using your friend's own data) did a slightly better job at catching the specific moments they slipped.
  • For predicting cravings, the "Personal Trainer" actually did worse. It seems that trying to learn a specific person's habits from just a few days of data confused the algorithm. The "Group Trainer" (general rules) was actually more reliable.
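
A stripped-down way to picture the two trainers: the group model pools everyone's history into one estimate, while the personal model estimates from one person's first few days alone — and a tiny warm-up sample makes that estimate noisy, which is one plausible reason personalisation hurt craving prediction. All numbers and the `warmup` parameter below are illustrative:

```python
def group_rate(all_records):
    """'Group trainer': one lapse rate pooled across every participant."""
    flat = [y for person in all_records for y in person]
    return sum(flat) / len(flat)

def personal_rate(person_records, warmup=4):
    """'Personal trainer': a rate estimated from one person's first
    `warmup` days only — precise habits, but a very small sample."""
    head = person_records[:warmup]
    return sum(head) / len(head)

# Hypothetical daily lapse records (1 = lapsed that day) for three people.
all_people = [[0, 0, 1, 0, 1],
              [1, 1, 0, 1, 1],
              [0, 0, 0, 0, 1]]
pooled = group_rate(all_people)        # stable, but ignores individuality
mine = personal_rate(all_people[0])    # individual, but built on 4 days
```

Real models predict from many features rather than a single base rate, but the bias-variance trade-off is the same: the pooled estimate is stable, the personal one is tailored but jumpy.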

4. The "Backpack" Analogy (How many variables to check)

The app could check 34 different things (mood, location, time of day, pain levels, etc.). The researchers asked: "Do we need to carry all 34 items in our backpack, or can we just carry the top 15?"

The Finding: They could throw away most of the heavy items! The algorithm performed almost the same with just a few key questions (like "Are you stressed?" or "Do you have a cigarette nearby?") as it did with the full list. This is great news because it means the app can be much simpler and less annoying for the user.
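
One common way to "lighten the backpack" is to rank the candidate predictors by how strongly each one tracks the outcome and keep only the top few. The sketch below uses a simple correlation ranking as a stand-in; the feature names and data are invented, and the preprint's own selection method may well differ:

```python
def top_k_features(X, y, k):
    """Rank predictors by absolute correlation with the outcome
    and keep the k strongest."""
    def corr(col):
        n = len(col)
        mx, my = sum(col) / n, sum(y) / n
        cov = sum((a - mx) * (b - my) for a, b in zip(col, y))
        sx = sum((a - mx) ** 2 for a in col) ** 0.5
        sy = sum((b - my) ** 2 for b in y) ** 0.5
        return cov / (sx * sy) if sx and sy else 0.0
    scores = {name: abs(corr(col)) for name, col in X.items()}
    return sorted(scores, key=scores.get, reverse=True)[:k]

# Hypothetical mini-dataset: four check-ins, three candidate predictors.
X = {
    "stressed":    [0, 0, 1, 1],   # tracks the outcome perfectly here
    "hour_of_day": [1, 2, 3, 4],   # tracks it fairly well
    "coin_flip":   [1, 0, 1, 0],   # pure noise
}
y = [0, 0, 1, 1]                   # 1 = lapsed at that check-in
keep = top_k_features(X, y, 2)
```

Library routines such as scikit-learn's `SelectKBest` do the same job with more robust scoring functions; the point is only that a handful of strong predictors can carry nearly all the signal.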

The Bottom Line: What Does This Mean for You?

The study found that while computers are getting better at predicting smoking slips, they aren't perfect crystal balls yet.

  • The Good News: We can build a simpler app that asks fewer questions (3 to 5 a day) and still catches most "slip" moments.
  • The Bad News: The app isn't perfect. It will sometimes send a warning when you aren't in danger (a false alarm), or miss a danger when you are.
  • The Solution: Don't rely on the computer alone. The best approach is a hybrid: Use the computer to give a "heads up" based on general rules, but combine it with human wisdom and simple rules (like "If you feel stressed, take a walk").
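
The hybrid idea can be expressed as a single decision rule: send a supportive nudge when either the model's risk estimate crosses a threshold or a simple if-then rule fires, so a model miss can still be caught by the rule. Everything here (the rule inputs and the threshold) is illustrative, not taken from the preprint:

```python
def should_nudge(model_risk, stressed, cigarettes_nearby, threshold=0.6):
    """Hybrid trigger: fire on the algorithm's risk score OR on
    simple rules-based red flags."""
    rule_fires = stressed or cigarettes_nearby
    return model_risk >= threshold or rule_fires

# The model alone flags a high-risk moment...
should_nudge(0.8, stressed=False, cigarettes_nearby=False)
# ...and a rule catches a risky moment the model scored as safe.
should_nudge(0.2, stressed=True, cigarettes_nearby=False)
```

The OR-combination trades some extra false alarms for fewer missed dangers — a reasonable bargain when the cost of a nudge is low and the cost of a missed lapse is high.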

In short: The researchers found a way to make a "smart safety net" that isn't too heavy to carry, but they warn us not to trust it 100%. It's a helpful tool, but it needs a human hand to guide it.
