Prediction variability in physiologically based pharmacokinetic modeling of tissue disposition under deep uncertainty

This study quantifies how deep uncertainty in parameter predictions and model assumptions drives variability in tissue-specific pharmacokinetic outcomes from physiologically based pharmacokinetic (PBPK) models, revealing substantial prediction discrepancies, particularly for lipophilic, protonated molecules.

Farahat, M., Flaherty, D., Fox, Z. R., Akpa, B. S.

Published 2026-03-29

This is an AI-generated explanation of a preprint that has not been peer-reviewed. It is not medical advice. Do not make health decisions based on this content.

Imagine you are a chef trying to invent a new recipe for a super-delicious cake. You have a list of ingredients (the chemical structure of a new drug) and a cookbook (the PBPK model) that tells you how those ingredients will behave in the oven (the human body).

The goal of this paper is to figure out: How much can we trust the cookbook's predictions when we don't have the exact ingredients in front of us, and we have to guess their properties?

Here is the story of the research, broken down into simple concepts:

1. The Setup: The "Virtual Kitchen"

In the world of drug discovery, scientists use computers to test thousands of potential medicines before ever making them in a lab. They use PBPK models (Physiologically Based Pharmacokinetic models). Think of these models as a high-tech GPS for drugs. Once you tell the GPS where the drug starts (your mouth or a vein), it predicts where the drug will go in the body, how long it will stay there, and how much of it will reach the target (like a tumor or an infected cell).

Usually, this GPS works great if you give it exact measurements of the drug (like its weight, how oily it is, how acidic it is). But in the "virtual kitchen," scientists often don't have the real drug yet. They have to use AI guesses (Machine Learning) to estimate those measurements.

2. The Problem: The "Fuzzy Compass"

The problem is that AI guesses aren't perfect. They are like a fuzzy compass.

  • If the compass says "North," it might actually be North-North-East.
  • If the compass says "10 miles," it might be 12 miles.

The researchers asked: If we feed these "fuzzy" guesses into our drug GPS, does the final destination prediction become a total mess? Or does the GPS stay reliable enough to tell us which drugs are worth testing?

3. The Experiment: Testing Four Different Maps

The researchers took four different versions of the drug GPS (four different mathematical models) and tested them against real-world data first to make sure they worked. Then, they introduced "fuzziness" (uncertainty) to the input data to simulate what happens when we rely on AI guesses.

They created 10,000 fake drugs (pseudomolecules) and ran them through these four maps, adding random errors to the inputs to see how the predictions wobbled.
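The perturbation idea can be sketched in a few lines of Python. This is a toy illustration, not the authors' actual PBPK code: `adipose_kp` is an invented stand-in for a full tissue-partitioning model, and the half-log-unit error on logP is an assumed noise level, not a figure from the paper.

```python
import random
import statistics

def adipose_kp(log_p, fu_plasma):
    """Toy adipose:plasma partition coefficient (hypothetical stand-in for
    a full PBPK tissue model): oilier drugs partition more into fat."""
    return (10 ** log_p) * fu_plasma

def perturbed_predictions(log_p, fu_plasma, log_p_sd=0.5, n=10_000, seed=1):
    """Redraw logP n times with Gaussian error, mimicking the noise in
    machine-learning property predictions, and collect the outputs."""
    rng = random.Random(seed)
    return [adipose_kp(log_p + rng.gauss(0.0, log_p_sd), fu_plasma)
            for _ in range(n)]

preds = perturbed_predictions(log_p=3.0, fu_plasma=0.1)
point = adipose_kp(3.0, 0.1)
spread = max(preds) / min(preds)
print(f"point estimate: {point:.0f}; fold-spread across samples: {spread:.0f}x")
```

Even this modest input noise turns a single point estimate into a wide distribution — the "range of possibilities" the study quantifies, here with thousands of perturbed inputs rather than one exact measurement.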

4. The Big Discovery: The "Lipophilic Proton" Trap

The results revealed a fascinating and tricky pattern:

  • Most of the time, the maps agreed. For most drugs, even with fuzzy inputs, all four models pointed to roughly the same destination.
  • But for a specific type of drug, the maps went crazy. They found a "danger zone" in the chemical world: Drugs that are both very oily (lipophilic) and very charged (protonated).

Think of these drugs like greasy magnets.

  • Because they are oily, they love to stick to fatty tissues (like adipose tissue).
  • Because they are charged, they love to stick to specific proteins in the blood.

When the models tried to predict where these "greasy magnets" would go, the four different maps started arguing with each other. One map said, "It will stay in the fat!" Another said, "It will rush to the liver!" A third said, "It will get stuck in the muscle!"

5. Why Did They Disagree? (The "Rulebook" Differences)

The researchers dug into why the maps disagreed. It turned out the models had different rulebooks for how to handle these greasy magnets:

  • Model A assumed that if a molecule is charged, it can't stick to fat.
  • Model B assumed that if a molecule is charged, it can still stick to fat, but only if it's really oily.
  • Model C had a special "calibration" step that tweaked the numbers based on past data, which made it very stable but maybe too rigid.

When the input data was fuzzy (the AI guesses were slightly off), these small differences in the rulebooks got amplified. It's like four people trying to navigate a foggy forest. If they all agree on the path, they are fine. But if one person thinks "North is left" and another thinks "North is right," and the fog is thick, they will end up in completely different places.
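The rulebook differences described above can be mimicked with two toy partition rules. Both rules are invented for illustration (the models in the paper are full tissue-composition equations); the point is only that rules which agree for most molecules can diverge sharply for oily, charged ones.

```python
def kp_rule_a(log_p, fraction_ionized):
    """Rule A (hypothetical): the charged fraction cannot enter fat at all."""
    return (1.0 - fraction_ionized) * 10 ** log_p

def kp_rule_b(log_p, fraction_ionized):
    """Rule B (hypothetical): the charged fraction still partitions into fat,
    but only if the molecule is very oily (logP above 3)."""
    ionized_term = 0.1 * fraction_ionized if log_p > 3.0 else 0.0
    return ((1.0 - fraction_ionized) + ionized_term) * 10 ** log_p

def disagreement(log_p, fi):
    """Fold-difference between the two rulebooks' fat predictions."""
    return kp_rule_b(log_p, fi) / kp_rule_a(log_p, fi)

# Not very oily, or mostly neutral: the rulebooks (nearly) agree.
print(disagreement(log_p=2.0, fi=0.95))  # rules coincide
print(disagreement(log_p=4.0, fi=0.05))  # nearly agree
# Oily AND charged (a "greasy magnet"): the rulebooks diverge ~3-fold.
print(disagreement(log_p=4.0, fi=0.95))
```

Note that a molecule only trips the divergence when it sits in both corners at once (high logP and high ionization), matching the "danger zone" the authors found. Fuzzy inputs make this worse: a noisy logP estimate can push a borderline molecule across the threshold and flip which rule fires.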

6. The Takeaway: Don't Trust a Single Map

The main lesson from this paper is a warning for drug developers:

  1. Uncertainty is real: When we use AI to guess drug properties, the final prediction isn't a single number; it's a range of possibilities.
  2. Watch out for "Greasy Magnets": If a new drug candidate is both oily and charged, different models might give you wildly different answers. You can't just pick one model and trust it blindly.
  3. Better inputs matter: To get a better prediction, we don't just need better maps (models); we need better compasses (more accurate AI predictions for things like how oily or charged the drug is).

The Bottom Line

This study is like a safety check for the future of drug discovery. It tells us that while our computer models are powerful, they have blind spots. If we are designing a drug that is a "greasy magnet," we need to be extra careful, run multiple models, and understand that our predictions might have a bigger margin of error than we thought. It's a call to be humble about our predictions and to keep improving the tools we use to guess the properties of new medicines.
