Physics-Informed Self-Supervised Generative Model for 3D Localization Microscopy

This paper proposes a physics-informed, self-supervised generative model that bridges the simulation-to-experiment gap in 3D localization microscopy. By training directly on unlabeled experimental data, it produces high-fidelity, fully labeled synthetic images, significantly improving the performance of supervised localization networks in complex, low signal-to-noise scenarios.

Goldenberg, O., Daniel, T., Xiao, D., Shalev Ezra, Y., Shechtman, Y.

Published 2026-03-30

This is an AI-generated explanation of a preprint that has not been peer-reviewed. It is not medical advice. Do not make health decisions based on this content.

Imagine you are trying to find tiny, glowing fireflies in a dense, foggy forest at night. This is essentially what scientists do in Localization Microscopy. They want to pinpoint the exact location of individual molecules (the fireflies) inside a cell to see how life works at a microscopic level.

However, there's a catch: the "fog" (background noise) and the way the camera blurs the light (the physics of the lens) make it incredibly hard to tell exactly where each firefly is.

The Problem: The "Fake Forest" Trap

To teach computers (AI) how to find these fireflies, scientists usually create a simulated forest on a computer. They program the AI with rules about how light behaves and what the background looks like.

The problem? The simulation is never perfect.
It's like trying to teach someone to drive a car in a video game, but the video game doesn't have potholes, sudden rain, or weird wind gusts. When that driver gets on a real road, they crash because the real world is messier than the game.

In science, this is called the "Simulation-to-Experiment Gap." The AI learns on fake data, but when it sees real microscope images, it gets confused by the messy, unmodeled noise and fails to find the molecules accurately.

The Solution: PILPEL (The "Smart Copycat")

The authors of this paper created a new AI system called PILPEL (Physics-Informed Latent Particles for Emitter Localization). Instead of building a fake forest, PILPEL learns directly from the real forest.

Here is how it works, using a simple analogy:

1. The "Deconstruction" Phase

Imagine you have a messy photo of a crowded party. You want to teach a robot to find specific people.

  • Old Way: You give the robot a textbook on how people look and tell it to guess.
  • PILPEL's Way: You show the robot the actual messy party photo. The robot is smart enough to say, "Okay, I see a person here, a table there, and some weird lighting effects." It learns to separate the people (the molecules) from the background noise (the party chaos).

2. The "Physics" Anchor

The secret sauce is that PILPEL isn't just guessing; it has a rulebook built into its brain. It knows the laws of physics regarding how light bends through a microscope lens (called the Point Spread Function or PSF).

  • Think of the PSF as the specific "fingerprint" of a firefly's light. Even if the firefly is deep in the forest (3D space), its light blurs in a specific, predictable pattern.
  • PILPEL uses this rulebook to ensure that when it identifies a "person," it knows exactly where they are in 3D space, even if they are hidden in the fog.
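To make the "fingerprint" idea concrete, here is a toy sketch of how depth can be encoded in the shape of an emitter's blur. It uses an astigmatism-style Gaussian whose x- and y-widths change oppositely with z; this is an illustrative stand-in, not the paper's actual PSF model, and all parameter names and values are assumptions:

```python
import numpy as np

def render_emitter(x0, y0, z0, size=32, sigma0=1.3, gamma=0.4, dz=0.6):
    """Toy PSF: render one emitter's blur pattern on a pixel grid.

    The x- and y-widths grow differently as the emitter moves away
    from focus, so the blur *shape* encodes the 3D position z0.
    (Illustrative only; real engineered PSFs are more complex.)
    """
    yy, xx = np.mgrid[0:size, 0:size]
    sx = sigma0 * np.sqrt(1.0 + ((z0 - gamma) / dz) ** 2)
    sy = sigma0 * np.sqrt(1.0 + ((z0 + gamma) / dz) ** 2)
    psf = np.exp(-((xx - x0) ** 2 / (2 * sx ** 2)
                   + (yy - y0) ** 2 / (2 * sy ** 2)))
    return psf / psf.sum()  # normalize so total light sums to 1
```

Because the blur widens with depth, an emitter far from focus spreads the same light over more pixels, which is exactly why low signal-to-noise makes deep emitters so hard to localize.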

3. The "Super-Generator"

Once PILPEL has learned from the real, messy photos, it becomes a master forger.

  • It takes the "rules" it learned (how the background looks, how the noise behaves) and combines them with the "physics rules" (how the light blurs).
  • It then generates brand new, perfect training photos.
  • Crucially, because it built these photos from scratch using the physics rules, it knows the exact, perfect location of every single "firefly" in the new image.
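The generator step above can be sketched with a toy forward model: sample random emitter positions, render each through a simple Gaussian PSF, add a background and shot noise, and keep the exact positions as labels. This is an illustrative stand-in, not PILPEL's learned model (which learns the background and noise statistics from real data); every name and value here is an assumption:

```python
import numpy as np

def make_training_image(n_emitters=5, size=64, sigma=1.5, bg=10.0, seed=0):
    """Generate one synthetic frame plus perfect ground-truth labels."""
    rng = np.random.default_rng(seed)
    # Ground truth: random sub-pixel positions and brightnesses
    xs = rng.uniform(5, size - 5, n_emitters)
    ys = rng.uniform(5, size - 5, n_emitters)
    photons = rng.uniform(500, 2000, n_emitters)

    yy, xx = np.mgrid[0:size, 0:size]
    clean = np.full((size, size), bg)  # flat background (toy assumption)
    for x0, y0, n in zip(xs, ys, photons):
        psf = np.exp(-((xx - x0) ** 2 + (yy - y0) ** 2) / (2 * sigma ** 2))
        clean += n * psf / (2 * np.pi * sigma ** 2)  # normalized Gaussian PSF

    # Shot noise: photon counting follows a Poisson distribution
    noisy = rng.poisson(clean).astype(float)
    labels = list(zip(xs, ys))  # exact positions, known by construction
    return noisy, labels
```

The key point is in the last two lines: the labels cost nothing because the image was built *from* them, which is what lets the downstream localization network train on realistic frames with perfect ground truth.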

Why This is a Game-Changer

Now, the scientists take these perfectly labeled, realistic photos generated by PILPEL and use them to train the main AI (the one that actually finds the molecules in real experiments).

  • The Result: The main AI is no longer trained on a boring video game. It is trained on a "simulated reality" that looks and feels exactly like the real world, complete with all the messy background noise and weird lighting.
  • The Outcome: When this AI goes back to the real microscope, it doesn't crash. It finds 4 to 5 times more molecules than before, even in the darkest, noisiest parts of the cell.

The Bottom Line

Before this paper, scientists had to spend days manually tweaking computer simulations to try to make them look like real life, often failing to capture the true messiness of biology.

PILPEL skips the manual tweaking. It looks at the real mess, learns the rules of the game, and then creates a perfect training ground for the AI. It bridges the gap between "what we think the world looks like" and "what the world actually looks like," allowing us to see the tiny details of life with unprecedented clarity.
