How 'Micro' is Microperimetry? - Characterizing the Effect of Fundus Tracking on the Psychometric Function

This study demonstrates that fundus tracking in microperimetry improves psychometric precision, chiefly by reducing false positives and steepening the psychometric function at non-seeing retinal loci, and it supports a defensible suprathreshold criterion intensity of 10 to 13 dB for distinguishing seeing from non-seeing retina.

Lipsky, T., Ehrenzeller, C., Ansari, G., Pfau, K., Harmening, W., Wu, Z., Pfau, M.

Published 2026-03-27

This is an AI-generated explanation of a preprint that has not been peer-reviewed. It is not medical advice. Do not make health decisions based on this content.

Imagine your retina (the back of your eye) is a giant, high-resolution map of a city. Microperimetry is like a drone flying over this city, dropping little "flashlights" (stimuli) at specific addresses to see if the people living there can see the light.

The big question this paper asks is: How steady is the drone's hand?

In the real world, our eyes are never perfectly still. They jitter, drift, and make tiny, involuntary movements (like a shaky hand holding a camera). Microperimetry machines have a feature called "Fundus Tracking," which is like a gimbal on a camera that tries to keep the flashlight perfectly locked on the target, even if your eye moves.

This study wanted to know: Does turning on the "gimbal" (tracking) actually make a difference, or is the drone steady enough on its own? And, if we just want to know "Can you see this light or not?" (a quick check rather than a detailed measurement), what brightness should we use to get the most accurate answer?

Here is the breakdown of their findings using some everyday analogies:

1. The Experiment: The "Blind Spot" Test

The researchers used healthy volunteers and tested two types of locations:

  • The "Seeing" Neighborhood: Areas of the retina that work perfectly.
  • The "Blind Spot" (The Dark Alley): A natural area in everyone's eye where there are no light sensors (the optic nerve head). It's a guaranteed "dark zone."

They tested these spots with the tracking ON and OFF.

2. The Findings: The "Shaky Hand" Effect

In the "Seeing" Neighborhood:
When the eye was looking at a place that should see the light, the tracking didn't change much. Whether the drone was locked on or just hovering, the people said, "Yes, I see it!"

  • Analogy: If you are shining a flashlight on a bright, white wall, it doesn't matter if your hand shakes a little; the wall is still bright.

In the "Blind Spot" (The Dark Alley):
This is where it got interesting. When the drone was supposed to drop a light in the "Dark Alley" (where no one should see anything), the results changed based on the tracking.

  • Tracking OFF: The drone's hand shook. Sometimes, the flashlight accidentally drifted out of the dark alley and landed on the "seeing" neighborhood next door. The volunteer would say, "I see it!" even though the light was supposed to be in the blind spot. This is a False Positive.
  • Tracking ON: The drone stayed locked. The light stayed in the dark alley. The volunteer correctly said, "I don't see it."
  • Analogy: Imagine trying to drop a coin into a tiny hole in a table. If your hand is shaky (no tracking), you might miss the hole and hit the table surface (the seeing retina). If you use a guide rail (tracking), the coin goes straight into the hole.
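The coin-and-guide-rail picture can be turned into a tiny Monte Carlo sketch. Everything here (the Gaussian jitter model, the jitter magnitudes, the blind-spot radius) is an illustrative assumption, not the paper's data:

```python
import math
import random

def false_positive_rate(jitter_sd_deg, blindspot_radius_deg=2.5,
                        n_trials=100_000, seed=0):
    """Monte Carlo sketch: fraction of flashes aimed at the centre of the
    blind spot that land on seeing retina because of fixational jitter.
    All parameters (jitter SD, blind-spot radius) are made up for illustration."""
    rng = random.Random(seed)
    escaped = 0
    for _ in range(n_trials):
        # 2-D Gaussian jitter displaces the stimulus relative to the retina
        dx = rng.gauss(0.0, jitter_sd_deg)
        dy = rng.gauss(0.0, jitter_sd_deg)
        if math.hypot(dx, dy) > blindspot_radius_deg:
            escaped += 1  # landed on seeing retina -> false "I see it!"
    return escaped / n_trials

# More residual jitter (tracking OFF) -> more flashes escape the blind spot
print(false_positive_rate(jitter_sd_deg=0.3))  # tracking ON: tiny escape rate
print(false_positive_rate(jitter_sd_deg=1.5))  # tracking OFF: substantial escape rate
```

The qualitative point survives any reasonable choice of numbers: false positives at a non-seeing locus grow with the uncorrected eye-movement error, which is exactly what tracking suppresses.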

The "Slope" of the Curve:
The researchers also looked at how "sharp" the transition was between seeing and not seeing.

  • Without tracking: The transition was "fuzzy." It was hard to tell exactly where the light stopped being visible.
  • With tracking: The transition became "crisp" and steep.
  • Analogy: Without tracking, the edge of a shadow looks blurry and gray. With tracking, the edge of the shadow is a sharp, black-and-white line.
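That "fuzziness" is what the slope of a psychometric function measures. A minimal sketch, assuming a standard logistic model (not necessarily the exact model fitted in the paper); note that in perimetry a higher dB value means a dimmer stimulus:

```python
import math

def p_seen(intensity_db, threshold_db, slope):
    """Logistic psychometric function sketch. In perimetry, dB measures
    attenuation (higher dB = dimmer), so the chance of seeing the flash
    falls as intensity_db rises past the threshold."""
    return 1.0 / (1.0 + math.exp(-slope * (threshold_db - intensity_db)))

# Width of the "fuzzy zone" (25-75% seen) shrinks as the slope steepens
for slope in (0.5, 2.0):  # shallow (like tracking OFF) vs steep (like tracking ON)
    fuzzy = [db for db in range(0, 41)
             if 0.25 < p_seen(db, threshold_db=25, slope=slope) < 0.75]
    print(f"slope {slope}: fuzzy zone = {fuzzy}")
```

With the shallow slope the ambiguous zone spans several dB around the threshold; with the steep slope it collapses to roughly a single step, which is the "crisp shadow edge" of the analogy.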

3. The Big Question: How Bright Should the Flashlight Be?

In clinical practice, doctors often want to do a quick "defect map" to see if a patient has a blind spot. They don't want to do a long, detailed test. They just want to ask: "Can you see this light?"

To do this, they need to pick a specific brightness, called the criterion intensity. (In microperimetry, intensity is given in decibels of attenuation: a higher dB number means a dimmer stimulus, so a 10 dB flash is brighter than a 13 dB one.)

  • Too dim: Weakened-but-seeing areas may miss the flash, making you think a problem exists when it doesn't.
  • Too bright: Damaged areas that retain a little function may still see the flash, making you miss a real problem.

The researchers ran simulations to find the "Goldilocks" brightness. They found that a criterion of 13 decibels (dB) was optimal for separating "seeing" from "not seeing" in healthy eyes.

  • However, they noted that in real-world diseases (like macular degeneration), overall sensitivity is reduced, so even retina that still sees might miss a 13 dB flash. They therefore concluded that 10 dB (a slightly brighter stimulus) is a safe, conservative choice for practice: close enough to the optimal 13 dB to stay accurate, but bright enough that weakened-but-seeing retina is not falsely flagged as blind.
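The "Goldilocks" search can be sketched as picking the criterion that maximizes expected classification accuracy. The threshold distributions below (seeing loci near 26 dB, scotoma residuals near 0 dB, both with a 3 dB spread, plus an invented slope and false-positive rate) are made up so that the midpoint happens to land at 13 dB; the paper derived its value from measured psychometric functions, not from these numbers:

```python
import math

def p_seen(criterion_db, threshold_db, slope=1.0, fp=0.02):
    """Chance a flash at criterion_db is reported seen at a locus with the
    given threshold. Higher dB = dimmer; fp is a small false-positive rate.
    Slope and fp values are invented for illustration."""
    return fp + (1 - fp) / (1 + math.exp(-slope * (threshold_db - criterion_db)))

def gauss_grid(mean, sd, half_width=4.0, n=81):
    """Discrete (value, weight) grid approximating a Gaussian distribution."""
    xs = [mean + sd * (-half_width + 2 * half_width * i / (n - 1)) for i in range(n)]
    ws = [math.exp(-0.5 * ((x - mean) / sd) ** 2) for x in xs]
    total = sum(ws)
    return [(x, w / total) for x, w in zip(xs, ws)]

def expected_accuracy(criterion_db, seeing_mean=26.0, scotoma_mean=0.0, sd=3.0):
    """Expected fraction of loci classified correctly by a single flash:
    'seen' -> seeing retina, 'not seen' -> defect. The two hypothetical
    threshold distributions only illustrate the trade-off."""
    hit = sum(w * p_seen(criterion_db, t) for t, w in gauss_grid(seeing_mean, sd))
    miss = sum(w * (1 - p_seen(criterion_db, t)) for t, w in gauss_grid(scotoma_mean, sd))
    return 0.5 * (hit + miss)

# Search all integer criteria for the best separator
best = max(range(0, 27), key=expected_accuracy)
print(best)
```

With these made-up distributions the optimum falls exactly midway between them (13 dB). Shifting the seeing distribution downward, as in diseased eyes, pushes the optimum of this toy model toward brighter (lower-dB) criteria, which is the intuition behind the conservative 10 dB recommendation.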

The Takeaway

  1. Tracking Matters: Even in healthy eyes with good focus, turning on the "tracking" feature makes the test more precise. It stops the "flashlight" from accidentally drifting into the wrong neighborhood, which prevents false alarms.
  2. The "Sweet Spot" Brightness: For quick checks to map out blind spots, using a light intensity of 10 to 13 dB is the scientifically backed "sweet spot." It's bright enough to be seen by healthy eyes but dim enough to be invisible to truly damaged areas, giving the most accurate map of the eye's health.

In short: Microperimetry is "micro" enough to be precise, but only if you use the tracking feature to keep your hand steady, and you pick the right brightness to ask the question.
