Through BrokenEyes: How Do Eye Disorders Impact Face Detection?

This paper presents a computational framework using the BrokenEyes system to simulate five common eye disorders and analyze their specific impacts on deep learning feature representations, revealing critical disruptions in models trained under degraded visual conditions.

Prottay Kumar Adhikary

Published 2026-02-27

Imagine your brain is a super-smart security guard at a club, and its job is to look at people walking by and decide, "That's a human!" or "That's a statue!" This security guard is very good at its job when the lights are bright and the view is clear.

But what happens if the security guard puts on different kinds of bad glasses? Or if the camera lens they are looking through gets foggy, scratched, or covered in spots?

This paper, titled "Through BrokenEyes," describes exactly that experiment: the researchers built a computer program that simulates five common eye problems and measured how each one disrupts a computer's ability to recognize faces.

Here is the breakdown of their experiment in simple terms:

1. The "BrokenEyes" Filter (The Bad Glasses)

The researchers created a special digital tool called BrokenEyes. Think of this as a set of five different "bad glasses" they put over every photo in their database. Each pair of glasses mimics a specific real-world eye disease (a rough code sketch of these filters follows the list):

  • The Cataract Glasses (The Foggy Window): Imagine looking through a window covered in thick fog. You can see shapes, but everything is blurry and washed out. This simulates cataracts, where the eye's lens gets cloudy.
  • The Glaucoma Glasses (The Tunnel Vision): Imagine looking through a long, narrow pipe. You can see what's right in front of you, but everything on the sides is pitch black. This simulates glaucoma, which eats away at your side vision.
  • The AMD Glasses (The Black Hole in the Middle): Imagine a dark, blurry spot right in the center of your vision, like a smudge on a camera lens that you can't wipe off. This simulates Age-related Macular Degeneration, which ruins your ability to see details in the center.
  • The Refractive Error Glasses (The Out-of-Focus Camera): Imagine your camera lens is just slightly out of focus. The whole picture is a bit fuzzy, but you can still make out shapes. This simulates nearsightedness or farsightedness.
  • The Retinopathy Glasses (The Spotted Lens): Imagine looking through a window with random black spots or dirt scattered all over it. This simulates Diabetic Retinopathy, where damage to the retina creates blind spots.
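The paper's actual filter implementations aren't reproduced here, but a minimal sketch of how three of these degradations might be simulated (using Pillow and NumPy, with illustrative parameter values that are my own guesses, not the paper's settings) looks something like this:

```python
# Rough approximations of three BrokenEyes-style degradations.
# Parameter values are illustrative guesses, not the paper's settings.
import numpy as np
from PIL import Image, ImageFilter

def cataract(img: Image.Image, blur: int = 6, fog: float = 0.4) -> Image.Image:
    """Foggy window: heavy blur plus a washed-out white haze."""
    hazy = img.filter(ImageFilter.GaussianBlur(blur))
    white = Image.new("RGB", img.size, (255, 255, 255))
    return Image.blend(hazy, white, fog)

def glaucoma(img: Image.Image, tunnel: float = 0.35) -> Image.Image:
    """Tunnel vision: keep a central disc, black out the periphery."""
    arr = np.asarray(img).astype(float)
    h, w = arr.shape[:2]
    yy, xx = np.ogrid[:h, :w]
    dist = np.hypot(yy - h / 2, xx - w / 2)
    mask = (dist < tunnel * min(h, w)).astype(float)[..., None]
    return Image.fromarray((arr * mask).astype(np.uint8))

def refractive_error(img: Image.Image, blur: int = 2) -> Image.Image:
    """Out-of-focus camera: mild uniform blur over the whole frame."""
    return img.filter(ImageFilter.GaussianBlur(blur))
```

The AMD and retinopathy filters would follow the same masking pattern: a dark blurry blob in the center for AMD, and scattered random dark spots for retinopathy.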

2. The Experiment (The Security Guard Training)

The researchers took thousands of photos of people and non-people (like cars or trees) and ran them through these "BrokenEyes" filters.

Then, they trained a computer brain (a neural network called ResNet18) to play a game: "Is this a human or not?"

  • First, they trained it on clear photos (Normal Vision). The computer got it right 100% of the time. It was a perfect security guard.
  • Next, they trained separate versions of the computer on each of the five blurry or distorted photo sets (a sketch of this training setup follows below).
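This summary doesn't give the training recipe, but the basic setup — one fresh ResNet18 per vision condition, trained as a binary human/not-human classifier — might look like this in PyTorch (the optimizer, learning rate, and loss below are assumptions):

```python
# One "security guard" per vision condition: a ResNet18 trained from scratch
# to answer "human or not?" on images passed through one BrokenEyes filter.
# Optimizer, learning rate, and loss are assumed, not taken from the paper.
import torch
import torch.nn as nn
from torchvision import models

def make_guard() -> nn.Module:
    model = models.resnet18(weights=None)           # untrained backbone
    model.fc = nn.Linear(model.fc.in_features, 2)   # two classes: human / not human
    return model

def train_epoch(model, loader, optimizer, criterion=nn.CrossEntropyLoss()):
    model.train()
    for images, labels in loader:   # images already run through one filter
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()

# Six guards get trained this way: one on clean photos, one per disorder.
```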

3. The Findings (How the Computer Got Confused)

The big question was: How did the computer's "brain" change when the input was bad?

They didn't just look at whether the computer got the answer right or wrong. They looked inside the computer's brain to see how it was "thinking" about the images, using two main tools (sketched in code after this list):

  • Cosine Similarity (The "Vibe Check"): This measures how much the computer's internal thoughts about a blurry face look like its thoughts about a clear face.

    • High Score: The computer is thinking almost the same way as usual.
    • Low Score: The computer is completely confused; its internal map of the face is totally different.
  • Activation Energy (The "Effort Meter"): This measures how hard the computer is working.

    • High Energy: The computer is screaming, "I'm trying so hard to find the face!" (It's working overtime).
    • Low Energy: The computer is giving up or not seeing enough to get excited.
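In code, both probes reduce to a few lines over the network's internal features. The sketch below is my own: the paper's exact layer choice and its precise definition of "activation energy" may differ (here it's the mean absolute feature activation of the penultimate layer):

```python
# Two probes into a trained model's "thoughts". get_features() strips the final
# classification layer, exposing the 512-dim feature vector ResNet18 builds.
import torch
import torch.nn.functional as F

def get_features(model, images):
    backbone = torch.nn.Sequential(*list(model.children())[:-1])
    with torch.no_grad():
        return backbone(images).flatten(1)   # shape (batch, 512) for ResNet18

def vibe_check(model, clear_imgs, degraded_imgs):
    """Cosine similarity: do degraded images evoke the same internal map?"""
    a = get_features(model, clear_imgs)
    b = get_features(model, degraded_imgs)
    return F.cosine_similarity(a, b, dim=1).mean().item()   # 1.0 = identical

def effort_meter(model, images):
    """'Activation energy' (assumed definition): mean activation magnitude."""
    return get_features(model, images).abs().mean().item()
```

A high vibe_check score with ordinary effort means the model shrugs off the distortion; a low score paired with high effort is the cataract pattern described in the results below.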

The Results:

  • The Worst Offenders: Cataracts and Glaucoma caused the biggest chaos.
    • The computer's "vibe check" score dropped massively. It was like the security guard was looking at a completely different person.
    • The "Foggy Window" (Cataracts) made the computer work the hardest (highest energy) because it was trying to find edges that were gone.
    • The "Tunnel Vision" (Glaucoma) confused the computer the most because it lost the context of the whole picture.
  • The Survivors: Refractive Errors (fuzzy vision) and Retinopathy (spotted vision) didn't mess things up as much. The computer could still figure out the face, even if the picture wasn't perfect. It was like the security guard squinting a bit but still recognizing the person.

4. Why Does This Matter?

This isn't just about computers; it's about understanding us.

The computer's brain is designed to work somewhat like the human brain. When the computer struggled with the "Foggy Window" (Cataracts) or "Tunnel Vision" (Glaucoma), it mirrored what happens to humans with those conditions. It showed us that when we lose specific parts of our vision, our brains (and computers) have to completely reorganize how they process information.

The Takeaway:
This study gives us a "digital twin" to test how eye diseases affect our ability to see the world. By understanding exactly how these disorders break the system, we can build better AI assistants for the future. Imagine an app for someone with glaucoma that knows exactly what parts of the image are missing and helps fill in the gaps, making the world easier to navigate.

In short: If you want to build a robot that can see like a human with bad eyes, you first have to teach it what "bad eyes" actually feel like. This paper taught the robot how to see through broken eyes.
