High-resolution long-range 3D single-photon imaging with a compact SPAD array

This paper presents a high-resolution long-range 3D single-photon imaging system that combines a digital micromirror device with a compact 64×64 SPAD array to achieve 256×256 effective spatial resolution at a distance of 670 meters under photon-starved conditions.

Original authors: Zunwang Bo, Chenjin Deng, Fei Wang, Wenlin Gong, Yuanhao Su, Yichen Zhang, Mingliang Chen, Chunfang Wang, Shensheng Han

Published 2026-04-13 · Author reviewed

This is an AI-generated explanation of the paper below. It is not written by the authors. For technical accuracy, refer to the original paper.

Imagine you are trying to take a high-definition photo of a distant building at night, but you only have a very dim flashlight and a camera with a tiny, low-resolution sensor (a 64×64 grid of pixels). Normally, your photo would be a blurry, unrecognizable blob. This is the challenge scientists face with photon-starved imaging: trying to see faraway objects when almost no light bounces back to your detector.

This paper describes a clever new way to solve this problem using a "smart camera" trick. Here is how it works, explained simply:

The Problem: The "Tiny Window" Limit

Think of the camera's sensor (the SPAD array) as a tiny window with a 64×64 grid of little panes of glass (just 4,096 in all). If you look through this window at a massive TV tower 670 meters away, you can only see a tiny, blurry piece of it. You can't see details like railings or steel beams because your "window" isn't fine-grained enough to capture the whole picture clearly.

Usually, to get a better picture, you'd need a giant window (a massive sensor with thousands of pixels). But building a giant sensor that can also measure time (to calculate distance) is incredibly expensive, heavy, and power-hungry.

The Solution: The "Digital Shutter" Trick

The researchers didn't build a bigger window. Instead, they added a Digital Micromirror Device (DMD) in front of their tiny window. Think of the DMD as a super-fast, programmable shutter made of thousands of tiny mirrors.

Here is the step-by-step analogy:

  1. The Flash: They shine a laser at the target. The light bounces back, but it's very weak (like a whisper).
  2. The Shutter Dance: Before the light hits the tiny 64×64 sensor, it hits the DMD. The DMD rapidly flips its tiny mirrors on and off in specific patterns.
    • Imagine the DMD is a giant screen showing a series of complex puzzles.
    • Instead of letting the whole image through at once, the DMD blocks parts of the image and lets other parts through in a specific code.
  3. The Smart Catch: The light that passes through the DMD hits the tiny 64×64 sensor. Because the sensor is "smart" (it records when each photon arrives), it doesn't just see a blurry blob; it sees a coded message.
  4. The Puzzle Solver: A computer takes all these coded messages from the 64×64 pixels and solves a giant puzzle. It figures out, "Okay, based on how the light was blocked and the timing, the railing must be here, and the steel beam must be there."
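The steps above can be sketched in code. This is a toy NumPy illustration of the general idea (coded masks + a coarse sensor + a linear solve), not the authors' actual algorithm: the sizes are shrunk (an 8×8 scene on a 2×2 "sensor" instead of 256×256 on 64×64), the Hadamard-derived binary masks are my illustrative choice, and real reconstructions must also cope with photon noise.

```python
import numpy as np

# Toy version of the idea: a coarse sensor plus coded DMD masks recovers
# a finer image. Sizes are illustrative only (the real system maps a
# 256x256 DMD onto a 64x64 SPAD, i.e. 4x4 sub-pixels per sensor pixel).
HI, LO = 8, 2
B = HI // LO                 # sub-pixels per sensor pixel along one axis
N = B * B                    # unknowns per sensor pixel (here 16)

def hadamard(n):
    """Sylvester-construction Hadamard matrix (n a power of two)."""
    H = np.array([[1.0]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H

# Binary (0/1) DMD patterns: shifted Hadamard rows, tiled over the scene.
H = hadamard(N)
patterns = ((H + 1) / 2).reshape(N, B, B)      # N masks of size BxB
masks = np.tile(patterns, (1, LO, LO))         # same code in every block

rng = np.random.default_rng(0)
scene = rng.random((HI, HI))                   # "true" fine image

def sensor_readout(img, mask):
    """Each coarse pixel sums the masked light over its BxB block."""
    return (img * mask).reshape(LO, B, LO, B).sum(axis=(1, 3))

y = np.stack([sensor_readout(scene, m) for m in masks])  # N x LO x LO

# Per coarse pixel, solve the NxN linear system for its BxB sub-pixels.
A = patterns.reshape(N, N)
recon = np.zeros_like(scene)
for i in range(LO):
    for j in range(LO):
        x = np.linalg.solve(A, y[:, i, j])
        recon[i*B:(i+1)*B, j*B:(j+1)*B] = x.reshape(B, B)

print(np.allclose(recon, scene))   # noise-free toy: exact recovery
```

In this noise-free toy the recovery is exact because the shifted Hadamard masks make the per-pixel system invertible; with real, photon-starved measurements the paper's reconstruction has to be far more robust than a plain linear solve.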

The Magic Result: "Virtual" High Resolution

By doing this, the system creates a virtual window that is much bigger than the physical one.

  • Physical Sensor: 64 x 64 pixels (Blurry).
  • Virtual Result: 256 x 256 pixels (Sharp and detailed).

It's like having a low-resolution camera but using a computer to "zoom in" and reconstruct the missing details by taking many quick, coded snapshots.
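A quick back-of-the-envelope check makes the "virtual resolution" claim concrete. This arithmetic is an information-counting sketch, not a figure from the paper:

```python
# Why 256x256 from a 64x64 sensor is plausible: each physical pixel
# must resolve a small patch of "virtual" sub-pixels.
virtual, physical = 256, 64
per_axis = virtual // physical     # sub-pixels per pixel along one axis
subpixels = per_axis ** 2          # unknowns behind each physical pixel
print(per_axis, subpixels)         # 4 16
```

So each physical pixel stands in for a 4×4 patch of 16 sub-pixels, which means at least 16 independent coded snapshots are needed per frame just on information grounds (more in practice, because the returning photons are scarce and noisy).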

What They Actually Did

The team tested this in the real world:

  • The Target: A tall television tower 670 meters away (roughly seven football fields).
  • The Conditions: Very little light returning to the camera (photon-starved).
  • The Outcome:
    • Direct Imaging: If they just used the sensor without the trick, they could only see a fuzzy outline of the tower.
    • New Method: They successfully reconstructed the tower in 3D. They could clearly see handrails, steel tubes, and the complex frame structure of the tower.
    • Speed: They did this in just 2.46 seconds per view, which is incredibly fast for such a detailed image.
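The "3D" part comes from timing: the sensor records when each photon arrives, and distance follows from the round-trip flight time. The sketch below uses the standard time-of-flight relation d = c·t/2; the 100 ps timing figure is an illustrative assumption, not a number from the paper:

```python
C = 299_792_458.0  # speed of light, m/s

def round_trip_time(distance_m):
    """Photon flight time out to the target and back, in seconds."""
    return 2 * distance_m / C

t = round_trip_time(670.0)
print(f"{t * 1e6:.2f} us")   # ~4.47 us round trip to the tower

def depth_per_tick(tick_s):
    """Depth resolution per unit of timing precision (d = c*t/2)."""
    return C * tick_s / 2

print(f"{depth_per_tick(100e-12) * 100:.1f} cm")  # 100 ps -> ~1.5 cm
```

This is why SPAD timing matters: picosecond-scale precision translates directly into centimeter-scale depth detail, fine enough to separate handrails from the frame behind them.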

They even tested it on a hotel building 2 kilometers away using only sunlight (passive imaging), and it worked there too!

Why This Matters

This is a big deal because it means we don't need to build massive, expensive, heavy sensors to get high-resolution 3D images. We can use small, compact, cheap sensors and combine them with smart software and optical tricks to see the world in incredible detail, even from very far away or in very dark conditions.

In a nutshell: They turned a tiny, blurry camera into a high-definition 3D scanner by using a "digital shutter" to take coded snapshots and a computer to solve the puzzle of what the object really looks like.
