High-Quality and Efficient Turbulence Mitigation with Events

This paper introduces EHETM, a high-quality and efficient turbulence mitigation method that leverages the unique spatiotemporal characteristics of event cameras, specifically polarity-weighted gradients and event tubes. It achieves superior scene restoration with significantly lower latency and data overhead than existing state-of-the-art approaches.

Xiaoran Zhang, Jian Ding, Yuxing Duan, Haoyue Liu, Gang Chen, Yi Chang, Luxin Yan

Published 2026-03-24

Imagine you are trying to take a photo of a distant mountain or a car driving down the road, but there is a massive heat wave shimmering in the air between you and the subject. This is atmospheric turbulence. It's like looking through a wavy, distorted window made of hot air. The image you get is blurry, wobbly, and full of weird artifacts.

For a long time, computers tried to fix this by taking many, many photos (frames) in a row and averaging them out. Think of it like trying to find a clear view of a person in a crowd by taking 50 photos and hoping that in at least one of them, the person's face isn't blocked or blurry.

  • The Problem: This takes a lot of time (high latency) and requires a massive amount of data storage. It's like waiting for a bus that only comes once an hour, but you need to get to work now. Also, if the person in the photo is moving, the computer gets confused and blurs them even more.

The New Solution: EHETM (The "Event Camera" Superpower)

The researchers behind EHETM decided to stop relying on standard cameras alone and turned to a special type of sensor called an Event Camera.

Here is the best way to understand the difference:

  • Standard Camera: Like a movie camera running on a fixed schedule. It takes a full picture 30 times a second, even if nothing in the scene has changed. It's slow and records a lot of useless "static" data.
  • Event Camera: Like a super-fast security guard who only shouts when something changes. If a pixel stays the same, it stays silent. If a pixel changes brightness (like a car moving or a heat wave rippling), it shouts immediately with microsecond precision. It's incredibly fast and only records what matters.
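
The difference can be sketched in a few lines of Python. This is a toy per-pixel model, assuming an event fires only when the log-brightness change since the last reference crosses a contrast threshold; the function name and threshold value are illustrative, not taken from the paper or any real camera SDK.

```python
import math

def emit_events(prev_log, curr_log, threshold=0.2):
    """Return (x, y, polarity) events for pixels whose log-brightness
    changed by more than `threshold` since the reference frame."""
    events = []
    for y, (row_prev, row_curr) in enumerate(zip(prev_log, curr_log)):
        for x, (p, c) in enumerate(zip(row_prev, row_curr)):
            delta = c - p
            if abs(delta) >= threshold:
                # +1 for a brightening pixel, -1 for a darkening one
                events.append((x, y, +1 if delta > 0 else -1))
    return events

# A static pixel stays silent; a brightening pixel shouts with polarity +1.
prev = [[math.log(100), math.log(100)]]
curr = [[math.log(100), math.log(150)]]
print(emit_events(prev, curr))  # [(1, 0, 1)]
```

Note what the output contains: only the one pixel that changed. This sparsity is exactly why event data is so much lighter than full video frames.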

How They Fixed the Wobbly Image

The team discovered two "secret tricks" about how turbulence behaves when seen through these fast event cameras:

1. The "Polarity Flip" Trick (Fixing the Background)
When turbulence hits a sharp edge (like the outline of a building), the event camera sees the light flickering up and down rapidly. It's like a light switch being flipped on and off thousands of times a second.

  • The Analogy: Imagine trying to draw a straight line on a piece of paper that is shaking. If you look at the direction of the shake, you can figure out exactly where the line should be.
  • The Fix: The computer counts these rapid "flips" (polarity alternations). Where the flips are most intense, it knows that's a sharp edge that needs to be preserved. This helps sharpen the blurry background without needing 50 photos.
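
The flip-counting idea can be illustrated with a short sketch, assuming events arrive as time-ordered (x, y, polarity) tuples. The function name and the simple per-pixel counter are illustrative; the paper's actual polarity-weighted gradient is more sophisticated than this.

```python
from collections import defaultdict

def flip_map(events):
    """Count polarity alternations per pixel; high counts mark sharp edges
    being shaken back and forth by turbulence."""
    last_polarity = {}
    flips = defaultdict(int)
    for x, y, p in events:  # events assumed sorted by timestamp
        if (x, y) in last_polarity and last_polarity[(x, y)] != p:
            flips[(x, y)] += 1  # polarity flipped: edge flicker detected
        last_polarity[(x, y)] = p
    return dict(flips)

# An edge pixel flickers +/-/+/-; an isolated noise pixel fires only once.
events = [(5, 5, 1), (9, 2, -1), (5, 5, -1), (5, 5, 1), (5, 5, -1)]
print(flip_map(events))  # {(5, 5): 3}
```

Pixel (5, 5) racks up three alternations and gets marked as an edge to preserve, while the lone event at (9, 2) never flips and is ignored.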

2. The "Event Tube" Trick (Saving Moving Objects)
When a car or a person moves, they leave a continuous trail of events, like a snake sliding through the grass. The researchers call this an "Event Tube."

  • The Analogy: Turbulence is like a chaotic crowd of people running in random directions. A moving object is like a single person walking in a straight line. Even if the crowd is pushing them, their path is still mostly straight.
  • The Fix: The computer looks for these "straight tubes" of movement. It ignores the chaotic, random noise of the turbulence and locks onto the smooth, continuous path of the moving object. This allows it to stabilize moving cars or people perfectly, even if they are moving fast.
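
The tube-locking idea can be sketched with a 1-D toy, assuming a moving object's events lie near a straight line x = v·t + b in (time, position) space while turbulence events scatter randomly off it. Fitting the line and keeping only low-residual events isolates the tube. This least-squares version is an illustration, not the paper's method.

```python
def fit_tube(events, max_residual=1.0):
    """events: list of (t, x) pairs. Fit x ≈ v*t + b by least squares
    and return (v, b, inliers), where inliers lie close to the line."""
    n = len(events)
    mean_t = sum(t for t, _ in events) / n
    mean_x = sum(x for _, x in events) / n
    cov = sum((t - mean_t) * (x - mean_x) for t, x in events)
    var = sum((t - mean_t) ** 2 for t, _ in events)
    v = cov / var                # velocity of the tube
    b = mean_x - v * mean_t      # intercept
    inliers = [(t, x) for t, x in events
               if abs(x - (v * t + b)) <= max_residual]
    return v, b, inliers

# Four events on the line x = 2t, plus one turbulence event off it.
events = [(0, 0.0), (1, 2.0), (2, 4.0), (3, 6.0), (1, 5.0)]
v, b, inliers = fit_tube(events)
print(len(inliers))  # 4: the stray event at (1, 5.0) is rejected
```

In practice a plain least-squares fit is easily skewed by many outliers, so a real system would use a robust fit (RANSAC-style sampling or iterative reweighting) instead; the principle of "keep what follows a coherent path, discard the chaos" is the same.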

The Result: Fast, Light, and Clear

By combining these two tricks, the new system (EHETM) can fix the image using only 5 to 8 photos (plus the event data), whereas old methods needed 30 to 60 photos.

  • Speed: It's about 90% faster (low latency). You get the clear image almost instantly.
  • Data: It uses 77% less data. You don't need a massive hard drive to store the video.
  • Quality: The images are sharper, and moving objects don't get blurry or distorted.

Summary

Think of the old method as trying to solve a puzzle by waiting for 60 different puzzle pieces to fall into place. The new method (EHETM) is like having a super-fast assistant who only hands you the pieces that actually moved, allowing you to solve the puzzle in seconds with a few pieces. They even built two new real-world datasets (like a training gym for the AI) to prove this works in both hot city streets and long-distance atmospheric views.

This is a huge step forward for things like long-range surveillance, drone footage, and any situation where you need a clear, real-time view through "shimmering" air.
