Near-Real-Time Conflict-Related Fire Detection in Sudan Using Unsupervised Deep Learning

This study introduces a lightweight, unsupervised Variational Autoencoder model that uses 3-meter, 4-band Planet Labs imagery to detect conflict-related fires in Sudan within 24 to 30 hours. The approach outperforms traditional change detection methods in recall and F1-score, making it well suited to near-real-time war-zone monitoring.

Kuldip Singh Atwal, Dieter Pfoser, Daniel Rothbart

Published 2026-03-03

Imagine Sudan as a house where a violent fight has broken out. In the chaos, people are throwing things, breaking windows, and setting fires. The people inside are scared, and the people outside (the world) want to know: Where is the fire? How bad is the damage? Can we help?

Usually, getting this information is like trying to watch a fight through a thick, foggy window from a mile away. You might see smoke, but you can't see the broken chairs or the small fires. Or, you have to wait for a messenger to run all the way there, which takes days. By the time they get back, the fire might be out, or the situation might have changed.

This paper introduces a new, super-fast way to "see" the damage using a special pair of eyes (satellites) and a smart brain (artificial intelligence).

Here is the breakdown of how they did it, using simple analogies:

1. The Eyes: PlanetScope Satellites

Think of the PlanetScope satellites as a swarm of tiny, hyper-active bees buzzing around the house every single day.

  • The Old Way: Other satellites (like Sentinel-2) are like a slow-moving owl that flies over the house only once every 5 to 10 days. If a fire starts and ends in a day, the owl misses it completely.
  • The New Way: The PlanetScope bees fly over every day. They take pictures so sharp (3-meter resolution) that you can see individual cars and small piles of rubble, not just the whole neighborhood.

2. The Brain: The "Unsupervised" AI

The researchers built a special AI brain called a Variational Autoencoder (VAE). Here is the tricky part: In a war zone, you don't have a "teacher" to show the AI what a "burned house" looks like. There are no textbooks or labeled photos because the fighting is happening right now.

So, how does the AI learn?

  • The Analogy: Imagine you have a friend who knows your house perfectly. They know exactly where the sofa is, the color of the rug, and where the lamp stands.
  • The "Normal" Baseline: The AI is trained to memorize what a "normal," peaceful house looks like. It learns the "vibe" of the neighborhood.
  • The "Anomaly" Detection: When the AI looks at a new photo taken after a fight, it doesn't need to know what "fire" looks like specifically. It just asks: "Does this look like the normal house I memorized?"
  • If it sees a charred roof or a smoke plume, it says, "Whoa, this doesn't match my memory! Something is wrong here!" This is called unsupervised learning—the AI figures out what is "weird" on its own without being told what to look for.
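The "memorize normal, flag the weird" idea above can be sketched in a few lines. This is an illustrative NumPy mock-up, not the paper's model: the real system uses a trained Variational Autoencoder, whereas here the "memorized normal" is simply the average of pre-conflict images, and the 3-standard-deviation threshold is an assumed choice.

```python
import numpy as np

def anomaly_mask(normal_stack, new_image, k=3.0):
    """Flag pixels that deviate strongly from the memorized 'normal' scene.

    normal_stack: (T, H, W, B) pre-conflict images (the AI's 'memory')
    new_image:    (H, W, B) post-event image
    k:            how many standard deviations count as 'weird'
    """
    # Stand-in for the VAE's reconstruction: the average normal scene.
    reconstruction = normal_stack.mean(axis=0)
    # Per-pixel reconstruction error, summed across all bands.
    error = ((new_image - reconstruction) ** 2).sum(axis=-1)
    # Pixels far outside the normal error range count as anomalies.
    threshold = error.mean() + k * error.std()
    return error > threshold

# Toy example: a flat 'peaceful' scene with one small burned patch.
rng = np.random.default_rng(0)
normal = rng.normal(0.5, 0.01, size=(5, 32, 32, 4))
after = normal.mean(axis=0).copy()
after[10:14, 10:14, :] = 0.05          # charred patch: reflectance drops
mask = anomaly_mask(normal, after)
```

The key property is the same as in the paper: the detector never sees an example of "fire" during training; it only learns what "normal" looks like.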

3. The Speed: Near-Real-Time

The goal was to get this information to humanitarian workers within 24 to 30 hours.

  • The Process:
    1. Take the photo: The satellite snaps a picture.
    2. Clean the data: The computer fixes the colors (like adjusting the white balance on a camera).
    3. The AI checks: The brain compares the "before" photo and the "after" photo.
    4. The Alert: Within a day, a map pops up showing exactly where the "weird" (burned) spots are.
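Step 2 ("clean the data") can be illustrated with a simple radiometric alignment. The paper's exact preprocessing is not detailed here, so this is a hedged sketch of one common approach: shifting and scaling each band of the "after" image so its statistics match the "before" image, much like fixing white balance between two photos.

```python
import numpy as np

def match_bands(before, after, eps=1e-8):
    """Radiometrically align 'after' to 'before', band by band.

    A simple stand-in for the 'clean the data' step: each band of the
    new image is shifted and scaled so its mean and spread match the
    reference image, removing lighting/haze differences between days.
    """
    aligned = np.empty_like(after, dtype=float)
    for b in range(after.shape[-1]):
        a, r = after[..., b], before[..., b]
        aligned[..., b] = (a - a.mean()) / (a.std() + eps) * r.std() + r.mean()
    return aligned

rng = np.random.default_rng(1)
before = rng.normal(0.4, 0.05, size=(64, 64, 4))
# Same scene on a hazier day: brighter overall, lower contrast.
after = before * 0.5 + 0.35 + rng.normal(0, 0.01, size=before.shape)
aligned = match_bands(before, after)
```

After this alignment, a real change (like a burn scar) stands out as a difference in the scene itself rather than a difference in lighting.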

4. The Results: Did It Work?

The researchers tested this in five different places in Sudan where fighting had happened. They compared their "Smart Brain" method against older, simpler methods (like just measuring the difference in pixel colors).

  • The Winner: The Smart Brain (VAE) was much better. It found more fires (high Recall) and didn't get confused by shadows or clouds as often as the old methods.
  • The "Extra" Bands: They also tried feeding the AI more colors (8 bands instead of 4) and watching the house over a few days (time-series).
    • The Surprise: It didn't help much! The simple 4-color version (Red, Green, Blue, Near-Infrared) was already doing a great job. It's like realizing you don't need a 4K TV to see a fire; a good standard TV works just fine and is much faster to set up.
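The scorekeeping behind "high Recall" and F1 is simple to show. Below is a small sketch (with invented toy masks, not the paper's data) of how a predicted burn map is compared against a reference map: recall asks "of all truly burned pixels, how many did we find?", and F1 balances that against false alarms.

```python
import numpy as np

def recall_f1(pred, truth):
    """Compare a predicted burn mask against a reference mask."""
    tp = np.logical_and(pred, truth).sum()    # burned and detected
    fn = np.logical_and(~pred, truth).sum()   # burned but missed
    fp = np.logical_and(pred, ~truth).sum()   # false alarm
    recall = tp / (tp + fn)
    precision = tp / (tp + fp)
    f1 = 2 * precision * recall / (precision + recall)
    return recall, f1

# Toy scene: a 4x4 burned patch; the detector misses one burned
# pixel and raises one false alarm elsewhere.
truth = np.zeros((8, 8), dtype=bool)
truth[2:6, 2:6] = True
pred = truth.copy()
pred[2, 2] = False   # one missed burned pixel
pred[0, 0] = True    # one false alarm
r, f1 = recall_f1(pred, truth)
```

In this framing, a method that gets "confused by shadows or clouds" racks up false positives, which drags down precision and therefore F1, even if its recall is decent.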

Why This Matters

In a war zone, time is life.

  • Old Way: Wait for news, wait for slow satellites, wait for ground reports. By then, the damage is done, and aid is late.
  • New Way: A map appears the next day showing exactly where the fires are. Humanitarian groups can send food, water, and doctors to the exact right spots immediately.

The Bottom Line

This paper shows that we can use a "smart, unsupervised brain" combined with "daily, high-definition satellite photos" to act like a super-fast emergency responder. It doesn't need a teacher to tell it what a fire looks like; it just knows what "peace" looks like, and when that peace is broken by fire, it raises the alarm within a day.

It's a powerful tool for turning the chaos of war into clear, actionable data, helping the world see the invisible scars of conflict.