Inpainting the Red Planet: Diffusion Models for the Reconstruction of Martian Environments in Virtual Reality

This paper proposes an unconditional diffusion model trained on augmented HiRISE heightmaps to reconstruct missing Martian terrain data in virtual reality, demonstrating superior accuracy and perceptual similarity compared to traditional interpolation methods.

Giuseppe Lorenzo Catalano, Agata Marta Soccini

Published 2026-03-04

🚀 The Big Picture: Fixing the "Glitchy" Map of Mars

Imagine you are trying to build a hyper-realistic Virtual Reality (VR) simulation of Mars for astronauts to train in. You need a perfect 3D map of the surface. But here's the problem: the cameras on our satellites orbiting Mars are like old, dusty binoculars. Sometimes they miss a spot, sometimes the signal gets lost in space, and sometimes the data just doesn't make it back to Earth.

The result? Your 3D map of Mars has giant holes in it. It's like a jigsaw puzzle where someone ripped out the middle pieces. If you try to walk through this VR world, you'd fall into a black void.

The Goal: The authors wanted to fill in those missing holes so the map looks smooth, realistic, and safe for astronauts to explore.


🧩 The Old Way: "Guessing" the Missing Pieces

Before this paper, scientists used two main ways to fix these holes:

  1. The "Neighborhood Watch" (Interpolation): Imagine a floor with a missing tile. You look at the tiles immediately next to the hole and guess the color based on them. If the neighbors are blue, you paint the hole blue.
    • The Problem: This is too simple. Mars isn't just flat blue; it has craters, dunes, and ridges. If you just "average" the neighbors, you end up with a flat, blurry blob that looks nothing like a real Martian landscape.
  2. The "Fluid Flow" (Navier-Stokes): This treats the missing area like a puddle of water. It tries to "flow" the surrounding terrain into the hole to smooth it out.
    • The Problem: While better than the first method, it still struggles to create complex features like sharp crater edges or sand dunes. It tends to make things look too smooth and artificial.
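To see why these baselines come out smooth and featureless, here is a minimal numpy sketch of the "Neighborhood Watch" idea: repeatedly replacing each missing pixel with the average of its four neighbors (a simple Laplace-style interpolation). This toy version is an illustration of the concept, not the paper's actual baseline implementation:

```python
import numpy as np

def interpolate_fill(heightmap, mask, iters=2000):
    """Fill masked pixels by repeatedly averaging their 4 neighbors
    (a simple Laplace / "neighborhood" interpolation baseline).
    mask: True where data is missing."""
    filled = heightmap.astype(float).copy()
    filled[mask] = filled[~mask].mean()  # crude initial guess for the hole
    for _ in range(iters):
        p = np.pad(filled, 1, mode="edge")
        # Average of up/down/left/right neighbors, computed via shifts.
        avg = (p[:-2, 1:-1] + p[2:, 1:-1] + p[1:-1, :-2] + p[1:-1, 2:]) / 4
        filled[mask] = avg[mask]  # only update pixels inside the hole
    return filled

# Toy heightmap: a smooth slope with a square data gap punched out.
y, x = np.mgrid[0:64, 0:64]
terrain = (x + y).astype(float)
mask = np.zeros_like(terrain, dtype=bool)
mask[20:40, 20:40] = True
corrupted = terrain.copy()
corrupted[mask] = 0.0

filled = interpolate_fill(corrupted, mask)
# A smooth slope is recovered almost exactly -- but a crater inside the
# hole would be flattened away, which is exactly the failure mode above.
print(np.abs(filled - terrain)[mask].max() < 1.0)  # True
```

A smooth ramp is the best case for this method; any real feature inside the hole (a crater rim, a dune crest) would be averaged into a flat blur.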

🎨 The New Way: The "AI Artist" with a Memory

The authors decided to use a Diffusion Model. Think of this not as a calculator, but as a super-creative artist who has studied thousands of photos of Mars.

Here is how their "AI Artist" works:

  1. The Training (Learning the Vibe): They fed the AI 12,000 pictures of Martian terrain. They didn't just show it whole maps; they chopped the maps into random pieces of different sizes. This taught the AI that Mars has big features (like giant craters) and small features (like tiny rocks), all mixed together.
  2. The Magic Trick (The Diffusion Process): Imagine the AI starts with a picture that is completely covered in static (like an old TV with no signal).
    • The AI knows what a "clean" Mars looks like.
    • It slowly removes the static, pixel by pixel, asking itself: "If I see a crater here, what should the pixels around it look like?"
  3. The "Unconditional" Superpower: Most AI art tools need a prompt, like "Draw a red rock." But for Mars, we often don't have that extra info. The authors built an Unconditional model. This means the AI doesn't need a prompt. It just looks at the edges of the hole and says, "I know what a crater looks like. I know what a dune looks like. I will just paint the missing part to fit perfectly."
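The multi-size training crops from step 1 can be sketched as follows. The patch-size range, crop count, and nearest-neighbor resize here are illustrative assumptions, not the paper's actual preprocessing pipeline:

```python
import numpy as np

rng = np.random.default_rng(42)

def random_multiscale_crops(heightmap, n_crops, out_size=64,
                            min_frac=0.1, max_frac=0.9):
    """Sample square patches of random sizes from one large heightmap and
    resize them all to a common training resolution, so the model sees
    both large features (craters) and small ones (rocks)."""
    h, w = heightmap.shape
    crops = []
    for _ in range(n_crops):
        side = int(min(h, w) * rng.uniform(min_frac, max_frac))
        top = rng.integers(0, h - side + 1)
        left = rng.integers(0, w - side + 1)
        patch = heightmap[top:top + side, left:left + side]
        # Nearest-neighbor resize to out_size x out_size (numpy-only).
        idx = np.arange(out_size) * side // out_size
        crops.append(patch[np.ix_(idx, idx)])
    return np.stack(crops)

big_map = rng.standard_normal((512, 512))  # stand-in for a HiRISE tile
batch = random_multiscale_crops(big_map, n_crops=8)
print(batch.shape)  # (8, 64, 64)
```

Because every crop ends up at the same resolution, a large crop shows the model coarse, planet-scale structure while a small crop shows fine surface texture.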

The Analogy:

  • Old Methods: Like trying to fix a torn photo by taping it together with clear tape. You can see the tear, and the image is blurry.
  • This New Method: Like hiring a master painter who has memorized the entire photo. They look at the tear, remember what the scene should look like, and paint over the tear so seamlessly that you can't tell it was ever damaged.
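One common way to make an unconditional diffusion model inpaint, matching the description above, is to run the reverse denoising chain while clamping the known pixels to a suitably noised copy of the real data at every step (a RePaint-style trick; whether the paper uses this exact schedule is an assumption). In this toy sketch, `toy_denoise` stands in for the trained network:

```python
import numpy as np

rng = np.random.default_rng(0)

def inpaint_with_diffusion(denoise_step, known, mask, steps=100):
    """RePaint-style inpainting sketch: run the reverse diffusion chain,
    but at every step overwrite the known pixels with a noised copy of
    the real data, so the model only 'paints' inside the hole.
    known: observed heightmap; mask: True where data is missing."""
    x = rng.standard_normal(known.shape)      # start from pure static
    for t in reversed(range(steps)):
        noise_level = t / steps
        # Noise the real data to match the current point in the chain...
        known_noised = known + noise_level * rng.standard_normal(known.shape)
        # ...keep it outside the hole, let the model fill the inside.
        x = np.where(mask, x, known_noised)
        x = denoise_step(x, noise_level)      # one reverse-diffusion step
    return np.where(mask, x, known)           # final clamp of known pixels

def toy_denoise(x, noise_level):
    """Stand-in for the trained model: nudge the sample toward a
    4-neighbor-smoothed version of itself."""
    p = np.pad(x, 1, mode="edge")
    smooth = (p[:-2, 1:-1] + p[2:, 1:-1] + p[1:-1, :-2] + p[1:-1, 2:]) / 4
    return x + 0.5 * (smooth - x)

yy, xx = np.mgrid[0:32, 0:32]
terrain = (yy + xx) / 64.0
mask = np.zeros_like(terrain, dtype=bool)
mask[10:22, 10:22] = True

result = inpaint_with_diffusion(toy_denoise, terrain, mask)
```

The key design point is that the known pixels are never generated, only re-noised and re-injected, so the model's "imagination" is confined to the hole and must stay consistent with the real terrain at its edges.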

📊 Did It Work? (The Results)

The team tested their "AI Artist" against the old methods (the "Neighborhood Watch" and "Fluid Flow") on 1,000 different maps.

  • The Visual Test: They put the maps into a 3D VR engine.
    • Old methods: Looked like plastic or smooth clay. The craters looked fake.
    • New method: Looked like real Mars. The craters had depth, the rocks looked sharp, and the transition from the "real" data to the "filled-in" data was invisible.
  • The Math Test: They measured the errors.
    • The new method was 4% to 15% more accurate in height measurements than the old methods.
    • It was 29% to 81% more similar to the original data in terms of how the human eye perceives the image.
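The height-accuracy comparison comes down to an error metric computed inside the inpainted region. Here is a small sketch using root-mean-square error as one plausible height metric (the paper's exact metrics and the perceptual-similarity measure are its own; the data below is synthetic):

```python
import numpy as np

def rmse(pred, truth, mask):
    """Root-mean-square height error, measured inside the hole only."""
    d = (pred - truth)[mask]
    return float(np.sqrt(np.mean(d ** 2)))

rng = np.random.default_rng(1)
truth = rng.standard_normal((64, 64))        # synthetic "ground truth"
mask = np.zeros((64, 64), dtype=bool)
mask[20:40, 20:40] = True                    # the missing region

# Baseline-style fill: replace the hole with the average of known pixels.
flat_fill = truth.copy()
flat_fill[mask] = truth[~mask].mean()

# A reconstruction that tracks the real terrain much more closely.
better_fill = truth + 0.1 * rng.standard_normal(truth.shape) * mask

print(rmse(flat_fill, truth, mask) > rmse(better_fill, truth, mask))  # True
```

Perceptual similarity is measured differently: instead of comparing raw heights pixel by pixel, it compares how similar two images look to a human observer (typically via features from a pretrained network), which is why the gains there (29% to 81%) are so much larger than the raw height gains.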

🌍 Why Does This Matter?

This isn't just about making pretty pictures.

  • Astronaut Training: If an astronaut is training in VR to drive a rover, they need to see a real crater. If the AI fills that crater with a flat blob, the astronaut might crash in real life.
  • Mission Planning: Scientists need to know exactly how high a hill is to plan a landing.
  • Data Scarcity: Mars is far away. We can't just go back and take a new photo if we miss a spot. This AI allows us to "hallucinate" (in a good way) the missing data based on what we already know about the planet.

🏁 The Bottom Line

The authors created a tool that acts like a digital restoration artist. It looks at the broken, incomplete maps of Mars we have, remembers what the planet looks like from its training, and fills in the missing pieces with such high quality that it creates a seamless, realistic 3D world for us to explore virtually.

They proved that even when we don't have all the data, a smart AI can figure out the rest, making our virtual trips to Mars safer and more real than ever before.