TEMPO-VINE: A Multi-Temporal Sensor Fusion Dataset for Localization and Mapping in Vineyards

This paper introduces TEMPO-VINE, a comprehensive multi-temporal, multi-modal public dataset featuring heterogeneous sensors and ground truth trajectories across diverse vineyard conditions, designed to address the lack of realistic benchmarks for advancing autonomous localization, mapping, and place recognition in precision agriculture.

Mauro Martini, Marco Ambrosio, Judith Vilella-Cantos, Alessandro Navone, Marcello Chiaberge

Published 2026-03-06

Imagine you are trying to teach a robot to drive itself through a vineyard. Sounds simple, right? Just follow the rows of grapes. But here's the catch: a vineyard in February looks nothing like a vineyard in August.

In February, the vines are bare sticks and the ground is covered in short grass. By August, the vines are thick, leafy walls, and the grass might be knee-high. If you train a robot on the winter data, it will get hopelessly lost in the summer.

This is the problem the paper "TEMPO-VINE" is trying to solve.

The Problem: The "Video Game" Trap

Right now, most scientists test their self-driving farm robots in simulations (like a video game) or in very short, perfect field tests. It's like learning to drive a car only in an empty parking lot on a sunny day. You haven't learned how to handle rain, snow, or a sudden detour.

Because there wasn't a big, realistic "test track" for vineyards that changed with the seasons, robots struggled to work in the real world.

The Solution: The "Time-Traveling" Dataset

The authors created TEMPO-VINE, which is essentially a massive, multi-seasonal "training manual" for robots.

Think of it like a time-lapse movie of a vineyard. They didn't just take one photo; they drove a robot through the same rows of grapes 13 times over 10 months, from winter to late autumn.

  • The Robot: They used a small, rugged rover (like a robot dog but with wheels) equipped with a "super-suit" of sensors.
  • The Eyes: They gave the robot two different types of "3D eyes" (LiDARs):
    1. The Expensive Eye (Velodyne): A high-end, spinning laser scanner that sweeps the whole scene in clean, regular rings, like the sensor rig on a luxury self-driving car.
    2. The Budget Eye (Livox): A cheaper, newer laser scanner that looks in a distinctive flower-like pattern instead of full rings, like a budget-friendly security camera that sees the world its own way.
  • The Other Senses: They also added a stereo camera (to see color and depth), a GPS receiver (to know where it is on Earth), and an IMU (motion sensors that track how the robot tilts, turns, and accelerates).
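To make that sensor "super-suit" concrete, here is a minimal sketch of what one time-synchronized snapshot from such a rover might look like in code. This is purely illustrative: the field names, array shapes, and the `RoverFrame` class are all hypothetical, not the dataset's actual format.

```python
from dataclasses import dataclass

import numpy as np


@dataclass
class RoverFrame:
    """One hypothetical, time-synchronized snapshot from the rover's sensors."""
    timestamp: float             # seconds since the start of the run
    velodyne_points: np.ndarray  # (N, 3) xyz cloud from the spinning LiDAR
    livox_points: np.ndarray     # (M, 3) xyz cloud from the budget LiDAR
    stereo_left: np.ndarray      # (H, W, 3) color image, left camera
    stereo_right: np.ndarray     # (H, W, 3) color image, right camera
    gnss_fix: tuple              # (latitude, longitude, altitude)
    imu_gyro: np.ndarray         # (3,) angular velocity, rad/s
    imu_accel: np.ndarray        # (3,) linear acceleration, m/s^2


# A fusion or SLAM algorithm would consume a stream of such frames.
frame = RoverFrame(
    timestamp=0.0,
    velodyne_points=np.zeros((1000, 3)),
    livox_points=np.zeros((800, 3)),
    stereo_left=np.zeros((480, 640, 3), dtype=np.uint8),
    stereo_right=np.zeros((480, 640, 3), dtype=np.uint8),
    gnss_fix=(44.9, 7.6, 250.0),
    imu_gyro=np.zeros(3),
    imu_accel=np.array([0.0, 0.0, 9.81]),  # gravity when standing still
)
print(frame.velodyne_points.shape)  # → (1000, 3)
```

The point of bundling everything into one frame is that localization algorithms mix and match: LiDAR-only, camera-only, or everything fused together, all from the same recorded drive.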

Why This is a Big Deal

The paper compares TEMPO-VINE to other datasets and finds it's the only one that does three specific things:

  1. It changes with the seasons: It captures the "mood swings" of the vineyard (bare branches vs. leafy walls).
  2. It has two different "eyes": It lets researchers test if a cheap robot can do the same job as an expensive one.
  3. It has long rows: The vineyard rows are over 100 meters long, which lets small navigation errors pile up into big ones before the robot gets a chance to correct itself.

What They Found (The "Report Card")

The authors tested popular robot navigation software on this new dataset to see how well it worked. Here is what they discovered:

  • The "Winter" vs. "Summer" Shock: Algorithms that worked perfectly in March (when the vines were bare) often crashed or got lost in July (when the vines were thick). It's like trying to recognize a friend by their face, but then they put on a giant winter coat and a hat; suddenly, the robot doesn't know who they are.
  • The "Budget" Eye Struggles: The cheaper LiDAR sensor had a harder time in the summer because its unique scanning pattern got confused by the dense leaves.
  • The "Expensive" Eye is Reliable: The high-end sensor handled the changes better, but even it had trouble when the grass got too tall.
  • Visuals are Tricky: Cameras (which rely on sight) were the worst at navigating because the lighting and shadows changed too much.
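How do you grade a "report card" like the one above? A common yardstick in this field (the summary does not spell out the paper's exact protocol, so treat this as an assumption) is the Absolute Trajectory Error, or ATE: compare where the algorithm thinks the robot went against the ground-truth trajectory, point by point. A minimal sketch:

```python
import numpy as np


def ate_rmse(estimated: np.ndarray, ground_truth: np.ndarray) -> float:
    """Root-mean-square Absolute Trajectory Error between two (N, 3)
    position sequences. Assumes the trajectories are already
    time-synchronized and expressed in the same coordinate frame."""
    errors = np.linalg.norm(estimated - ground_truth, axis=1)
    return float(np.sqrt(np.mean(errors ** 2)))


# Toy example: an estimate that is consistently 0.1 m off to the side.
gt = np.array([[i, 0.0, 0.0] for i in range(5)], dtype=float)
est = gt + np.array([0.0, 0.1, 0.0])
print(round(ate_rmse(est, gt), 3))  # → 0.1
```

A low ATE in March and a huge ATE in July on the very same rows is exactly the kind of "seasonal shock" the findings above describe.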

The Bottom Line

TEMPO-VINE is a gift to the robotics community. It's a realistic, messy, changing playground that forces robots to learn how to handle real life, not just a perfect simulation.

By making this data public, the authors hope to help engineers build robots that can actually work in vineyards year-round, regardless of whether it's winter, summer, raining, or sunny. It's the first step toward a future where robots can prune, harvest, and care for our grapes without needing a human to hold their hand.