Towards Explainable Deep Learning for Ship Trajectory Prediction in Inland Waterways

This study proposes an interpretable LSTM-based model for ship trajectory prediction in inland waterways that incorporates trainable ship-domain parameters into its attention mechanism. Analyzing those parameters reveals that while the model achieves competitive accuracy, its attention weights do not fully align with the expected causal relationships between interacting vessels.

Tom Legel, Dirk Söffker, Roland Schätzle, Kathrin Donandt

Published 2026-03-06

The Big Picture: Predicting the Dance of River Boats

Imagine a busy river, like the Rhine in Germany. It's not just a flowing stream; it's a crowded highway where giant barges, small tugs, and ferries are all trying to get to their destinations without crashing.

The authors of this paper are trying to build a super-smart GPS that doesn't just tell a boat where it is, but predicts exactly where it will be in the next 5 minutes. Why? Because if you want to make river transport automatic (self-driving boats), the computer needs to know: "If that big barge turns left, will I need to turn right to avoid a collision?"

The Problem: The "Black Box" Mystery

In recent years, scientists have used Deep Learning (a type of AI) to get really good at this prediction. These AI models are like genius chefs who can cook a perfect meal, but they are also Black Boxes. You put ingredients in, and a perfect meal comes out, but you have no idea why they chose those specific spices.

The problem is: What if the AI is getting the right answer for the wrong reason?
Maybe the AI predicts a boat will turn left not because it saw another boat coming, but because it noticed the sun was setting. It works, but it's dangerous because it doesn't understand the real rules of the road.

The Solution: Giving the AI a "Social Bubble"

The researchers wanted to make the AI Explainable. They wanted to peek inside the Black Box to see how the AI is thinking.

To do this, they used a concept called a "Ship Domain."

  • The Analogy: Imagine every boat has an invisible, personal bubble around it (like a force field). If another boat enters your bubble, you get nervous and react. If they are far away, you ignore them.
  • The Innovation: Instead of giving every boat a fixed-size bubble (like a standard 100-meter circle), the researchers taught the AI to learn its own bubble size.
    • If two boats are heading toward each other (opposing), the AI learns to make the bubble huge because that's dangerous.
    • If two boats are moving in the same direction, the bubble might be smaller.

The AI has a "menu" of different situations (e.g., "Opposing boats," "Boats passing on the left," "Boats far away"). For each situation, it learns a specific number that represents how far away it needs to pay attention.
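The idea above can be sketched in a few lines. This is a hypothetical illustration, not the paper's actual formulation: the radii and the sigmoid weighting function are made-up stand-ins for parameters the model would learn jointly with its LSTM.

```python
import math

# Hypothetical "menu" of learned bubble radii, one per encounter
# situation. In the paper these are trained; here they are fixed
# numbers chosen only for illustration.
LEARNED_RADII_M = {
    "opposing": 500.0,        # head-on traffic: large bubble
    "same_direction": 150.0,  # following traffic: smaller bubble
    "far_away": 50.0,
}

def attention_weight(situation: str, distance_m: float,
                     softness: float = 50.0) -> float:
    """Weight in (0, 1): near 1 inside the learned bubble,
    falling toward 0 as a vessel moves far outside it."""
    radius = LEARNED_RADII_M[situation]
    return 1.0 / (1.0 + math.exp((distance_m - radius) / softness))

# An oncoming vessel 300 m away sits well inside the "opposing"
# bubble, so it gets a high weight; at the same distance, a vessel
# moving in the same direction is mostly ignored.
w_opposing = attention_weight("opposing", 300.0)
w_same = attention_weight("same_direction", 300.0)
```

The point of the sketch is only the shape of the mechanism: the same distance produces very different attention depending on which situation-specific radius the model has learned.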

The Experiment: Three Different "Brains"

The team built three different versions of this AI to see which one was the most honest and accurate:

  1. The "All-in-One" Chef (EA-DA): This model mixes everything together. It looks at the other boats and decides what to do all at once.
  2. The "Simplified" Chef (E-DA): This model is a bit simpler, only looking at other boats at the very end of the thinking process.
  3. The "Dual-Brain" Chef (E-DDA): This is the most interesting one. It splits the brain into two parts:
    • Brain A (Blind): Predicts where the boat would go if it were alone in the world.
    • Brain B (Attentive): Looks only at the other boats and decides how to adjust the path based on them.
    • Why do this? It's like having a driver who knows how to drive straight, and a co-pilot who only shouts, "Hey, there's a car coming!" This separation makes it easier to see if the co-pilot is actually doing anything useful.

The Results: The Surprise Twist

Here is where the story gets interesting.

  • The Score: All three models were pretty good at predicting where the boats would end up (within about 40 meters of the actual spot after 5 minutes). They were all "smart."
  • The Catch: When the researchers looked at the "Ship Domains" (the invisible bubbles) the AI learned:
    • The "All-in-One" and "Simplified" models learned something weird. They decided that for boats coming toward them, they should shrink their bubble to almost zero. They effectively ignored the oncoming traffic!
    • How did they still get the right answer? It turns out they were just guessing based on patterns in the data, not because they understood the danger of an oncoming boat. They were "cheating" by using other clues, not the interaction itself.
    • The "Dual-Brain" model was the only one that behaved logically. It actually grew its bubble for oncoming boats, just like a human captain would.

The Takeaway: Accuracy Isn't Everything

The main lesson of this paper is: Just because an AI gets the right answer doesn't mean it understands the situation.

If you rely on the "All-in-One" model, it might work fine today. But if you put it in a new, weird situation it hasn't seen before, it might crash because it never actually learned the rule of "don't hit oncoming traffic."

By separating the "thinking" from the "reacting" (the Dual-Brain approach), the researchers proved that you need to check how the AI thinks, not just what it predicts.

What's Next?

The authors plan to use this "Dual-Brain" system to run "What-If" scenarios (Counterfactual Analysis).

  • Example: "If that boat hadn't slowed down, would we have crashed?"
  • This will help make river traffic safer and help humans trust the self-driving boats more, because they can see the logic behind the machine's decisions.

In short: They built a smart river navigator, realized some of them were just lucky guessers, and fixed the design so the AI actually understands the rules of the road.
