L2GTX: From Local to Global Time Series Explanations

The paper introduces L2GTX, a model-agnostic framework that generates compact and faithful class-wise global explanations for time series classification by aggregating and merging parameterized temporal event primitives from representative local instances.

Ephrem Tibebe Mekonnen, Luca Longo, Lucas Rizzo, Pierpaolo Dondio

Published 2026-03-16

Imagine you have a super-smart robot chef (a Deep Learning model) that can taste a cup of coffee and instantly tell you if it's Arabica or Robusta. It gets this right 99% of the time. But here's the problem: the robot is a "black box." It gives you the answer, but it won't tell you why. Did it taste the sweetness? The bitterness? The acidity?

If you ask the robot, "Why did you pick Robusta?" it might just point to a specific second on the clock and say, "Because of the flavor at 2:14." That's not very helpful. You need to know the pattern, not just the timestamp.

This is where the paper L2GTX comes in. It's a new tool designed to translate the robot's secret thoughts into human language.

The Problem: Too Many Clues, No Story

Currently, there are tools that can explain the robot's decision for one single cup of coffee (Local Explanation). They might say, "For this cup, the robot liked the bitter spike at the end."

But what if you want to know the robot's general rule for all Robusta coffees?

  • Existing tools are like trying to understand a whole forest by looking at one leaf at a time.
  • They are often tied to specific robot "brains" (model-specific), so if you change the robot, the tool breaks.
  • They struggle to find the common patterns that repeat across thousands of cups.

The Solution: L2GTX (Local-to-Global Time Series eXplanations)

Think of L2GTX as a detective who interviews a few key witnesses to write a summary report for the whole case.

Here is how it works, step-by-step, using a simple analogy:

1. The "Detective" (LOMATCE)

First, the system picks a few representative cups of coffee (instances) from the dataset. It uses a tool called LOMATCE to interview the robot about each cup individually.

  • Instead of saying "flavor at 2:14," LOMATCE translates the robot's thoughts into Event Primitives.
  • Analogy: Imagine the robot doesn't say "Time 2:14." Instead, it says, "I saw a sharp spike (Local Max) here," or "I saw a slow rise (Increasing Trend) there." These are the "events."
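To make the idea concrete, here is a toy sketch of what an event primitive could look like in code. The names `EventPrimitive` and `extract_events` are illustrative only (not from the paper), and this crude detector only flags local maxima, whereas LOMATCE's primitives are richer and parameterized:

```python
from dataclasses import dataclass

@dataclass
class EventPrimitive:
    kind: str        # e.g. "local_max" or "increasing_trend"
    position: int    # time index where the event occurs
    magnitude: float # value of the series at that point

def extract_events(series):
    """Naive detector: flag every strict local maximum as an event."""
    events = []
    for i in range(1, len(series) - 1):
        if series[i] > series[i - 1] and series[i] > series[i + 1]:
            events.append(EventPrimitive("local_max", i, series[i]))
    return events
```

Running `extract_events([0, 1, 0, 2, 0])` would report two "sharp spike" events, at positions 1 and 3, instead of just raw numbers.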

2. The "Grouping" (Clustering)

Now, the detective has a pile of notes from 15 different cups.

  • Cup 1 had a spike at 2:10.
  • Cup 2 had a spike at 2:12.
  • Cup 3 had a spike at 2:09.
  • The Magic: L2GTX realizes these are all the same type of event: "A spike near the 2-minute mark." It groups them together. It ignores the tiny differences (2:09 vs 2:12) and focuses on the pattern.
  • It creates a "Global Cluster" of events. Think of this as creating a folder labeled "Important Spikes."
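The grouping step above can be sketched with a simple gap-based rule: spikes at 2:09, 2:10, and 2:12 (129, 130, and 132 seconds) land in one cluster, while a spike far away starts a new one. This `cluster_events` helper and the 5-second tolerance are assumptions for illustration; the paper's actual method clusters parameterized primitives, not bare timestamps:

```python
def cluster_events(positions, tolerance):
    """Group sorted time positions: a new cluster starts whenever the
    gap to the previous event exceeds `tolerance`."""
    clusters = []
    for p in sorted(positions):
        if clusters and p - clusters[-1][-1] <= tolerance:
            clusters[-1].append(p)   # close enough: same pattern
        else:
            clusters.append([p])     # too far apart: new pattern
    return clusters

# Spikes at 2:09, 2:10, 2:12 (in seconds), plus one unrelated late spike:
print(cluster_events([130, 132, 129, 300], tolerance=5))
```

The tiny differences between 129, 130, and 132 disappear into one "Important Spikes" folder, and the outlier at 300 gets its own.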

3. The "Smart Selection" (Budgeting)

The detective can't interview every single cup in the world (there are too many!). So, L2GTX uses a smart strategy to pick the best cups to interview.

  • It asks: "Which cups, if I interview them, will give me the most information about the 'Important Spikes' folder?"
  • It picks a small, diverse group of cups that covers all the major patterns without repeating the same thing over and over.
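This "most new information per interview" strategy resembles greedy set cover. A minimal sketch, assuming each candidate instance is mapped to the set of global patterns it exhibits (the function name and data layout are hypothetical, not the paper's API):

```python
def select_instances(coverage, budget):
    """Greedily pick up to `budget` instances, each time choosing the
    one that covers the most not-yet-covered global patterns."""
    chosen, covered = [], set()
    for _ in range(budget):
        best = max(coverage, key=lambda c: len(coverage[c] - covered))
        if not coverage[best] - covered:
            break                      # nothing new to gain, stop early
        chosen.append(best)
        covered |= coverage[best]
    return chosen, covered
```

With `coverage = {"cup1": {"spike", "dip"}, "cup2": {"spike"}, "cup3": {"rise"}}` and a budget of 2, the greedy pick is cup1 then cup3, covering all three patterns while skipping the redundant cup2.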

4. The "Final Report" (Global Explanation)

Finally, L2GTX writes the summary. Instead of a messy list of 1,000 timestamps, it gives you a clean, readable story:

"To identify Robusta, the robot looks for strong, high peaks in the middle of the taste profile, followed by a steady decline. To identify Arabica, it looks for gentler dips and flatter lines."

Why is this a Big Deal?

  1. It's Universal (Model-Agnostic): It doesn't matter if the robot chef is a "Convolutional Network" or an "LSTM." L2GTX works with any of them. It's like a translator that works for any language.
  2. It's Trustworthy: The paper tested this on medical data (heartbeats) and food data (coffee).
    • Heartbeats: It correctly identified that "Normal" hearts have a specific wave pattern, while "Heart Attack" hearts have a distinct, sharp dip. This matches what real doctors look for!
    • Coffee: It found that Robusta has stronger peaks, which matches what coffee experts know about the beans.
  3. It's Simple: It turns complex math into "shapes" (spikes, dips, rises) that humans can actually visualize and understand.

The Bottom Line

L2GTX takes the robot's scattered, confusing local thoughts and weaves them into a single, coherent story. It tells us not just when the robot is looking, but what it is looking for (the shape of the data).

It's the difference between a robot saying, "I chose this because of the data point at index 452," and a human saying, "I chose this because the heart rate showed a dangerous, sudden drop." That is the power of moving from Local to Global explanations.
