Hierarchical Industrial Demand Forecasting with Temporal and Uncertainty Explanations

This paper introduces a novel interpretability method for large-scale hierarchical probabilistic time-series forecasting. The method addresses the structural and uncertainty challenges of explaining state-of-the-art industrial models, and semi-synthetic evaluations plus real-world case studies show how its explanations strengthen stakeholder trust and decision-making.

Harshavardhan Kamarthi, Shangqing Xu, Xinjie Tong, Xingyu Zhou, James Peters, Joseph Czyzyk, B. Aditya Prakash

Published 2026-03-09
📖 5 min read · 🧠 Deep dive

Imagine you are the captain of a massive, futuristic cargo ship. You have a computer system that predicts exactly how much fuel, food, and spare parts you'll need for the next month. This system is incredibly smart, but it's also a "Black Box." It gives you a number, but it won't tell you why it chose that number.

If the computer says, "You need 500 tons of fuel," you might ask:

  • "Is it because the weather is getting stormy?"
  • "Is it because we're visiting a new port?"
  • "Or is it just guessing?"

In the real world, big companies (like chemical manufacturers) face this exact problem. They use complex AI to predict demand for thousands of products, organized in a giant family tree (a hierarchy). For example: Chemical Company → Region → Factory → Specific Product.

The problem is that the AI is so complex and deals with so much uncertainty (probabilities) that no one can explain its logic. This paper introduces a new tool called HIEREINTERPRET to open that black box.

Here is the paper explained in simple terms, using some creative analogies.


1. The Problem: The "Too Big to Understand" Tree

Imagine the company's data is a giant, multi-story family tree.

  • The Hierarchy Problem: If you want to know why the "Grandparent" node (the whole company) needs more product, you can't just look at every single "Grandchild" node (individual products) at once. That's like trying to hear every conversation in a stadium of 10,000 people all at once. It's too noisy and too slow.
  • The Probability Problem: The AI doesn't just say "We need 100 units." It says, "We probably need 100, but it could be anywhere between 80 and 120." Most explanation tools only work on single, definite numbers, not on these "ranges of possibilities."
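
To make both problems concrete, here is a minimal Python sketch. The hierarchy, the numbers, and the forecaster are all toy stand-ins invented for illustration, not the paper's actual model or data. Each parent series is just the sum of its children, and the "forecast" for any node is a cloud of sampled futures rather than one definite number:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy hierarchy: company -> regions -> products (names are illustrative).
children = {
    "company":  ["region_A", "region_B"],
    "region_A": ["product_1", "product_2"],
    "region_B": ["product_3"],
}

# Made-up weekly demand history for the leaf products.
leaf_history = {p: rng.poisson(lam=100, size=12)
                for p in ("product_1", "product_2", "product_3")}

def aggregate(node):
    """A parent's series is simply the sum of its children's series."""
    if node in leaf_history:
        return leaf_history[node]
    return sum(aggregate(c) for c in children[node])

def toy_forecast(node, n_samples=1000):
    """Stand-in probabilistic forecaster: returns sampled futures, not one number."""
    history = aggregate(node)
    return rng.normal(history.mean(), history.std(), size=n_samples)

samples = toy_forecast("company")
print("point guess:", round(float(samples.mean()), 1))
print("80% interval:", np.quantile(samples, [0.1, 0.9]).round(1))
```

Explaining the "company" node means tracing that sample cloud back through every level below it, which is exactly where the two problems above bite.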

2. The Solution: Two Magic Tricks

The authors created a method to make the AI explain itself using two clever tricks.

Trick #1: The "Subtree" Shortcut (Simplifying the Family Tree)

Instead of trying to connect every single person in the family tree to everyone else (which is chaotic), the new method says: "Let's just look at the immediate family."

  • The Analogy: Imagine you want to know why your Great-Grandfather is happy. Instead of asking every single cousin, aunt, and neighbor in the world, you just ask your Dad, who asks his Dad, who asks his Dad.
  • How it works: The method breaks the giant tree into small, manageable "sub-trees." It calculates the importance of a variable by passing the "importance score" down the chain, step-by-step, from parent to child.
  • The Result: This makes the math much faster and much clearer. It stops the "noise" of the whole stadium and lets you hear the specific conversation that matters.
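
Here is a minimal Python sketch of that idea. The hierarchy, the local scores, and the multiply-down-the-chain rule are illustrative assumptions for exposition, not the paper's exact algorithm; in a real system the per-subtree scores would come from a standard attribution method run on just a parent and its direct children.

```python
# Hypothetical hierarchy; each entry maps a parent to its direct children.
children = {
    "company":  ["region_A", "region_B"],
    "region_A": ["product_1", "product_2"],
    "region_B": ["product_3"],
}

def local_importance(parent, child):
    """Importance of `child` for its `parent` alone (one small subtree).
    Hard-coded here; a real system would compute this with an attribution
    method (e.g. gradients or Shapley values) on the parent + children only."""
    fake_scores = {
        ("company", "region_A"): 0.7, ("company", "region_B"): 0.3,
        ("region_A", "product_1"): 0.9, ("region_A", "product_2"): 0.1,
        ("region_B", "product_3"): 1.0,
    }
    return fake_scores[(parent, child)]

def propagate(node, score=1.0, out=None):
    """Pass the importance score down the chain, parent to child, instead of
    attributing every leaf against the root in one giant computation."""
    if out is None:
        out = {}
    out[node] = score
    for child in children.get(node, []):
        propagate(child, score * local_importance(node, child), out)
    return out

print(propagate("company"))
# product_1 ends up with 1.0 * 0.7 * 0.9 = 0.63 of the root's importance.
```

Each step only ever looks at one small "immediate family," which is why the whole computation stays fast even when the tree has thousands of leaves.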

Trick #2: The "Quantile Translator" (Translating the Uncertainty)

The AI speaks in "probabilities" (a foggy cloud of possibilities), but the explanation tools only understand "definite facts" (clear, solid ground).

  • The Analogy: Imagine the AI is a weather forecaster who says, "There's somewhere between a 70% and a 90% chance of rain." The explanation tool is a person who only understands "Yes, it rained" or "No, it didn't."
  • How it works: The authors invented a translator. They take the "foggy" probability cloud and slice it into specific layers (like 70%, 90%, and 95% confidence levels). They treat these slices as if they were definite facts.
  • The Result: Now, the explanation tool can look at the "90% slice" and say, "Ah, I see! The reason the AI is worried about rain is because of the wind speed." It turns the fog into a clear picture.
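
A minimal sketch of that slicing step in Python, assuming a forecaster that returns samples. The toy model, the feature names, and the crude nudge-and-compare scoring are all illustrative assumptions; the paper pairs the quantile slices with standard point-forecast explainers.

```python
import numpy as np

def probabilistic_forecast(features, seed=0):
    """Stand-in probabilistic model: returns 5,000 sampled futures.
    The same seed is reused so nudged and un-nudged runs share the same noise."""
    rng = np.random.default_rng(seed)
    base   = 100.0 + 2.0 * features["wind_speed"] + 0.5 * features["price"]
    spread =   5.0 + 1.5 * features["wind_speed"]   # wind widens the uncertainty
    return rng.normal(base, spread, size=5000)

def quantile_attributions(features, levels=(0.7, 0.9, 0.95), eps=1.0):
    """Slice the 'foggy' distribution at fixed quantile levels, then explain each
    slice as if it were a definite number (here: nudge a feature, see how the
    slice moves)."""
    out = {}
    for q in levels:
        base_q = np.quantile(probabilistic_forecast(features), q)
        out[q] = {}
        for name in features:
            nudged = dict(features, **{name: features[name] + eps})
            out[q][name] = np.quantile(probabilistic_forecast(nudged), q) - base_q
    return out

scores = quantile_attributions({"wind_speed": 10.0, "price": 40.0})
for q, s in scores.items():
    print(f"{int(q * 100)}% slice:", {k: round(float(v), 2) for k, v in s.items()})
```

Notice that `wind_speed` moves the 95% slice more than the 70% slice: the high slices are where the model's "worry" lives, and that is exactly what the translator makes visible.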

3. The Proof: Did it Work?

The authors tested this on a massive dataset from The Dow Chemical Company (which tracks over 10,000 products) and other public datasets.

  • The Test: They created "fake" scenarios where they knew exactly what should be important (like a fake storm or a fake price hike).
  • The Result: The new method was significantly better at finding the right answers than old methods.
    • For definite predictions, it was 62% more accurate.
    • For uncertain (probabilistic) predictions, it was 26% more accurate.
    • It was also much faster, cutting down the time needed to explain the AI from over 100 minutes to just 2 minutes in some cases.
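
The evaluation idea can be sketched in a few lines of Python: plant a driver you control into otherwise ordinary-looking data, then check whether the explainer ranks it first. Everything below (the series, the fake price hike, the correlation-based stand-in explainer) is invented for illustration and is not the paper's benchmark or metric.

```python
import numpy as np

rng = np.random.default_rng(2)
weeks = 104

# Realistic-looking demand plus a planted effect we control (a fake "price hike").
demand = 100 + 10 * np.sin(np.arange(weeks) / 8) + rng.normal(0, 3, weeks)
planted_driver = np.zeros(weeks)
planted_driver[60:] = 25.0
demand_with_effect = demand + planted_driver

features = {
    "planted_price_hike": planted_driver,         # ground truth: this one matters
    "unrelated_weather":  rng.normal(0, 1, weeks),
    "unrelated_index":    rng.normal(0, 1, weeks),
}

def toy_explainer(target, feats):
    """Stand-in explainer: score each feature by |correlation| with the target."""
    return {name: abs(np.corrcoef(series, target)[0, 1])
            for name, series in feats.items()}

scores = toy_explainer(demand_with_effect, features)
print(scores)
print("top-ranked driver is the planted one:",
      max(scores, key=scores.get) == "planted_price_hike")
```

Because the ground truth is planted by hand, the accuracy numbers above can be computed objectively: an explanation is "right" exactly when it points at the driver that was injected.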

4. Real-World Stories (Case Studies)

The paper shows three cool examples of how this helps real people:

  1. The Pandemic Shift: The AI noticed a sudden jump in demand for home-furnishing items. The explanation tool showed the AI was looking at the "upward trend" starting in late 2019. It confirmed the AI knew the pandemic was changing people's habits.
  2. The Economic Indicator: The AI predicted a drop in packaging demand. The tool revealed it was reacting to a specific economic number (the Consumer Price Index) dropping. This helped business leaders understand why sales might dip.
  3. The Lost Customer: A major customer stopped buying from the company. The AI became very "uncertain" about future sales. The explanation tool showed the AI was confused because it was trying to balance old data (with big peaks) and new data (flat lines). This told the business, "Don't trust the prediction yet; the situation is too unstable."

The Big Takeaway

This paper gives us a flashlight for the dark room of industrial AI.

Before, companies had to trust the AI blindly. Now, with HIEREINTERPRET, they can ask, "Why did you make that prediction?" and get a clear, logical answer. This builds trust, helps managers make better decisions, and ensures that when the AI says "Order more fuel," they know exactly why.