Heterogeneous Time Constants Improve Stability in Equilibrium Propagation

This paper introduces heterogeneous time steps (HTS) to Equilibrium Propagation, demonstrating that assigning neuron-specific time constants drawn from biologically motivated distributions improves training stability and robustness while maintaining competitive performance.

Yoshimasa Kubo, Suhani Pragnesh Modi, Smit Patel

Published 2026-03-05

Imagine you are trying to teach a massive team of workers (a neural network) how to sort a pile of mixed-up mail into the correct bins. The traditional way to do this, called Backpropagation, is like a strict manager shouting instructions from the top down, telling every single worker exactly how to move. It works incredibly well, but in the real world, our brains don't work like that. Our brains are messy, decentralized, and every neuron (brain cell) has its own unique personality and speed.

Enter Equilibrium Propagation (EP). This is a newer, more "biologically friendly" way to train AI. Instead of a manager shouting orders, the team works together to find a state of balance (equilibrium) where the mail is sorted correctly. They nudge each other gently until the whole system settles into the right answer.
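
The settling-and-nudging idea can be sketched in a few lines of code. This is a hypothetical toy (the function name `settle`, the layer sizes, the leaky update rule, and the nudging strength `beta` are all illustrative assumptions), not the paper's implementation:

```python
import numpy as np

# Minimal sketch of Equilibrium Propagation's two phases (illustrative toy,
# not the paper's code). A hidden state relaxes toward a fixed point; in the
# "nudged" phase the output is gently pulled toward the target.

rng = np.random.default_rng(0)

def settle(x, W_in, W_out, target=None, beta=0.0, dt=0.5, steps=100):
    """Relax hidden state h toward equilibrium; nudge output if target given."""
    h = np.zeros(W_in.shape[0])
    for _ in range(steps):
        y = W_out @ np.tanh(h)
        feedback = beta * (W_out.T @ (target - y)) if target is not None else 0.0
        h += dt * (-h + np.tanh(W_in @ x) + feedback)  # leaky step toward balance
    return h, W_out @ np.tanh(h)

x = rng.normal(size=4)
W_in = 0.5 * rng.normal(size=(8, 4))
W_out = 0.5 * rng.normal(size=(3, 8))

h_free, y_free = settle(x, W_in, W_out)                              # free phase
h_nudged, _ = settle(x, W_in, W_out, target=np.eye(3)[0], beta=0.2)  # nudged phase
# EP's learning signal comes from comparing the two equilibria, so no
# "manager" has to broadcast gradients from the top down.
```

The key design point is that learning is local: each connection only needs to compare activity at the free and nudged equilibria, rather than receive an explicit error signal from above.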

However, there was a problem with the original EP models. They treated every single worker as if they moved at the exact same speed. Imagine a relay race where every runner, from the sprinter to the marathoner, is forced to take steps of the exact same length and speed. In reality, our brains are full of variety; some neurons react instantly, while others take their time to process information.

The Big Idea: Giving Everyone Their Own Rhythm

This paper introduces a simple but powerful fix: Heterogeneous Time Steps (HTS).

Think of it like this:

  • The Old Way (Scalar Time Step): The coach tells the whole team, "Take a step every 1 second." Everyone moves in lockstep.
  • The New Way (Heterogeneous Time Steps): The coach gives each worker a custom watch. Some workers are told to move every 0.2 seconds (fast twitch), while others move every 0.4 seconds (slow and steady). These speeds are chosen based on how real brain cells actually behave.
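
In code, the "custom watch" amounts to replacing one scalar step size with a per-neuron vector drawn from a distribution. The specific means, spreads, and clipping bounds below are illustrative assumptions, not the paper's exact settings:

```python
import numpy as np

# Sketch: per-neuron time steps instead of a single shared scalar.
# Distribution parameters and clip bounds are illustrative assumptions.

rng = np.random.default_rng(42)
n = 256  # neurons in a layer

dt_scalar = np.full(n, 0.3)                                  # old way: lockstep
dt_bell = np.clip(rng.normal(0.3, 0.05, size=n), 0.05, 0.6)  # bell curve
dt_skewed = np.clip(rng.lognormal(np.log(0.3), 0.3, size=n), 0.05, 0.6)  # skewed

def step(h, drive, dt):
    """One state update; `dt` may be a scalar or a per-neuron vector,
    so each neuron can move at its own speed."""
    return h + dt * (-h + np.tanh(drive))

h = step(np.zeros(n), rng.normal(size=n), dt_skewed)
```

Because NumPy broadcasts the element-wise multiply, the same `step` function handles both the uniform and the heterogeneous case; the only change to the model is which `dt` array you pass in.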

How They Tested It

The researchers set up a digital training ground with three different "mail sorting" challenges (datasets named MNIST, KMNIST, and Fashion-MNIST). They built two types of teams:

  1. The Uniform Team: Everyone moved at the same speed.
  2. The Diverse Team: Everyone had their own unique speed, drawn from a distribution that mimics real biology (like a bell curve or a skewed curve).

They ran these teams through thousands of training sessions to see who sorted the mail better and who stayed more stable.

What They Found

The results were fascinating:

  1. Stability is Key: The "Diverse Team" was much less likely to trip over its own feet. In the world of AI training, systems can sometimes get chaotic and fail to learn. By giving neurons different speeds, the system became more robust, like a group of dancers where everyone has a slightly different rhythm, preventing the whole group from stumbling in unison.
  2. Performance Stayed High: The diverse team didn't just stay stable; it actually performed slightly better on the harder tasks (sorting the more complex KMNIST and Fashion-MNIST data) than the uniform team.
  3. It's More Realistic: Most importantly, this makes the AI model look more like a real human brain. Since real brains have neurons with different "time constants" (speeds), this new method bridges the gap between artificial intelligence and biological reality.
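
To make the "stumbling in unison" intuition concrete, here is a toy two-neuron linear system (purely illustrative, not from the paper). Stability of the discrete update h ← h + dt·(−h + Wh) is governed by the spectral radius of the update matrix A = I + D(W − I), where D holds the time steps on its diagonal; a smaller radius means disturbances grow more slowly:

```python
import numpy as np

# Toy illustration (not from the paper): two mutually inhibiting neurons
# are prone to see-saw oscillation. Giving them different time steps with
# the same average shrinks the spectral radius of the linear update.

W = np.array([[0.0, -1.1],
              [-1.1, 0.0]])  # mutual inhibition

def spectral_radius(dts):
    D = np.diag(dts)
    A = np.eye(2) + D @ (W - np.eye(2))  # update matrix of h <- h + D(-h + Wh)
    return np.max(np.abs(np.linalg.eigvals(A)))

r_uniform = spectral_radius([1.0, 1.0])  # both neurons share dt = 1.0
r_diverse = spectral_radius([0.6, 1.4])  # same average speed, different rhythms
# Here r_diverse < r_uniform: desynchronized steps damp the oscillation.
```

This is only a two-neuron caricature, but it captures the mechanism: when every neuron moves in lockstep, oscillatory modes reinforce each other, while mismatched rhythms break that reinforcement.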

The Takeaway

Think of this research as realizing that variety creates strength. Just as a sports team needs a mix of fast sprinters and steady strategists to win, an AI network learns better when its internal components operate at different speeds.

By letting the "neurons" in the AI have their own unique clocks, the researchers made the learning process smoother, more stable, and much more like how nature actually works. It's a small tweak in the math, but it brings us one step closer to building AI that thinks more like us.
