Robust Spiking Neural Networks Against Adversarial Attacks

This paper proposes Threshold Guarding Optimization (TGO), a method that improves the adversarial robustness of directly trained Spiking Neural Networks. TGO constrains membrane potentials away from the firing threshold during training and introduces noisy spiking neurons to mitigate state-flipping vulnerabilities.

Shuai Wang, Malu Zhang, Yulin Jiang, Dehao Zhang, Ammar Belatreche, Yu Liang, Yimeng Shan, Zijian Zhou, Yang Yang, Haizhou Li

Published 2026-02-25

The Big Picture: The "Neuromorphic" City

Imagine a futuristic city called Neuromorphia. Unlike our current cities (which run on standard computers), this city runs on Spiking Neural Networks (SNNs).

  • The Old Way (Standard Computers): Think of a standard computer like a busy highway where cars (data) are constantly flowing, burning fuel (energy) even when there's no traffic.
  • The New Way (SNNs): Neuromorphia is like a smart, event-driven city. The "neurons" (citizens) only speak up (fire a "spike") when something important happens. If nothing is happening, they stay silent. This makes the city incredibly energy-efficient, perfect for battery-powered devices like smartwatches or drones.

The Problem: The "Ticklish" Neighbors

The researchers discovered a major weakness in this city. While the citizens are efficient, they are too sensitive.

Imagine a specific group of citizens standing right on the edge of a cliff (the Threshold).

  • The Threshold: This is the line a citizen must cross to shout "I see something!" (fire a spike).
  • The Vulnerability: Most citizens are far from the cliff. But some are standing right on the edge.
  • The Attack: A hacker (an Adversarial Attack) doesn't need to push the whole city. They just need to blow a tiny, almost invisible gust of wind (a tiny bit of noise) at those people standing on the edge.
  • The Result: Because they are so close to the edge, that tiny wind knocks them over. They flip from "Silent" to "Shouting" (or vice versa). This tiny flip confuses the whole city's decision-making, causing a self-driving car to think a stop sign is a speed limit sign.

The paper proves that these "edge-dwellers" are the weak link. They set the limit on how strong an attack the city can withstand.
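The "edge-dweller" vulnerability can be sketched in a few lines of Python, assuming the simplest possible binary firing rule (a toy stand-in for the paper's actual spiking neuron model; the threshold value is illustrative):

```python
# Toy sketch of the near-threshold vulnerability (threshold value assumed).
THRESHOLD = 1.0

def spike(membrane_potential: float) -> int:
    """Hard binary firing rule: spike iff the potential reaches threshold."""
    return 1 if membrane_potential >= THRESHOLD else 0

# A neuron far from the "cliff" shrugs off a tiny perturbation...
safe = 0.5
print(spike(safe), spike(safe + 0.01))   # same output before and after

# ...but a neuron standing right at the edge flips its entire output.
edge = 0.999
print(spike(edge), spike(edge + 0.01))   # 0 then 1: the state flip
```

The same perturbation (0.01) is harmless in one case and catastrophic in the other; only the distance to the threshold differs.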

The Solution: The "Threshold Guarding" Method (TGO)

To fix this, the researchers proposed a two-part security system called Threshold Guarding Optimization (TGO). Think of it as a city planner redesigning the neighborhood to make it harder for hackers to cause chaos.

1. Moving the Citizens Away from the Cliff (Membrane Potential Constraints)

  • The Analogy: Imagine the city planner realizes people standing on the cliff are dangerous. So, they build a fence and a wide safety zone. They force the citizens to stand far away from the edge.
  • How it works: The researchers add a rule to the training process: "If your energy level (membrane potential) gets too close to the shouting threshold, you get a penalty."
  • The Result: The citizens naturally settle in the middle of the room, far from the cliff. Now, a tiny gust of wind won't knock them over. They need a huge push to change their state. This makes the city much harder to trick.
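In training-loss terms, the "fence" can be sketched as a hinge-style penalty on any membrane potential that falls inside a safety margin around the threshold. The function name, margin width, and penalty shape below are illustrative assumptions, not the paper's exact formulation:

```python
# Hypothetical sketch of a threshold-distance penalty (names/values assumed).
THRESHOLD = 1.0
MARGIN = 0.2  # assumed half-width of the "safety zone" around the threshold

def threshold_guard_penalty(potentials):
    """Sum of hinge penalties for potentials within MARGIN of the threshold."""
    penalty = 0.0
    for v in potentials:
        distance = abs(v - THRESHOLD)
        if distance < MARGIN:
            penalty += MARGIN - distance  # closer to the edge => larger penalty
    return penalty

# Potentials near the threshold are penalized; distant ones cost nothing.
print(threshold_guard_penalty([0.3, 0.95, 1.05, 1.8]))
```

Added to the ordinary training loss, a term like this pushes potentials out of the danger zone, exactly the "wide safety zone" of the analogy.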

2. Adding a "Fog of War" (Noisy Spiking Neurons)

  • The Analogy: Even with the safety zone, what if a hacker finds someone who still got too close? The second part of the plan is to introduce a little bit of fog (random noise) into the air.
  • How it works: In a normal city, if you are just past the line, you shout. If you are just before it, you stay silent. It's a strict, binary switch. The researchers changed the rules: "Now, there's a little fog. Even if you are slightly past the line, you might not shout immediately. It's a bit random."
  • The Result: This turns a rigid, brittle switch into a smooth, fuzzy one. A tiny push from a hacker might make a citizen slightly more likely to shout, but it won't guarantee a flip. The "fog" absorbs the shock, preventing the tiny wind from causing a catastrophic state change.
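The "fog" can be sketched as a stochastic firing rule: instead of a hard step at the threshold, the neuron fires with a probability that rises smoothly through it. This sigmoid form and the temperature value are assumptions for illustration, not the paper's exact noise model:

```python
# Hypothetical sketch of a noisy spiking neuron (smooth, stochastic firing).
import math
import random

THRESHOLD = 1.0
TEMPERATURE = 0.1  # assumed noise scale: smaller => closer to a hard switch

def firing_probability(potential: float) -> float:
    """Sigmoid of the distance to threshold: a smooth, not binary, rule."""
    return 1.0 / (1.0 + math.exp(-(potential - THRESHOLD) / TEMPERATURE))

def noisy_spike(potential: float, rng: random.Random) -> int:
    """Fire stochastically according to the smooth firing probability."""
    return 1 if rng.random() < firing_probability(potential) else 0

# A tiny adversarial nudge now shifts the firing *probability* slightly
# instead of guaranteeing a hard 0 -> 1 flip.
print(round(firing_probability(0.999), 3))
print(round(firing_probability(0.999 + 0.01), 3))
```

Note how the same 0.01 nudge that flipped the hard neuron only moves this one's firing probability by a few percent.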

The Results: A Stronger City

The researchers tested this new security system in a virtual lab on standard image datasets such as CIFAR-10 (think of a photo album of cats, dogs, and cars).

  • Before TGO: When hackers used powerful attacks (like PGD or FGSM), the standard SNNs collapsed. Their accuracy dropped to near zero. They were easily tricked.
  • After TGO: The new "Guarded" SNNs held their ground. Even under heavy attacks, they kept recognizing the images correctly.
    • In some tests, they improved their defense by 10% to 20% compared to the best existing methods.
    • Crucially, this didn't make the city slower or more expensive to run. It's a "free" upgrade in security.
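For context, FGSM (one of the attacks mentioned above) is itself only a few lines: nudge every input feature by a tiny epsilon in whichever direction increases the model's loss. The sketch below uses made-up gradient values for illustration; a real attack computes them by backpropagation through the model:

```python
# Toy sketch of FGSM; the gradients here are made up for illustration.
def fgsm_perturb(inputs, gradients, epsilon=0.03):
    """Shift each input by +/- epsilon, following the sign of its gradient."""
    sign = lambda g: (g > 0) - (g < 0)
    return [x + epsilon * sign(g) for x, g in zip(inputs, gradients)]

pixels = [0.2, 0.8, 0.5]
loss_grads = [1.7, -0.4, 0.0]  # assumed dLoss/dPixel values
print(fgsm_perturb(pixels, loss_grads))  # each pixel moves by at most epsilon
```

Each feature changes by at most epsilon, which is why the perturbation can be invisible to a human while still flipping near-threshold neurons.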

Why This Matters

This paper is a big deal because it explains why these efficient, bio-inspired computers are currently fragile and gives a practical recipe to make them robust.

  • Real World Impact: As we start putting these efficient chips into real-world devices (like medical implants, autonomous drones, or smart home cameras), they need to be secure. We don't want a hacker to trick a pacemaker or a drone with a tiny, invisible signal.
  • The Takeaway: By moving the "citizens" away from the edge and adding a little "fog" to the system, we can build neuromorphic computers that are not only energy-efficient but also tough enough to survive in a hostile digital world.
