Freezing of Gait Prediction using Proactive Agent that Learns from Selected Experience and DDQN Algorithm

This study proposes a reinforcement learning framework utilizing a Double Deep Q-Network with Prioritized Experience Replay to predict Freezing of Gait in Parkinson's Disease patients up to 8.72 seconds in advance, enabling timely proactive interventions through wearable assistive devices.

Septian Enggar Sukmana, Sang Won Bae, Tomohiro Shibata

Published 2026-03-05

The Problem: The "Sudden Stop"

Imagine you are walking down a busy street, and suddenly, your feet feel like they are glued to the pavement. You want to move, but your brain and legs just won't cooperate. This is called Freezing of Gait (FOG), and it's a terrifying and dangerous symptom for people with Parkinson's disease. It often leads to falls.

The goal of this research is to build a "crystal ball" that can tell a person, "Hey, you're about to freeze in about 8 seconds. Start moving your feet differently right now!"

The Old Way vs. The New Way

The Old Way (The Fixed Window):
Imagine a security guard who checks the door every 5 seconds exactly. If someone looks suspicious at 4 seconds, the guard misses it. If they look suspicious at 6 seconds, the guard misses it too. Most previous computer models worked like this: they looked at a fixed chunk of time (like a 5-second window) and decided, "Is this a freeze?" They were rigid and often missed the early warning signs.

The New Way (The Proactive Agent):
The researchers built a smart, learning computer program (an "Agent") that acts more like a skilled chess player or a weather forecaster. Instead of checking the clock at fixed intervals, this agent constantly watches the walker's steps and asks itself: "Is it time to sound the alarm yet? Or should I wait a few more seconds to be sure?"

How the Agent Learns (The Video Game Analogy)

The team taught this agent using a method called Reinforcement Learning. Think of it like training a dog or playing a video game:

  1. The Game: The agent watches data from a walker's sensors (like a smartwatch).
  2. The Goal: It wants to press a "Prediction Button" (place a flag) just before the person freezes.
  3. The Rewards & Penalties:
    • Gold Star (+150 points): It presses the button at the perfect time (e.g., 8 seconds before the freeze).
    • Time-Out (-40 points): It presses the button too early (false alarm).
    • Game Over (-60 points): It waits too long and the person has already frozen.
    • Wait (+0.1 points): It's okay to wait and gather more info, but not forever.
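The reward scheme above can be written down as a single scoring function. The point values come straight from the article; the 5-to-10-second "on time" lead window and the function's exact shape are illustrative assumptions, not values from the paper.

```python
def reward(action, time_to_freeze, lead_window=(5.0, 10.0)):
    """Score one agent decision using the scheme described above.

    action: "predict" (press the button) or "wait"
    time_to_freeze: seconds until the next freeze onset
                    (zero or negative means the freeze already started)
    lead_window: hypothetical range of lead times counted as "on time"
    """
    lo, hi = lead_window
    if action == "wait":
        if time_to_freeze <= 0:
            return -60.0   # game over: we waited and the person froze
        return 0.1         # small nudge: waiting is OK, but not forever
    # action == "predict"
    if lo <= time_to_freeze <= hi:
        return 150.0       # gold star: flagged in the useful lead window
    if time_to_freeze > hi:
        return -40.0       # too early: false alarm
    return -60.0           # too late: freeze already underway

print(reward("predict", 8.0))   # 150.0
print(reward("predict", 20.0))  # -40.0
print(reward("wait", -1.0))     # -60.0
```

Note the asymmetry: false alarms cost less (-40) than missed freezes (-60), which biases the agent toward warning a little too often rather than too late.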

The Secret Sauce: "The Highlight Reel"

The agent didn't just learn from every single step it took. It used a special trick called Prioritized Experience Replay (PER).

Imagine you are studying for a big test. You don't just read the whole textbook once. You focus intensely on the questions you got wrong or the ones that were really hard.

  • DDQN (The Brain): Short for Double Deep Q-Network, this is the part of the AI that makes decisions. It uses two networks as a double-check system: one picks the next action, the other scores it, so the AI doesn't become overconfident in its own estimates.
  • PER (The Highlight Reel): When the agent makes a mistake or has a "surprising" moment (like a sudden gait change), it saves that moment in a special "Highlight Reel." It then studies these difficult moments over and over again to learn faster. This helps it handle the weird, unpredictable ways different people walk.
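In code, these two ideas are small: the Double DQN target lets the online network choose the next action while the target network evaluates it, and PER assigns each experience a priority proportional to its "surprise" (the TD error). A minimal NumPy sketch, where the toy Q-values and the hyperparameters `gamma`, `alpha`, and `eps` are illustrative assumptions:

```python
import numpy as np

def ddqn_target(reward, next_q_online, next_q_target, done, gamma=0.99):
    """Double DQN: the online net *chooses* the next action, the slowly
    updated target net *evaluates* it, which curbs the overestimation
    ("overconfidence") that plain DQN suffers from."""
    best_a = int(np.argmax(next_q_online))          # choose with online net
    bootstrap = 0.0 if done else gamma * next_q_target[best_a]
    return reward + bootstrap

def per_priority(td_error, alpha=0.6, eps=1e-3):
    """PER: a bigger TD error means a bigger surprise, which means a
    higher chance of being replayed (the "highlight reel")."""
    return (abs(td_error) + eps) ** alpha

# Toy usage with made-up Q-values for a 3-action state.
q_online = np.array([1.0, 3.0, 2.0])   # online net picks action 1...
q_target = np.array([0.5, 2.0, 4.0])   # ...but target net scores action 1
y = ddqn_target(reward=0.1, next_q_online=q_online,
                next_q_target=q_target, done=False)
print(round(y, 3))  # 0.1 + 0.99 * 2.0 = 2.08
print(per_priority(td_error=y - 1.5) > per_priority(td_error=0.1))
```

Surprising transitions (large `td_error`) get replayed more often, which is exactly the "study the hard questions" behavior described above.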

The Results: Giving People a Head Start

The researchers tested this on real data from Parkinson's patients. Here is what they found:

  • The Head Start: The agent could predict a freeze up to 8.72 seconds before it happened in general, cross-patient tests, and up to 7.89 seconds when tested on data from specific individuals.
  • Why 8 seconds matters: In the world of Parkinson's, 8 seconds is an eternity. It gives the patient enough time to hear a cue (like a beep or a visual line on the floor), process it, and adjust their walking style to avoid the freeze entirely.
  • Flexibility: Unlike the old "fixed window" models, this agent adapts. If a person's walking pattern is weird one day, the agent adjusts its strategy.

The Catch

It's not perfect yet.

  • False Alarms: Sometimes, if a person has a lot of "freezing" episodes close together, the agent gets confused and sounds the alarm too early (like a smoke detector going off when you're just cooking toast).
  • Data Gaps: The agent learned best on the data it saw most often. Some rare walking patterns were missed because the training data wasn't perfectly balanced.

The Bottom Line

This paper presents a smart, adaptive assistant for Parkinson's patients. Instead of a rigid computer program that checks the clock, it's a learning agent that watches, waits, and learns from its mistakes to give the user the maximum amount of time to react.

If you can get a warning 8 seconds before you trip, you can usually catch yourself. This technology aims to turn those terrifying "glued feet" moments into manageable, preventable events, potentially saving lives and preventing falls.
