ELLMob: Event-Driven Human Mobility Generation with Self-Aligned LLM Framework

This paper introduces ELLMob, a self-aligned Large Language Model framework that leverages Fuzzy-Trace Theory to reconcile habitual patterns with event constraints, addressing the lack of event-annotated datasets and significantly improving the generation of human mobility trajectories during major societal events like typhoons, pandemics, and the Olympics.

Yusong Wang, Chuang Yang, Jiawei Wang, Xiaohang Xu, Jiayi Xu, Dongyuan Li, Chuan Xiao, Renhe Jiang

Published 2026-03-10

Imagine you are trying to predict how people will move around a city. Usually, this is easy: people go to work, come home, grab coffee, and visit the park. It's like a well-rehearsed dance.

But what happens when a typhoon hits, a pandemic lockdown begins, or the Olympics arrive? The dance floor changes. The music stops, the rules change, and people have to make split-second decisions: "Do I stick to my usual routine, or do I run for safety?"

This paper, ELLMob, introduces a new way for Artificial Intelligence (AI) to predict these chaotic movements. Here is the simple breakdown:

1. The Problem: The AI is Too "Stubborn" or Too "Panicked"

Current AI models are great at predicting normal days. But when a big event happens (like a storm), they tend to fail in two ways:

  • The Robot: It ignores the storm and tells you people are still going to the beach. (Too stubborn.)
  • The Alarmist: It assumes everyone is running for the hills and tells you no one is going to work at all. (Too panicked.)

Real life is a mix. People might skip the beach but still go to the grocery store. They balance their habits (what they usually do) with constraints (what the event forces them to do). Existing AI struggles to find this balance.

2. The Solution: A "Smart Mediator"

The authors built a new system called ELLMob. Think of it not just as a calculator, but as a wise mediator or a therapist for the AI.

Instead of just asking the AI to "guess the next move," ELLMob forces the AI to have a conversation with itself using a concept from psychology called Fuzzy-Trace Theory.

Here is how the "conversation" works, using a courtroom metaphor with a Lawyer, a Prosecutor, and a Judge:

  • The Lawyer (Pattern Gist): Represents the user's habits. "My client always goes to the gym at 6 PM. That's their routine!"
  • The Prosecutor (Event Gist): Represents the event's rules. "But there is a typhoon! The roads are flooded, and the gym is closed. You can't go!"
  • The Judge (Action Gist): The AI's initial guess. "Okay, I'll send them to the gym anyway."

The Magic Step (Reflection):
The system acts as a Critical Auditor. It looks at the Judge's guess and asks:

  • "Does this respect the Prosecutor's rules?" (No, the gym is closed and the roads are flooded.)
  • "Does it actually serve the Lawyer's client?" (No, an unreachable gym does nothing for the routine.)

If the answer is "No," the Judge has to rewrite the plan.

  • New Plan: "Okay, the client can't go to the gym, but they still need exercise. Let's send them to a nearby park that is safe, or have them do yoga at home."

The AI keeps doing this loop—Draft, Audit, Rewrite—until the plan satisfies both the habit and the emergency.
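The Draft–Audit–Rewrite loop above can be sketched in code. This is a minimal illustration, not the paper's implementation: the three functions stand in for LLM prompts, and here they are replaced by toy rule-based logic so the loop is runnable; all names (`Gists`, `self_align`, etc.) are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Gists:
    pattern: str  # habitual routine (the "Lawyer")
    event: str    # event constraints (the "Prosecutor")

def draft_plan(gists: Gists) -> str:
    """Initial guess (the "Judge"): naively follow the habit."""
    return "go to the gym at 6 PM"

def audit(plan: str, gists: Gists) -> list[str]:
    """Critical auditor: flag conflicts between the plan and the event gist."""
    issues = []
    if "gym" in plan and "gym is closed" in gists.event:
        issues.append("destination closed by the event")
    return issues

def rewrite(plan: str, issues: list[str], gists: Gists) -> str:
    """Revise the plan to satisfy the constraint while keeping the habit's intent."""
    if "destination closed by the event" in issues:
        return "do a home workout at 6 PM"  # preserves the exercise habit safely
    return plan

def self_align(gists: Gists, max_rounds: int = 3) -> str:
    """Loop: draft, audit, rewrite, until the plan passes or rounds run out."""
    plan = draft_plan(gists)
    for _ in range(max_rounds):
        issues = audit(plan, gists)
        if not issues:  # plan satisfies both habit and event constraints
            return plan
        plan = rewrite(plan, issues, gists)
    return plan

gists = Gists(
    pattern="goes to the gym every day at 6 PM",
    event="typhoon: roads flooded, gym is closed",
)
print(self_align(gists))  # prints "do a home workout at 6 PM"
```

In the actual framework, each of these steps would be an LLM call over the pattern, event, and action gists; the sketch only shows the control flow that keeps iterating until both sides are satisfied.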

3. The New Data: A "Time Machine" for Tokyo

To teach this AI, the researchers couldn't just use old data. They needed to see how people actually moved during disasters.

  • They built the first event-annotated mobility dataset of its kind, tracking people in Tokyo during three massive events: Typhoon Hagibis, the COVID-19 pandemic, and the Tokyo Olympics.
  • Think of this as a "Time Machine" that lets them replay those specific weeks to see exactly how people changed their behavior.

4. The Results: The AI Learned to "Think"

When they tested ELLMob against other AI models:

  • Other Models: Either kept people stuck in their old routines (ignoring the storm) or made them disappear entirely (ignoring their need for food/work).
  • ELLMob: Successfully predicted that people would adapt. For example, during the pandemic, it correctly predicted that people would stop going to bars (Event constraint) but would still go to grocery stores (Habit/Need), whereas other models got confused.

The Bottom Line

ELLMob is like teaching an AI to drive in a storm.

  • Old AI: Either drives straight into the wall because it ignores the rain, or stops the car completely because it's scared.
  • ELLMob: Slows down, turns on the wipers, and carefully navigates to the destination, respecting both the road rules (the storm) and the driver's destination (the habit).

This is a huge step forward for city planners and emergency responders, helping them understand how humans will actually behave when the unexpected happens.