Brain-OF: An Omnifunctional Foundation Model for fMRI, EEG and MEG

Brain-OF is the first omnifunctional foundation model to unify fMRI, EEG, and MEG data. Novel components such as the Any-Resolution Neural Signal Sampler and Dual-Domain Masked Modeling let it overcome the limitations of any single modality and achieve superior performance across diverse neuroscience tasks.

Hanning Guo, Farah Abdellatif, Hanwen Bi, Andrei Galbenus, Jon N. Shah, Abigail Morrison, Jürgen Dammers

Published 2026-03-03

This is an AI-generated explanation of a preprint that has not been peer-reviewed. It is not medical advice. Do not make health decisions based on this content.

Imagine your brain is a massive, bustling city. To understand how this city works, scientists use different types of "cameras" to take pictures and record sounds. But here's the problem: each camera sees the city differently.

  • fMRI is like a high-resolution satellite photo. It shows you exactly where things are happening in the city (great detail on location), but the photo is blurry in time—it's like taking a picture every few seconds, so you miss the fast-moving cars.
  • EEG and MEG are like super-fast security cameras on the street corners. They capture every split-second movement and sound (great speed), but they struggle to tell you exactly which building the sound came from (blurry location).

For a long time, scientists built AI models that could only look at one type of camera at a time. If you wanted to understand the city's traffic, you had to hire a specialist for the satellite photos and a different specialist for the street cameras. They couldn't talk to each other, and they often missed the big picture.

Enter Brain-OF.

What is Brain-OF?

Think of Brain-OF as a super-intelligent "City Manager" who has been trained to watch all the cameras at once. It is the first AI model designed to understand fMRI, EEG, and MEG data simultaneously. Instead of hiring separate experts, Brain-OF is a single "omnifunctional" brain that learns from all three sources together.

How Does It Work? (The Secret Sauce)

The researchers had to solve three big puzzles to make this City Manager work:

1. The "Language Barrier" (Any-Resolution Neural Signal Sampler)
The satellite photos and street cameras speak different "languages" and have different speeds. Brain-OF uses a special translator called ARNESS.

  • Analogy: Imagine ARNESS is a universal translator booth. No matter if you feed it a slow, detailed satellite image or a rapid-fire street video, ARNESS converts them all into the same "secret code" (a shared semantic space). Now, the AI can read them all without getting confused by the differences in speed or detail.
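The core idea behind the "universal translator" can be sketched in a few lines: whatever the length or sampling rate of the incoming signal, it is resampled to a fixed number of tokens and projected into one shared embedding space. This is only an illustrative sketch, not the paper's actual ARNESS module (which is learned); the function name, token count, and random projection here are assumptions for demonstration.

```python
import numpy as np

def to_shared_tokens(signal, n_tokens=32, d_model=16):
    """Illustrative sketch: map a 1-D neural signal of ANY length/sampling
    rate onto a fixed grid of tokens in a shared embedding space.
    (The real ARNESS is a learned module; this projection is random.)"""
    rng = np.random.default_rng(0)
    # Resample onto a common temporal grid, regardless of input rate.
    grid = np.linspace(0, len(signal) - 1, n_tokens)
    resampled = np.interp(grid, np.arange(len(signal)), signal)
    # A (here random, in practice learned) projection into the shared space.
    W = rng.standard_normal((1, d_model))
    return resampled[:, None] @ W  # shape: (n_tokens, d_model)

fmri = np.sin(np.linspace(0, 3, 150))    # slow, short recording
eeg = np.sin(np.linspace(0, 40, 5000))   # fast, long recording
tok_f = to_shared_tokens(fmri)
tok_e = to_shared_tokens(eeg)
```

Both recordings come out as the same-shaped token grid, which is what lets one model read all three modalities side by side.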

2. The "Crowded Room" Problem (Sparse Mixture of Experts)
When you put all this data into one brain, it can get chaotic. The AI might get distracted by the noise.

  • Analogy: Imagine Brain-OF is a giant meeting room with many specialized consultants (Experts).
    • Some consultants are Generalists who know the basics of all cities (Modality-Invariant).
    • Other consultants are Specialists who only know about satellite photos or only about street cameras.
    • Brain-OF uses a smart Traffic Cop (DINT Attention) to decide which consultant to listen to for each specific question. If the question is about location, it calls the satellite expert. If it's about speed, it calls the street expert. This prevents the AI from getting overwhelmed and ensures it listens to the right voice at the right time.
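The "consultants and traffic cop" picture above is a sparse mixture of experts: a gating network scores every expert for each token, but only the top-scoring one(s) actually run. The sketch below shows that routing pattern in miniature, with random weights; it is a generic top-k MoE, not Brain-OF's specific expert layout or its DINT attention.

```python
import numpy as np

def moe_forward(x, experts, gate_W, top_k=1):
    """Generic sparse mixture-of-experts forward pass (illustrative only).
    The gate scores all experts, but only the top-k run per token."""
    scores = x @ gate_W                          # (n_tokens, n_experts)
    top = np.argsort(scores, axis=1)[:, -top_k:]  # chosen experts per token
    out = np.zeros_like(x)
    for t in range(x.shape[0]):
        # Softmax over only the selected experts' scores.
        s = scores[t, top[t]]
        w = np.exp(s - s.max())
        w /= w.sum()
        for weight, e in zip(w, top[t]):
            out[t] += weight * (x[t] @ experts[e])
    return out

rng = np.random.default_rng(0)
d, n_experts, n_tokens = 8, 4, 5
experts = [rng.standard_normal((d, d)) for _ in range(n_experts)]
gate_W = rng.standard_normal((d, n_experts))
x = rng.standard_normal((n_tokens, d))
y = moe_forward(x, experts, gate_W, top_k=1)
```

Because only one expert fires per token, compute stays cheap even as the number of specialists grows, which is the point of sparsity in the "crowded room."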

3. The "Two-Part Puzzle" (Masked Temporal-Frequency Modeling)
To learn deeply, the AI plays a game of "fill in the blanks."

  • Analogy: Imagine you are listening to a song, but someone mutes parts of the melody (time) and parts of the harmony (frequency).
    • Old AI models would just try to guess the missing melody.
    • Brain-OF is forced to guess both the missing melody and the missing harmony at the same time. This forces it to understand how the rhythm and the pitch work together, giving it a much deeper understanding of how the brain actually "sings."
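The "mute the melody and the harmony" game can be made concrete: one view of the signal has random time segments zeroed out, another has random frequency bands zeroed out (via the FFT), and a model is trained to reconstruct both. The sketch below only builds the two masked views; the function name and mask fractions are assumptions, not the paper's exact scheme.

```python
import numpy as np

def mask_time_and_frequency(signal, time_frac=0.3, freq_frac=0.3, seed=0):
    """Illustrative sketch of dual-domain masking: return two 'fill in the
    blanks' views of a signal, one masked in time, one in frequency."""
    rng = np.random.default_rng(seed)
    n = len(signal)
    # Time-domain mask: zero out a random contiguous chunk of samples.
    t_masked = signal.copy()
    t_len = int(n * time_frac)
    start = rng.integers(0, n - t_len)
    t_masked[start:start + t_len] = 0.0
    # Frequency-domain mask: zero out random FFT bins, then invert.
    spec = np.fft.rfft(signal)
    bins = rng.random(len(spec)) < freq_frac
    spec[bins] = 0.0
    f_masked = np.fft.irfft(spec, n=n)
    return t_masked, f_masked

sig = np.sin(np.linspace(0, 20 * np.pi, 1024))
t_view, f_view = mask_time_and_frequency(sig)
```

Reconstructing the time view alone teaches rhythm; reconstructing the frequency view alone teaches pitch; being graded on both at once forces the model to learn how they interact.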

Why Does This Matter?

1. It's Smarter and More Accurate
Because Brain-OF combines the "where" (from fMRI) with the "when" (from EEG/MEG), it creates a much clearer picture of brain activity. In tests, it beat all previous models at tasks like:

  • Detecting seizures (epilepsy).
  • Diagnosing Alzheimer's disease.
  • Predicting a person's age based on brain activity.
  • Recognizing emotions.

2. It's a "Zero-Shot" Hero
Because it learned from so many different types of data, Brain-OF is very good at figuring out new tasks it hasn't seen before. It's like a chef who has cooked with every ingredient in the world; if you give them a new recipe, they can probably figure it out immediately.

3. It Helps Everyone
The researchers released the biggest version of this model (Brain-OF Huge) for free. This is like giving every neuroscientist and doctor a super-powered microscope that they don't have to build from scratch. It lowers the barrier for researchers who don't have millions of dollars to train their own AI, potentially speeding up cures for brain diseases.

The Bottom Line

Brain-OF is a breakthrough because it stopped treating brain signals as separate, isolated puzzles. Instead, it built a single, unified system that understands the brain's full story—combining the high-definition location of fMRI with the lightning-fast speed of EEG and MEG. It's the first step toward a truly "all-seeing" AI for neuroscience.
