An Approach to Simultaneous Acquisition of Real-Time MRI Video, EEG, and Surface EMG for Articulatory, Brain, and Muscle Activity During Speech Production

This paper presents a novel framework for the simultaneous acquisition of real-time MRI, EEG, and surface EMG to capture brain, muscle, and articulatory activity during speech, featuring a specialized artifact suppression pipeline that overcomes the scanner's electromagnetic interference and enables new insights into speech neuroscience.

Jihwan Lee, Parsa Razmara, Kevin Huang, Sean Foley, Aditya Kommineni, Haley Hsu, Woojae Jeong, Prakash Kumar, Xuan Shi, Yoonjeong Lee, Tiantian Feng, Takfarinas Medani, Ye Tian, Sudarsana Reddy Kadiri, Krishna S. Nayak, Dani Byrd, Louis Goldstein, Richard M. Leahy, Shrikanth Narayanan

Published 2026-03-06

Imagine trying to understand how a symphony orchestra plays a piece of music. If you only listen to the final sound coming out of the concert hall, you get the melody, but you miss the conductor's cues, the violinist's finger movements, and the drummer's heartbeat. You don't know how the music was made, only what was made.

For decades, scientists studying human speech have been stuck in a similar position. They could hear the voice (the audio) or see the mouth moving (via cameras), but they couldn't easily watch the brain's "conductor" and the muscles' "finger movements" happening at the exact same time.

This paper introduces a groundbreaking new "super-camera" setup that finally lets us watch the entire speech orchestra in real-time. Here is the simple breakdown:

The Three Musicians in the Orchestra

The researchers built a system to record three different things simultaneously while a person speaks:

  1. The Brain (EEG): Like a high-speed camera capturing the conductor's hand signals. It shows the electrical sparks in the brain that decide what to say.
  2. The Muscles (EMG): Like sensors on the violinist's fingers. It tracks the tiny electrical signals in the face and throat muscles as they prepare to move.
  3. The Mouth (Real-Time MRI): Like a super-fast X-ray movie. It shows the tongue, lips, and jaw physically moving to shape the sound.

The Big Challenge: The "Noisy Room"

Trying to do this is incredibly difficult because of the environment. To get the MRI video, the person has to lie inside a giant, powerful magnet (the MRI machine).

  • The Problem: MRI machines are like giant, noisy radio stations. Their rapidly switching magnetic gradients and radio pulses induce electrical noise far larger than the delicate brain and muscle signals, burying them in static. It's like trying to hear a whisper while standing next to a jet engine.
  • The Solution: The team developed a special "noise-canceling" pipeline. Think of it as a sophisticated audio editor that knows exactly what the "jet engine" noise looks like. It builds a template of the recurring noise and subtracts it from the recording, leaving only the clear whisper of the brain and muscles.
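The paper's full pipeline isn't reproduced here, but the "noise template" idea matches a standard technique known as average artifact subtraction. Below is a minimal, hypothetical Python sketch of that technique; the function name and the trigger-based epoching are illustrative assumptions, not the authors' actual code.

```python
import numpy as np

def subtract_artifact_template(signal, trigger_indices, epoch_len):
    """Remove a repeating scanner artifact from one EEG/EMG channel.

    Epochs time-locked to the scanner's triggers are averaged to form
    a template of the artifact (the biological signal, being unaligned
    with the scanner, averages out), and the template is then
    subtracted from every epoch.
    """
    starts = [i for i in trigger_indices if i + epoch_len <= len(signal)]
    epochs = np.stack([signal[i:i + epoch_len] for i in starts])
    template = epochs.mean(axis=0)      # the recurring "jet engine" noise
    cleaned = signal.astype(float)
    for i in starts:
        cleaned[i:i + epoch_len] -= template
    return cleaned
```

Real pipelines layer more steps on top of this (drift removal, adaptive filtering, handling of heartbeat-related artifacts), but averaging-and-subtracting is the heart of the template idea.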

What They Discovered

They tested this new setup on a person speaking aloud, whispering, and even imagining speaking (saying words in their head without moving their lips).

  1. The "Ghost" Movements: Even when the person spoke silently in their head, the MRI camera caught tiny, almost invisible movements in the mouth (like a twitch of the tongue). It's as if the body is practicing the dance steps even when the dancer is sitting still. (A sketch of how such residual motion can be measured follows this list.)
  2. The Brain's Map: After cleaning up the noise, they could clearly see activity concentrated on the left side of the brain (where language processing typically lives), suggesting that even silent speech engages the same language centers as spoken speech. (A sketch of a simple left-right measure also follows below.)
  3. No Interference: They confirmed that wearing the EEG cap and EMG wires didn't degrade the MRI movie. The "costume" didn't distract the "camera."
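To make the "ghost movement" finding concrete: one simple, hypothetical way to score residual articulator motion in an rtMRI frame sequence is frame-to-frame differencing inside a region of interest. This is an illustrative sketch, not the authors' analysis code:

```python
import numpy as np

def residual_motion_scores(frames, roi):
    """Mean absolute intensity change between consecutive rtMRI frames,
    restricted to a region of interest (e.g., a box around the tongue).

    frames: array of shape (n_frames, height, width)
    roi:    (row_start, row_stop, col_start, col_stop)
    Returns one score per consecutive frame pair; near-zero scores
    mean the articulators are essentially still.
    """
    r0, r1, c0, c1 = roi
    region = frames[:, r0:r1, c0:c1].astype(float)
    diffs = np.abs(np.diff(region, axis=0))  # change from frame t to t+1
    return diffs.mean(axis=(1, 2))
```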
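Likewise, "the brain lighting up on the left side" is often summarized with a lateralization index computed from signal power in matching left- and right-hemisphere channels. The sketch below uses SciPy's Welch estimator; the frequency band and channel grouping are illustrative choices, not the paper's exact analysis:

```python
import numpy as np
from scipy.signal import welch

def band_power(x, fs, band=(8.0, 13.0)):
    """Average Welch PSD of one EEG channel within a frequency band."""
    freqs, psd = welch(x, fs=fs, nperseg=int(fs * 2))
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return psd[mask].mean()

def lateralization_index(left_channels, right_channels, fs):
    """+1 = fully left-lateralized, -1 = fully right, 0 = symmetric."""
    left = np.mean([band_power(ch, fs) for ch in left_channels])
    right = np.mean([band_power(ch, fs) for ch in right_channels])
    return (left - right) / (left + right)
```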

Why This Matters (The Future)

This isn't just about watching people talk; it's about building the future of communication technology.

  • For Brain-Computer Interfaces (BCI): Imagine a person who has lost their ability to speak due to an injury. Currently, computers struggle to guess what they want to say just by reading brain waves. With this new method, scientists can finally train computers to understand the link between "Brain Signal" and "Mouth Movement" directly. This could lead to devices that let paralyzed people "speak" by thinking, with much higher accuracy.
  • For Understanding Speech Disorders: It gives doctors a complete map of what goes wrong in conditions like stuttering or apraxia, showing exactly where the "conductor" and the "musicians" are out of sync.

In a Nutshell

This paper marks the first time scientists successfully hooked up a brain monitor, a muscle tracker, and a real-time MRI "movie camera" all at once during speech. They figured out how to filter out the loud noise of the MRI machine to see the quiet signals of the brain and muscles. It's like finally getting a clear, high-definition view of the entire speech production chain, opening the door to better treatments for speech disorders and revolutionary new ways for humans to communicate with machines.