Designing a Generative AI-Assisted Music Psychotherapy Tool for Deaf and Hard-of-Hearing Individuals

This paper presents a co-designed generative AI tool that enables Deaf and Hard-of-Hearing individuals to engage in music psychotherapy through visual and conversational songwriting, demonstrating that collaborative human-AI interaction can effectively facilitate emotional release and self-understanding for this underserved population.

Youjin Choi, Jaeyoung Moon, Jinyoung Yoo, Jennifer G. Kim, Jin-Hyuk Hong

Published Tue, 10 Ma

Imagine music as a universal language of the heart. For most people, singing a song or humming a tune is like opening a window to let fresh air (emotions) flow in and out. But for people who are Deaf or Hard-of-Hearing (DHH), that window is often locked. Traditional music therapy relies heavily on hearing the sound, which can leave DHH individuals feeling excluded from a powerful tool for healing.

This paper is about building a new kind of bridge to that window. The researchers created a digital tool that uses Artificial Intelligence (AI) to help DHH people write their own songs, not so they can hear them perfectly, but so they can feel and see their emotions.

Here is the story of how they did it, explained simply:

1. The Problem: The "Silent" Gap

Think of music therapy like a dance class. If the instructor only speaks and you can't hear them, you might feel lost. The researchers found that while DHH people often use hearing aids or cochlear implants to hear sounds, they still struggle with the emotional side of music. They often get therapy focused on "fixing" their hearing, rather than using music to heal their hearts. They needed a way to express their feelings that doesn't depend on perfect hearing.

2. The Solution: A "Digital Co-Pilot"

The team built a web-based tool that acts like a creative co-pilot. It combines two types of AI:

  • A Chatbot Therapist: A friendly AI conversation partner that asks questions and listens without judging.
  • A Music Generator: An AI that turns your words into actual songs.

But here is the magic: The tool speaks the language of the DHH community. Instead of asking, "What does this melody sound like?" (which is hard if you can't hear well), it asks, "What color is your sadness?" or "If your frustration were a storm, what would the sky look like?"
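
To make this concrete, here is a minimal sketch of how a visual-metaphor chatbot turn could be wired up. The paper does not publish its prompts or name its model, so the system prompt, the `gpt-4o` choice, and the OpenAI-style client below are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch: one chatbot turn that asks for visual metaphors instead of sounds.
# Assumes an OpenAI-style chat API; the paper's actual model and prompts are not shown here.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SYSTEM_PROMPT = (
    "You are a supportive songwriting companion for Deaf and Hard-of-Hearing users. "
    "Never ask what something sounds like. Instead, ask about colors, weather, "
    "textures, and images (for example, 'What color is your sadness?'), respond "
    "with empathy, and offer a few multiple-choice options so the user is never "
    "stuck searching for the perfect words."
)

def metaphor_turn(history: list[dict], user_message: str) -> str:
    """Send one conversation turn and return the chatbot's visual-metaphor reply."""
    messages = [{"role": "system", "content": SYSTEM_PROMPT}] + history
    messages.append({"role": "user", "content": user_message})
    response = client.chat.completions.create(model="gpt-4o", messages=messages)
    return response.choices[0].message.content

# Example use:
# reply = metaphor_turn([], "I've been feeling worn down lately.")
# print(reply)  # e.g., an empathetic reflection plus a few image-based choices
```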

3. How It Works: The Four-Step Journey

The researchers worked closely with real music therapists to design a four-step process, like a guided tour through your own mind (a rough code sketch of this flow follows the list):

  • Step 1: The Warm-Up (Building Trust): The AI chatbot starts a gentle conversation. It's like sitting down with a friend who never interrupts. It uses empathy (saying things like, "That sounds really heavy") and offers multiple-choice answers so users don't feel stuck trying to find the perfect words.
  • Step 2: Painting with Words (Lyrics): Instead of forcing users to write poetry, the AI helps them visualize. If a user is sad, the AI might ask, "Is your sadness a gray rainy day or a heavy blanket?" This turns abstract feelings into concrete pictures that can be turned into lyrics.
  • Step 3: The Soundtrack (Making Music): Once the lyrics are ready, the AI generates a song. But the user doesn't just listen; they watch. The tool turns the music into a visual show—lyrics dancing on the screen, colors changing with the mood, and shapes pulsing with the beat. It's like watching a movie of your own feelings.
  • Step 4: The Reflection (Looking Back): After the song is made, the AI asks, "How does this song make you feel now?" This helps the user realize they have processed their emotions, often leading to a sense of relief or a new perspective.
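
For readers who like to see structure in code, here is one way the four-step journey could be organized as a simple session loop. The step functions (`warm_up`, `draft_lyrics`, `generate_song`, `reflect`) are hypothetical placeholders standing in for the chatbot, the music generator, and the visualizer; they are not the authors' code.

```python
# Hypothetical sketch of the four-step session flow described above.
# Every function body is a placeholder; in the real tool each step is driven
# by the AI chatbot, the music generator, or the on-screen visualizer.

def warm_up() -> dict:
    """Step 1: empathetic chat that surfaces the user's current emotion."""
    return {"emotion": "sadness", "image": "a gray rainy day"}

def draft_lyrics(session: dict) -> str:
    """Step 2: expand the user's chosen visual metaphor into lyric lines."""
    return f"Lyrics built around the image: {session['image']}"

def generate_song(lyrics: str) -> dict:
    """Step 3: generate music from the lyrics and render it as moving visuals."""
    return {"lyrics": lyrics, "visuals": "colors and shapes synced to the beat"}

def reflect(song: dict) -> str:
    """Step 4: ask how the finished song makes the user feel now."""
    return f"Reflection prompt shown alongside: {song['visuals']}"

def run_session() -> str:
    """Walk one user through all four steps in order."""
    session = warm_up()
    lyrics = draft_lyrics(session)
    song = generate_song(lyrics)
    return reflect(song)

if __name__ == "__main__":
    print(run_session())
```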

4. What Happened in the Test?

The researchers tested this tool with 23 DHH adults. The results were like watching a garden bloom:

  • The "Safe Space" Effect: Many participants said they felt safer talking to the AI than to a human. Why? Because the AI doesn't look at them, judge their voice, or get impatient. It's a judgment-free zone.
  • The "Visual" Key: The visual metaphors were the secret sauce. One participant described their divorce as "sitting by a rainy window." The AI turned that image into a song. Another person described job-hunting stress as "rough waves calming down."
  • The "Aha!" Moment: When users heard their own stories turned into music, many felt a wave of relief. It was like taking a heavy backpack off their shoulders. Even if the AI made a song that wasn't exactly what they imagined, users often found a new meaning in it, realizing, "Oh, I'm actually more anxious than I thought," or "I'm stronger than I realized."

5. The Big Takeaway

This study shows that you don't need to hear music perfectly to use it for healing. By translating sound into visuals, stories, and AI conversations, we can open the door to music therapy for everyone.

Think of it this way: If music is a lighthouse guiding ships through a storm, this tool builds a new kind of lighthouse that uses bright lights and clear signals for those who can't hear the foghorn. It shows that with the right technology, everyone can find their own song, even in the silence.