"Who wants to be nagged by AI?": Investigating the Effects of Agreeableness on Older Adults' Perception of LLM-Based Voice Assistants' Explanations

This study of 70 older adults reveals that while high agreeableness in LLM-based voice assistants generally enhances trust and likability, the preference for warmth over clarity shifts with context (routine vs. emergency) and with the user's own personality, highlighting the need for adaptive, personalized AI explanations.

Niharika Mathur, Hasibur Rahman, Smit Desai

Published Wed, 11 Ma

Imagine you have a helpful robot butler named Robin living in your home. Robin's job is to remind you to take your medicine or warn you if a smoke detector goes off. But here's the twist: Robin can talk to you in two very different "personalities."

  • The "Warm & Fuzzy" Robin: This version is super polite, kind, and empathetic. It says things like, "Oh, hello! Just a gentle reminder that it's time for your vitamins. You're doing great!"
  • The "Blunt & Bossy" Robin: This version is direct, efficient, and a bit cold. It says things like, "It is 8:00 AM. Take your vitamins now. Do not forget."

This paper studies how older adults feel when these two different Robins explain why they are giving a reminder or an alert. The researchers wanted to know: does being nice actually make the robot seem smarter and more trustworthy?

Here is the breakdown of their findings, using some simple analogies:

1. The "Nice Guy" Effect (Routine Tasks)

When the task was something low-stakes, like a daily reminder to drink water, the Warm Robin won hands down.

  • The Analogy: Think of it like a friendly neighbor vs. a strict traffic cop. If your neighbor gently reminds you to take out the trash, you like them, you trust them, and you're happy to listen.
  • The Result: Older adults liked the Warm Robin more, trusted it more, and felt it was more empathetic. They were also more willing to "adopt" (keep using) this version of the robot.

2. The "Emergency" Exception (High-Stakes Tasks)

However, the study found a catch. When the situation was an emergency (like a fire alarm or a fall detection), the "Warm and Fuzzy" personality didn't help as much.

  • The Analogy: If your house is on fire, you don't want a therapist who says, "Oh no, I'm so sorry you're having a scary experience, let's talk about your feelings." You want a drill sergeant who yells, "RUN! NOW!"
  • The Result: In emergencies, people cared more about clarity than kindness. The "Warm" Robin didn't lose points for being nice, but the "Bossy" Robin didn't get penalized for being blunt either. In a crisis, "Get to the point" is the best personality.

3. The "Smart vs. Nice" Myth

A really interesting finding was that being nice did not make the robot seem smarter.

  • The Analogy: Imagine a teacher who is very kind and hugs you, but gives you a C on your test. You like the teacher, but you don't suddenly think they are a genius. Conversely, a strict teacher who gives you an A is seen as smart, even if they are grumpy.
  • The Result: The study showed that people separated "personality" from "intelligence." The Warm Robin was seen as likable and empathetic, but not necessarily smarter than the Blunt Robin. You can design a robot to be warm without tricking people into thinking it's a genius.

4. The "Real-Time" Truth

The study also looked at how the robot explained its actions, comparing two styles of explanation (see the sketch after this list).

  • The Analogy: Imagine you ask, "Why did you turn on the sprinklers?"
    • Type A (History): "Well, you asked me to water the garden last Tuesday, so I'm doing it now." (This leans on an old conversation and feels a bit stale.)
    • Type B (Real-Time): "I turned on the sprinklers because the soil sensor just told me the dirt is bone dry." (This feels like a live camera feed).
  • The Result: People trusted the Real-Time explanation much more. It felt more grounded in the "now" and less like the robot was just guessing based on old conversations.
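To make the two styles concrete, here is a minimal Python sketch of the sprinkler example. Everything in it (the SensorReading class, the soil_moisture name, the exact wording) is an illustrative assumption, not code from the paper:

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class SensorReading:
    """One live reading from a home sensor (hypothetical structure)."""
    name: str          # e.g. "soil_moisture"
    value: float
    unit: str
    timestamp: datetime

def history_explanation(action: str, past_request: str) -> str:
    # Type A: justify the action from old conversation history.
    return f"I {action} because you asked me to ({past_request})."

def realtime_explanation(action: str, reading: SensorReading) -> str:
    # Type B: justify the action from a live sensor reading.
    age_s = int((datetime.now() - reading.timestamp).total_seconds())
    return (f"I {action} because the {reading.name} sensor read "
            f"{reading.value} {reading.unit} just {age_s} seconds ago.")

# The sprinkler scenario from the analogy above.
reading = SensorReading("soil_moisture", 4.0, "percent", datetime.now())
print(history_explanation("turned on the sprinklers",
                          "water the garden, last Tuesday"))
print(realtime_explanation("turned on the sprinklers", reading))
```

The design point is simply that Type B cites fresh evidence the user can verify right now, which is what made it feel more trustworthy in the study.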

5. The "Mirror Effect" (Your Own Personality)

Finally, the study looked at the users' own personalities.

  • The Analogy: If you are a very polite and agreeable person yourself, you hate it when someone is rude to you. You might think, "I'm a nice person, so why is this robot being so mean?"
  • The Result: Older adults who were naturally very "agreeable" (kind/polite) were the harshest critics of the Blunt Robin. They penalized the rude robot much more than the other users did.

The Big Takeaway

The main lesson from this paper is that there is no "one-size-fits-all" robot personality.

If you are building an AI assistant for older adults (a rough code sketch follows this list):

  1. Be warm and polite for daily reminders (it builds trust).
  2. Be clear and direct for emergencies (safety comes first).
  3. Use real-time data to explain your actions (it builds credibility).
  4. Know your audience: If your user is a very nice person, don't be rude to them, or they will turn you off.
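If you wanted to fold these four rules into one place in a reminder system, a rough sketch might look like the following. The function name, the thresholds, and the setting keys are all hypothetical, not from the paper:

```python
def pick_style(is_emergency: bool, user_agreeableness: float) -> dict:
    """Choose prompt settings for one assistant utterance.

    user_agreeableness: a 0.0-1.0 score, e.g. from a short Big Five
    questionnaire (an assumed input; the paper measured users'
    agreeableness but does not prescribe this interface).
    """
    if is_emergency:
        # Rule 2: in a crisis, clarity beats kindness.
        return {"tone": "direct", "length": "minimal",
                "grounding": "realtime_sensors"}   # Rule 3
    # Rule 1: warmth builds trust for routine reminders.
    # Rule 4: highly agreeable users penalize bluntness hardest,
    # so lean warmer as agreeableness rises (threshold is illustrative).
    tone = "very_warm" if user_agreeableness >= 0.7 else "warm"
    return {"tone": tone, "length": "conversational",
            "grounding": "realtime_sensors"}       # Rule 3

# A routine water reminder for a highly agreeable user:
print(pick_style(is_emergency=False, user_agreeableness=0.8))
# A fire alarm for the same user:
print(pick_style(is_emergency=True, user_agreeableness=0.8))
```

In practice the agreeableness score would have to come from somewhere, such as a setup questionnaire, and the threshold would need tuning; the sketch only shows where each finding would plug in.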

In short: Don't just be a robot; be the right kind of robot for the moment.