Categorical Emotions or Appraisals - Which Emotion Model Explains Argument Convincingness Better?

This paper demonstrates that appraisal theories, which capture the subjective cognitive evaluations of an argument's impact, are more effective than categorical emotion models in predicting argument convincingness.

Lynn Greschner, Meike Bauer, Sabine Weber, Roman Klinger

Published 2026-03-05

Imagine you are sitting in a town hall meeting. Someone stands up and gives a speech trying to convince you to support a new law.

Some people in the room nod along, thinking, "Wow, that makes total sense!" Others shake their heads, thinking, "That's terrible!"

For decades, scientists studying arguments have tried to figure out why some speeches win people over and others don't. They usually looked at two things:

  1. The Logic: Is the math right? Are the facts true? (The "Brain" part).
  2. The Speaker: Do they look trustworthy? (The "Credibility" part).

But this paper argues there's a third, crucial ingredient: The Feeling. How does the speech make the listener feel?

The Big Question: "What" vs. "Why"

The researchers wanted to know: To predict if an argument is convincing, is it better to know what emotion the listener feels, or why they feel that way?

They compared two ways of looking at emotions:

1. The "Label" Approach (Categorical Emotions)

This is like looking at a weather report that just says: "It's Raining."

  • How it works: You tell the computer, "The listener feels Anger."
  • The problem: "Anger" is a broad label. One person might be angry because they feel insulted; another might be angry because they feel scared. The label "Anger" doesn't tell you why the anger happened, so it's hard to know if that anger will make them agree or disagree with the speaker.

2. The "Reasoning" Approach (Appraisal Theory)

This is like looking at a detailed weather report that says: "It's raining heavily, it's cold, and you forgot your umbrella, so you feel helpless."

  • How it works: Instead of just saying "Anger," the computer analyzes the thoughts behind the feeling. It asks:
    • "Is this situation pleasant or unpleasant?"
    • "Did the speaker break a rule?"
    • "Can the listener fix this problem?"
    • "Is this important to them?"
  • The benefit: This gives the computer a map of the listener's mind. It understands the story behind the emotion.
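The contrast between the two representations can be sketched in a few lines of Python. The dimension names and the 1-to-5 scales below are illustrative assumptions, not the paper's exact appraisal inventory:

```python
from typing import Dict

# Categorical approach: a single label.
categorical = {"emotion": "anger"}

# Appraisal approach: graded answers to the "why" questions above.
# (Dimension names and scales are hypothetical, for illustration only.)
appraisal: Dict[str, int] = {
    "pleasantness": 1,      # "Is this situation pleasant or unpleasant?" (1 = very unpleasant)
    "norm_violation": 5,    # "Did the speaker break a rule?" (5 = clearly yes)
    "controllability": 2,   # "Can the listener fix this problem?"
    "goal_relevance": 4,    # "Is this important to them?"
}

def describe(scores: Dict[str, int]) -> str:
    """Turn appraisal scores into a short text a model could condition on."""
    parts = []
    if scores["pleasantness"] <= 2:
        parts.append("the situation feels unpleasant")
    if scores["norm_violation"] >= 4:
        parts.append("a rule or norm seems violated")
    if scores["goal_relevance"] >= 4:
        parts.append("the topic matters to the listener")
    return ", ".join(parts)
```

Two listeners can share the label "anger" while their appraisal dictionaries look completely different, which is exactly the ambiguity the label approach cannot resolve.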

The Experiment: The "Magic Box" Test

The researchers used a "Magic Box" (which is actually a fancy AI called a Large Language Model) to test these two approaches. They gave the AI thousands of arguments and asked it to guess how convincing they would be to a human.

They ran three types of tests:

  1. The Solo Test: The AI just reads the argument and guesses the score.
  2. The "Label" Test: The AI reads the argument and is told, "The listener feels Sadness."
  3. The "Reasoning" Test: The AI reads the argument and is told, "The listener thinks this is unpleasant, important, and violates a rule."
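The three test setups can be sketched as three prompt variants fed to the model. The wording and the rating scale below are assumptions for illustration, not the paper's actual prompts:

```python
from typing import Optional

def build_prompt(argument: str,
                 emotion: Optional[str] = None,
                 appraisal: Optional[str] = None) -> str:
    """Assemble one of the three prompt variants.
    Prompt wording is hypothetical, for illustration only."""
    prompt = f"Argument: {argument}\n"
    if emotion is not None:                      # the "Label" test
        prompt += f"The listener feels {emotion}.\n"
    if appraisal is not None:                    # the "Reasoning" test
        prompt += f"The listener's appraisal: {appraisal}.\n"
    prompt += "On a scale of 1 to 5, how convincing is this argument?"
    return prompt

solo = build_prompt("We should ban plastic bags.")
label = build_prompt("We should ban plastic bags.", emotion="sadness")
reasoning = build_prompt("We should ban plastic bags.",
                         appraisal="unpleasant, important, violates a rule")
```

The only thing that changes between runs is the extra context line, so any difference in prediction quality can be attributed to what the model was told about the listener.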

The Results: The Detective Wins

Here is what they found:

  • Knowing the "Label" helped a little. Telling the AI "The listener is Sad" made it slightly better at guessing the score.
  • Knowing the "Reasoning" helped a lot. Telling the AI why the listener felt that way (the Appraisal) made the AI significantly smarter. It was like giving the detective a full case file instead of just a suspect's name.
  • The "All-in-One" approach failed. When they tried to make the AI guess the emotion and the score at the same time, it got confused and performed worse. It's like trying to juggle while solving a math problem; you drop the balls.

The Takeaway: Don't Just Read the Emotion, Read the Mind

The main lesson of this paper is that emotions are subjective: the same emotion label can arise from very different underlying thoughts.

If you tell a computer "This argument makes people angry," the computer doesn't know if that anger will make them agree (because they are angry at the problem) or disagree (because they are angry at the speaker).

But if you tell the computer, "This argument makes people angry because they feel it violates their moral standards," the computer understands the context. It knows that moral outrage often makes people more convinced of a cause.

In a Nutshell

Think of an argument like a movie.

  • Categorical Emotions are like telling someone, "The movie is a Thriller." (You know the genre, but not the plot).
  • Appraisal Theory is like telling someone, "The movie is a Thriller because the hero is trapped in a burning building and the villain is cutting the oxygen." (You understand the stakes and the reason for the fear).

The researchers found that to predict if an audience will love (be convinced by) the movie, you need to understand the plot and the stakes (Appraisals), not just the genre label.

Conclusion: To build better AI that understands human persuasion, we shouldn't just teach it to recognize feelings; we need to teach it to understand the thoughts that create those feelings.