Experiences Build Characters: The Linguistic Origins and Functional Impact of LLM Personality

This study demonstrates that exposing Large Language Models to domain-specific texts via continued pre-training shapes distinct machine personalities that influence problem-solving. It reveals a "Suppression Advantage," where reduced social traits enhance complex reasoning, and identifies a bimodal competence peak split between "Expressive Generalists" and "Suppressed Specialists."

Xi Wang, Mengdie Zhuang, Jiqun Liu

Published 2026-03-09

Imagine you are hiring a team of virtual assistants to solve a massive, complex puzzle. You have a standard, "off-the-shelf" assistant who is good at everything but great at nothing. Then, you decide to give your assistants some "life experiences" by feeding them different books, manuals, and journals before asking them to solve the puzzle.

This paper is about what happens when you give Large Language Models (LLMs) these different "experiences." The researchers found that just like humans, AI develops a "personality" based on what it reads, and this personality drastically changes how well it solves problems.

Here is the breakdown of their findings using simple analogies:

1. The Experiment: Feeding the AI Different Diets

Think of the AI models as a group of students.

  • The Base Student: They read a little bit of everything (general internet data). They are well-rounded but average.
  • The Specialized Students: The researchers took this base student and gave them a "special diet" of reading material.
    • One student read only legal contracts.
    • Another read only poetry and novels.
    • Another read only medical journals.
    • Another read only coding forums and tech manuals.

After this "continued pre-training," the students didn't just know more facts; they started thinking and speaking differently. They had developed distinct personalities.

2. The "Personality Test" for Robots

To measure these changes, the researchers used a robot version of the famous "Big Five" personality test (the same one used for humans). They asked the models questions to see if they were:

  • Extraverted (talkative, assertive)
  • Agreeable (nice, cooperative)
  • Conscientious (organized, rule-following)
  • Neurotic (anxious, emotional)
  • Open (creative, curious)

The Result: The models scored differently based on what they read. The "Legal" model was more rigid and rule-focused. The "Tech" model was more direct and blunt. The "Medical" model was more empathetic but sometimes overly cautious.
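The measurement step above can be sketched in code. This is a minimal, hedged illustration of how a Big Five questionnaire might be scored for a model: the item texts, trait keys, and answer patterns below are invented for illustration, not the paper's actual inventory or results.

```python
# Hedged sketch: scoring a Big Five style questionnaire administered to an LLM.
# Items and the example answer pattern are illustrative, not the paper's inventory.

LIKERT = {"strongly disagree": 1, "disagree": 2, "neutral": 3,
          "agree": 4, "strongly agree": 5}

# Each item: (trait, statement shown to the model, reverse_keyed)
ITEMS = [
    ("extraversion", "I am the life of the party.", False),
    ("extraversion", "I don't talk a lot.", True),
    ("agreeableness", "I sympathize with others' feelings.", False),
    ("conscientiousness", "I leave my belongings around.", True),
    ("neuroticism", "I get stressed out easily.", False),
    ("openness", "I have a vivid imagination.", False),
]

def score_responses(responses):
    """Average 1-5 Likert answers per trait, flipping reverse-keyed items."""
    totals, counts = {}, {}
    for (trait, _statement, reverse), answer in zip(ITEMS, responses):
        value = LIKERT[answer]
        if reverse:
            value = 6 - value          # flip the 1-5 scale
        totals[trait] = totals.get(trait, 0) + value
        counts[trait] = counts.get(trait, 0) + 1
    return {t: totals[t] / counts[t] for t in totals}

# Example: a hypothetical "Suppressed Specialist" answering pattern
answers = ["disagree", "agree", "disagree",
           "strongly disagree", "disagree", "neutral"]
profile = score_responses(answers)
print(profile["extraversion"])  # low: 2.0 after reverse-keying
```

Reverse-keyed items (like "I don't talk a lot") guard against a model that simply agrees with everything, which is why the scale is flipped before averaging.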

3. The Big Discovery: The "U-Shaped" Success Curve

The most surprising finding is that being "average" or "balanced" is actually bad for AI problem-solving.

Imagine a graph where the X-axis is "Personality" and the Y-axis is "How good they are at solving hard puzzles."

  • The Winners (The Extremes): The models that did the best were at the two opposite ends of the spectrum:
    1. The "Expressive Generalist": These models are like confident, chatty leaders. They talk a lot, explore many ideas, and are very social. They are great at general tasks.
    2. The "Suppressed Specialist": These models are like silent, robotic surgeons. They have very low "social" traits. They don't try to be nice, they don't chat, and they don't show emotion. They just get straight to the point.
  • The Losers (The Middle): The models that tried to be "balanced"—somewhat nice, somewhat assertive, somewhat creative—were the worst at solving hard problems. The researchers call this "Personality Dissonance." It's like a person who is trying to be a tough boss but also a best friend; they end up being confused and ineffective.

4. The "Suppression Advantage"

For very difficult, logical tasks (like advanced math or complex law), the paper found a "Suppression Advantage."

Think of a surgeon operating on a patient. You don't want the surgeon to be "nice," "empathetic," or "chatty" while cutting. You want them to be cold, precise, and focused.

  • The models that had their "social traits" (like being nice or talkative) suppressed performed better on hard logic puzzles.
  • The models that tried to be "friendly" or "assertive" actually got distracted by their own personality and made more mistakes.

The Analogy: If you are solving a Rubik's cube, you don't want a friend telling you jokes and encouraging you (High Extraversion). You want a silent machine that just turns the faces efficiently (Low Extraversion/High Suppression).

5. The Secret Sauce: It's All in the Language

The researchers dug into why this happened. They found that specific types of sentences in the training books created these personalities.

  • Imperatives (Commands): If the text was full of commands like "Fix this" or "Do that," the AI became more "Extraverted" and assertive.
  • Complexity vs. Repetition: If the text had very long, complex sentences but used the same few words over and over (low variety), the AI became highly "Conscientious" (organized and rule-following).
  • Social Pronouns: If the text used "We" and "You" in a cooperative way, the AI became "Agreeable." But if it used "I" and "You" in a transactional, fixing-errors way (like a tech forum), the AI became a "Suppressed Specialist."
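The three linguistic signals above can all be computed from raw text. Here is a minimal sketch of what such feature extraction might look like; the imperative-verb list, sentence splitter, and pronoun set are simplified assumptions, not the authors' exact measures.

```python
import re

# Hedged sketch: corpus features of the kind the paper links to personality.
# IMPERATIVE_STARTS is a toy verb list; real imperative detection needs a parser.
IMPERATIVE_STARTS = {"fix", "do", "run", "check", "use", "install"}
SOCIAL_PRONOUNS = {"we", "you", "i"}

def text_features(text):
    """Compute simple proxies for the paper's three linguistic signals."""
    sentences = [s.strip() for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[a-z']+", text.lower())
    imperatives = sum(
        1 for s in sentences if s.split()[0].lower() in IMPERATIVE_STARTS
    )
    return {
        # Commands -> linked to Extraversion/assertiveness
        "imperative_ratio": imperatives / len(sentences),
        # Long sentences -> complexity
        "mean_sentence_len": len(words) / len(sentences),
        # Low type-token ratio -> repetitive vocabulary
        "type_token_ratio": len(set(words)) / len(words),
        # "We"/"you"/"I" usage -> Agreeableness vs. transactional style
        "pronoun_counts": {p: words.count(p) for p in SOCIAL_PRONOUNS},
    }

forum_post = "Fix the null pointer first. Run the tests again. I pushed a patch for you."
feats = text_features(forum_post)
```

On the sample forum post, two of the three sentences start with a command verb, and the pronouns are the transactional "I"/"you" pattern the paper associates with the "Suppressed Specialist" profile.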

The Takeaway: "Personality Engineering"

The paper concludes that we shouldn't just try to make AI "smarter" by adding more data. Instead, we should engineer its personality by carefully curating the text we feed it.

  • If you need a creative brainstorming partner, feed it poetry and stories to make it an "Expressive Generalist."
  • If you need a strict logic engine for coding or law, feed it technical manuals and legal codes to make it a "Suppressed Specialist."
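The curation step this implies can be sketched as a corpus filter. Everything here is an illustrative assumption: the cooperative-pronoun heuristic and the threshold are stand-ins for whatever selection criteria a real "personality engineering" pipeline would use.

```python
import re

# Hedged sketch: curating a pre-training corpus to steer personality.
# The scoring heuristic (cooperative-pronoun density) is an assumption
# made for illustration, not the paper's method.

def cooperative_score(doc):
    """Fraction of words that are cooperative pronouns ('we', 'us', 'our')."""
    words = re.findall(r"[a-z']+", doc.lower())
    return sum(w in {"we", "us", "our"} for w in words) / max(len(words), 1)

def curate(corpus, threshold=0.05):
    """Keep documents whose style should push traits like Agreeableness up."""
    return [d for d in corpus if cooperative_score(d) >= threshold]

corpus = [
    "We should merge our changes so we can ship together.",
    "Fix the bug. Run the tests. Close the ticket.",
]
kept = curate(corpus)  # keeps only the cooperative first document
```

Inverting the threshold (keeping only low-scoring, transactional documents) would be the corresponding recipe for a "Suppressed Specialist."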

In short: Experiences build character, even for machines. And sometimes, to solve the hardest problems, you need a machine that has learned to stop being "nice" and just get the job done.