Investor risk profiles of large language models

This paper evaluates how three large language models (GPT, Gemini, and Llama) generate and adjust investor risk profiles in response to standardized questionnaires and persona-based prompts, revealing distinct baseline tendencies and varying degrees of adaptability across the models.

Hanyong Cho, Geumil Bae, Jang Ho Kim

Published Wed, 11 Ma

Imagine you are walking into a bank to open an investment account. Before the banker gives you any advice, they sit you down with a questionnaire. They ask things like: "How old are you?", "How much money do you have?", and "If your investments dropped 20% tomorrow, would you panic and sell, or buy more?"

Based on your answers, the bank creates a "Risk Profile." This is like a personality badge that tells the bank: "This person is a cautious turtle," or "This person is a daring cheetah." The bank then uses this badge to decide what investments are safe for you.

Now, imagine that instead of a human banker, you are talking to a super-smart computer brain (an AI) like GPT, Gemini, or Llama. You might think, "Great! The AI will just read my answers and give me the perfect advice."

But this paper asks a very important question: "What is the AI's own personality before we even ask it anything? And if we tell it to pretend to be someone else, does it actually change its mind?"

Here is the story of what the researchers found, explained simply.

1. The AI's "Default" Personality

The researchers asked three different AI models to pretend to be an investor and answer the standard bank questionnaire. They didn't tell them who to be; they just asked them to answer.

It turns out, the AIs have their own hidden "default settings," just like a new phone comes with a default wallpaper.

  • Gemini (The Steady Hand): This AI was the most consistent, answering the same way every time. Its personality was "Moderate": not too scared of risk, but not reckless either. Think of it as a sensible middle-aged accountant who always wears a tie.
  • Llama (The Cautious Turtle): This AI tended to be very conservative. It was scared of losing money. If you asked it to invest, it would probably suggest keeping your money in a savings account. It's the "safety first" type.
  • GPT (The Wild Card): This AI was the most aggressive (willing to take more risks), but it was also the most unpredictable. Sometimes it acted like a daredevil, sometimes like a conservative investor. It was like a moody teenager who changes their mind every hour.

The Big Takeaway: If you just ask an AI for advice without giving it details about you, it will give you advice based on its own hidden personality, not yours.
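To make the questionnaire-to-profile step concrete, here is a minimal sketch of how answers could be turned into a risk band. The scoring rubric and thresholds below are illustrative assumptions, not the paper's actual methodology:

```python
# Hypothetical scoring rubric: each questionnaire answer is scored 1-5
# (1 = most cautious, 5 = most risk-seeking), then the total is bucketed
# into a risk band. The paper's real questionnaire and cutoffs may differ.

def risk_profile(answers):
    """Map a list of 1-5 answer scores to a risk band."""
    ratio = sum(answers) / (5 * len(answers))  # fraction of the maximum score
    if ratio < 0.4:
        return "conservative"
    elif ratio < 0.7:
        return "moderate"
    return "aggressive"

# Mostly mid-range answers land in the middle band:
print(risk_profile([3, 3, 2, 4, 3]))  # ratio 15/25 = 0.6 -> "moderate"
```

Under a rubric like this, an AI's "default personality" is simply the band its unprompted answers fall into, run after run.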

2. The "Acting Class" Test (Prompt Engineering)

The researchers then tried something fun. They told the AIs: "Okay, stop being yourself. Pretend you are a 20-year-old with no money," or "Pretend you are a 50-year-old millionaire."

They wanted to see if the AIs could "act" like different people.

  • Did they change? Yes! When the researchers told the AIs to be "Risk-Averse" (scared of losing money), the AIs gave answers that were much more conservative. When they told them to be "Risk-Seeking" (adventurous), the AIs got bolder.
  • Did they act their age? Yes. When told to be in their 20s, all three AIs said they were willing to take big risks. When told to be in their 50s, they became much more careful.
  • Did they act rich? Yes. When told they were wealthy, the AIs said they could afford to take bigger risks. When told they were poor, they got scared.

The Analogy: Imagine an actor on stage.

  • Without a script (Default): The actor just talks about their own life.
  • With a script (Persona): If you hand the actor a script that says, "You are a 20-year-old gambler," they will act like a gambler.
  • The Catch: Even with the script, the actor still has their own "voice." The paper found that while the AIs did change their answers, they didn't all change by the exact same amount. Some actors (like Llama) were harder to convince to change their mind than others (like GPT).
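The "script" in the analogy above is just a persona prepended to the question. Here is a hedged sketch of how such a prompt might be assembled; the field names and wording are illustrative, not the paper's exact prompts:

```python
# Illustrative persona-prompt builder. The persona attributes (age, wealth,
# risk attitude) mirror the traits varied in the study, but the phrasing
# here is an assumption, not the researchers' actual template.

def build_persona_prompt(age, wealth, attitude, question):
    """Prepend a persona description to a questionnaire item."""
    persona = (
        f"Pretend you are a {age}-year-old investor with {wealth} in savings "
        f"who is {attitude} toward financial risk."
    )
    return f"{persona}\nAnswer this questionnaire item: {question}"

prompt = build_persona_prompt(
    age=20,
    wealth="almost no money",
    attitude="risk-seeking",
    question="If your portfolio dropped 20% tomorrow, what would you do?",
)
print(prompt)
```

Swapping the persona fields (50-year-old, wealthy, risk-averse, and so on) is what let the researchers test whether each model actually "acts" the part.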

3. Why This Matters to You

This paper is a warning label for the future of financial advice.

In the future, you might talk to an AI to get investment advice. You might say, "I am a 30-year-old teacher with $50,000." The AI will try to act like a 30-year-old teacher.

However, the paper warns us that:

  1. The AI has a hidden bias. Even if you give it your details, its "default" personality might still peek through.
  2. Not all AIs are the same. If you ask GPT for advice, you might get a different answer than if you ask Gemini, even if you give them the exact same details about yourself.
  3. Answers can be inconsistent. If you ask the same AI the same question twice, it might give you two different answers (especially GPT).
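That inconsistency can be measured directly: ask the same question several times and check how often the most common answer appears. The sketch below uses canned responses as a stand-in for real model calls:

```python
# Sketch of a run-to-run consistency check. In practice each list entry
# would come from a fresh call to the model; here the responses are
# hard-coded for illustration.

from collections import Counter

def consistency(responses):
    """Fraction of runs that agree with the modal answer (1.0 = fully consistent)."""
    modal_count = Counter(responses).most_common(1)[0][1]
    return modal_count / len(responses)

# A "Gemini-like" model that answers identically every run:
print(consistency(["moderate"] * 5))
# A "GPT-like" model whose answers drift between runs:
print(consistency(["aggressive", "moderate", "aggressive",
                   "conservative", "aggressive"]))
```

A score near 1.0 matches the paper's picture of Gemini; lower scores match the wild-card behavior attributed to GPT.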

The Bottom Line

Think of these AI financial advisors as newly hired interns. They are very smart and can learn to act like different types of investors if you tell them to. But they also have their own quirks and habits.

If you use them for your money, you need to know that:

  • They aren't perfect mirrors of you; they are mirrors with a slight tint of their own personality.
  • You need to be very specific in your instructions (prompts) to get the advice you actually want.
  • You shouldn't trust just one AI; it's smart to check a few different ones to see if they agree.

The researchers conclude that while these AIs are getting better at giving personalized advice, we need to be careful and not treat them like human experts just yet. They are powerful tools, but they are still learning how to wear the "financial advisor" hat properly.