Imagine you are a chef trying to create the perfect recipe for a new dish. In the old days, you would taste-test ingredients, talk to diners, and carefully write down a recipe based on real experiences. This is like traditional User Personas: fictional characters built from real data about real people, used by designers to understand who they are building for.
Now, imagine you have a super-smart, magical sous-chef (an AI or Large Language Model) who can instantly write recipes for you. But here's the catch: you have to tell the sous-chef exactly what to do. If you say, "Write a recipe," you might get a generic, boring dish. If you say, "Write a spicy vegan recipe for a 30-year-old chef who loves jazz," you get something specific.
This paper is a taste-test of the instructions (called "prompts") that researchers are giving to these AI sous-chefs to create user personas. The authors looked at 83 different instructions from 27 recent research papers to see what's working, what's weird, and what might be going wrong.
Here is the breakdown of their findings, served up with some analogies:
1. The "One-Off" Problem
The Finding: Most researchers ask the AI to create just one persona at a time.
The Analogy: Imagine a movie director asking one actor to play a single character for a whole movie, rather than casting a full ensemble to represent different types of people.
Why it matters: Real life is diverse. If you only build a persona for "John, the 30-year-old tech guy," you might forget "Sarah, the 60-year-old grandmother." The paper warns that asking for just one person creates a very narrow view of the world, missing the full spectrum of users.
2. The "Speed vs. Depth" Dilemma
The Finding: Many researchers tell the AI to be short and concise (e.g., "give me a 3-sentence description").
The Analogy: It's like asking a biographer to write a life story of a famous person, but telling them, "Make it a tweet."
Why it matters: Traditional personas are rich, detailed, and emotional—they help you feel what the user feels. By forcing the AI to be brief, researchers are getting "data summaries" instead of "human stories." They are trading depth for speed, which might make the personas less useful for understanding real human needs.
3. The "Robot vs. Human" Format
The Finding: Most prompts ask the AI to output the persona in structured formats like JSON (a structured, machine-readable data format) or tables, rather than a story.
The Analogy: It's like asking a painter to give you a spreadsheet of colors and hex codes instead of a painting.
Why it matters: This turns the persona into a database entry rather than a character. While this is great for computers to read, it might make it harder for human designers to connect emotionally with the user. The paper suggests we are treating people like data points rather than people.
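To make the contrast concrete, here is a small illustrative sketch of the two output styles. The field names and the persona herself are invented for illustration, not taken from any specific paper in the survey:

```python
import json

# A hypothetical structured persona of the kind many surveyed prompts request.
# Easy for software to parse, but reads like a database record.
structured_persona = {
    "name": "Sarah",
    "age": 60,
    "occupation": "Retired teacher",
    "goals": ["stay in touch with grandchildren"],
}

# The same person rendered as a short narrative, closer to a traditional
# persona that a design team can empathize with.
narrative_persona = (
    "Sarah, 60, is a retired teacher who video-calls her grandchildren every "
    "Sunday and worries that apps change faster than she can relearn them."
)

print(json.dumps(structured_persona, indent=2))
print(narrative_persona)
```

Both describe the same user; the difference is purely in how much the reader can feel about her.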
4. The "Demographic Obsession"
The Finding: Almost every single prompt asks for demographics (Age, Name, Job, Gender).
The Analogy: It's like filling out a job application where you list your height and shoe size, but forget to mention your hobbies, fears, or dreams.
Why it matters: While age and job are important, the paper notes that these prompts often miss the "soul" of the user—their attitudes, behaviors, and feelings. The AI is good at guessing "35-year-old male," but it's less good at guessing "anxious parent who loves gardening."
5. The "Black Box" of Instructions
The Finding: Researchers are using very complex, multi-step instructions. Some use up to 12 different prompts in a row to build one persona.
The Analogy: Imagine a Rube Goldberg machine where you push a ball, it hits a lever, which drops a cup, which triggers a fan, which finally opens a door. If one step fails, the door never opens, and you don't know why.
Why it matters: When researchers chain these prompts together, it becomes very hard to tell which instruction caused a specific result. If the AI creates a biased or weird persona, it's hard to know if it was the first instruction or the twelfth that messed it up.
6. The "Magic 8-Ball" Effect
The Finding: Researchers are using these AI personas to predict things (e.g., "How would this user react to this ad?").
The Analogy: It's like asking a crystal ball to predict the future based on a made-up character.
Why it matters: This is a new and risky trend. If the AI persona is based on stereotypes rather than real data, the predictions will be wrong, leading designers to make bad decisions.
The Big Takeaway
The paper concludes that while AI is a powerful tool (a "super-sous-chef"), we are currently using it a bit clumsily.
- We are too focused on speed: We want quick, short answers instead of deep, rich stories.
- We are too focused on data: We want spreadsheets instead of characters.
- We are too focused on one person: We are forgetting the diversity of the real world.
The Advice: If you want to use AI to create user personas, don't just ask for a quick list of facts. Feed the AI real data about real people, ask for rich stories (not just bullet points), and remember to assemble a whole cast of characters to represent the diversity of your users. Otherwise, you might end up designing for a robot's idea of a human, rather than a real human.
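As a rough illustration of that advice, here are two hypothetical prompts side by side. Neither is quoted from the paper; the wording, the fitness-app scenario, and the `{real_user_data}` placeholder are all assumptions for the sake of example:

```python
# A thin, generic prompt: likely to produce a stereotyped, one-off persona.
thin_prompt = "Generate a user persona for a fitness app."

# A grounded prompt: multiple contrasting personas, narrative form, and a
# placeholder where real research data would be pasted in before sending.
grounded_prompt = """\
Using the interview excerpts below, write three contrasting personas for a
fitness app. For each, include a short narrative (not bullet points) covering
attitudes, frustrations, and a typical day, alongside basic demographics.

Interview excerpts:
{real_user_data}
"""
```

The second prompt follows the paper's three recommendations at once: it is grounded in real data, asks for rich stories, and requests a cast rather than a single character.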