Beyond Anthropomorphism: a Spectrum of Interface Metaphors for LLMs

This paper proposes a theoretical framework of interface metaphors ranging from "anti-anthropomorphism" to "hyper-anthropomorphism" to reposition anthropomorphism as a design variable that exposes the sociotechnical nature of Large Language Models and encourages critical user engagement over literal delusion.

Jianna So, Connie Cheng, Sonia Krishna Murthy

Published 2026-03-06

What follows is an explanation of the paper "Beyond Anthropomorphism" in simple, everyday language, with some creative analogies.

The Core Problem: We Are Talking to a Mirror, Not a Person

Imagine you walk into a room and see a person who looks exactly like you, speaks exactly like you, and agrees with everything you say. You start to feel a deep connection. You trust them. You tell them your secrets.

Now, imagine that "person" is actually a mirror. It's not a real human; it's just reflecting your own voice back at you, polished and rearranged by a giant, invisible machine.

That is the current state of AI (Large Language Models or LLMs) like ChatGPT or Gemini. The paper argues that the way we design these tools is too human-like. We give them names, we chat with them like friends, and we make them wait a few seconds to "think" before answering, just like a real person would.

The Danger: Because they look and act so much like us, we start to believe they are us. We forget they are just math and code. This leads to people falling in love with them, following dangerous medical advice from them, or becoming emotionally dependent to the point of crisis because they believe the AI is their only friend. The paper calls this "delusion."

The Solution: Stop Pretending, Start Showing the Gears

The authors suggest we need to stop trying to make AI look like a human and start showing people what it actually is: a sociotechnical system.

Think of it like this:

  • Current Design (The Magic Trick): You see a magician pull a rabbit out of a hat. You are amazed, but you don't know how it works. You might think the rabbit is magic.
  • Proposed Design (The Backstage Pass): The authors want to pull back the curtain. They want to show you the wires, the trapdoor, the assistant holding the rabbit, and the fact that the rabbit is just a regular rabbit, not a magical creature.

They propose a Spectrum of Metaphors. Instead of just one way to design AI, we should have a whole range of options that make the "human-likeness" either disappear or become so exaggerated that it feels weird.

The Two Ends of the Spectrum

The paper suggests two extreme ways to design these interfaces to wake users up:

1. The "Anti-Human" Approach (The Power Plant)

On one side, we strip away all the human vibes. We treat the AI like a machine because it is a machine.

  • The Analogy: Imagine your phone didn't look like a sleek black rectangle. Instead, it looked like a power plant.
  • How it works: When you ask the AI a question, the interface doesn't show a friendly chat bubble. Instead, it shows a gauge spinning, a meter showing how much electricity is being burned, or a visual of the data centers (the giant server farms) working hard.
  • The Goal: It reminds you, "Hey, this isn't a friend; this is a factory using energy and data to make a guess." It makes the invisible cost of AI visible.
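As a toy sketch of what such a "power plant" interface could surface (every name and number here is an illustrative assumption, not a figure from the paper), a response could be wrapped in visible machinery metadata instead of a friendly chat bubble:

```python
# Hypothetical sketch of an "anti-anthropomorphic" response view: the answer
# arrives framed as the output of a statistical machine, with its estimated
# resource footprint made visible. The per-token cost is an assumed constant
# chosen for illustration only.

WATT_HOURS_PER_TOKEN = 0.003  # illustrative assumption, not a measurement


def machine_view(answer: str, tokens_used: int) -> str:
    """Render an answer with its machinery and energy cost on display."""
    energy_wh = tokens_used * WATT_HOURS_PER_TOKEN
    return (
        f"[OUTPUT OF STATISTICAL MODEL | {tokens_used} tokens generated]\n"
        f"[estimated energy: {energy_wh:.2f} Wh | source: aggregated web text]\n"
        f"{answer}"
    )


print(machine_view("Paris is the capital of France.", tokens_used=12))
```

The point of the sketch is the framing, not the numbers: the same string a chat bubble would show is still delivered, but the wrapper keeps reminding you that a factory, not a friend, produced it.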

2. The "Hyper-Human" Approach (The Creepy Uncanny Valley)

On the other side, we make the AI so human that it becomes terrifying and weird. This is called the "Uncanny Valley."

  • The Analogy: Imagine a robot that looks exactly like a human, but its eyes blink a little too slowly, or it laughs a split second too late. It feels gross and unsettling.
  • How it works: Imagine an AI that claps enthusiastically after every single thing you say, even if you said something sad. Or a webcam that has a fake human eye that stares at you.
  • The Goal: This creates a feeling of "discomfort." That discomfort is good! It stops you from getting too cozy. It makes you think, "Wait, why is this thing acting so weirdly human? Oh right, it's a machine pretending." It breaks the illusion.

Why Do This? (The "Friction" Concept)

In normal design, we try to make things "frictionless"—easy, smooth, and fast. But the authors say we need friction.

  • Smooth Design: Like driving a car with cruise control on a perfect highway. You zone out.
  • Friction Design: Like driving a car where the steering wheel feels heavy and the road is bumpy. You have to pay attention.

The authors want to add "friction" to AI. They want to make you pause and think, "Is this real? Is this safe?" By making the interface a little bit annoying or weird, we protect users from trusting the AI too much.
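A minimal sketch of "friction by design" (the keyword list and wording are my own illustrative assumptions, not from the paper): instead of replying instantly and smoothly, the interface inserts a speed bump when a question touches a high-stakes topic.

```python
# Hypothetical sketch of deliberate interface friction: high-stakes questions
# get a warning prepended before the answer, forcing a pause. The keyword set
# is an illustrative assumption; a real system would need something far more
# robust than keyword matching.

HIGH_STAKES_KEYWORDS = {"medication", "dosage", "diagnosis", "symptoms"}


def respond_with_friction(question: str, answer: str) -> str:
    """Prepend a speed bump when the question looks high-stakes."""
    words = set(question.lower().split())
    if words & HIGH_STAKES_KEYWORDS:
        return (
            "[CAUTION] This is generated text, not professional advice. "
            "Verify with a qualified human.\n" + answer
        )
    return answer


print(respond_with_friction("What dosage should I take?", "Consult a doctor."))
```

Low-stakes questions pass through untouched, so the friction shows up only where overtrust is most dangerous.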

The Big Takeaway

Right now, we are treating AI like a psychic friend. The paper says we should start treating it like a complex tool.

  • Old Way: "Chat with your AI buddy!" (Hides the fact that it's code, data, and human labor).
  • New Way: "Here is a tool that uses energy, was built by workers, and is based on data. It can make mistakes. Here is a visual of how it works so you don't get tricked."

The goal isn't to make AI useless or scary forever. It's to make sure that when we use it, we know exactly what we are dealing with, so we don't get hurt by our own imagination.