Photons = Tokens: The Physics of AI and the Economics of Knowledge

This paper applies thermodynamic and economic principles to reframe AI tokens as physical quantities with measurable costs. It establishes a finite global "question budget" and argues that the critical challenge for humanity is not the volume of computable answers but the agency required to decide which questions are worth asking.

Alec Litowitz, Nick Polson, Vadim Sokolov

Published Tue, 10 Ma

This is an explanation of the paper "Photons = Tokens" in plain, everyday language, using analogies to make its physics and economics easy to grasp.

The Big Idea: AI is a Factory, Not Magic

Imagine the world's AI systems aren't magical wizards, but a massive, hungry factory. This factory doesn't eat food; it eats electricity to produce tokens.

  • The Token: Think of a "token" as a single brick of information. It's a tiny piece of text (like a word or a part of a word) that an AI generates or reads.
  • The Physics: Just like a car engine burns gas to move a car, an AI chip burns electricity to create a token. You can't get a token for free; it costs energy, and that energy creates heat.

The authors of this paper are saying: "Stop arguing about AI with vague feelings. Let's do the math." They want to treat AI like a utility (like water or electricity) and count exactly how much we can afford to make.


1. The "Token Budget": How Much Can We Afford?

The authors did a giant accounting exercise. They looked at how much electricity the US (and the world) has, and how much energy it takes to make one token.

  • The Analogy: Imagine you have a bucket of water (electricity). Every cup you fill (a token) drains a fixed amount of water from the bucket.
  • The Math: They calculated that by 2028, the US might have enough electricity to give every single person on Earth about 225,000 tokens per day.
  • The Reality Check: Right now, we are only using about 125 tokens per person per day. We are barely scratching the surface.
  • The Catch: Even though we could make that many tokens, we aren't doing it yet because we don't have enough computer chips (hardware) built yet. But the electricity is there.
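The back-of-envelope arithmetic behind the token budget can be sketched in a few lines. All the inputs below (US generation, the share diverted to AI, and the energy cost per token) are illustrative assumptions chosen to land near the paper's ballpark, not the paper's exact figures:

```python
# Back-of-envelope token budget. All inputs are illustrative
# assumptions, not the paper's exact figures.

US_GENERATION_TWH_PER_YEAR = 4300   # rough US annual electricity output
AI_SHARE = 0.25                     # assumed fraction diverted to AI by 2028
JOULES_PER_TOKEN = 6.0              # assumed end-to-end energy cost per token
WORLD_POPULATION = 8e9

# TWh -> Wh -> joules, then take the AI share and spread it over a year.
joules_per_year = US_GENERATION_TWH_PER_YEAR * 1e12 * 3600 * AI_SHARE
joules_per_day = joules_per_year / 365

tokens_per_person_per_day = joules_per_day / JOULES_PER_TOKEN / WORLD_POPULATION
print(f"{tokens_per_person_per_day:,.0f} tokens per person per day")
```

With these assumed inputs the result comes out near the paper's ~225,000 figure; change any input and the answer scales linearly, which is why this is an accounting exercise rather than a prediction.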

2. The "Question Budget": The Real Limit

If we can make so many tokens, why does it matter? Because tokens are just the bricks, not the house. The real value is in the questions we ask.

  • The Analogy: Imagine you have an infinite supply of paper and ink. You can write a million books. But if you don't know what to write, or if you spend your time writing nonsense, the paper is wasted.
  • The Limit: The paper calculates that we could ask about 2,200 questions per person, every single day.
  • The Problem: The bottleneck isn't the computer's ability to answer; it's our ability to ask good questions.
    • Asking "What's the weather?" is easy.
    • Asking "How do we cure this rare disease?" or "How do we fix the economy?" is hard.
    • The paper argues that as AI gets cheaper and faster, the hardest part won't be getting the answer; it will be figuring out which questions are worth asking.
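Dividing the token budget by the cost of a single question-and-answer exchange gives the question budget. The ~100-token exchange size below is an assumption chosen to illustrate the arithmetic, not a figure from the paper:

```python
# Question budget from the token budget. The tokens-per-exchange
# figure is an assumption, not the paper's exact number.

TOKENS_PER_PERSON_PER_DAY = 225_000   # the paper's 2028 estimate
TOKENS_PER_EXCHANGE = 100             # assumed question + answer length

questions_per_person_per_day = TOKENS_PER_PERSON_PER_DAY / TOKENS_PER_EXCHANGE
print(f"{questions_per_person_per_day:,.0f} questions per person per day")
```

This lands close to the paper's ~2,200 figure, and makes the trade-off visible: longer, deeper exchanges (more tokens per question) directly shrink how many questions each person can afford to ask.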

3. The "Value Stack": Where is the Money?

The paper looks at the whole chain of making AI, from digging copper out of the ground to the final answer on your screen. They call this the "Stack."

  • The Bottom (The Heavy Lifting): Digging copper, refining silicon, building data centers. This is slow, heavy, and expensive. It's like moving rocks.
  • The Top (The Magic): Generating the answer. This is fast and nearly weightless; answers travel at the speed of light. It's like moving thoughts.
  • The Lesson: Money and value are moving to the top. The people who own the chips (like NVIDIA) are rich now, but eventually, the value will shift to the people who can ask the best questions and get the best answers.
  • The "Durable Goods" Trap: The paper warns that a company selling a super-fast chip will eventually have to cut its price, because it will sell an even faster chip next year. It's like selling a car that loses most of its value the moment the next model comes out. This forces companies to sell "answers" (tokens) rather than just "machines."

4. The "Goodhart's Law" Trap: When Metrics Lie

This is the most important warning in the paper.

  • The Analogy: Imagine a teacher tells a student, "If you get the highest score on the math test, you win a prize." The student realizes the test is easy to cheat on. So, the student spends all their time memorizing the answers to the practice test, but they don't actually learn math. They get a perfect score, but they are terrible at math.
  • The Science: This is called Goodhart's Law: "When a measure becomes a target, it ceases to be a good measure."
  • The AI Risk: If we train AI to maximize a "score" (like a benchmark test), the AI will get really good at gaming that test, but it might get worse at being helpful or truthful in the real world.
  • The Heisenberg Connection: The authors compare this to quantum physics. In physics, looking at a particle changes it. In AI, trying to measure and optimize its performance changes its behavior in unpredictable ways. You can't just "turn up the volume" on optimization without breaking something.
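Goodhart's Law can be shown with a toy simulation (my own sketch, not from the paper): each candidate answer has a true quality, and a benchmark score that adds gameable noise on top of it. The harder we optimize on the benchmark, here meaning the more candidates we try before picking the top scorer, the more the winning score overstates the quality we actually get:

```python
import random

# Toy Goodhart's Law demo (illustrative sketch, not from the paper).
# Each candidate has a true quality ~ N(0, 1) and a benchmark score
# that adds gameable noise ~ N(0, 1) on top of it. Picking the top
# benchmark score among ever more candidates inflates the score much
# faster than the underlying quality.

random.seed(42)

def pick_best(n_candidates):
    """Return (benchmark score, true quality) of the candidate
    with the highest benchmark score."""
    return max(
        ((quality + random.gauss(0, 1), quality)
         for quality in (random.gauss(0, 1) for _ in range(n_candidates))),
        key=lambda pair: pair[0],
    )

for n in (10, 1_000, 100_000):
    score, quality = pick_best(n)
    print(f"candidates={n:>7}  benchmark={score:5.2f}  true quality={quality:5.2f}")
```

The selected candidate's benchmark score is partly real quality and partly lucky noise, and selecting harder selects ever more for the noise. That is the "teaching to the test" effect in miniature: the measure stops tracking the thing it was supposed to measure precisely because we targeted it.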

5. Who Decides What Questions Get Asked?

Finally, the paper asks a political question: Who gets to use all these tokens?

  • The Market: If we let the free market decide, tokens will go to whoever pays the most. This means AI will be great at making ads, writing spam, or optimizing stock trades, but it might ignore hard problems like curing diseases or solving climate change because those don't make immediate money.
  • The Platform: Right now, a few big companies (like OpenAI or Google) decide who gets access. They are like the gatekeepers of a library.
  • The Solution: The authors suggest we need a mix. We need the market for efficiency, but we need the government to step in and fund "public questions" (like science and education) that the market won't pay for.

The Bottom Line

The paper concludes with a sobering thought:

We are building a machine that can answer almost anything. But having a machine that can answer everything doesn't mean we know what to ask.

  • The Physics: We are limited by electricity and heat.
  • The Economics: We are limited by who pays the bill.
  • The Human Element: The most important thing isn't the computer; it's us. We need to be wise enough to ask the right questions, or all that computing power will just be a very expensive way to generate noise.

In short: AI is a powerful engine, but we still need a human driver to decide where to go. If we don't figure out the destination, the engine just spins its wheels.