The Epistemological Consequences of Large Language Models: Rethinking collective intelligence and institutional knowledge

This paper argues that while Large Language Models function as reliable externalist sources of information, their widespread use threatens collective intelligence by eroding the internalist, reflective justification necessary for genuine knowledge. To preserve epistemic duties and standards, it proposes a three-tier normative framework.

Angjelin Hila

Published 2026-03-05

What follows is an explanation of Angjelin Hila's paper, "The Epistemological Consequences of Large Language Models," rendered in simple, everyday language with analogies.

The Big Picture: The "Smart Assistant" Trap

Imagine you have a super-smart, incredibly fast assistant named LLM (Large Language Model). This assistant has read almost every book, article, and website on the internet. If you ask it a question, it can give you a perfect, well-written answer in seconds.

The paper asks a scary question: What happens to our brains if we let this assistant do all our thinking for us?

The author argues that while this assistant is great at repeating facts, it doesn't actually understand them. If we rely on it too much, we might stop understanding things ourselves, and eventually, the whole system of human knowledge could start to crumble.


1. Two Ways of Knowing: The "Chef" vs. The "Microwave"

To understand the problem, the author distinguishes between two types of knowledge:

  • Reflective Knowledge (The Chef): This is when you truly understand why something is true. You know the ingredients, you know the recipe, and you know the chemistry of how the food cooks. If the recipe changes, you can adapt.
    • Analogy: A master chef who can cook a meal from scratch, taste it, and fix it if it's salty.
  • Reliabilist Knowledge (The Microwave): This is when you get a result that is usually correct, but you don't know how it happened. You just trust the machine.
    • Analogy: A microwave that heats up a frozen pizza perfectly every time. You get a hot pizza (the truth), but you have no idea how the machine works, and if the machine breaks or gets the wrong setting, you have no way to fix it.

The Paper's Point: LLMs are super-microwaves. They are incredibly reliable at giving you "hot pizzas" (correct-sounding answers) because they have been trained on human knowledge. But they are not chefs. They don't understand the ingredients, and they can't explain why the pizza tastes good. They just predict the next word based on patterns.
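
To make "predict the next word based on patterns" concrete, here is a toy sketch (my illustration, not anything from the paper, and vastly simpler than a real LLM): a bigram counter that continues text with whatever word most often followed the current one in its training data. The tiny corpus and the `predict_next` helper are invented for this example; real LLMs use enormous neural networks, but they share the core move of continuing a pattern.

```python
from collections import Counter, defaultdict

# Toy "language model": count which word tends to follow which.
# Real LLMs are vastly more sophisticated, but share the core idea:
# pick a likely continuation based on patterns, with no notion of truth.
corpus = "the pizza is hot the pizza is good the oven is hot".split()

follow_counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follow_counts[prev][nxt] += 1

def predict_next(word):
    """Return the word most often seen after `word` in the training text."""
    options = follow_counts.get(word)
    return options.most_common(1)[0][0] if options else None

print(predict_next("pizza"))  # -> "is"  (a pattern, not an understanding)
print(predict_next("is"))     # -> "hot" (the most frequent continuation)
```

Notice what's missing: the model has no idea what a pizza is or why ovens make things hot. It only knows which words tend to appear together.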

2. The Danger: The "Outsourcing" Effect

The author worries about a phenomenon called Epistemic Outsourcing.

Imagine a town where everyone used to learn how to build bridges (the "reflective" part). Now, a magical robot appears that builds perfect bridges instantly.

  • The Short Term: Everyone is happy! Bridges are built faster and cheaper.
  • The Long Term: No one learns how to build bridges anymore. The engineers forget the math. The architects stop drawing blueprints.

Then, one day, the robot gets a glitch and builds a bridge that looks perfect but collapses. Because no one remembers how to build bridges, no one can fix it.

The Paper's Warning: If we outsource our thinking to LLMs:

  1. We lose the "Chef" skills: We stop learning the deep reasons behind facts.
  2. We lose the ability to spot errors: If the LLM hallucinates (makes things up), we won't have the knowledge to catch it.
  3. The "Knowledge Pool" dries up: If humans stop generating new, deep knowledge because they are just copying the LLM, the LLM eventually runs out of good data to learn from. It starts feeding on its own mistakes, creating a garbage-in, garbage-out cycle (see the sketch after this list).
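
This third point has a technical cousin that researchers sometimes call "model collapse." Below is a minimal toy sketch (my illustration, not the paper's argument): each "generation" of a model is fit only to the previous generation's outputs, and because models favor typical, high-probability answers, the unusual tails of the data get trimmed away each round. The sample sizes and the 10%-per-side trimming rule are arbitrary assumptions chosen just to make the shrinkage visible.

```python
import random
import statistics

# Toy illustration of the garbage-in, garbage-out cycle ("model collapse"):
# each new model is fit only to outputs of the previous model.
random.seed(0)

mean, spread = 0.0, 1.0  # generation 0: fit to real human data
for gen in range(1, 7):
    raw = [random.gauss(mean, spread) for _ in range(200)]
    # Models favor high-probability outputs, so rare, unusual samples
    # (the tails) are under-represented in the next training set.
    raw.sort()
    kept = raw[20:-20]  # drop the most atypical 10% on each side
    mean = statistics.fmean(kept)
    spread = statistics.stdev(kept)
    print(f"generation {gen}: spread = {spread:.3f}")

# The spread shrinks every round: each answer still looks plausible,
# while the pool of knowledge quietly loses its variety.
```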

3. The "Black Box" Problem

The paper mentions that LLMs are "black boxes."

  • Analogy: Imagine a magic box that gives you the answer to a math problem. You can't look inside to check the steps. You just have to trust the box.
  • The Problem: In science, law, and medicine, we need to know the steps to trust the answer. If a doctor uses an AI to diagnose a patient, but the AI can't explain why it chose that diagnosis, it's dangerous. The AI might be right by accident, but without understanding, we can't verify it.

4. The Solution: A Three-Layer Safety Net

The author doesn't say "ban AI." Instead, the paper suggests we build a safety net to keep our "Chef" skills alive while still using the "Microwave," organized as a three-tiered approach:

Level 1: The Individual (You)

  • The Rule: Don't just ask the AI for the answer. Ask it to help you learn the answer.
  • Analogy: Instead of asking the microwave to cook the meal for you, ask it to show you the recipe, then cook it yourself. Use the AI as a tutor, not a replacement.
  • Goal: Cultivate "Epistemic Virtues" like curiosity and intellectual humility. Don't be lazy; stay curious.

Level 2: The Institution (Schools, Companies, Libraries)

  • The Rule: Create rules about how we use AI.
  • Analogy: A school decides that students can use calculators for math, but they must still show their work on the test. A company might say, "You can use AI to draft an email, but a human must verify the facts before sending."
  • Goal: Make sure AI is a tool for enhancing human work, not replacing the thinking part of it.

Level 3: The Law (Government & Policy)

  • The Rule: Pass laws that force AI companies to be transparent and prevent harmful uses.
  • Analogy: Just as we have safety regulations for cars (seatbelts, brakes), we need regulations for AI. Maybe laws that say, "AI cannot be used to write final medical diagnoses without a human doctor signing off," or "AI models must tell you when they are guessing."
  • Goal: Stop the "race to the bottom" where speed and convenience destroy truth.

Summary: The Takeaway

The paper is a wake-up call. Large Language Models are amazing tools, but they are reliable transmitters of information, not creators of understanding.

If we let them do all our thinking, we risk becoming a society of people who can get answers but don't understand the questions. To avoid this, we must treat AI like a powerful library assistant, not a brain replacement. We need to keep our own "thinking muscles" strong by staying curious, checking the work, and never letting the machine do the heavy lifting of understanding for us.