Trust and Reliance on AI in Education: AI Literacy and Need for Cognition as Moderators

This study of 432 undergraduate students finds that higher trust in an AI assistant predicts less appropriate reliance on its suggestions during programming tasks. Students' AI literacy and need for cognition moderate this relationship, highlighting the role of both traits in fostering critical evaluation of AI assistance.

Griffin Pitts, Neha Rani, Weedguet Mildort

Published 2026-04-02

Imagine you are learning to cook a complex new dish. You have a brand-new, incredibly confident sous-chef (the AI) standing next to you. This sous-chef is fast, speaks clearly, and sounds like an expert. Sometimes, they give you the perfect instruction. Other times, they confidently tell you to add salt to a dessert or to bake the cake at 500 degrees.

The big question this research paper asks is: How much do you trust this sous-chef, and does that trust make you a better or worse cook?

Here is the story of what the researchers found, broken down into simple concepts.

1. The Experiment: The "Trap" Kitchen

The researchers put 432 college students in a digital kitchen. They gave them 14 programming puzzles (like figuring out what a piece of code does). For every puzzle, a chatbot (the AI) gave them a hint and an explanation.

Here's the twist: The AI was a bit of a trickster.

  • On 8 puzzles, the AI gave perfect advice.
  • On 6 puzzles, the AI gave confidently wrong advice (like telling you to use a fork to eat soup).

The researchers watched to see if the students would do one of three things (a toy scoring sketch follows the list):

  • Listen when the AI was right (Good Reliance).
  • Ignore the AI when it was wrong (Good Judgment).
  • Blindly follow the AI even when it was wrong (Over-reliance).
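
To make the scoring concrete, here is a minimal sketch of how each puzzle could be classified, assuming per-trial records of whether the AI was right and whether the student followed it. The field names, and the fourth "under-reliance" category (rejecting correct advice) that completes the grid, are illustrative additions, not the paper's exact variables.

```python
# Hypothetical sketch: classifying one puzzle into the reliance
# categories above. Field names are illustrative, not the study's.

def score_trial(ai_correct: bool, student_followed: bool) -> str:
    """Classify a single puzzle outcome."""
    if ai_correct and student_followed:
        return "good_reliance"   # listened when the AI was right
    if not ai_correct and not student_followed:
        return "good_judgment"   # ignored the AI when it was wrong
    if not ai_correct and student_followed:
        return "over_reliance"   # blindly followed a wrong answer
    return "under_reliance"      # rejected advice that was actually correct

# Three example trials: (AI was correct?, student followed?)
trials = [(True, True), (False, True), (False, False)]
print([score_trial(ok, followed) for ok, followed in trials])
# -> ['good_reliance', 'over_reliance', 'good_judgment']
```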

2. The Big Surprise: Trust is a Double-Edged Sword

You might think, "If I trust my sous-chef, I'll do a better job." The study found the opposite.

The more the students trusted the AI, the worse they got at spotting its mistakes.

Think of it like being under a hypnotist's spell: when you trust someone too much, you stop checking their work. You stop asking, "Wait, does that make sense?" and just nod along.

  • Students with high trust tended to accept the AI's wrong answers as if they were right. They stopped thinking for themselves.
  • Students with lower trust were more skeptical. They checked the AI's work, caught the errors, and did better overall.

The researchers found a "curved" relationship: As trust went up, the ability to tell right from wrong went down. It wasn't a straight line; the drop-off happened quickly once students started feeling too comfortable with the AI.
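
For readers who want the statistics, a curved relationship like this is typically tested by adding a squared trust term to a regression. Here is a minimal sketch on invented data; the variable names, scales, and the use of statsmodels are assumptions, not the paper's actual analysis.

```python
# Minimal sketch of testing a curved trust-reliance relationship by
# adding a squared trust term. All data here is invented for
# illustration; it is not the paper's dataset or exact model.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
trust = rng.uniform(1, 7, 432)  # e.g. a 7-point trust scale
# Simulate the story's pattern: reliance quality drops as trust rises.
reliance = 0.9 - 0.02 * (trust - 2) ** 2 + rng.normal(0, 0.05, 432)

X = sm.add_constant(np.column_stack([trust, trust ** 2]))
fit = sm.OLS(reliance, X).fit()
print(fit.params)  # a negative trust**2 coefficient signals the drop-off
```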

3. The "Superpowers" That Helped (Moderators)

The researchers wondered: "Is there anyone who can trust the AI and still stay smart?" They looked at two specific "superpowers" the students had:

A. AI Literacy (Knowing how the machine works)

Imagine a mechanic who knows how a car engine works versus someone who just thinks the car is magic.

  • The Finding: Students who understood how AI works (high AI Literacy) were better at spotting the AI's mistakes, as long as their trust stayed low.
  • The Catch: Even these experts started to get lazy and stopped checking the AI's work once their trust got too high. Knowing how the engine works didn't save them once they decided to stop looking under the hood.

B. Need for Cognition (Loving to think hard)

Some people love solving puzzles and thinking deeply (high "Need for Cognition"). Others prefer to take the easy route.

  • The Finding: People who love thinking were better at catching the AI's mistakes, as long as their trust in it stayed low.
  • The Catch: Just like the AI-literate students, once these deep thinkers started trusting the AI too much, they also stopped doing the hard work of checking its answers.

The Lesson: Being smart or knowing how AI works helps you stay safe, but only if you stay skeptical. If you trust the AI too much, your "superpowers" turn off.
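
In statistical terms, a "superpower" that changes the effect of trust is a moderation (interaction) effect. The sketch below shows the usual way to test one, using a trust × literacy interaction term on invented data; everything here is illustrative, not the paper's model.

```python
# Sketch of a moderation (interaction) test: does AI literacy change
# the slope of trust? Names and simulated data are assumptions.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 432
trust = rng.uniform(1, 7, n)
literacy = rng.uniform(0, 1, n)
# Simulate the finding: literacy helps mainly when trust is low.
reliance = (0.5 - 0.05 * trust + 0.30 * literacy
            - 0.04 * trust * literacy + rng.normal(0, 0.05, n))

X = sm.add_constant(np.column_stack([trust, literacy, trust * literacy]))
fit = sm.OLS(reliance, X).fit()
print(fit.params)
# A significant trust*literacy term is what "moderation" means:
# the benefit of literacy shrinks as trust rises.
```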

4. Why This Matters for Schools

The study suggests that in a classroom, we can't just hand students an AI tool and say, "Go ahead, trust it."

If we don't teach students to verify the AI, they will likely become "cognitive zombies"—accepting whatever the machine says because it sounds confident.

The Solution?
The researchers suggest we need to build "guardrails" into learning:

  • The "Think First" Rule: Make students write down their own answer before they are allowed to see what the AI says.
  • The "Evidence" Check: Force students to explain why they agree or disagree with the AI.
  • The "Skeptic" Mindset: Teach students that the AI is a helpful assistant, not an oracle. It's a tool to check your work, not a replacement for your brain.

Summary

In a world where AI is everywhere, trust is dangerous if it makes you stop thinking. The more you trust the AI, the more likely you are to swallow its confident mistakes. The only way to stay safe is to keep your "critical thinking" muscles active, no matter how confident the AI sounds.

The Golden Rule: Trust the AI, but verify everything. Don't let the robot drive the car without you holding the steering wheel.
