I'm Not Reading All of That: Understanding Software Engineers' Level of Cognitive Engagement with Agentic Coding Assistants

This paper presents a formative study revealing that software engineers' cognitive engagement declines when using agentic coding assistants due to limited design affordances for reflection and verification, highlighting the need for new interaction modalities and cognitive-forcing mechanisms to sustain critical thinking in AI-assisted programming.

Carlos Rafael Catalan, Lheane Marie Dizon, Patricia Nicole Monderin, Emily Kuang

Published 2026-03-17

The Big Picture: The "Lazy Co-Pilot" Problem

Imagine you are a chef (a Software Engineer) trying to cook a complex, high-stakes meal for a VIP. You hire a super-smart robot assistant (an Agentic Coding Assistant or ACA) to help you.

The robot is amazing. It can chop vegetables, mix sauces, and even invent new recipes faster than you can blink. However, the paper asks a scary question: because the robot is so fast and efficient, do we chefs stop thinking entirely?

The researchers found that when we let these robots do the heavy lifting, our brains tend to go on "autopilot." We stop tasting the food, stop checking the ingredients, and just assume the robot knows what it's doing. The paper argues that this is dangerous because if the robot makes a mistake (like adding salt instead of sugar), we might not notice until it's too late.


The Experiment: Watching Chefs Cook with Robots

The researchers watched four software engineers (from a beginner to a veteran) use a specific AI tool called Cline to write a script that organizes Excel files. They wanted to see how much the engineers were actually thinking versus just watching.

They broke the process down into three stages, like a movie:

  1. The Planning Scene: The engineer tells the robot what to do.
  2. The Action Scene: The robot starts typing code and running tools.
  3. The Review Scene: The engineer checks the final result.

What They Found: The "Energy Dip"

The study revealed a funny but worrying pattern. The engineers' brain power (cognitive engagement) was highest at the start and lowest at the end.

  • At the Start (Planning): The engineers were super focused. They were like a captain giving strict orders to a ship's crew. They made sure the robot understood the mission.
  • In the Middle (Action): This is where things went wrong. The robot started spitting out a massive wall of text—lines of code, logs, and status updates.
    • The Metaphor: Imagine the robot is a news anchor who starts reading the entire newspaper at 100 miles per hour. The engineers got overwhelmed. One engineer literally said, "I'm not reading all of that." They looked away, checked their phones, or just waited for the robot to say "Done." They treated the robot's output as a "black box"—they didn't care how it was done, only that it finished.
  • At the End (Review): The engineers only looked at the final result (the Excel file). If the file looked right, they said, "Great, job done!" They rarely looked at the code the robot wrote to make that file. They were checking the destination, not the journey.

The "Happy Path" Trap

The researchers noticed that the engineers only paid attention to the "Happy Path."

  • The Analogy: Imagine driving a car. The "Happy Path" is driving on a sunny day with no traffic. The engineers only checked if the car drove forward. They didn't think about what would happen if a tire blew out, if it rained, or if a deer jumped out.
  • Because they were so focused on the robot getting the "right answer" quickly, they forgot to ask: "What if the robot is lying?" or "What if this code breaks in a weird situation?"
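The happy-path trap has a direct analogue in code. A sketch of the difference (the function names and fallback value are illustrative assumptions, not from the paper): the first version works only when the input looks exactly as expected, while the second asks the "what if it rains?" question up front.

```python
def extract_year_happy(filename: str) -> int:
    """Happy-path version: assumes every file name starts with a year.
    Raises ValueError on anything else, e.g. "notes.xlsx"."""
    return int(filename[:4])

def extract_year(filename: str, default: int = 0) -> int:
    """Hardened version: validates the prefix before converting,
    with an explicit fallback for malformed names."""
    prefix = filename[:4]
    if len(prefix) == 4 and prefix.isdigit():
        return int(prefix)
    return default
```

An engineer who only eyeballs the final Excel file would never notice that the agent shipped the first version.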

The Solution: How to Wake Up the Chef

The paper suggests that we need to redesign these AI assistants so they don't let our brains go to sleep. Here are their two main ideas:

1. Stop Talking, Start Showing (Multimodal Interaction)

Right now, these robots just dump text on the screen. It's like a teacher reading a textbook out loud without showing any pictures.

  • The Fix: The robot should use visuals and voice. Instead of writing 50 lines of code, it should draw a flowchart, a graph, or a mind map to show how it's solving the problem.
  • Why it works: Just like a map is easier to read than a list of street names, a visual diagram helps the engineer understand the logic without getting a headache. It forces the engineer to look at the "big picture" rather than getting lost in the text weeds.
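A small taste of what "show, don't dump" could mean even without graphics: collapsing the agent's verbose action log into a scannable outline. This is a toy sketch of the idea, not anything the paper or Cline implements — the log format here is an assumption:

```python
def summarize_actions(log: list[tuple[str, str]]) -> str:
    """Collapse a verbose agent log into a compact step outline.

    `log` is a list of (action, detail) pairs, e.g. ("edit", "main.py").
    Consecutive identical actions are grouped, so a 50-line dump
    becomes a few lines the engineer can scan at a glance.
    """
    lines: list[str] = []
    prev = None
    count = 0
    details: list[str] = []
    for action, detail in log + [(None, None)]:  # sentinel flushes the last group
        if action == prev:
            count += 1
            details.append(detail)
        else:
            if prev is not None:
                lines.append(f"{prev} x{count}: {', '.join(details)}")
            prev, count, details = action, 1, [detail]
    return "\n".join(lines)
```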

2. The "Speed Bump" (Cognitive Forcing)

Currently, the robot is too eager to please. It solves the problem instantly.

  • The Fix: We need to design the robot to slow down and make the human think.
  • The Analogy: Imagine a video game that pauses before giving you a hint. The game says, "I think the answer is X, but why do you think that? Explain your logic first."
  • This is called Cognitive Forcing. It's like putting a speed bump in the road. It forces the driver (the engineer) to slow down, look at the road, and think before they proceed. This stops them from blindly trusting the robot and forces them to double-check the "edge cases" (the deer, the rain, the tire blowout).
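The paper proposes cognitive forcing as a design direction but does not specify an implementation. One possible speed bump, sketched as a toy gate (the function name, error class, and 20-character threshold are all assumptions): the assistant withholds its answer until the engineer has written down what they expect and why.

```python
class EmptyRationaleError(ValueError):
    """Raised when the engineer tries to skip past the speed bump."""

def reveal_with_speed_bump(suggestion: str, rationale: str) -> str:
    """Cognitive-forcing gate: withhold the assistant's suggestion
    until the engineer states their own expectation first.
    A blank or trivially short rationale is rejected, forcing a pause."""
    if len(rationale.strip()) < 20:
        raise EmptyRationaleError(
            "Explain what you expect this change to do before viewing it."
        )
    return (
        f"Your expectation: {rationale.strip()}\n"
        f"Assistant suggestion: {suggestion}"
    )
```

Putting the engineer's prediction on screen next to the agent's answer makes a mismatch hard to ignore, which is exactly the comparison the "autopilot" mode skips.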

The Takeaway

The paper concludes that AI shouldn't just be a tool that does the work for us; it should be a "Thinking Partner."

If we let AI do everything without checking its work, we lose our ability to think critically. To build safe, reliable software, we need to design AI that:

  1. Shows its work clearly (using pictures and voice, not just text).
  2. Forces us to pause and think, rather than just handing us the final answer.

In short: Don't let the robot drive the car while you nap. Keep your hands on the wheel, and make sure the robot shows you the map.
