Agentic AI in Engineering and Manufacturing: Industry Perspectives on Utility, Adoption, Challenges, and Opportunities

This qualitative study of over 30 industry interviews reveals that while agentic AI offers significant potential for orchestrating complex manufacturing workflows, its widespread adoption is currently hindered more by fragmented data, legacy toolchain limitations, and organizational governance gaps than by model capabilities. The findings call for a staged progression toward automation, grounded in robust verification and human-in-the-loop frameworks.

Original authors: Kristen M. Edwards, Maxwell Bauer, Claire Jacquillat, A. John Hart, Faez Ahmed

Published 2026-04-14

This is an AI-generated explanation of the paper below. It is not written or endorsed by the authors. For technical accuracy, refer to the original paper.

The Big Picture: The "Super-Intern" That Needs a Map

Imagine the engineering and manufacturing world as a massive, bustling construction site. For decades, the workers (engineers) have been building skyscrapers, bridges, and rockets using blueprints, calculators, and specialized tools.

Now, a new type of worker has arrived: Agentic AI.

Think of this AI not just as a smart calculator, but as a super-intern. This intern is incredibly fast at reading manuals, organizing files, and suggesting ideas. However, unlike a human engineer, this intern cannot yet "think" like a master builder. It can't look at a wobbly bridge and intuitively know why it might fall without being told exactly what to look for.

This paper is the result of the authors interviewing 33 experts (from NASA to small machine shops) to answer one question: "How do we get this super-intern to actually help us build things without causing a disaster?"


1. What Can the AI Do Right Now? (The "Paperwork" Phase)

The paper finds that the AI is currently great at the boring, repetitive stuff that humans hate doing.

  • The Analogy: Imagine you are a chef. You love cooking the main course, but you hate chopping 500 onions or organizing the spice rack. The AI is the perfect sous-chef for the chopping and organizing.
  • Real-world examples:
    • Reading the Fine Print: Engineers spend hours reading 500-page requirement documents. The AI can read them in seconds and say, "Hey, if we change this bolt, it breaks that electrical wire."
    • Data Entry: Filling out forms, copying data from old PDFs to new spreadsheets, and checking if a part number matches a drawing.
    • Finding the Needle in the Haystack: If a company has 20 years of data scattered across hard drives, emails, and dusty filing cabinets, the AI can find the specific part you need much faster than a human digging through boxes.

The Verdict: The AI is a fantastic assistant for "data-heavy" and "repetitive" tasks. It frees up humans to do the creative, high-level thinking.
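To make the cross-referencing idea concrete, here is a minimal sketch of the kind of check an agent might run when a part changes. The part IDs, requirement texts, and function name are invented for illustration; a real system would use semantic search over actual requirement documents rather than simple string matching.

```python
# Hypothetical sketch: flag requirements that mention a changed part,
# the kind of cross-referencing task the paper says agents already handle well.
# All part IDs and requirement texts below are invented.

def find_impacted_requirements(requirements: dict[str, str], changed_part: str) -> list[str]:
    """Return IDs of requirements whose text mentions the changed part."""
    return [req_id for req_id, text in requirements.items() if changed_part in text]

requirements = {
    "REQ-101": "Bolt B-7 shall secure the harness bracket to the frame.",
    "REQ-102": "The electrical harness shall route clear of bolt B-7 by 5 mm.",
    "REQ-103": "The frame shall withstand 2 kN of static load.",
}

impacted = find_impacted_requirements(requirements, "B-7")
# An agent would surface REQ-101 and REQ-102 for human review before the change.
```

The point of the sketch is the workflow, not the matching logic: the agent narrows 500 pages down to the handful of requirements a human actually needs to re-read.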


2. What Can't It Do Yet? (The "Black Box" Problem)

The AI struggles with the "hard stuff" where safety is critical and the rules are complex.

  • The Analogy: Imagine asking the super-intern to design the engine for a jet plane. It might suggest a design that looks cool and follows the rules, but it might miss a subtle vibration that causes the engine to explode at 30,000 feet.
  • The Problem:
    • No "Gut Feeling": Human engineers have "spatial reasoning." They can look at a 3D model and visualize how parts fit together, how heat moves, or how metal bends. The AI is still bad at this "3D thinking."
    • The Black Box: If a traditional computer program fails, you can look at the code and see exactly where it went wrong. If the AI fails, it's like a magic trick. You get a result, but you don't know why it happened. In engineering, if you can't explain why a bridge is safe, you can't build it.
    • Safety First: Because of this, no one is letting the AI drive the car alone yet. It can suggest the route, but a human must keep their hand on the wheel.

3. The Three Big Roadblocks (Why We Aren't Using It Everywhere)

The paper identifies three main reasons why companies aren't just flipping a switch and letting the AI take over.

A. The "Messy Attic" (Data Problems)

  • The Analogy: Imagine trying to teach a robot to cook, but all your recipes are written in crayon on napkins, some are in a different language, and the rest are stuck in a locked safe.
  • The Reality: Engineering data is a mess. It's scattered across old computers, PDFs, and handwritten notes. It's often "machine-unfriendly." Before the AI can learn, companies have to spend months cleaning up their data. Also, because this data is often secret (like military designs), it can't be sent to the cloud to be processed by big AI companies. It has to stay in a "digital bunker" (on-premise servers).
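One small piece of that cleanup can be sketched in code: normalizing records that use inconsistent field names into a single machine-readable schema. The field names, aliases, and records here are invented for illustration; real cleanup pipelines are far messier and often involve OCR and manual review.

```python
# Hypothetical sketch of one data-cleanup step: mapping legacy part records
# with inconsistent field names onto one canonical schema.
# Field names and records are invented for illustration.

def normalize_record(raw: dict) -> dict:
    """Map inconsistent legacy field names onto a canonical schema."""
    aliases = {
        "part_no": "part_id", "PartNumber": "part_id",
        "desc": "description", "Description": "description",
    }
    # Rename known aliases and strip stray whitespace from values.
    return {aliases.get(k, k): str(v).strip() for k, v in raw.items()}

legacy_rows = [
    {"part_no": " A-100 ", "desc": "hex bolt"},
    {"PartNumber": "A-101", "Description": "lock washer "},
]
clean = [normalize_record(r) for r in legacy_rows]
# Both rows now share the same keys: part_id and description.
```

Only after this kind of normalization, done across thousands of records, can an agent reliably search and reason over the data.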

B. The "Old Tools" (Legacy Software)

  • The Analogy: Imagine trying to plug a modern, high-speed electric car into a 1970s gas pump. The connection just doesn't fit.
  • The Reality: Many engineering tools (CAD software, manufacturing machines) were built decades ago. They were designed for humans to click buttons with a mouse, not for AI to talk to them via code. The AI wants to say, "Open this file, change this number, and save it." But the old software says, "I don't speak that language; you have to click the menu manually."
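One common workaround is a thin "adapter" layer that gives the agent the programmatic interface the legacy tool lacks. Below is a minimal sketch under the simplifying assumption that the old tool reads a JSON file; the class name, file format, and parameter are all invented. Real CAD adapters would go through whatever automation hook the vendor exposes (a COM interface, a scripting console), where one exists at all.

```python
# Hypothetical sketch: a thin adapter that gives an agent a programmatic
# interface over a "legacy" tool that only understands file-based interaction.
# The JSON-backed model file is a stand-in for a proprietary format.

import json
import tempfile
from pathlib import Path

class LegacyToolAdapter:
    """Translate agent-friendly calls into file edits the old tool can read."""

    def __init__(self, path: Path):
        self.path = path
        # Load the existing model if the legacy file is already there.
        self.model = json.loads(path.read_text()) if path.exists() else {}

    def set_parameter(self, name: str, value: float) -> None:
        # The call the agent wants to make: "change this number."
        self.model[name] = value

    def save(self) -> None:
        # Round-trip back to the only format the old tool understands.
        self.path.write_text(json.dumps(self.model))

# A temporary file stands in for the legacy tool's native model file.
path = Path(tempfile.mkdtemp()) / "bracket.json"
adapter = LegacyToolAdapter(path)
adapter.set_parameter("hole_diameter_mm", 6.5)
adapter.save()
```

Every adapter like this is custom work, which is exactly why the paper's interviewees call for standardized interfaces instead.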

C. The "Trust Gap" (Culture and Fear)

  • The Analogy: You wouldn't let a stranger drive your family car just because they have a great GPS. You need to know they are safe, responsible, and that you can stop them if they go off a cliff.
  • The Reality: Engineers are trained to be risk-averse. They need to know exactly why a decision was made. If an AI suggests a design, the engineer needs to be able to prove to a regulator (or a judge) that it's safe. Right now, the AI is too "guessy" (probabilistic) for high-stakes jobs. Also, many engineers don't know how to use these tools properly yet, leading to either over-trusting them or ignoring them completely.

4. The Future: How Do We Fix It?

The paper suggests that to make Agentic AI truly useful, we need a few "breakthroughs":

  1. Standardized Handshakes: We need a universal language (like a standard USB port) so that AI can talk to any engineering software without needing custom code for every single machine.
  2. The "Truth Detector": We need new ways to verify that the AI isn't hallucinating. We need tools that can say, "I am 99.9% sure this design is safe, and here is the proof."
  3. Better 3D Brains: AI needs to get better at understanding space, physics, and how things fit together, not just reading text.
  4. Human-in-the-Loop: The best future isn't AI replacing engineers; it's AI acting as a "co-pilot." The AI does the heavy lifting and the boring work, and the human engineer acts as the captain, making the final call and taking responsibility.
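Points 2 and 4 above combine naturally into one control pattern: the agent proposes, an automated check verifies, and a human makes the final call. Here is a minimal sketch of that gate; the function names, threshold, and proposal fields are invented for illustration, and the lambdas stand in for real verification tooling and a real review interface.

```python
# Hypothetical sketch of the "co-pilot" pattern: the agent proposes an action,
# and both an automated verification check and a human reviewer must approve
# before anything executes. Names and thresholds are invented.

def run_with_human_gate(proposal: dict, verify, human_approves) -> str:
    """Execute a proposal only if it passes verification AND human sign-off."""
    if not verify(proposal):
        return "rejected: failed automated verification"
    if not human_approves(proposal):
        return "rejected: human reviewer declined"
    return f"executed: {proposal['action']}"

# Toy verification: the agent must report high confidence and cite evidence.
verify = lambda p: p.get("confidence", 0) >= 0.999 and "evidence" in p
human_approves = lambda p: True  # stand-in for an engineer's review step

proposal = {"action": "update bolt spec", "confidence": 0.9995, "evidence": "FEA run #42"}
result = run_with_human_gate(proposal, verify, human_approves)
# → "executed: update bolt spec"
```

The design choice worth noting: the human gate comes after verification, so engineers only review proposals that have already cleared the automated "truth detector."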

The Bottom Line

The paper concludes that AI is not a magic wand that will solve everything tomorrow.

It's more like a powerful new engine that we are trying to install in an old car. The engine is amazing, but the car's wiring is old, the fuel is messy, and the driver is nervous.

To get the full benefit, companies need to:

  • Clean up their data (organize the attic).
  • Update their software tools (fix the wiring).
  • Train their people (teach the driver).
  • And most importantly, keep a human in the driver's seat until the AI proves it can handle the most dangerous turns safely.

The goal isn't to replace the engineer; it's to give them a super-powerful assistant so they can build better, safer, and faster things.
