An LLM-Assisted Multi-Agent Control Framework for Roll-to-Roll Manufacturing Systems

This paper presents an LLM-assisted multi-agent framework that automates the design, tuning, and adaptation of control systems for roll-to-roll manufacturing. The framework significantly reduces manual effort while ensuring safety and maintaining high-quality tension and velocity regulation under model uncertainty.

Jiachen Li, Shihao Li, Christopher Martin, Zijun Chen, Dongmei Chen, Wei Li

Published 2026-03-10

Imagine a massive, high-speed factory machine that unspools a giant roll of material (like a very long, thin sheet of plastic or metal) and winds it back up onto another roll. This is called Roll-to-Roll (R2R) manufacturing. It's used to make everything from flexible solar panels to printed sensors.

The biggest challenge? Keeping the "web" (the material) at exactly the right tension.

  • Too loose: The material wrinkles, tears, or gets misaligned.
  • Too tight: The material snaps or the motors burn out.

Traditionally, getting this machine to work perfectly required a highly experienced human engineer. They would spend days or weeks manually tweaking knobs and dials, guessing what the machine needed, and hoping for the best. It was slow, expensive, and prone to human error.

This paper introduces a new "AI Co-Pilot" system that automates this entire process. Think of it not as a single robot taking over, but as a team of specialized AI assistants working together, guided by a super-smart "Chief Engineer" (a Large Language Model, or LLM).

Here is how this team works, explained through simple analogies:

1. The Team of AI Specialists

Instead of one giant brain trying to do everything, the system uses five specialized "agents" (AI workers), each with a specific job:

  • The Detective (System ID Agent): Before the machine can be controlled, the AI needs to understand how it behaves. The Detective looks at the machine's past data and physical rules to build a digital "twin" (a simulation) of the machine. It's like a mechanic listening to an engine to figure out exactly how it runs before trying to fix it.
  • The Architect (Initial Control Agent): Once the AI understands the machine, the Architect designs the control strategy. It looks at three different "blueprints" (PID, MPC, and LQR—types of control math) and picks the best one. It's like a chef tasting three different recipes and choosing the one that will make the perfect dish.
  • The Pilot (Adaptation Agent): This is the most critical part. The AI takes its "perfect" design from the computer simulation and tries it on the real machine. But here's the catch: The Pilot never flies without a safety net.
  • The Safety Inspector (Safety Filter): Before the Pilot can touch the real machine, the Safety Inspector runs a thousand simulations in a split second. It asks: "If we change this setting, will the machine break? Will the tension snap the material?" If the answer is "maybe," the change is rejected. This ensures the AI never makes a dangerous mistake.
  • The Watchman (Monitoring Agent): Once the machine is running, the Watchman never sleeps. It monitors the machine 24/7. If the performance starts to drift (like a car engine getting slightly rough), it diagnoses why. Is it a loose belt? A worn-out sensor? Or just a change in the material? It tells the human operators exactly what to do.
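The Safety Inspector's "thousand simulations in a split second" is essentially a Monte Carlo gate: sample many possible disturbances, predict the resulting tension for each, and reject the change if any sample violates the limits. The sketch below illustrates the idea with a toy tension model; all numbers, names, and the model itself are illustrative assumptions, not the paper's implementation.

```python
import random

# Hypothetical safety limits for the web (illustrative values only).
TENSION_MAX = 50.0  # N: above this, the web may snap
TENSION_MIN = 5.0   # N: below this, the web may wrinkle

def simulate_tension(gain, disturbance):
    """Toy stand-in for the digital twin: predicts peak web tension
    for a proposed controller gain under one sampled disturbance."""
    nominal = 25.0
    return nominal + gain * 2.0 + disturbance

def safety_filter(proposed_gain, n_rollouts=1000, seed=0):
    """Reject the proposed change if ANY sampled rollout violates the limits."""
    rng = random.Random(seed)
    for _ in range(n_rollouts):
        disturbance = rng.uniform(-5.0, 5.0)  # sampled model uncertainty
        t = simulate_tension(proposed_gain, disturbance)
        if not (TENSION_MIN <= t <= TENSION_MAX):
            return False  # "maybe unsafe" -> rejected
    return True

print(safety_filter(2.0))   # → True  (small change: all rollouts stay in bounds)
print(safety_filter(15.0))  # → False (large change: predicted tension too high)
```

The key design choice is that the filter is conservative: a single bad rollout is enough to veto the change, which is why the AI "never makes a dangerous mistake" on the real machine.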

2. The "Sim-to-Real" Dance

The paper highlights a major problem in AI: The Simulation Gap.
Imagine you learn to drive in a video game (the simulation). You might be perfect there. But when you get into a real car (the real factory), the wind, the road texture, and the weight of the car are different. You might crash.

This framework solves that by using a "Practice Loop":

  1. The AI proposes a change in the simulation.
  2. The Safety Inspector checks it.
  3. If safe, the AI tries it on the real machine.
  4. If the real machine acts differently than expected, the AI analyzes the difference, learns from it, and proposes a new change.
  5. It repeats this cycle until the real machine performs as well as the simulation.
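The five steps above can be sketched as a single loop: propose in simulation, gate with the safety check, try on the plant, and use the sim-to-real gap to correct the digital twin. The classes and numbers below are toy stand-ins invented for illustration (the real optimum deliberately differs between twin and plant to mimic the simulation gap); none of this is the paper's actual code.

```python
class Twin:
    """Toy digital twin with a learnable mismatch correction."""
    def __init__(self):
        self.bias = 0.0  # correction learned from the sim-to-real gap
    def propose(self, gain):
        return gain + 0.5  # step toward the twin's (imperfect) target
    def is_safe(self, gain):
        return gain <= 5.0  # stand-in for the safety filter
    def predict_error(self, gain):
        return abs(3.0 - gain) + self.bias  # twin thinks the optimum is 3.0

class Plant:
    """Toy real machine whose true optimum differs from the twin's."""
    def evaluate(self, gain):
        return abs(3.5 - gain)  # the real optimum is at 3.5, not 3.0

def practice_loop(twin, plant, gain=0.0, tol=0.1, max_iters=20):
    for _ in range(max_iters):
        proposal = twin.propose(gain)         # 1. propose a change in simulation
        if not twin.is_safe(proposal):        # 2. the safety inspector gates it
            break
        real = plant.evaluate(proposal)       # 3. if safe, try it on the machine
        sim = twin.predict_error(proposal)
        twin.bias += 0.5 * (real - sim)       # 4. learn from the sim-to-real gap
        gain = proposal
        if real < tol:                        # 5. repeat until the plant performs
            return gain
    return gain

print(practice_loop(Twin(), Plant()))  # → 3.5 (the loop finds the plant's true optimum)
```

Note that step 4 updates the twin, not just the controller: each trial on the real machine makes the simulation more trustworthy for the next proposal.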

3. The Results: Faster, Safer, Smarter

The researchers tested this on a lab-scale machine. Here is what happened:

  • The Old Way: A standard computer controller (MPC) struggled to keep the tension steady, leaving persistent tension and velocity tracking errors.
  • The AI Way: The AI team started with a basic setting, then iteratively "tuned" itself.
  • The Outcome: The AI system reduced errors by 55% to 82% compared to the standard controller. It learned to handle the machine's quirks much faster than a human could.
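For concreteness, a "55% to 82% reduction" is measured relative to the baseline controller's error. The helper below shows the arithmetic; the RMSE values plugged in are illustrative placeholders, not figures from the paper.

```python
def percent_reduction(baseline_rmse, new_rmse):
    """Relative error reduction of a new controller vs. a baseline."""
    return 100.0 * (baseline_rmse - new_rmse) / baseline_rmse

# Illustrative numbers only (the paper reports 55%-82% across its tests):
print(round(percent_reduction(1.0, 0.45)))  # → 55
print(round(percent_reduction(1.0, 0.18)))  # → 82
```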

The Big Picture: Why This Matters

Think of this framework as giving a factory machine a "self-driving" mode for its own maintenance and tuning.

  • No more "Black Boxes": Unlike some AI that just gives an answer, this system explains why it made a change (e.g., "I increased the tension because the material got thinner").
  • Safety First: It treats safety as a non-negotiable rule. The AI can't just "guess"; it must prove its ideas are safe in a digital sandbox first.
  • Democratizing Expertise: You don't need a PhD in control theory to run this factory. The AI brings the expert knowledge, allowing the factory to run efficiently even if the human operators are less experienced.

In short, this paper presents a way to let AI handle the complex math and dangerous trial-and-error of factory tuning, while keeping humans in the loop to oversee the process. It turns a slow, expert-dependent process into a fast, automated, and safe one.