A Human-Centred Architecture for Large Language Model-Cognitive Assistants in Manufacturing within Quality Management Systems

This paper addresses the lack of human-centred software architectures for integrating Large Language Model-Cognitive Assistants into manufacturing Quality Management Systems by proposing and validating a flexible, modular, component-based design that supports continuous process improvement and knowledge management.

Marcos Galdino, Johanna Grahl, Tobias Hamann, Anas Abdelrazeq, Ingrid Isenhardt

Published 2026-03-18

Imagine a bustling factory floor. It's a place of heavy machines, precise instructions, and a constant flow of workers trying to make perfect products. Now, imagine giving every worker a super-smart, tireless assistant who knows everything about the factory, the machines, and the rules. This assistant is an AI Chatbot (specifically, a Large Language Model or LLM).

But here's the catch: In a factory, if the assistant gives the wrong advice, a machine could break, or a product could be unsafe. If it "hallucinates" (makes things up), it could cause a disaster. And if it forgets the latest safety rule, the whole factory could fail an inspection.

This paper is about building a safe, smart, and organized "brain" for these AI assistants, specifically designed to fit perfectly inside a factory's Quality Management System (QMS). Think of the QMS as the factory's "Rulebook and Logbook" combined—it's how they ensure everything is done right, safely, and legally.

Here is the breakdown of their solution using some everyday analogies:

1. The Problem: A Wild Horse vs. A Trained Guide

Currently, AI assistants are like wild horses. They are fast and can talk to you, but they might run off a cliff if you aren't careful. They don't naturally know how to follow strict factory rules, keep a perfect log of changes, or admit when they don't know something.

The authors' argument, in essence: the horse needs a stable and a harness so it can run safely on the track.

2. The Solution: A "Smart Team" Architecture

Instead of building one giant, monolithic AI brain, the authors designed a team of specialized workers (microservices) that talk to each other. They call this a "Human-Centred Architecture."

Here are the key team members in their design:

  • The Receptionist (ChatController): This is the front door. It takes your voice or text, figures out what you need, and directs you to the right person.
  • The Librarian (RAGRetrieval): This is the most important part. Instead of the AI guessing facts from its memory (which can be wrong), the Librarian goes to the factory's official digital library (work instructions, manuals, safety rules), finds the exact page you need, and hands it to the AI. This ensures the AI only answers based on real, up-to-date facts.
  • The Editor-in-Chief (FeedbackEvaluation): What happens if the AI gets something wrong? A human worker can flag it. This "Editor" checks the correction using two security guards:
    • The Jailbreak Guard: Checks whether someone is trying to trick the AI into breaking the rules.
    • The Fact-Checker: Makes sure the new information is actually true and fits the context.
    Only after passing both checks does the new knowledge get added to the library.
  • The Safety Officer (Guardrailling): Before the AI speaks, this officer reads the answer to make sure it's polite, safe, and follows company policies (like not giving advice that violates labor laws).
  • The Specialized Expert (LLM with Adapters): The AI isn't just a general chatbot; it has "special glasses" (adapters) that help it understand specific factory jargon and math problems better.
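To make the "smart team" concrete, here is a minimal Python sketch of the request path. The class names (ChatController, RAGRetrieval, Guardrailing) echo the paper's component names, but the method signatures, the word-overlap retrieval, and the blocked-phrase policy are illustrative assumptions, not the authors' implementation.

```python
# Toy sketch of the pipeline: Receptionist -> Librarian -> Safety Officer.
# Retrieval and guardrail logic are deliberately simplistic placeholders.

class RAGRetrieval:
    """The 'Librarian': fetches the most relevant official document."""
    def __init__(self, documents):
        self.documents = documents  # {doc_id: text} of approved QMS documents

    def retrieve(self, query):
        # Toy relevance score: number of words shared with the query.
        words = set(query.lower().split())
        scored = sorted(
            self.documents.items(),
            key=lambda item: len(words & set(item[1].lower().split())),
            reverse=True,
        )
        return scored[0] if scored else None  # (doc_id, text) or None


class Guardrailing:
    """The 'Safety Officer': vets the answer before it reaches the worker."""
    BLOCKED = {"override the interlock", "skip the inspection"}

    def check(self, answer):
        return not any(phrase in answer.lower() for phrase in self.BLOCKED)


class ChatController:
    """The 'Receptionist': routes a worker's question through the pipeline."""
    def __init__(self, retrieval, guardrail):
        self.retrieval = retrieval
        self.guardrail = guardrail

    def ask(self, question):
        hit = self.retrieval.retrieve(question)
        if hit is None:
            return "No matching document found; please ask a supervisor."
        doc_id, text = hit
        # A real system would pass `text` to the LLM as grounding context;
        # here we quote the source directly so provenance stays visible.
        answer = f"According to {doc_id}: {text}"
        if not self.guardrail.check(answer):
            return "Answer withheld: it failed the safety policy check."
        return answer


docs = {"WI-042": "Torque the flange bolts to 35 Nm in a star pattern."}
controller = ChatController(RAGRetrieval(docs), Guardrailing())
print(controller.ask("What torque for the flange bolts?"))
```

The key design point survives even in this toy version: the answer is assembled from a retrieved, citable document rather than from the model's memory, and it cannot reach the worker without passing the guardrail.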

3. The "Human-in-the-Loop" Concept

The paper emphasizes that humans must remain in charge. Imagine the AI is a very fast intern. It can draft a report or find a manual in seconds, but a Supervisor (the human) must sign off on it before it becomes official.

The system is designed so that:

  • Workers can ask questions in plain English.
  • The AI finds the answer from the official documents.
  • If the AI is unsure or the worker disagrees, a human supervisor can step in, correct the answer, and update the system for everyone else.
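The correction workflow above can be sketched as a simple gate: automated checks first, then a mandatory human sign-off. The paper specifies the components (a FeedbackEvaluation step with jailbreak detection and fact checking); the heuristics and function names below are assumptions for illustration only.

```python
# Sketch of the human-in-the-loop feedback gate. A worker's correction
# must pass a jailbreak check and a contextual fact check, and even then
# only enters the knowledge base after a supervisor approves it.

JAILBREAK_MARKERS = ("ignore previous instructions", "disable safety")

def jailbreak_guard(correction: str) -> bool:
    """Reject text that looks like an attempt to subvert the assistant."""
    lowered = correction.lower()
    return not any(marker in lowered for marker in JAILBREAK_MARKERS)

def fact_check(correction: str, source_text: str) -> bool:
    """Toy contextual check: the correction must share some vocabulary
    with the official document it claims to amend."""
    overlap = set(correction.lower().split()) & set(source_text.lower().split())
    return len(overlap) >= 2

def evaluate_feedback(correction, source_text, supervisor_approves):
    """Gate a worker correction: both automated checks, then a human."""
    if not jailbreak_guard(correction):
        return "rejected: possible jailbreak attempt"
    if not fact_check(correction, source_text):
        return "rejected: does not fit the document context"
    if not supervisor_approves:
        return "pending: awaiting supervisor sign-off"
    return "accepted: knowledge base updated"


source = "Torque the flange bolts to 35 Nm in a star pattern."
print(evaluate_feedback("The flange bolts now need 40 Nm torque.", source, True))
```

Note the ordering: the cheap automated guards run first, so the human supervisor only ever reviews corrections that have already survived the jailbreak and context checks.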

4. Why This Matters (The "So What?")

Without this architecture, putting AI in a factory is risky. It's like letting a toddler drive a forklift.

With this architecture:

  • Trust: Workers trust the AI because it checks its facts against the official rulebook.
  • Audit Trail: If an auditor asks, "Where did this answer come from?" the system can show the exact document and who approved the update.
  • Continuous Improvement: As workers find mistakes or new solutions, they feed them back into the system, making the whole factory smarter over time, just like a team learning from a game.
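The audit-trail point can be made concrete with a small record type: every published answer carries its source document and the human who signed off on the underlying knowledge. The paper requires this kind of traceability; the field names and schema below are illustrative assumptions, not the authors' data model.

```python
# Sketch of an audit-trail record: enough provenance to answer an
# auditor's "Where did this answer come from, and who approved it?"
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass(frozen=True)
class AuditRecord:
    question: str
    answer: str
    source_doc: str   # which official document grounded the answer
    approved_by: str  # who signed off on the knowledge behind it
    timestamp: str

    @classmethod
    def log(cls, question, answer, source_doc, approved_by):
        return cls(question, answer, source_doc, approved_by,
                   datetime.now(timezone.utc).isoformat())


record = AuditRecord.log(
    "What torque for the flange bolts?",
    "Torque the flange bolts to 35 Nm.",
    source_doc="WI-042",
    approved_by="supervisor:jdoe",
)
print(record.source_doc, record.approved_by)
```

Making the record frozen (immutable) mirrors the QMS expectation that log entries, once written, are never silently edited.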

The Bottom Line

The authors built a blueprint (a software design) for a factory AI that doesn't just "chat," but actually works. It respects the strict rules of quality management, keeps a complete log of changes, and ensures that humans remain the captains of the ship, with the AI serving as a tireless navigator.

They validated the design with a group of experts (a focus group of engineers and managers), who agreed the architecture is a sound way to build such a system. The next step is to implement it and trial it in a real factory.
