A Retrieval-Augmented Language Assistant for Unmanned Aircraft Safety Assessment and Regulatory Compliance

This paper presents a retrieval-augmented language assistant designed to support unmanned aircraft safety assessments and regulatory compliance. By grounding its responses in authoritative sources, the assistant provides traceable, auditable, citation-driven decision support without replacing expert judgment.

Gabriele Immordino, Andrea Vaiuso, Marcello Righi

Published Thu, 12 Ma

Imagine you are a pilot trying to get permission to fly a new type of drone. To do this, you have to read thousands of pages of complex government rules, cross-reference different documents, and figure out exactly which rules apply to your specific flight. It's like trying to solve a giant, shifting jigsaw puzzle where the pieces are made of dense legal text.

This paper introduces a smart digital assistant designed to help pilots and safety inspectors solve that puzzle without getting lost or making mistakes.

Here is the breakdown of how it works, using simple analogies:

1. The Problem: The "Hallucinating" Librarian

Usually, when we ask a standard AI (like a chatbot) a question, it acts like a librarian who has read a million books but doesn't have them on the shelves anymore. It relies on its memory.

  • The Risk: If the librarian forgets a detail or mixes up two books, they might confidently make up a rule that doesn't exist. In aviation, making up a rule is dangerous. If the AI says, "You can fly here," but the rule actually says "No," people could get hurt.

2. The Solution: The "Index Card" System (RAG)

The authors built a different kind of assistant. Instead of relying on memory, this assistant acts like a super-efficient librarian with a strict rule: "I will never answer unless I can point to the exact page in the book."

This technology is called Retrieval-Augmented Generation (RAG). Here is how the system works, step-by-step:

  • The Library (The Database): The assistant is fed only the official, up-to-date government rulebooks (like the EASA regulations for drones). It doesn't know anything else.
  • The Index Cards (Chunking): The system breaks these massive books down into small, manageable "index cards" (chunks). Each card is labeled with exactly where it came from (page number, section title).
  • The Search (Retrieval): When you ask a question, the assistant doesn't guess. It frantically searches its index cards to find the ones that match your question. It uses two search methods at once:
    • The Keyword Search: Looking for exact words (like "drone" or "altitude").
    • The Concept Search: Understanding that "flying high" means the same thing as "high altitude," even if the words are different.
  • The Safety Check (Filtering): Before answering, the system checks the cards it found. If the cards don't have enough info to answer safely, the assistant is programmed to say, "I don't know, and here is exactly what is missing." It refuses to guess.
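The retrieve-then-refuse loop described above can be sketched in a few lines of Python. Everything here is an illustrative stand-in, not the paper's implementation: the two-chunk corpus, the synonym map (playing the role of an embedding model's concept matching), the scoring functions, and the threshold are all made up. The shape, however, is the same: score every index card two ways, keep its citation metadata attached, and abstain when nothing scores well enough.

```python
from dataclasses import dataclass

@dataclass
class Chunk:
    """One 'index card': a passage plus exactly where it came from."""
    text: str
    page: int
    section: str

# Hypothetical mini-corpus standing in for the chunked rulebooks.
CHUNKS = [
    Chunk("Operations over assemblies of people are prohibited in the open category.", 12, "4.2"),
    Chunk("Maximum take-off mass must not exceed 25 kg.", 7, "3.1"),
]

# Toy synonym map standing in for semantic ("concept") search.
SYNONYMS = {"crowd": "assemblies", "crowds": "assemblies", "fly": "operations"}

def keyword_score(query: str, text: str) -> float:
    """Keyword search: fraction of query words found verbatim in the chunk."""
    words = query.lower().split()
    t = text.lower()
    return sum(w in t for w in words) / len(words)

def concept_score(query: str, text: str) -> float:
    """Concept search: same overlap, after mapping words to related concepts."""
    words = [SYNONYMS.get(w, w) for w in query.lower().split()]
    t = text.lower()
    return sum(w in t for w in words) / len(words)

def retrieve(query: str, threshold: float = 0.3):
    """Hybrid retrieval: combine both scores; abstain below the threshold."""
    scored = [(keyword_score(query, c.text) + concept_score(query, c.text), c)
              for c in CHUNKS]
    score, best = max(scored, key=lambda pair: pair[0])
    if score < threshold:
        return None  # "I don't know" -- the system refuses to guess
    return best
```

With these toy inputs, `retrieve("fly over crowds")` finds the crowd rule (Section 4.2, page 12) even though the query never uses the word "assemblies", while an off-topic query falls below the threshold and returns `None` instead of a guess.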

3. The Two Modes of Operation

The paper tests this assistant in two specific ways, like a Swiss Army knife with two main tools:

Tool A: The "Regulatory Chatbot" (Conversational)

  • Scenario: You ask, "Can I fly my drone over a crowded market?"
  • Action: The assistant finds the specific rule about crowds, reads it, and answers: "No, according to Section 4.2, page 12, you cannot."
  • The Magic: It shows you the exact page and paragraph it used. If you doubt it, you can open the book and verify it yourself. It's like a lawyer handing you the specific law they are quoting.
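The "show the exact page" behaviour amounts to never letting an answer travel without its citations. A minimal sketch, with a hypothetical `CitedAnswer` structure and a made-up sample verdict (not the paper's actual output format):

```python
from dataclasses import dataclass, field

@dataclass
class CitedAnswer:
    """An answer permanently bundled with the sources that support it."""
    text: str                                     # the assistant's verdict
    sources: list = field(default_factory=list)   # (section, page) pairs

    def render(self) -> str:
        # Append every citation so the reader can open the book and check.
        cites = "; ".join(f"Section {s}, page {p}" for s, p in self.sources)
        return f"{self.text} [{cites}]"

ans = CitedAnswer("No, flying over a crowded market is not permitted.",
                  [("4.2", 12)])
```

Here `ans.render()` produces `No, flying over a crowded market is not permitted. [Section 4.2, page 12]`: the claim and its paper trail arrive together.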

Tool B: The "Safety Checklist" (Structured)

  • Scenario: You fill out a form with your drone's weight, how high you'll fly, and where you are.
  • Action: The assistant acts like a calculator. It takes your inputs, looks up the rules, and spits out a structured report: "Risk Level: Medium," "Required Assessment: Full."
  • The Magic: It doesn't write a long story; it gives you a clean, machine-readable answer that fits into official paperwork, ensuring the data is consistent every time.
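The structured mode can be sketched as a plain function from typed inputs to a machine-readable report. The thresholds below are invented for illustration (the real limits live in the regulations, not in this code); the point is the shape of the output, which is the same JSON-friendly structure every time.

```python
import json

def assess(mass_kg: float, altitude_m: float, over_people: bool) -> dict:
    """Map flight parameters to a structured risk report.

    Thresholds are illustrative placeholders, NOT the actual rules.
    """
    if over_people or mass_kg > 25:
        risk, assessment = "High", "Full"
    elif altitude_m > 120:
        risk, assessment = "Medium", "Full"
    else:
        risk, assessment = "Low", "Simplified"
    return {
        "risk_level": risk,
        "required_assessment": assessment,
        # Echo the inputs so the report is self-describing in the paperwork.
        "inputs": {"mass_kg": mass_kg, "altitude_m": altitude_m,
                   "over_people": over_people},
    }

report = assess(mass_kg=4.0, altitude_m=150.0, over_people=False)
print(json.dumps(report, indent=2))
```

Because the output is a fixed structure rather than free text, it can be validated, filed, and compared across applications without a human re-reading prose.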

4. Why This Matters: The "Human in the Loop"

The most important part of this paper is what the assistant doesn't do.

  • It does not make the final decision.
  • It does not replace the human expert.

Think of the assistant as a highly skilled research intern. The intern does all the heavy lifting: finding the documents, summarizing them, and highlighting the relevant parts. But the Senior Pilot (the human) is the one who signs the final permission slip. The intern is there to make sure the Senior Pilot doesn't miss a page or misread a rule, but the Senior Pilot remains responsible.

The Bottom Line

This paper shows that we can build AI tools for dangerous, high-stakes jobs (like aviation safety) if we force the AI to stick to the facts and show its work.

By separating the "searching" (finding the rules) from the "speaking" (answering the question), and by forcing the AI to cite its sources, the authors created a system that is fast and helpful but, most importantly, trustworthy. It turns a chaotic mountain of paperwork into a clear, traceable path for safety.