Delta1 with LLM: symbolic and neural integration for credible and explainable reasoning

This paper introduces Delta1 with LLM, a neuro-symbolic framework that combines the deterministic, polynomial-time theorem generation of the Automated Theorem Generator Delta1 with large language models to produce credible, auditable, and naturally explained reasoning across critical domains like healthcare and compliance.

Yang Xu, Jun Liu, Shuwei Chen, Chris Nugent, Hailing Guo

Published 2026-03-16

Imagine you are trying to build a house, but your blueprint has a hidden flaw: the instructions say the roof must be made of ice, the walls of fire, and the foundation of water. If you try to build it exactly as written, the house collapses immediately.

Now, imagine two different experts trying to help you fix this:

  1. The "Math Wizard" (Symbolic Logic): This expert is incredibly precise. They look at your blueprint and say, "If you remove the 'ice roof' instruction, the house can stand." They can prove this mathematically, 100% of the time, with zero guesswork. However, they speak only in complex equations and symbols. They can tell you that the house will fall, but they can't explain why in a way a regular person understands.
  2. The "Storyteller" (Large Language Model): This expert is great at speaking human language. They can look at a blueprint and say, "Oh, ice roofs melt in the sun!" But if you ask them to prove the house will fall, they might guess, hallucinate, or make up a reason that sounds good but is actually wrong.

This paper introduces a new team called "∆1 + LLM" that combines the best of both worlds.

The Core Idea: "Explainability by Construction"

Instead of building a house and then trying to explain why it fell (which is messy and often wrong), this system builds the explanation into the construction process from the very beginning.

Here is how the team works, step-by-step, using a simple analogy:

1. The Translator (The Front-End LLM)

First, you give the system a messy paragraph of rules (like a medical policy or a legal contract).

  • What happens: The "Storyteller" reads your text and translates it into a clean list of simple facts (like "Patient has infection" or "Data is shared"). It turns your messy English into a structured list of ingredients.
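The translation step above can be sketched in code. This is a minimal illustration only: the prompt wording, the JSON schema, and the stubbed `fake_llm` function are all assumptions for demonstration, not the paper's actual prompts or fact format.

```python
import json

# Hypothetical prompt asking the LLM to emit rules in a fixed JSON schema.
PROMPT_TEMPLATE = (
    "Extract every atomic rule from the policy below as JSON: "
    '{{"facts": [...], "rules": [{{"if": [...], "then": "..."}}]}}\n\n'
    "Policy:\n{policy}"
)

def parse_llm_output(raw: str) -> dict:
    """Validate the LLM's free-text reply into a structured rule list."""
    data = json.loads(raw)
    if not isinstance(data.get("facts"), list):
        raise ValueError("missing 'facts' list")
    for rule in data.get("rules", []):
        if not {"if", "then"} <= rule.keys():
            raise ValueError("rule missing 'if'/'then'")
    return data

# Stand-in for a real LLM call, returning what a model might produce.
def fake_llm(prompt: str) -> str:
    return json.dumps({
        "facts": ["patient_has_fever"],
        "rules": [{"if": ["patient_has_fever"], "then": "give_antibiotics"}],
    })

structured = parse_llm_output(fake_llm(PROMPT_TEMPLATE.format(policy="...")))
```

The key design point is that the LLM's output is validated against a schema before anything downstream touches it, so a malformed or hallucinated reply fails loudly here rather than corrupting the logic stage.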

2. The Logic Engine (The ∆1 Generator)

This is the "Math Wizard." It takes your list of ingredients and runs a special, deterministic algorithm (called FTSC).

  • What happens: Instead of guessing, it mathematically constructs every possible way these ingredients could clash. It finds the smallest possible group of rules that cause the problem.
  • The Magic: It doesn't just say "There is a problem." It says, "If you remove this specific rule (Rule D), the conflict disappears." It does this without any guessing, in a predictable amount of time. It guarantees that the problem it found is real and minimal.
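To make the idea of a "smallest conflicting rule set" concrete, here is a toy stand-in for this step. The real FTSC algorithm is deterministic and polynomial-time; this brute-force subset search only mimics its *output* (a minimal conflicting group of rules) on tiny examples, and the rule encoding is an assumption of this sketch.

```python
from itertools import combinations

# Literals are strings; "~x" denotes the negation of "x".
# A rule is (premises, conclusion), read as "if all premises, then conclusion".

def forward_chain(facts, rules):
    """Apply Horn-style rules to the facts until nothing new is derived."""
    known = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if set(premises) <= known and conclusion not in known:
                known.add(conclusion)
                changed = True
    return known

def has_conflict(known):
    """True if some literal and its negation were both derived."""
    return any(lit.startswith("~") and lit[1:] in known for lit in known)

def minimal_conflict_core(facts, rules):
    """Smallest subset of rules that still produces a contradiction."""
    for size in range(1, len(rules) + 1):
        for subset in combinations(rules, size):
            if has_conflict(forward_chain(facts, subset)):
                return list(subset)
    return None  # the rule set is conflict-free

rules = [
    (["fever"], "give_antibiotics"),   # Rule A
    (["virus"], "~give_antibiotics"),  # Rule B
    (["fever"], "virus"),              # Rule C
]
core = minimal_conflict_core({"fever"}, rules)
# All three rules are needed: removing any single one dissolves the conflict.
```

Minimality is what makes the diagnosis actionable: every rule in the returned core is genuinely part of the problem, so "remove one rule from this set" is a valid repair.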

3. The Translator (The Back-End LLM)

Now, the "Math Wizard" hands the result to the "Storyteller" again.

  • What happens: The Storyteller looks at the specific rule the Math Wizard flagged and says, "Ah! This rule says 'If you have a fever, take antibiotics.' But another rule says 'If you have a virus, don't take antibiotics.' The system found that these two rules can't exist together."
  • The Result: It gives you a clear, human-readable explanation: "Your policy is contradictory. You can't have both rules active at the same time. Here is how to fix it..."
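A sketch of the hand-off to the back-end LLM might look like this. The rule format and prompt template are assumptions of this illustration; the point is that the model is only asked to *verbalise* a conflict the logic engine has already proven, so the facts in the prompt are grounded, not guessed.

```python
# Render a proven conflict core into a grounded prompt for the back-end LLM.

def describe_rule(premises, conclusion):
    """Turn (premises, conclusion) into a readable if/then sentence."""
    body = " and ".join(premises)
    head = conclusion.lstrip("~")
    verb = "then do NOT" if conclusion.startswith("~") else "then"
    return f"if {body}, {verb} {head}"

def explanation_prompt(core):
    """Build the instruction the storyteller LLM would receive."""
    lines = [
        f"- Rule {i + 1}: {describe_rule(premises, conclusion)}"
        for i, (premises, conclusion) in enumerate(core)
    ]
    return (
        "These rules were proven mutually contradictory. Explain the "
        "conflict in plain language and suggest which rule to change:\n"
        + "\n".join(lines)
    )

core = [
    (["fever"], "give_antibiotics"),
    (["virus"], "~give_antibiotics"),
]
prompt = explanation_prompt(core)
```

Because the prompt contains only the flagged rules, the model's job shrinks from "find the problem" (where it might hallucinate) to "restate this proven problem", which is the task LLMs are reliable at.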

Why is this a big deal?

1. No More "Black Box" Guessing
Usually, when AI finds a problem, we don't know if it's right or if the AI just made it up. With this system, the "Math Wizard" part is 100% provable. If the system says there is a contradiction, there is definitely a contradiction. The logic is ironclad.

2. From "What" to "How"
Old systems might tell you, "Your contract is invalid." This system tells you, "Your contract is invalid because Clause A conflicts with Clause B, and here is exactly how to rewrite Clause A to fix it." It turns a diagnosis into a prescription.

3. It Works in Real Life
The paper shows this working in three scary-but-important areas:

  • Healthcare: Catching rules where a doctor is told to do two opposite things for a patient (e.g., "Treat with drug X" vs. "Do not give drug X if patient has condition Y").
  • Law & Compliance: Finding where a company's privacy policy contradicts a government regulation (e.g., "We must share data" vs. "We must never share data").
  • Contracts: Spotting clauses in a business deal that cancel each other out before anyone signs the paper.

The Bottom Line

Think of ∆1 + LLM as a super-powered editor for complex rules.

  • The Math part guarantees that every conflict it flags is real and minimal: the proof is deterministic, with no guessing involved.
  • The AI part ensures the editing suggestions are written in plain English that humans can actually understand and act upon.

It bridges the gap between "cold, hard math" and "warm, human conversation," creating a system that is not only smart but also trustworthy and explainable. It moves us from "The computer says no" to "The computer says no, and here is exactly why, and here is how to fix it."
