Making Implicit Premises Explicit in Logical Understanding of Enthymemes

This paper proposes a neuro-symbolic pipeline: large language models generate implicit premises and translate natural language into logical formulas, while a SAT-based reasoner verifies logical entailment. Together, these components systematically decode enthymemes, with promising performance on existing datasets.

Xuyao Feng, Anthony Hunter

Published 2026-03-09

Imagine you are a detective trying to solve a mystery, but the witness gives you a clue that feels incomplete.

The Witness says: "The window is broken, and there's a baseball on the floor."
The Detective concludes: "Someone threw the ball through the window."

In the real world, we instantly fill in the missing piece: Baseballs break windows. But in the world of computers, that missing piece is a huge problem. The computer sees "Window broken" and "Ball on floor" and doesn't know how to jump to "Someone threw the ball." It's missing the invisible bridge.

This paper is about building a machine that can not only spot that invisible bridge but also build it, translate it into a language the computer understands, and prove it works.

Here is the breakdown of their solution, using a simple analogy.

The Problem: The "Enthymeme"

In logic, an argument with a missing piece is called an enthymeme.

  • Explicit Premise: The facts we see (Window broken).
  • Claim: The conclusion (Someone threw the ball).
  • Implicit Premise: The missing logic (Baseballs break windows).

Current computers are bad at this. Some can guess the missing word (like a spellchecker), but they don't understand the logic. Others can do the logic, but they need a giant library of rules to start with, which doesn't exist for every situation.

The Solution: A Three-Step Detective Pipeline

The authors built a "Neuro-Symbolic Pipeline." Think of this as a three-person detective team working together to solve the case.

Step 1: The Creative Writer (The LLM)

Role: Fills in the missing gaps.
How it works: You give the computer the facts and the conclusion. The computer (using a Large Language Model, like a super-smart AI writer) asks, "What story connects these?"

  • Input: "Window broken" + "Someone threw the ball."
  • Output: "Baseballs are hard and windows are fragile."
    The AI generates this missing "bridge" in plain English. It might even generate a chain of bridges (Step 1, Step 2, Step 3) to make the logic very clear.
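The paper's exact prompt is not shown here, but the idea can be sketched as a small prompt-builder: given the explicit premise and the claim, ask the model for the missing bridge, optionally as a chain of steps. The function name and wording below are illustrative assumptions, not the authors' prompt.

```python
def build_bridge_prompt(premise: str, claim: str, n_steps: int = 1) -> str:
    """Build a prompt asking an LLM to supply the implicit premise(s)
    that connect an explicit premise to a claim. Illustrative only --
    the paper's actual prompt may differ."""
    return (
        f"Premise: {premise}\n"
        f"Claim: {claim}\n"
        f"State, in {n_steps} short sentence(s), the implicit premise(s) "
        "that make the claim follow from the premise."
    )

prompt = build_bridge_prompt(
    "The window is broken, and there's a baseball on the floor.",
    "Someone threw the ball through the window.",
    n_steps=3,
)
print(prompt)
```

Raising `n_steps` corresponds to asking for a longer chain of bridges, which (as the Results section notes) tends to help the downstream logic check.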

Step 2: The Translator (AMR & Logic)

Role: Turns the story into a strict code.
How it works: Computers can't argue with sentences; they need math. This step takes the English sentences from Step 1 and translates them into Abstract Meaning Representation (AMR).

  • Imagine AMR is like a flowchart of the sentence's meaning, ignoring grammar and focusing on who did what to whom.
  • Then, it translates that flowchart into Propositional Logic (mathematical formulas like A ∧ B → C).
  • Why? This turns a fuzzy story into a rigid structure that a calculator can check.
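Real AMR parsing is far richer than this, but the core move, reducing a sentence to a canonical atom that ignores surface grammar, can be shown with a toy normalizer. Everything here (the stop-word list, the naming scheme) is a simplifying assumption, not the paper's translation procedure.

```python
import re

def to_proposition(sentence: str) -> str:
    """Toy stand-in for the AMR -> propositional-logic step: drop
    function words and reduce a sentence to a canonical atom name.
    A real AMR parser captures who-did-what-to-whom; this only
    illustrates the idea of discarding surface grammar."""
    stop = {"the", "a", "an", "is", "are", "there", "and"}
    words = [w for w in re.findall(r"[a-z]+", sentence.lower()) if w not in stop]
    return "_".join(words)

print(to_proposition("The window is broken"))   # -> window_broken
print(to_proposition("Baseballs break windows"))
```

The point is that "The window is broken" and "there is a broken window" should land on comparable atoms, so the logic checker can line them up.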

Step 3: The Strict Judge (The Reasoner)

Role: Checks if the math adds up.
How it works: This is where the "Neuro-Symbolic" magic happens.

  • The "Neuro" part: The computer knows that "walking" and "moving" are similar, even if the words are different. It uses a "similarity score" (like a vibe check) to say, "These two concepts are close enough to be treated as the same."
  • The "Symbolic" part: It uses a SAT solver (a super-fast logic checker) to run the math. It asks: "If I combine the Premise + the Missing Bridge, does the Conclusion have to be true?"
  • If the math says "Yes," the argument is valid. If "No," the bridge is broken.
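The entailment question the SAT solver answers is: does any truth assignment satisfy all the premises while falsifying the conclusion? If not, the argument is valid. A SAT solver does this efficiently; the brute-force truth-table version below shows the same check on the detective example (atom names are our own).

```python
from itertools import product

def entails(premises, conclusion, atoms):
    """Check premises |= conclusion by enumerating truth assignments:
    the argument is valid iff no assignment makes every premise true
    and the conclusion false. (A SAT solver checks the same thing --
    unsatisfiability of premises AND NOT conclusion -- much faster.)"""
    for values in product([False, True], repeat=len(atoms)):
        v = dict(zip(atoms, values))
        if all(p(v) for p in premises) and not conclusion(v):
            return False  # counterexample found: not entailed
    return True

# Atoms: W = "window broken", B = "someone threw the ball"
atoms = ["W", "B"]
premises = [
    lambda v: v["W"],                  # explicit premise: window is broken
    lambda v: (not v["W"]) or v["B"],  # the bridge: W -> B
]
conclusion = lambda v: v["B"]          # the claim

print(entails(premises, conclusion, atoms))  # True: with the bridge, valid
```

Drop the bridge premise and the check fails (W true, B false is a counterexample), which is exactly why the enthymeme is a problem for the computer in the first place.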

Why is this cool? (The "Relaxation" Trick)

Real life is messy. "Walking" isn't exactly the same as "moving," but for the sake of the argument, they are close enough.
The paper introduces a clever trick called Relaxation.

  • Imagine you are trying to fit a square peg in a round hole. A strict computer says "No."
  • This system says, "Wait, let's sand down the corners of the square peg just a little bit (using AI similarity scores) so it fits."
  • It allows the computer to be slightly flexible with language while still being strict with the logic.
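The relaxation step can be sketched as a merge rule: treat two atoms as the same proposition when their similarity clears a threshold. The paper uses learned (neural) similarity scores; the string-similarity stand-in below only catches surface overlap like "walking"/"walk", not true synonyms like "walking"/"moving", so read it as a shape of the idea, not the method.

```python
from difflib import SequenceMatcher

def relax(atom_a: str, atom_b: str, threshold: float = 0.7) -> bool:
    """Decide whether two atoms should be merged into one proposition.
    The paper scores similarity with a neural model; difflib string
    similarity is a crude, illustrative stand-in."""
    score = SequenceMatcher(None, atom_a, atom_b).ratio()
    return score >= threshold

print(relax("walking", "walk"))    # surface match: merge the atoms
print(relax("walking", "eating"))  # below threshold: keep distinct
```

Once two atoms are merged, the strict SAT check from Step 3 runs unchanged, which is the sense in which the system is "flexible with language while still strict with the logic."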

The Results: Does it work?

The team tested this on two big datasets of tricky arguments (ARCT and ANLI).

  • The Finding: The more steps of "missing logic" the AI generated, the better it got.
  • The Analogy: If you ask a human to explain a jump, saying "He jumped" is okay. Saying "He ran, bent his knees, and pushed off the ground" is much better. The AI that generated three steps of missing logic performed the best. It proved that breaking a complex argument into smaller, explicit steps helps the computer understand the whole picture.

The Big Picture

This paper is a bridge between Human Intuition (we know what's missing) and Computer Rigor (we need exact math).

Before this, computers were either:

  1. Too fuzzy: They guessed the missing words but couldn't prove the logic.
  2. Too rigid: They could prove logic but needed a pre-written rulebook for everything.

This new pipeline is like a smart translator that listens to a human story, fills in the missing chapters, translates the whole book into a math equation, and then solves the equation to prove the story makes sense. It's a major step toward computers that can truly understand human arguments, not just the words we use.