Learning to Rank the Initial Branching Order of SAT Solvers

This paper proposes using graph neural networks to predict initial branching orders for CDCL SAT solvers. The approach yields significant speedups on random and pseudo-industrial benchmarks, but struggles on complex industrial instances, where the solver's dynamic heuristics override the predictions.

Arvid Eriksson (KTH Royal Institute of Technology), Gabriel Poesia (Kempner Institute at Harvard University), Roman Bresson (Mohamed Bin Zayed University of Artificial Intelligence), Karl Henrik Johansson (KTH Royal Institute of Technology), David Broman (KTH Royal Institute of Technology)

Published Tue, 10 Ma

Imagine you are trying to solve a massive, incredibly complex maze. You have a robot (the SAT Solver) whose job is to find the exit. The robot is very smart and follows a set of rules to navigate, but sometimes it gets stuck in dead ends or takes a very long, winding path because it doesn't know which turn to take first.

This paper is about giving that robot a map before it even starts walking. Specifically, it's about teaching an AI to guess the very first few turns the robot should take to solve the maze faster.

Here is the breakdown of the paper using simple analogies:

1. The Problem: The Robot's "Guessing Game"

In the world of computer science, there are problems called SAT problems (Boolean Satisfiability). Think of these as giant logic puzzles with thousands of switches (variables) that can be either ON or OFF. The goal is to find a combination of switches that makes the whole puzzle work.

The robot (the SAT solver) tries to solve this by making a guess, checking if it works, and if it fails, it backtracks and tries a different guess. The biggest question is: "Which switch should I flip first?"

If the robot picks a "bad" switch to start with, it might wander through thousands of dead ends. If it picks a "good" switch, it might solve the puzzle in seconds. The paper asks: Can we teach an AI to predict the best starting switches?
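The "guessing game" can be sketched in code. Below is a toy backtracking solver (a simplified DPLL-style search, not the paper's CDCL solvers) that counts decisions. On the same tiny puzzle, a bad variable order costs measurably more work than a good one:

```python
# Toy backtracking SAT solver (a DPLL-style sketch, not the paper's CDCL
# solvers). Clauses are lists of integers: 1 means x1, -1 means "not x1".

def solve(clauses, order, assignment=None, stats=None):
    """Try variables in `order`; count decisions in stats['decisions']."""
    if assignment is None:
        assignment = {}
    if stats is None:
        stats = {"decisions": 0}
    # Simplify the formula under the current partial assignment.
    simplified = []
    for clause in clauses:
        if any(assignment.get(abs(l)) == (l > 0) for l in clause):
            continue  # clause already satisfied
        rest = [l for l in clause if abs(l) not in assignment]
        if not rest:
            return False, stats  # clause falsified -> backtrack
        simplified.append(rest)
    if not simplified:
        return True, stats  # every clause satisfied
    # Branch on the first unassigned variable in the given order.
    var = next(v for v in order if v not in assignment)
    for value in (True, False):
        stats["decisions"] += 1
        assignment[var] = value
        sat, _ = solve(clauses, order, assignment, stats)
        if sat:
            return True, stats
        del assignment[var]
    return False, stats

# (not x1) and (not x2) and (x3 or x4 or x5):
# x1 and x2 are the "constrained" switches, x3..x5 are easy.
cnf = [[-1], [-2], [3, 4, 5]]
_, good = solve(cnf, order=[1, 2, 3, 4, 5])  # constrained switches first
_, bad = solve(cnf, order=[3, 4, 5, 1, 2])   # easy switches first
print(good["decisions"], bad["decisions"])  # → 5 7
```

The gap is tiny here, but it grows with puzzle size: every "easy" switch branched on before a constrained one multiplies the search that has to be redone after each dead end.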

2. The Solution: The "Smart Map" (Graph Neural Network)

The authors built a special type of AI called a Graph Neural Network (GNN).

  • The Input: They turn the logic puzzle into a picture (a graph) where the switches and the rules connecting them are dots and lines.
  • The Training: They taught the AI by showing it thousands of puzzles and telling it, "If you start with these switches, you solve it fast."
  • The Output: When given a new puzzle, the AI looks at the picture and says, "Start with Switch A, then Switch B, then Switch C."

3. How Did They Teach the AI? (The Three "Labeling" Methods)

To teach the AI, they needed to know what the "perfect" starting order was. Since they didn't have a magic crystal ball, they tried three different ways to figure it out:

  • Method A: The "Conflict Detective" (Conflict Labeling)
    They let the robot solve the puzzle normally and watched which switches caused the most arguments (conflicts). They told the AI: "The switches that cause the most fights are the most important to solve first." It's like a detective saying, "The person who argues the most is likely the key to the mystery."
  • Method B: The "First Step" Experiment (First Variable Labeling)
    They forced the robot to start with every possible switch, one by one, and saw which one led to the fastest solution. They told the AI: "Start with the switch that, when tried first, gets you out of the maze quickest."
  • Method C: The "Evolutionary" Approach (Genetic Labeling)
    They used a computer program that acted like natural selection. It created random orders of switches, kept the ones that worked best, mixed them up, and tried again, slowly evolving a "perfect" order over many generations.
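Method C can be sketched as a standard genetic algorithm over variable orderings. The sketch below is hypothetical: in the paper, fitness would come from actual solver runs, whereas here a stand-in `fitness` function simply rewards orders that put (assumed-important) variables 1 and 2 first.

```python
# Sketch of the "genetic labeling" idea (Method C): evolve a variable
# ordering by selection, crossover, and mutation. The `fitness` function
# here is a stand-in; the paper would score an order by how cheap the
# solver run is when seeded with it.
import random

def evolve_order(num_vars, fitness, generations=50, pop_size=20, seed=0):
    """Evolve a permutation of variables 1..num_vars that maximizes fitness."""
    rng = random.Random(seed)
    population = []
    for _ in range(pop_size):
        order = list(range(1, num_vars + 1))
        rng.shuffle(order)
        population.append(order)
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        survivors = population[: pop_size // 2]  # keep the best orders
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = rng.sample(survivors, 2)
            cut = rng.randrange(1, num_vars)
            # Crossover: prefix of one parent, remaining vars in the
            # other parent's relative order (keeps it a permutation).
            child = a[:cut] + [v for v in b if v not in a[:cut]]
            if rng.random() < 0.2:  # mutation: swap two positions
                i, j = rng.randrange(num_vars), rng.randrange(num_vars)
                child[i], child[j] = child[j], child[i]
            children.append(child)
        population = survivors + children
    return max(population, key=fitness)

# Stand-in fitness: pretend variables 1 and 2 are the important ones,
# so orders that place them early score higher.
def fitness(order):
    return -(order.index(1) + order.index(2))

best = evolve_order(num_vars=8, fitness=fitness)
print(best)  # inspect the evolved ordering
```

The expensive part in practice is the fitness evaluation itself: each candidate order requires a (possibly long) solver run, which is why this labeling method only pays off as an offline, one-time way to generate training data.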

4. The Results: Speeding Up the Robot

The team tested this "Smart Map" on two different robots (MiniSat and CaDiCaL).

  • The Good News: On smaller, simpler puzzles (like a maze with 100 rooms), the AI was amazing. It helped the robots solve problems 50% faster or more. It was like giving the robot a flashlight that showed the exit immediately.
  • The Surprise: The AI learned the patterns so well that it could even help solve puzzles that were 10 times bigger than the ones it was trained on. It generalized its knowledge, like a student who learns the rules of chess and can play against a grandmaster they've never seen before.

5. The Bad News: Why It Fails on Giant Mazes

When they tried this on huge, real-world industrial puzzles (like designing a microchip or verifying a complex software system), the speedup disappeared.

Why?

  1. The Robot Overwrites the Map: The robots have their own built-in "instincts" (heuristics) that are very strong. As soon as the robot starts moving, its instincts quickly take over, ignoring the AI's "Smart Map" and making their own guesses. It's like giving a GPS to a driver who refuses to look at the screen and just drives by instinct.
  2. Too Much Noise: The real-world puzzles are so complex and messy that the AI couldn't find a clear pattern to learn. The "map" it drew was too blurry to be useful.
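Point 1 can be illustrated with a toy model of a VSIDS-style activity heuristic, the family of dynamic heuristics solvers like MiniSat and CaDiCaL use: variables involved in conflicts get "bumped" while everything else decays, so a few early conflicts are enough to bury the initial ranking. The scores below are made up for illustration.

```python
# Toy model (not real solver code) of how a VSIDS-style dynamic heuristic
# drowns out an initial variable ranking: conflicts bump the variables
# involved, everything else decays, and the initial scores fade away.

initial_scores = {1: 3.0, 2: 2.0, 3: 1.0}  # the GNN's suggested priority
activity = dict(initial_scores)

def on_conflict(conflict_vars, bump=1.0, decay=0.95):
    """Decay every variable's activity, then bump those in the conflict."""
    for v in activity:
        activity[v] *= decay
    for v in conflict_vars:
        activity[v] += bump

# Suppose the first twenty conflicts all involve variable 3 ...
for _ in range(20):
    on_conflict([3])

# ... and the solver now prefers variable 3, the GNN's lowest-ranked choice.
print(max(activity, key=activity.get))  # → 3
```

Real VSIDS implementations differ in detail (they typically bump by a growing increment and rescale to avoid overflow), but the dynamic is the same: after enough conflicts, the solver's live experience dominates any static initial order.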

The Big Takeaway

This paper is a proof-of-concept. It shows that AI can learn to give a head-start to logic solvers, making them much faster on many types of problems.

However, it also highlights a challenge: AI needs to work with the solver's existing rules, not just try to replace them. If the AI's advice is ignored after the first step, the speedup is lost. The authors suggest that future work needs to find a way to keep the AI's advice relevant throughout the whole solving process, not just at the very beginning.

In short: We taught an AI to pick the best starting moves for a logic puzzle. It worked great on small puzzles and even some medium ones, but on the hardest, real-world puzzles, the robot's own instincts took over too quickly. It's a promising start, but the race isn't over yet!