Using GPUs And LLMs Can Be Satisfying for Nonlinear Real Arithmetic Problems

This paper introduces GANRA, a novel SMT solver that combines Large Language Models with GPU acceleration to efficiently solve quantifier-free nonlinear real arithmetic problems, achieving significant performance improvements over existing state-of-the-art methods.

Christopher Brix, Julia Walczak, Nils Lommen, Thomas Noll

Published 2026-03-10

Imagine you are trying to solve a massive, tangled knot of mathematical equations. In the world of computer science, this is called a Nonlinear Real Arithmetic (NRA) problem: a set of polynomial equations and inequalities over real-valued variables. It's like trying to find a specific combination of numbers that makes a complex machine work perfectly.

For decades, computers have struggled with these knots. The classic methods are like trying to untie the knot by pulling on every single string, one at a time, in a slow, sequential fashion. Sometimes the knot is so complex that the computer gets stuck for days or even years.

This paper introduces a new tool called GANRA (GPU Accelerated solving of Nonlinear Real Arithmetic problems). Think of GANRA as a team of two super-powered assistants working together to untie that knot: a Graphics Processing Unit (GPU) and a Large Language Model (LLM).

Here is how they work, using some everyday analogies:

1. The Problem: The "Kissing" and "Sturm" Puzzles

The authors tested their tool on two types of puzzles:

  • The Kissing Problem: Imagine arranging a bunch of oranges so that each one touches a central orange without crushing any of the others. Formally, it asks how many unit spheres can simultaneously touch a central sphere without overlapping. It's a geometry puzzle about distances.
  • The Sturm-MBO Problem: Imagine a giant recipe book with thousands of ingredients (variables) mixed together in a specific way. You need to find the exact amount of each ingredient to make the final dish taste exactly "zero."
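The paper's exact encoding of these benchmarks isn't reproduced here, but the kissing problem's constraints are simple enough to sketch. Below is a minimal, hypothetical checker (the function name and tolerance are my own) that tests the two NRA constraints for unit spheres: each centre sits at distance exactly 2 from the origin, and no two centres come closer than 2.

```python
import itertools
import numpy as np

def kissing_constraints_satisfied(points, tol=1e-9):
    """Check the kissing-problem constraints for unit spheres:
    every centre at distance exactly 2 from the origin (touching
    the central sphere), every pair at distance >= 2 (no overlap)."""
    pts = np.asarray(points, dtype=float)
    # Touching constraint: ||x_i||^2 == 4 for all i.
    if not np.allclose(np.sum(pts**2, axis=1), 4.0, atol=tol):
        return False
    # Non-overlap constraint: ||x_i - x_j||^2 >= 4 for all i < j.
    for i, j in itertools.combinations(range(len(pts)), 2):
        if np.sum((pts[i] - pts[j]) ** 2) < 4.0 - tol:
            return False
    return True

# In 2D, six circles can kiss a central one: centres at distance 2,
# spaced 60 degrees apart.
hexagon = [(2 * np.cos(k * np.pi / 3), 2 * np.sin(k * np.pi / 3))
           for k in range(6)]
print(kissing_constraints_satisfied(hexagon))  # True
```

An SMT solver faces the much harder inverse task: it isn't handed the centres, it has to *find* real values for them that make every constraint true at once.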

2. The Old Way: The Single Chef

Previously, solvers attacked these problems with a local-search method called gradient descent. Imagine a single chef trying to find the perfect recipe: they taste the soup, realize it's too salty, add a little water, taste again, add a little more water, and so on. It works, but it's slow, because the chef does one thing at a time.
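The "single chef" loop can be sketched in a few lines. This is not the paper's implementation, just the standard trick: turn an equation p(x) = 0 into a penalty p(x)², then repeatedly step downhill until the penalty (almost) vanishes. The example constraint is my own.

```python
# Example constraint: x^2 + y^2 - 4 = 0 (a point on a circle of radius 2).
def penalty(x, y):
    return (x**2 + y**2 - 4.0) ** 2

def grad(x, y):
    # Partial derivatives of the penalty via the chain rule.
    r = x**2 + y**2 - 4.0
    return 4.0 * r * x, 4.0 * r * y

x, y = 0.5, 0.5          # initial guess
step = 0.05
for _ in range(200):     # one small correction at a time: the "single chef"
    gx, gy = grad(x, y)
    x, y = x - step * gx, y - step * gy

print(round(penalty(x, y), 6))  # 0.0 -- the constraint is (almost) satisfied
```

Every taste-and-adjust cycle here depends on the previous one, which is exactly why the approach cries out for parallel hardware.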

3. The New Team: The GPU and the LLM

The authors realized they could speed this up massively by using two modern technologies:

The GPU: The Super-Parallel Kitchen Crew

A GPU (the chip usually found in video game computers) is amazing at doing thousands of simple tasks at the exact same time.

  • The Analogy: Instead of one chef tasting the soup, imagine a kitchen with 10,000 chefs. They all taste the soup simultaneously. If the soup is too salty, they all add water at the exact same moment.
  • The Catch: To use this kitchen crew, you have to organize the work perfectly. You can't just tell them to "cook randomly." You have to group similar tasks together. For example, if 500 chefs need to chop onions, you give them all the onions at once, rather than asking them to chop one onion, then another, then another.
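The "group similar tasks together" idea corresponds to vectorisation. In this sketch, NumPy on the CPU stands in for a GPU kernel (the grouping principle is the same): instead of evaluating a residual one value at a time, the identical operations are batched into a single array call over all 10,000 candidates.

```python
import numpy as np

# Sequential "single chef": evaluate x^2 - 4 one value at a time.
def residuals_loop(xs):
    out = []
    for x in xs:
        out.append(x * x - 4.0)
    return out

# Batched "kitchen crew": the same operation applied to every value
# at once, in one vectorised call.
def residuals_batched(xs):
    xs = np.asarray(xs)
    return xs * xs - 4.0

xs = np.linspace(0.0, 4.0, 10_000)   # 10,000 candidate values
assert np.allclose(residuals_loop(xs), residuals_batched(xs))
```

On a real GPU, the batched version launches one kernel over all elements; the loop version would waste almost all of the hardware's parallelism.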

The LLM: The Pattern-Spotting Manager

This is where the LLM comes in: a large language model, the kind of AI behind modern chatbots.

  • The Analogy: The GPU is the muscle, but it needs a manager to tell it how to organize the work. In the past, a human programmer had to look at the math problem, figure out which parts were similar, and write code to group them for the GPU. This is like a manager manually writing a schedule for 10,000 chefs. It takes a long time and is hard to do for every new puzzle.
  • The Innovation: The authors asked an AI (specifically, OpenAI's o1-preview) to look at the math problems and say, "Hey, I see a pattern here! These 500 multiplication steps are all the same. Let's group them together!"
  • The AI acts as a Pattern-Spotting Manager. It looks at the messy knot of equations, spots the repetitive parts, and writes the code to tell the GPU how to do them all at once.
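To make the "grouping" concrete, here is a hypothetical illustration (the polynomial, indices, and variable names are invented, not taken from the paper). A formula contains two repeating term shapes, squares x_i² and products x_i·x_j; the manager's job is to turn each shape into one batched operation instead of many scalar ones.

```python
import numpy as np

x = np.arange(1.0, 6.0)                 # variables x_1 .. x_5

square_idx = np.array([0, 1, 2, 3, 4])  # terms of shape x_i^2
pair_i = np.array([0, 0, 1])            # terms of shape x_i * x_j
pair_j = np.array([1, 2, 3])

# One batched operation per term shape, instead of eight scalar ones:
squares = x[square_idx] ** 2            # [1, 4, 9, 16, 25]
products = x[pair_i] * x[pair_j]        # [2, 3, 8]

total = squares.sum() + products.sum()
print(total)  # 68.0
```

Spotting which terms share a shape, and emitting the index arrays and batched calls, is precisely the repetitive code-generation chore the authors hand off to the LLM.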

4. The Result: Speeding Up Time

The results were shocking.

  • On the "Kissing" puzzle, the new tool found solutions for 5 times more problems than the best existing tools.
  • It did this in less than 1/20th of the time.
  • Imagine if a task that used to take 20 minutes now takes 1 minute. That's the kind of speedup they achieved.

Why This Matters

The most exciting part isn't just that it's fast; it's that the AI helped write the code to make it fast.

  • Before: You needed a human expert to look at a new math problem and manually figure out how to optimize it for the computer's hardware.
  • Now: You can ask an AI, "Here is a new math problem. Write me a super-fast code to solve it using a GPU." The AI looks at the structure, finds the patterns, and builds the "kitchen crew" schedule automatically.

The Bottom Line

The paper shows that combining AI (the brain that finds patterns) with GPUs (the muscle that does massive parallel work) is a winning strategy for solving math problems that were previously too hard or too slow for computers. It's like giving a supercomputer a smart manager who knows exactly how to organize the work to get the job done in record time.

In short: They taught an AI to organize a super-fast computer crew, and together they solved math knots that used to take forever, doing it in the blink of an eye.