Reinforced Generation of Combinatorial Structures: Hardness of Approximation

This paper demonstrates that AI-driven code mutation agents, specifically AlphaEvolve, can significantly advance complexity theory by discovering new gadget reductions and extremal graph constructions that yield improved hardness of approximation results for MAX-CUT, MAX-Independent Set, MAX-k-CUT, and the metric Traveling Salesman Problem.

Ansh Nagda, Prabhakar Raghavan, Abhradeep Thakurta

Published 2026-03-11

Imagine you are trying to solve a massive, impossible jigsaw puzzle. The pieces are mathematical rules, and the picture you are trying to reveal is the "truth" about how hard certain computer problems are to solve.

For decades, human mathematicians have been trying to find the perfect pieces to prove that some puzzles are inherently unsolvable in a reasonable amount of time. They use logic, intuition, and brute force. But sometimes, the puzzle is so complex that human eyes can't see the pattern, and traditional computers get stuck in a loop.

This paper is about a new team member joining the puzzle-solving squad: AI. Specifically, the authors used an AI agent named AlphaEvolve to find new, better puzzle pieces that humans and old computers missed.

Here is the breakdown of their adventure, explained through simple analogies.

The Main Character: AlphaEvolve

Think of AlphaEvolve not as a robot that "knows" math, but as a super-creative code monkey.

  • How it works: You give it a goal (e.g., "Build a bridge that is as strong as possible"). It writes a computer program to build a bridge. It tests the bridge. If it falls down, the AI tweaks the code and tries again.
  • The Twist: The AI doesn't just build the bridge; it also learns how to build a better testing machine. If the testing machine is too slow, the AI rewrites the tester to be 10,000 times faster. This allowed it to try millions of designs that would have taken humans years to check.
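The propose, test, keep-the-winners loop described above can be sketched in a few lines of Python. All names here are illustrative; this is a toy hill-climbing sketch, not AlphaEvolve's actual code:

```python
import random

def evolve(initial, mutate, score, generations=500):
    """Toy hill-climbing version of an evolutionary search:
    mutate the best candidate, keep the mutant only if it scores higher."""
    best, best_score = initial, score(initial)
    for _ in range(generations):
        candidate = mutate(best)
        candidate_score = score(candidate)
        if candidate_score > best_score:   # keep improvements, discard the rest
            best, best_score = candidate, candidate_score
    return best, best_score

# Toy demo: a "program" is a list of bits, and the score is the number of ones.
random.seed(0)

def flip_one_bit(bits):
    mutant = list(bits)
    i = random.randrange(len(mutant))
    mutant[i] ^= 1
    return mutant

best, best_score = evolve([0] * 20, flip_one_bit, sum)
print(best_score)  # 20: the loop reliably finds the all-ones string
```

Real systems keep a whole population of candidates and use a language model as the mutation operator, but the keep-the-winners skeleton is the same.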

The Three Big Puzzles They Solved

The paper details three specific "puzzles" (mathematical problems) where the AI found better solutions than anyone else.

1. The "Random City" Puzzle (Average-Case Hardness)

The Problem: Imagine a city with random streets. You want to know the maximum number of houses you can paint red so that no two red houses are next to each other (Maximum Independent Set), or how many streets you can cut to split the city in two (MAX-CUT).
The Challenge: For random cities, it's hard to prove exactly how many red houses you can have without checking every single possibility.
The AI's Move: The AI searched for a specific type of "perfectly random" city map (called a Ramanujan graph). It found a map with 163 intersections that was much more complex than the ones humans had found before (which were only about 12 intersections).
The Result: By finding this massive, complex map, they proved that the "limit" of how well we can solve these random problems is slightly higher than we thought. It's like finding a new, more difficult maze that proves you can't run through it as fast as you thought.
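A graph counts as "perfectly random"-looking in this sense when it is Ramanujan: a d-regular graph whose nontrivial adjacency eigenvalues all have absolute value at most 2*sqrt(d - 1). A minimal sketch of that check, using NumPy (the paper's actual graphs and verification code are not shown here):

```python
import numpy as np

def is_ramanujan(adj):
    """Check the Ramanujan condition for a d-regular graph: every adjacency
    eigenvalue other than the trivial one (d itself) must have absolute
    value at most 2*sqrt(d - 1)."""
    A = np.asarray(adj, dtype=float)
    d = int(A[0].sum())
    assert all(int(row.sum()) == d for row in A), "graph must be d-regular"
    eigenvalues = sorted(np.linalg.eigvalsh(A), reverse=True)
    nontrivial = eigenvalues[1:]          # drop the single trivial eigenvalue d
    bound = 2 * (d - 1) ** 0.5
    return all(abs(lam) <= bound + 1e-9 for lam in nontrivial)

# The complete graph K4 (3-regular, eigenvalues 3, -1, -1, -1) is Ramanujan:
K4 = [[0, 1, 1, 1],
      [1, 0, 1, 1],
      [1, 1, 0, 1],
      [1, 1, 1, 0]]
print(is_ramanujan(K4))  # True

# Two disjoint copies of K4 are not: the eigenvalue 3 appears twice,
# so a nontrivial eigenvalue exceeds 2*sqrt(2).
two_K4 = np.kron(np.eye(2), np.array(K4))
print(is_ramanujan(two_K4))  # False
```

Checking this condition on a 163-vertex graph is easy; the hard part the AI solved was finding such a graph in the first place.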

2. The "Coloring Party" Puzzle (MAX-k-CUT)

The Problem: Imagine a party where you want to divide guests into 3 or 4 groups. You want to maximize the number of arguments (edges) between people in different groups. This is called "MAX-k-CUT."
The Challenge: To prove how hard this is to solve, mathematicians use "gadgets." Think of a gadget as a miniature, self-contained machine that forces a specific behavior. If you can build a tiny machine that forces a guest to argue, you can prove the whole party is chaotic.
The AI's Move: Humans had built these machines for years, but they were clunky. AlphaEvolve was asked to design a new machine.

  • For the 4-group party, the AI built a machine that proved the problem is even harder to approximate than we knew, lowering the inapproximability bound from 0.9883 to 0.987 (a lower number here means a stronger hardness result).
  • For the 3-group party, it built a machine that lowered the bound from 0.9853 to 0.9649.
The Result: The AI designed these machines by "evolving" the code. It tried millions of variations, kept the ones that worked best, and discarded the rest. It landed on a machine design that humans had never thought of, because it was too weird and complex to imagine by hand.

3. The "Traveling Salesman" Puzzle (Metric TSP)

The Problem: A salesman needs to visit a list of cities and return home, taking the shortest possible route. This is the classic Traveling Salesman Problem (TSP).
The Challenge: We know how to get close to the best route, but we want to know: "How close can we guarantee to get?" The current record was that we can't guarantee a route better than 117/116 times the perfect length.
The AI's Move: The researchers needed a new "gadget" (a specific map structure) to prove the limit is actually worse, i.e., that no fast algorithm can guarantee getting even that close to the perfect route.

  • The AI found a new map structure (a gadget with 12 vertices) that acts like a trap. If the salesman tries to take a shortcut, the trap forces him to take a longer detour.
  • This new trap proved that the limit is actually 111/110, a larger gap from the perfect route than the old 117/116 bound.
The Result: They tightened the noose. They proved it is even harder to find a near-perfect route than we previously thought.
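Verifying a TSP gadget ultimately means comparing tour lengths. A toy brute-force solver shows why exhaustive checking explodes so quickly (illustrative only, not the paper's verifier):

```python
from itertools import permutations

def tour_length(dist, tour):
    """Length of a closed tour that visits every city once and returns home."""
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

def shortest_tour(dist):
    """Exact TSP by brute force: fix city 0 as the start and try all orderings.
    That is (n-1)! tours, fine for toys and hopeless beyond roughly 12 cities."""
    n = len(dist)
    return min(
        (tour_length(dist, (0,) + p), (0,) + p)
        for p in permutations(range(1, n))
    )

# Four cities at the corners of a square (sides 1, diagonals 2, a valid metric):
dist = [[0, 1, 2, 1],
        [1, 0, 1, 2],
        [2, 1, 0, 1],
        [1, 2, 1, 0]]
length, tour = shortest_tour(dist)
print(length)  # 4: the best tour walks around the square and avoids the diagonals
```

A hardness gadget is designed so that any "wrong" tour through it is forced to be measurably longer than the intended one, which is what the length comparison above would detect.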

The Secret Sauce: Speeding Up the Tester

The biggest hurdle wasn't just finding the solution; it was checking if the solution worked.

  • The Bottleneck: Checking if a complex gadget works usually takes exponential time (like trying every combination of a lock). For the big puzzles, this would take longer than the age of the universe.
  • The AI's Hack: The authors told AlphaEvolve: "Don't just find the gadget; find a way to check the gadget faster."
  • The Outcome: The AI rewrote the checking code itself. It optimized the math so that a check that used to take hours now took milliseconds. In one case, it made the verification 10,000 times faster. This allowed the AI to explore a "search space" so vast that humans could never have navigated it.
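The flavor of such a rewrite can be shown with a classic example: the same exponential recursion, restructured with memoization so every subproblem is computed exactly once. This is an illustrative stand-in, not the paper's actual verifier:

```python
from functools import lru_cache

calls = {"slow": 0, "fast": 0}

def slow_check(n):
    """Naive exponential-time recursion (stand-in for brute-force gadget checks)."""
    calls["slow"] += 1
    if n < 2:
        return n
    return slow_check(n - 1) + slow_check(n - 2)

@lru_cache(maxsize=None)
def fast_check(n):
    """The same recurrence, memoized: each subproblem is solved only once."""
    calls["fast"] += 1
    if n < 2:
        return n
    return fast_check(n - 1) + fast_check(n - 2)

assert slow_check(25) == fast_check(25)  # identical answers...
print(calls["slow"], calls["fast"])      # ...from 242785 calls vs. just 26
```

The answers never change, only the cost of obtaining them, which is what lets the search loop try vastly more candidates per hour.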

Why This Matters

This paper suggests a new era for mathematics and computer science:

  1. AI as a Discovery Tool: We aren't just using AI to summarize what we know; we are using it to find things we don't know.
  2. The "Black Box" is Useful: Even if we don't fully understand how the AI found the perfect gadget, the fact that it did, and that we can verify the result with standard math, is enough to move science forward.
  3. Human + AI: The authors didn't just press a button. They had to design the framework, define the rules, and verify the AI's output. It was a partnership where the AI did the heavy lifting of searching, and the humans did the heavy lifting of understanding.

In a nutshell: The authors used an AI that can write and optimize its own code to find new, complex mathematical structures. These structures proved that some of the hardest computer problems are even harder to solve than we thought, pushing the boundaries of what we know about the limits of computation.