Accelerating multijet-merged event generation with neural network matrix element surrogates

This paper proposes a method to accelerate multijet event generation for LHC analyses by employing neural-network surrogates within a two-stage rejection-sampling algorithm, achieving over a tenfold reduction in generation time for Z+jets processes at the HL-LHC compared to the baseline Sherpa generator.

Tim Herrmann, Timo Janßen, Mathis Schenker, Steffen Schumann, Frank Siegert

Published 2026-03-11

Imagine you are running a massive, high-stakes lottery to predict what happens when two particles smash together at nearly the speed of light inside the Large Hadron Collider (LHC).

To understand the results of these collisions, physicists need to simulate billions of "what-if" scenarios. They use a computer program (a Monte Carlo generator) to calculate the odds of every possible outcome. However, there's a catch: calculating the exact odds for complex collisions (like a Z boson produced together with up to six jets of particles) is incredibly slow and expensive, like trying to solve a million-piece puzzle for every single ticket you buy.

Most of the time, the computer calculates a scenario, realizes the odds are tiny, and throws it away. This is called "rejection sampling." It's like a bouncer at a club who checks every ID, but 99% of people get rejected. The bouncer spends all his time checking IDs that don't matter, and the club (the simulation) moves very slowly.
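For readers who like to see the idea in code, here is a minimal Python sketch of plain rejection sampling. The function names (`propose`, `weight`) and the toy example are illustrative, not the generator's actual API; the point is that the expensive `weight` call happens for every candidate, accepted or not.

```python
import random

def rejection_sample(propose, weight, w_max, n_events):
    """Standard hit-or-miss rejection sampling.

    propose : function returning a candidate event
    weight  : function giving the (expensive) exact odds of an event
    w_max   : an upper bound on the weight
    """
    accepted = []
    while len(accepted) < n_events:
        event = propose()
        w = weight(event)            # the slow, exact calculation
        if random.random() < w / w_max:
            accepted.append(event)   # the "bouncer" lets this one in
    return accepted
```

Note that `weight` is evaluated even for the 99% of candidates that get thrown away; that wasted work is exactly the bottleneck the paper attacks.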

The Problem: The "Bottleneck" at the High-Luminosity LHC

The LHC is getting upgraded to the "High-Luminosity" (HL-LHC) version, which means it will produce data at a rate that is currently impossible to simulate. If we keep using the old "bouncer" method, we will run out of computing power before we can analyze the data. We need a way to speed up the process without losing accuracy.

The Solution: The "AI Bouncer" (Neural Network Surrogates)

The authors of this paper propose a clever two-step solution using Artificial Intelligence (AI). Think of it as hiring a super-fast, slightly imperfect AI bouncer to do the initial screening.

Step 1: The AI Guess (The Fast Filter)
Instead of the slow, perfect computer calculating the exact odds for every single event, the AI (a Neural Network) makes a quick, educated guess.

  • The Analogy: Imagine the AI is a bouncer who can glance at a crowd and instantly say, "Hey, 90% of these people probably don't belong here, let's skip them."
  • The AI is fast but not perfect. It might let a few "bad" people in or reject a few "good" ones. But because it's so fast, it clears the line much quicker.

Step 2: The Double-Check (The Correction)
If the AI says, "Okay, this person might belong," the slow, perfect computer steps in to do the final, exact calculation.

  • The Analogy: The AI lets a few people through the first door. Then, a second, very strict bouncer (the real computer) checks their ID one last time. If the AI was wrong and let a bad person in, the second bouncer kicks them out. If the AI was right, they get in.

The Magic Trick:
Because the AI is so fast, it filters out the vast majority of useless calculations before the slow computer ever has to touch them. The few times the slow computer does have to work, it's only on the events that actually matter.
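The two bouncers above can be sketched in a few lines of Python. This is a toy illustration of the two-stage unweighting idea, not the authors' Sherpa implementation; `surrogate` stands for the fast neural network, `exact_weight` for the slow matrix-element calculation, and the maxima `s_max` and `x_max` are assumed to be known bounds.

```python
import random

def two_stage_unweight(propose, surrogate, exact_weight, s_max, x_max):
    """Two-stage rejection sampling with a fast surrogate.

    Stage 1: accept with probability surrogate(e)/s_max  (fast AI guess).
    Stage 2: accept with probability (exact/surrogate)/x_max
             (slow exact check, run only for stage-1 survivors).
    """
    while True:
        event = propose()
        s = surrogate(event)                  # cheap neural-network guess
        if random.random() >= s / s_max:
            continue                          # AI bouncer rejects: no exact call needed
        w = exact_weight(event)               # expensive exact calculation
        if random.random() < (w / s) / x_max:
            return event                      # strict second bouncer confirms
```

The key property: `exact_weight` runs only on events the surrogate already liked, so if the surrogate is a good guesser, almost no exact calculations are wasted, and the second stage cancels any bias the imperfect guess introduced.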

Why This Paper is Special

Previous attempts at this "AI bouncer" method were like testing it on a small, empty street. This paper takes the method to the "busy city center" of real-world physics. They had to solve several tricky problems to make it work for the LHC:

  1. The "Color" Confusion: Particles have a property called "color charge" (unrelated to actual color). Calculating this is hard. The authors found that for the AI to work best, they had to simplify how they handled these colors, essentially grouping similar particles together so the AI didn't get confused.
  2. The "Rare Event" Bias: Sometimes, physicists are looking for very rare, weird collisions (like a particle flying off at a weird angle). The standard simulation ignores these because they are rare. The authors taught the AI to over-produce these rare events so they can be studied, then mathematically corrected the results later.
  3. The "Overweight" Problem: Sometimes the AI badly underestimates an event's true odds, so the final exact check finds a weight larger than the assumed maximum. The authors keep these "overweight" events but assign them an extra statistical weight, so they don't ruin the final statistics.
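The standard cure for problem 3 can be sketched in a few lines. This is a generic illustration of overweight handling in rejection sampling (function name and interface are hypothetical, not the paper's code): normal events are accepted with probability w/w_max and carry weight 1, while events whose weight exceeds the maximum are always kept but carry a residual weight above 1, so averages stay mathematically exact.

```python
import random

def accept_with_overweights(w, w_max):
    """Return (accepted, event_weight) for one candidate event.

    w <= w_max: accept with probability w / w_max, weight 1.
    w  > w_max: always accept, but carry weight w / w_max > 1
                so this event "counts extra" in all histograms.
    """
    if w > w_max:
        return True, w / w_max
    return random.random() < w / w_max, 1.0
```

The price of an overweight event is a slightly less uniform set of event weights, which is why generators try to keep such events rare.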

The Results: A Speed Boost of 10x

When they tested this on simulating Z + Jets (a common but complex collision) at the future HL-LHC:

  • Old Method: The baseline Sherpa generator, evaluating every exact matrix element, took a huge amount of time and computing power.
  • New Method: With the neural-network surrogate doing the first screening, generating the same events took about a tenth of the time.

To put it in perspective: If the old method took 100 years of computer time to generate the data needed for the next 10 years of experiments, the new method would only take 10 years.

The Bottom Line

This paper is a breakthrough because it proves that we can use AI not just to guess physics, but to accelerate the most difficult parts of physics simulations. By using a fast AI to filter out the noise and a slow, perfect computer only for the signal, we can simulate the future of particle physics today. It's like upgrading from a hand-cranked calculator to a supercomputer, ensuring that when the High-Luminosity LHC turns on, we won't be left behind by our own data.