Cross-Domain Transfer with Particle Physics Foundation Models: From Jets to Neutrino Interactions

This paper demonstrates that OmniLearned, a particle physics foundation model pre-trained on high-energy collision data, can be effectively transferred to a low-energy fixed-target neutrino experiment, where it outperforms models trained from scratch on tasks like energy regression and pion classification. This validates the potential for detector-agnostic inference across vastly different energy scales and physics processes.

Original authors: Gregor Krzmanc, Vinicius Mikuni, Benjamin Nachman, Callum Wilkinson

Published 2026-04-15

This is an AI-generated explanation of the paper below. It is not written or endorsed by the authors. For technical accuracy, refer to the original paper.

Imagine you are trying to teach a robot to recognize different types of cars.

The Old Way: You would start with a blank robot. You'd show it pictures of a Ford, a Toyota, and a Honda, one by one, and say, "This is a Ford," "This is a Toyota." It would take a long time, and it would need thousands of examples just to learn the basics.

The New Way (This Paper): Imagine you first taught that same robot to be an expert on all vehicles. You showed it millions of pictures of trucks, motorcycles, buses, and race cars from every angle. It learned the fundamental rules of how wheels work, how engines sound, and how aerodynamics shape a vehicle.

Now, you want to teach it to identify a specific, rare type of boat. Instead of starting from scratch, you just say, "Hey, remember those rules about wheels and engines? Well, boats have hulls and propellers, but the logic of how they move through water is similar." Because the robot already understands the "physics of movement," it learns to spot the boat incredibly fast, with very few examples.

That is exactly what this paper is about.

The Cast of Characters

  1. The "Super-Student" (OmniLearned): This is a massive AI model that was already trained on data from the world's biggest particle colliders (like the Large Hadron Collider). It has seen trillions of high-energy particle collisions. It's like a master chef who has cooked every dish in a 5-star restaurant.
  2. The "New Kitchen" (MINERvA): This is a different experiment. Instead of smashing particles at near-light speed, it shoots a beam of neutrinos (ghostly particles) at a block of material to see how they bounce off. It's a much smaller, quieter, and very different "kitchen" than the collider.
  3. The Challenge: The scientists wanted to see if the "Master Chef" (OmniLearned) could walk into the "New Kitchen" (MINERvA) and start cooking immediately, or if they had to teach it everything from the beginning.

The Big Hurdle: A Massive Gap

Usually, transferring knowledge is hard.

  • The Collider: Imagine a massive explosion where hundreds of particles fly out in every direction (like a firework display). The energy is huge: trillions of electron-volts.
  • The Neutrino Experiment: Imagine a gentle tap where only a few particles are knocked loose (like a cue ball scattering a few balls on a pool table). The energy is roughly a thousand times lower: billions of electron-volts.

It's like asking a Formula 1 race car driver to immediately drive a tractor in a muddy field. The machines are totally different, the terrain is different, and the physics of driving is different.

What They Did

The researchers took the pre-trained "Master Chef" (OmniLearned) and fine-tuned it on the neutrino data. They gave it two specific jobs:

  1. The "Energy Meter" (Regression): Guessing how much energy was released in the crash.
  2. The "Party Guest Counter" (Classification): Figuring out exactly what kind of particles came out of the crash (e.g., "Did we get a pion? Did we get a neutral pion?").

The Results: A Miracle of Transfer

The results were surprising and exciting.

  • Speed: The pre-trained model learned the new task much faster than a model trained from scratch. It reached the same level of skill in half the time.
  • Accuracy: Even with the same amount of computing power, the pre-trained model was more accurate.
  • The "Inductive Bias": This is the fancy term for the "intuition" the model learned. Even though the collider and the neutrino experiment are different, the model learned fundamental rules about how particles move and interact in space. It learned that "particles tend to cluster in certain shapes" and "energy flows in predictable ways." These rules apply whether you are looking at a high-energy explosion or a low-energy bounce.

Why This Matters

Think of it like universal language learning.
If you learn English, Spanish, and French, you understand the concept of "grammar" and "sentence structure." If you then try to learn a completely new language, you don't start from zero; you just learn the new vocabulary because you already understand the structure of language.

This paper suggests that Particle Physics has a "Universal Grammar."

By training one giant AI model on diverse data, scientists can create a "Foundation Model" that can be adapted to any new experiment.

  • For new experiments: They won't need to spend years training a new AI from scratch. They can just "fine-tune" the existing giant model (see the sketch after this list).
  • For the future: This could lead to a world where one single AI model can help analyze data from the Large Hadron Collider, the Deep Underground Neutrino Experiment, and even future telescopes, all with minimal retraining.
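As a rough illustration of what "fine-tune" means in practice, here is a sketch that reuses the hypothetical TwoJobModel from earlier. The checkpoint filename and learning rates are invented placeholders, not the authors' actual training recipe.

```python
import torch

# Reusing the hypothetical Backbone / TwoJobModel classes from the earlier sketch.
model = TwoJobModel(Backbone())

# Warm start: copy in pretrained encoder weights instead of training from scratch.
state = torch.load("omnilearned_pretrained.pt")  # made-up checkpoint path
model.backbone.load_state_dict(state)

# A common recipe: nudge the pretrained backbone gently while the brand-new
# heads learn at a higher rate. The values here are illustrative, not the paper's.
optimizer = torch.optim.AdamW([
    {"params": model.backbone.parameters(), "lr": 1e-5},
    {"params": model.energy_head.parameters(), "lr": 1e-3},
    {"params": model.pion_head.parameters(), "lr": 1e-3},
])
```

From there, training is ordinary supervised learning on the new experiment's data; the pre-training only changes where the optimization starts, which is exactly why it converges faster.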

The Bottom Line

The scientists showed that an AI trained on the most violent, highest-energy collisions we can produce can quickly become an expert at analyzing gentle, low-energy neutrino interactions. It's a huge step toward a future where we don't need to reinvent the wheel for every new physics experiment; we just need to teach the wheel how to roll on a new road.
