Measurement of event shape variables using charged particles inside jets in proton-proton collisions at √s = 13 TeV

Using 138 fb⁻¹ of proton-proton collision data at √s = 13 TeV collected by the CMS detector, this paper presents a measurement of five event shape variables derived from charged particles inside jets. The corrected distributions show general agreement with various theoretical predictions for multijet production.

Original authors: CMS Collaboration

Published 2026-02-20

This is an AI-generated explanation of the paper below. It is not written or endorsed by the authors. For technical accuracy, refer to the original paper.

The Big Picture: A Cosmic Smackdown

Imagine the Large Hadron Collider (LHC) at CERN as the world's most powerful slingshot. Scientists fire two tiny particles (protons) at each other at nearly the speed of light. When they collide, it's like smashing two Swiss watches together at full speed.

The result isn't just a broken watch; it's an explosion of tiny fragments (quarks and gluons) that fly out in all directions. These fragments quickly clump together to form "jets" of new particles, like debris from a car crash forming piles of metal.

This paper is about measuring the shape of that debris field.

The Problem: Too Much Noise

In the real world, when you try to study a car crash, you have to deal with a lot of background noise: other cars passing by, wind, and rain. In the LHC, this "noise" is called pileup. Because the slingshot fires so fast, dozens of other collisions happen at the exact same time, creating a messy cloud of extra particles that muddies the data.

The Solution: The scientists decided to ignore the "dust" (neutral particles) and only look at the "shrapnel" that has an electric charge. Why? Because charged particles leave a clear trail in the detector, allowing scientists to trace them back to the exact moment of the main crash, ignoring the background noise. It's like looking only at the license plates of the cars involved in the crash, ignoring the wind and the rain.
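The idea above — keep only charged particles whose tracks point back to the main collision vertex — can be sketched in a few lines. This is purely illustrative: the `Particle` fields and the `max_dz` cut value are hypothetical stand-ins, not the actual CMS selection.

```python
from dataclasses import dataclass

@dataclass
class Particle:
    charge: int   # 0 for neutral particles (no track in the tracker)
    dz: float     # longitudinal distance (cm) between the track's
                  # origin and the primary collision vertex

def select_charged(particles, max_dz=0.1):
    """Keep charged particles whose tracks trace back to the primary
    vertex, rejecting neutrals (no track to trace) and charged pileup
    (tracks from other, simultaneous collisions)."""
    return [p for p in particles if p.charge != 0 and abs(p.dz) < max_dz]

# One genuine charged particle, one neutral, one charged pileup track:
event = [Particle(+1, 0.01), Particle(0, 0.0), Particle(-1, 2.5)]
print(len(select_charged(event)))  # prints 1
```

The neutral particle is dropped because it leaves no track to trace, and the third particle is dropped because its track points to a different vertex — exactly the "ignore the wind and rain" filtering described above.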

The Five "Shapes" They Measured

The researchers looked at five specific ways to describe the shape of the particle explosion. Think of these as different ways to describe the aftermath of a party:

  1. Transverse Thrust (τ⊥): The "Back-to-Back" Test

    • The Analogy: Imagine two people throwing snowballs at each other. If they are perfectly aligned, the snowballs fly straight back and forth. If they throw wildly in all directions, the snowballs scatter everywhere.
    • What they found: They measured how "straight" the jets were. If the jets are perfectly back-to-back, the score is low. If they are scattered like a spherical explosion, the score is high.
  2. Third-Jet Resolution (Y₂₃): The "Third Wheel" Detector

    • The Analogy: Imagine a dance floor with two main couples dancing perfectly. Suddenly, a third person crashes the dance. This variable measures how "awkward" that third person is. Are they just a tiny, shy dancer on the edge, or are they a massive, loud intruder?
    • What they found: This tells them how often a third jet (a third "couple" of particles) appears in the collision.
  3. Jet Broadening (B_tot): The "Spread Out" Meter

    • The Analogy: Imagine throwing a handful of confetti. If you throw it tight, it falls in a small pile. If you throw it wide, it covers the whole room. This measures how "spread out" the energy is in the collision.
    • What they found: They checked if the energy was concentrated in a tight beam or if it was scattered loosely.
  4. Total Jet Mass (ρ_tot): The "Heaviness" Check

    • The Analogy: If you have a pile of feathers and a pile of bricks that take up the same amount of space, the bricks are "heavier" (more massive). This measures how much "stuff" is packed into the jets.
    • What they found: They looked at the mass of the particle clusters to see if the models predicted the right amount of "stuff."
  5. Total Transverse Jet Mass (ρ^T_tot): The "Sideways Weight"

    • The Analogy: Similar to the previous one, but this only cares about the weight of the particles moving sideways, ignoring how fast they are moving forward or backward. It's like weighing a suitcase while it's lying on its side.
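To make the first variable concrete: transverse thrust is conventionally defined as τ⊥ = 1 − max over axes n̂ of (Σᵢ |p⃗⊥,ᵢ · n̂|) / (Σᵢ |p⃗⊥,ᵢ|), so perfectly back-to-back events give τ⊥ ≈ 0 and isotropic ones give larger values. Here is a minimal sketch that finds the axis by brute-force scan — an illustration of the definition, not the optimized algorithm used in the analysis:

```python
import math

def transverse_thrust(pts, n_steps=3600):
    """Compute tau_perp = 1 - T_perp for a list of (px, py) transverse
    momentum components, scanning candidate thrust axes in the
    transverse plane (directions 0..pi suffice by symmetry)."""
    denom = sum(math.hypot(px, py) for px, py in pts)
    best = 0.0
    for k in range(n_steps):
        phi = math.pi * k / n_steps
        nx, ny = math.cos(phi), math.sin(phi)
        s = sum(abs(px * nx + py * ny) for px, py in pts)
        best = max(best, s / denom)
    return 1.0 - best

# Two perfectly back-to-back jets score low, as described above:
print(transverse_thrust([(100.0, 0.0), (-100.0, 0.0)]))  # prints 0.0
```

An event with particles sprayed uniformly in the transverse plane instead gives τ⊥ near 1 − 2/π ≈ 0.36, the "spherical explosion" end of the scale.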

The Great Debate: Theory vs. Reality

The scientists took their measurements (the "Reality") and compared them to three different computer simulations (the "Theories"):

  • PYTHIA 8: A very popular, well-tuned simulation.
  • HERWIG 7: Another famous simulation that thinks about particle interactions slightly differently.
  • MADGRAPH5: A simulation that tries to calculate the very first split-second of the crash with extreme precision.

The Results:

  • The Good News: For the "Back-to-Back" and "Sideways Weight" measurements, the computer models (especially PYTHIA 8) matched the real data almost perfectly. The theories are doing a great job here.
  • The Bad News: For the "Spread Out" and "Heaviness" measurements, the models started to drift apart from reality.
    • PYTHIA 8 tended to think the collisions were "heavier" and more "spread out" than they actually were.
    • MADGRAPH5 tended to think there were fewer "third wheels" (extra jets) than there actually were.

The Takeaway: We Need Better Recipes

Think of the computer models as recipes for predicting how the universe behaves.

  • For simple, clean collisions, the recipes are perfect.
  • For messy, complex collisions (where lots of particles are created), the recipes are slightly off. They are either adding too much salt (mass) or not mixing the ingredients (hadronization) quite right.

Why does this matter?
The scientists aren't just trying to be pedantic. If the recipes are wrong, it means our understanding of the "glue" that holds the universe together (Quantum Chromodynamics) is incomplete. By finding exactly where the recipes fail, they can tweak the ingredients to make the models better. This is crucial because if we can't predict the "background noise" of the universe perfectly, we might miss a signal of something completely new and exciting (like dark matter or new physics) hiding in the data.

In short: The CMS team took a very messy, high-speed crash, filtered out the noise, measured the shape of the debris in five different ways, and told the computer models: "You're doing great, but you need to fix your recipe for the messy parts."
