The impact of baryons on weak lensing statistics as a function of halo mass and radius

This paper investigates the limits of semi-analytic baryon correction models by systematically replacing dark matter halo regions with hydrodynamical counterparts, revealing that while massive halos drive most baryonic suppression in power spectra, different weak lensing statistics exhibit distinct sensitivities to mass and radius that expose specific calibration failures in current models.

Max E. Lee, Zoltan Haiman, Shy Genel

Published Fri, 13 Ma

Imagine you are trying to take a perfect photograph of a crowded city skyline at night to measure the exact brightness of every streetlight. This is what astronomers are trying to do with the universe using "weak lensing" surveys. They want to map out invisible dark matter by seeing how its gravity bends light from distant galaxies.

However, there's a problem: Baryons (normal matter like gas, stars, and black holes) are like mischievous street performers in the city. They move around, blow up balloons, and build bonfires. Their activity changes the density of the city, making the "streetlights" (galaxies) look brighter or dimmer than they really are. If we don't account for this, our map of the city will be wrong, and we'll get the wrong measurements of the universe's expansion.

The Big Challenge: The "Perfect" Simulation is Too Expensive

To fix this, scientists usually run supercomputer simulations.

  • The "Dark Matter Only" (DMO) Simulation: This is like a city map that only shows the buildings (dark matter) but ignores the people, cars, and fireworks (baryons). It's fast and cheap to run.
  • The "Hydrodynamic" Simulation: This includes everything—people, cars, fireworks. It's accurate but so expensive to run that we can only make a few copies of it.

We need the accuracy of the "Hydro" simulation but the volume of the "DMO" simulation. So, scientists invented Baryon Correction Models (BCMs). Think of these as "patches" or "filters" applied to the cheap DMO map to try and make it look like the expensive Hydro map.

The Experiment: The "Frankenstein" Universe

This paper asks a critical question: Are these patches working correctly?

The authors created a new way to test this. Instead of just applying a mathematical patch, they literally took the "Hydro" simulation and started swapping pieces into the "DMO" simulation, like a surgeon performing a transplant.

They called these "Replace" fields.

  • They took a specific group of halos (gravitational clumps of matter) in the DMO simulation.
  • They replaced the particles inside them with the real, messy particles from the Hydro simulation.
  • They did this for different sizes of halos (from small galaxy groups to massive clusters) and different distances from the center (from the core to the outskirts).

By doing this, they created a "Frankenstein" universe where they knew exactly which parts were "real" (Hydro) and which were "fake" (DMO). They then measured how much this swap changed the statistics.
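The transplant above can be sketched in a few lines of NumPy. Everything here — the function name, the input arrays, and the assumption that halos are already matched between the two boxes — is illustrative, not the paper's actual pipeline:

```python
import numpy as np

def build_replace_field(dmo_pos, hydro_pos, centers, masses, r200,
                        mass_range=(1e13, 1e15), radius_factor=1.0):
    """Toy sketch of a 'Replace' field: inside every matched halo whose
    mass falls in mass_range, cut out the DMO particles within
    radius_factor * r200 of the center and graft in the hydro particles
    occupying the same sphere. All inputs are (N, 3) or (N,) arrays."""
    keep_dmo = np.ones(len(dmo_pos), dtype=bool)
    take_hydro = np.zeros(len(hydro_pos), dtype=bool)
    selected = (masses >= mass_range[0]) & (masses < mass_range[1])
    for c, r in zip(centers[selected], r200[selected]):
        cut = radius_factor * r
        # Remove DMO particles inside the sphere...
        keep_dmo &= np.linalg.norm(dmo_pos - c, axis=1) > cut
        # ...and mark the hydro particles that replace them.
        take_hydro |= np.linalg.norm(hydro_pos - c, axis=1) <= cut
    # "Frankenstein" field: DMO background + hydro halo interiors.
    return np.vstack([dmo_pos[keep_dmo], hydro_pos[take_hydro]])
```

Varying `mass_range` and `radius_factor` is exactly how the paper scans over halo masses and distances from the halo center.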

The Key Findings: The "Fingerprints" of Baryons

The authors found that different statistics (different ways of measuring the universe) react to baryons in very different ways. They call these "Fingerprints."

  1. The Matter Power Spectrum (The "Big Picture" Map):

    • This statistic looks at the overall distribution of matter.
    • The Result: To get this right, you need to fix almost everything. You need to replace the cores of massive clusters and the gas far out in the outskirts, and even fix some smaller galaxy groups. Even if you fix 90% of the mass, you still miss about 10% of the effect because it comes from tiny, low-mass halos or gas floating in the space between galaxies that your model didn't touch.
  2. Weak Lensing Peaks (The "Spotlight" on Clusters):

    • This statistic counts the brightest, most concentrated spots in the sky (usually massive galaxy clusters).
    • The Result: This is very different! To get the peaks right, you only need to fix the very cores of the massive halos. The gas far away in the outskirts doesn't matter much for this specific statistic. It's like trying to judge the brightness of a spotlight; you only need to fix the bulb, not the whole room.
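The quantity behind both comparisons is a suppression ratio: how much a statistic changes when the hydro pieces are swapped in. Below is a minimal 1-D toy of that ratio for the power spectrum, using a simple FFT-based estimator; the paper of course works with full 3-D fields and ray-traced lensing maps, so treat this purely as a definition sketch:

```python
import numpy as np

def power_spectrum(delta):
    """Toy 1-D power spectrum of an overdensity field."""
    fk = np.fft.rfft(delta)
    return np.abs(fk) ** 2 / len(delta)

def suppression(delta_replace, delta_dmo):
    """Fractional baryonic suppression S(k) = P_replace / P_dmo - 1:
    negative S means baryons (the swapped-in hydro particles) have
    smoothed out structure at that scale."""
    return power_spectrum(delta_replace) / power_spectrum(delta_dmo) - 1.0
```

Measuring this ratio for each Replace field, mass bin by mass bin, is what lets the authors attribute the suppression to specific halo masses and radii — and repeating the exercise with peak counts instead of `power_spectrum` is what reveals the very different "fingerprint".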

The "Two Mistakes" That Cancel Out

Here is the most surprising discovery.

The authors looked at existing correction models (BCMs) that are currently used by major surveys. These models are tuned to match the "Big Picture" map (the Power Spectrum) perfectly.

  • Mistake #1: These models under-predict the mass in the very center (core) of galaxy clusters. They think the core is too light.
  • Mistake #2: To compensate, they over-predict the mass in the outer regions (outskirts). They push too much gas out to the edges.

The Magic Trick: Because the "Big Picture" map averages everything out, these two mistakes almost exactly cancel! The model looks ~99% accurate for the Power Spectrum.

The Disaster: But when you look at the "Spotlight" (Peak Counts), the model fails miserably. Why? Because the Peak Counts only care about the core. Since the model got the core wrong (Mistake #1), the prediction for the peaks is wrong, even though the total map looked fine.
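A toy with invented numbers makes the cancellation concrete: split a halo into a core shell and an outskirts shell, give the model too little mass in the core and too much in the outskirts, and the total (what the Power Spectrum effectively averages over) matches while the core (what the Peaks see) is badly off:

```python
import numpy as np

# Illustrative numbers only, not values from the paper.
hydro_mass = np.array([40.0, 60.0])   # "truth": mass in [core, outskirts]
bcm_mass   = np.array([30.0, 70.0])   # model: light core, heavy outskirts

# Total mass agrees -> the averaged statistic looks perfect.
total_error = bcm_mass.sum() / hydro_mass.sum() - 1.0   # 0% error

# Core mass disagrees -> the core-sensitive statistic fails.
core_error = bcm_mass[0] / hydro_mass[0] - 1.0          # -25% error
```

Mistake #1 and Mistake #2 are invisible to any statistic that only sees `total_error`, which is why a model can be tuned to a 99%-accurate Power Spectrum and still fail on Peak Counts.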

The Analogy: The Baking Cake

Imagine you are trying to bake a cake that tastes exactly like a famous chef's cake.

  • The Chef's Cake (Hydro): Has the perfect amount of sugar in the center and perfect frosting on the outside.
  • Your Cake (DMO): Has no sugar or frosting.
  • The Correction Model (BCM): You try to fix your cake by adding sugar and frosting.

You tune your recipe so that if you eat a slice from the middle and a slice from the edge and mix them together, the total sweetness matches the Chef's cake. You succeeded! Your "average sweetness" is perfect.

However, the Chef's cake has a specific texture in the center that is crucial for a specific dessert (the "Peaks"). Because your recipe added too much frosting to the edge to compensate for not enough sugar in the center, the center of your cake is still dry and wrong. If you try to make that specific dessert, it fails, even though your "average sweetness" was perfect.

Why This Matters

Future telescopes like Euclid, LSST, and Roman are going to take incredibly precise pictures of the universe. They need to control errors to within 1%.

This paper tells us:

  1. One size does not fit all. You cannot use a single "patch" to fix all types of measurements.
  2. Current models are "lucky." They look good because two errors cancel out, but they are physically wrong in the places that matter most for certain measurements (like the cores of clusters).
  3. The Solution: We need new models that are tested against multiple different "fingerprints" (not just the big map, but also the peaks and other shapes) to ensure they are getting the physics right in the right places, not just faking the average.

In short, the universe is complex, and to map it correctly, we need to stop trying to fix the whole thing with a single average and start understanding exactly where the "baryonic fireworks" are happening for each specific measurement.