Multi-task deep neural network for predicting both nuclear fission yields and their experimental errors in peak-shaped data

This paper introduces a multi-task deep neural network that combines a novel weighted loss function with an odd-even effect feature to predict both nuclear fission product yields and their experimental errors in peak-shaped data more accurately than conventional independent-learning methods.

Original authors: Maomi Ueno, Enbo Zhang, Kazuma Fuchimoto, Satoshi Chiba, Jingde Chen, Chikako Ishizuka

Published 2026-04-01

This is an AI-generated explanation of the paper below. It is not written or endorsed by the authors. For technical accuracy, refer to the original paper.

Imagine you are trying to predict the weather. You have a lot of data about sunny days, rainy days, and stormy days. But you need to predict a very specific, rare event: a sudden, violent thunderstorm that happens only in a tiny, jagged valley.

This is exactly the challenge nuclear physicists face when studying Nuclear Fission.

The Problem: The "Jagged Mountain"

When a heavy atom (like Uranium) splits, it doesn't just break into two random pieces. It breaks into a specific set of smaller atoms called Fission Products. If you graph how often each type of atom is produced, you don't get a smooth hill. You get a double-humped mountain range with very sharp, jagged peaks and deep valleys.

  • The Peaks: These are the most common results.
  • The Valleys: These are rare results.
  • The Problem: Existing computer models are great at drawing smooth hills, but they struggle to predict the sharp, jagged peaks accurately. They also struggle to tell you how confident they are in their prediction (the "error").

The Solution: A "Twin-Brain" AI

The researchers in this paper built a new kind of Artificial Intelligence (AI) to solve this. Instead of using a standard AI, they used a Multi-Task Deep Neural Network.

Think of this AI as a twin-brain system:

  1. Brain A is tasked with predicting the amount of fission products (the height of the mountain).
  2. Brain B is tasked with predicting the uncertainty or error (how shaky the ground is under that mountain).

In the past, scientists trained two separate AIs: one for the mountain and one for the shaky ground. But the researchers realized these two tasks are deeply connected. If you know the mountain is very high, you can guess the ground is more stable. By training them together, they help each other learn, much like how a pianist's left and right hands improve each other when practicing a duet.
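The "twin-brain" idea can be sketched as a network with one shared trunk and two task-specific heads. This is a minimal toy sketch, not the authors' actual architecture: the input features, layer sizes, and activations here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(0.0, x)

# Shared "trunk": both tasks learn from the same hidden representation.
# (Illustrative sizes; the paper's actual architecture may differ.)
W_shared = rng.normal(size=(3, 16))   # 3 toy input features -> 16 hidden units
# Two task-specific "heads" branch off the shared features.
W_yield = rng.normal(size=(16, 1))    # Brain A: predicts the fission yield
W_error = rng.normal(size=(16, 1))    # Brain B: predicts the experimental error

def forward(x):
    h = relu(x @ W_shared)                    # shared representation
    y_yield = h @ W_yield                     # task 1: yield
    y_error = np.log1p(np.exp(h @ W_error))   # task 2: softplus keeps error > 0
    return y_yield, y_error

# Toy input (mass, charge, odd-even flag), scaled to order 1.
x = np.array([[134.0, 52.0, 1.0]]) / np.array([250.0, 100.0, 1.0])
y, e = forward(x)
print(y.shape, e.shape)   # both (1, 1)
```

Because the trunk weights receive gradients from both heads during training, an improvement driven by one task reshapes the features used by the other, which is how the two "brains" help each other learn.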

The Secret Sauce: Two New Tricks

To make this twin-brain system work perfectly, the researchers added two special ingredients:

1. The "Spotlight" Loss Function (Weighted Loss)

Standard AI training is like a teacher who grades a student's test by giving every question the same weight. But in nuclear fission, the "peak" questions (the jagged mountains) are the most important. If the AI gets those wrong, the whole prediction fails.

The researchers invented a Weighted Loss Function. Imagine a teacher who puts a giant spotlight on the most important questions.

  • If the AI gets a "valley" question wrong, the teacher gives a small penalty.
  • If the AI gets a "peak" question wrong, the teacher screams, "This is critical! Try again!"

This forces the AI to obsess over getting the sharp peaks right, rather than just averaging out the whole graph.
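The "spotlight" idea can be written down as a weighted squared error. Here the weight of each data point is simply its true yield, so errors at the tall peaks dominate the loss; this is one simple way to realize the idea, and the paper's actual weighting scheme may differ.

```python
import numpy as np

def weighted_mse(y_true, y_pred):
    # Weight each point by its true yield (normalized), so the
    # tall peaks dominate the loss and cannot be averaged away.
    # (Illustrative weighting; the paper's scheme may differ.)
    weights = y_true / y_true.sum()
    return float(np.sum(weights * (y_true - y_pred) ** 2))

# Toy yield curve: two sharp peaks, shallow valleys.
y_true = np.array([0.01, 6.0, 0.02, 5.5, 0.01])

miss_peak   = y_true + np.array([0.0, -1.0, 0.0, 0.0, 0.0])  # off by 1 at a peak
miss_valley = y_true + np.array([1.0, 0.0, 0.0, 0.0, 0.0])   # off by 1 in a valley

print(weighted_mse(y_true, miss_peak))    # large penalty
print(weighted_mse(y_true, miss_valley))  # small penalty
```

The same absolute mistake (off by 1) is punished hundreds of times harder at the peak than in the valley, which is exactly the "spotlight" behavior described above.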

2. The "Odd-Even" Clue

Nature has a quirky habit: atoms with an even number of particles are generally more stable and common than those with an odd number. This creates a "jagged" pattern in the data (like a sawtooth wave).

The researchers gave the AI a cheat sheet: a simple Odd-Even switch.

  • If the number is even, the switch flips to "1" (High Stability).
  • If the number is odd, the switch flips to "0" (Low Stability).

This helps the AI understand the underlying rhythm of the jagged peaks, allowing it to draw the sawtooth pattern much more accurately.
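In code, the odd-even switch is just a binary feature appended to the network's input. The exact feature set the paper feeds the network is not reproduced here; the helper names below are illustrative.

```python
def odd_even_flag(n):
    """1 for an even particle count (more stable, more common),
    0 for an odd one. A binary feature that lets the network
    learn the sawtooth staggering in the yields."""
    return 1 if n % 2 == 0 else 0

def features(mass, protons):
    """Illustrative feature vector for one fission fragment:
    raw counts plus odd-even flags for protons and neutrons."""
    neutrons = mass - protons
    return [mass, protons, odd_even_flag(protons), odd_even_flag(neutrons)]

print(features(134, 52))   # → [134, 52, 1, 1]  (both counts even)
```

Without this flag, a smooth network tends to blur neighboring odd and even nuclei together; with it, the model can learn a separate offset for each parity and reproduce the sawtooth directly.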

The Results: A Masterpiece of Prediction

When they tested this new system against older methods (like independently trained neural networks or Bayesian networks):

  • Accuracy: It drew the jagged peaks much closer to the real experimental data.
  • Confidence: It didn't just guess the height of the mountain; it also gave a very accurate estimate of how much that height might vary (the error).
  • Versatility: It worked well not just for known energy levels, but also predicted what would happen at new, untested energy levels by learning the patterns from the data it did have.

Why Does This Matter?

Nuclear energy is a major source of low-carbon power, but it requires precise safety calculations.

  • Reactor Design: Engineers need to know exactly how much radioactive waste will be produced and what kind.
  • Safety: If you don't know the "error" (the uncertainty), you can't design safe containment systems.

This new AI method acts like a super-accurate crystal ball. It tells engineers not only what will happen when a nuclear reactor runs, but also how sure we can be about that prediction. This leads to safer, more efficient nuclear power plants and better waste management strategies.

In short: The researchers taught a computer to stop guessing the shape of a jagged mountain and start mapping it with a spotlight and a rhythm guide, giving us a much clearer picture of the future of nuclear energy.
