A Surrogate Model for High-Temperature Superconducting Magnets to Predict Current Distribution with a Neural Network

This paper presents a fully connected residual neural network (FCRN) surrogate model, trained on finite element method data, that rapidly and accurately predicts current density distributions in large-scale high-temperature superconducting magnets. This sidesteps the computational cost of traditional simulation and enables fast design optimization.

Mianjun Xiao, Peng Song, Yulong Liu, Cedric Korte, Ziyang Xu, Jiale Gao, Jiaqi Lu, Haoyang Nie, Qiantong Deng, Timing Qu

Published Wed, 11 Ma

Imagine you are trying to design a giant, super-powerful magnet for a fusion reactor or a futuristic motor. This magnet is made of a special material called REBCO (a type of high-temperature superconductor).

The problem is that these magnets are incredibly complex. Inside them, electricity doesn't flow evenly like water in a smooth pipe; the changing magnetic field pushes it into uneven patterns, a phenomenon called the screening current effect. To design a safe and efficient magnet, engineers need to know exactly how this electricity is distributed inside every tiny layer of the material.

The Old Way: The Slow, Exhausting Calculator

Traditionally, engineers use the Finite Element Method (FEM) to figure this out. Think of FEM as a super-precise, but incredibly slow, calculator.

  • If you want to design a small magnet, it might take a few hours.
  • If you want to design a meter-scale magnet (huge!), that same calculation can take days or even weeks on a powerful computer.
  • If you want to test 100 different designs to find the best one, you'd have to wait months. It's like trying to find the perfect recipe by baking a cake, waiting a week for it to cool, tasting it, and then starting over with a slightly different recipe.

The New Way: The "Surrogate" Oracle

This paper introduces a clever shortcut: a Surrogate Model powered by Artificial Intelligence (Neural Networks).

Think of this AI model as a highly trained apprentice or a crystal ball.

  1. The Training: First, the engineers let the slow, old calculator (FEM) do its work on a bunch of different magnet designs. They feed these results into the AI.
  2. The Learning: The AI studies these results and learns the "rules of the game." It learns how changing the size of the magnet, the number of wire turns, or the current affects the electricity flow.
  3. The Prediction: Once trained, the AI doesn't need to do the heavy math anymore. When you ask it, "What happens if I make the magnet slightly bigger?" it answers in milliseconds with incredible accuracy.
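The three steps above can be sketched in code. Everything here is hypothetical: the "FEM solver" is a made-up instant formula standing in for the hours-long simulation, and a simple linear model fitted by gradient descent stands in for the paper's deep FCRN.

```python
import random

def slow_fem_solver(width, turns, current):
    # Stand-in for the expensive FEM run; inputs are design parameters
    # already normalized to [0, 1], output is one scalar quantity of
    # interest (invented formula, not real magnet physics).
    return 0.5 * width + 0.3 * turns + 0.2 * current

# Step 1 (training data): run the slow solver on a batch of sampled designs.
random.seed(0)
designs = [[random.random() for _ in range(3)] for _ in range(200)]
targets = [slow_fem_solver(*d) for d in designs]

# Step 2 (learning): fit the surrogate by full-batch gradient descent
# on mean squared error.
w, b, lr = [0.0, 0.0, 0.0], 0.0, 0.2
for _ in range(2000):
    grad_w, grad_b = [0.0, 0.0, 0.0], 0.0
    for x, y in zip(designs, targets):
        err = sum(wi * xi for wi, xi in zip(w, x)) + b - y
        grad_b += 2 * err / len(designs)
        for i in range(3):
            grad_w[i] += 2 * err * x[i] / len(designs)
    w = [wi - lr * gi for wi, gi in zip(w, grad_w)]
    b -= lr * grad_b

# Step 3 (prediction): the fitted surrogate answers "what if?" instantly,
# with no FEM call.
def surrogate(width, turns, current):
    return w[0] * width + w[1] * turns + w[2] * current + b
```

The real model maps designs to full current density maps, not a single number, but the workflow is the same: pay the FEM cost once to build a dataset, then query the cheap surrogate as often as you like.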

The Secret Sauce: The "Residual" Network

The researchers didn't just use a standard AI; they used a specific type called a Fully Connected Residual Network (FCRN).

  • The Analogy: Imagine a standard AI as a student trying to solve a math problem by writing every single step on a long piece of paper. If the problem is too long, the student gets tired, loses focus, and makes mistakes (this is called the "vanishing gradient" problem).
  • The Residual Fix: The Residual Network is like giving that student a shortcut. It allows the student to peek at the answer from the previous step and add it directly to the current step. This keeps the "signal" strong and clear, even for very deep, complex problems. This allowed the AI to learn much better than older models.
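The shortcut idea can be shown in a few lines. This is a minimal sketch, not the paper's actual network: a residual block computes some layers, then adds the block's input straight back onto the output.

```python
import math
import random

def dense(x, W, b):
    # One fully connected layer with tanh activation:
    # out[j] = tanh(sum_i x[i] * W[i][j] + b[j]).
    return [math.tanh(sum(xi * W[i][j] for i, xi in enumerate(x)) + b[j])
            for j in range(len(b))]

def residual_block(x, W1, b1, W2, b2):
    # Two dense layers, then ADD the block's input back (the "shortcut").
    # If the layers learn nothing (all-zero weights), the block passes x
    # through unchanged, which is what keeps very deep stacks trainable.
    h = dense(dense(x, W1, b1), W2, b2)
    return [hi + xi for hi, xi in zip(h, x)]

# Tiny demo on a 4-dimensional input with random weights.
random.seed(0)
dim = 4
rand_mat = lambda: [[random.uniform(-0.5, 0.5) for _ in range(dim)]
                    for _ in range(dim)]
rand_vec = lambda: [random.uniform(-0.5, 0.5) for _ in range(dim)]
x = rand_vec()
out = residual_block(x, rand_mat(), rand_vec(), rand_mat(), rand_vec())
```

The key property: with all-zero weights the block is exactly the identity, so stacking many blocks can never make the signal worse than passing it straight through, and gradients always have an unbroken path back to the input.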

Testing the Crystal Ball

The team tested their AI in two scenarios:

  1. The Fast Ramp (Case 1): Imagine turning the magnet on very quickly. The AI had to predict how the electricity behaves while things are changing fast.

    • Result: The AI was amazing. Even when they asked it to predict magnets 50% larger than anything it had ever seen before, it was still accurate (less than 10% error). It was like the apprentice being able to guess the size of a giant's house just by seeing a dollhouse.
  2. Steady State (Case 2): Imagine the magnet is running at a constant, high power.

    • Result: The AI was great at predicting the shape of the magnet (geometry). However, if they asked it to predict what happens with a much higher electrical current than it had seen in training, it got a bit confused.
    • Why? Because at very high currents, the material behaves in a totally new, non-linear way (like a sponge that suddenly stops absorbing water). The AI hadn't seen enough examples of this "full saturation" to learn the rule.
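This failure mode is easy to reproduce with a toy model. The saturation formula below is invented for illustration: a surrogate fitted only on the linear regime interpolates well inside its training range but badly over-predicts once the true response flattens out.

```python
# Toy saturation: below the critical value the response grows linearly;
# above it the material "saturates" and the response flattens
# (invented formula, standing in for the real non-linear physics).
def true_response(current):
    return min(current, 1.0)

# Fit a straight line y = a * current, using training data that only
# covers the linear regime (current <= 0.5) -- like training only on
# currents far from full saturation.
train = [i / 10 for i in range(1, 6)]  # 0.1 .. 0.5
a = sum(true_response(c) * c for c in train) / sum(c * c for c in train)

in_range_error = abs(a * 0.4 - true_response(0.4))      # interpolation
out_of_range_error = abs(a * 2.0 - true_response(2.0))  # extrapolation
```

Inside the training range the fit is essentially perfect; at twice the largest training current the line keeps climbing while the true response has flattened, so the error is large. That is the "full saturation" regime the FCRN had never seen.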

The Grand Finale: Designing a Magnet in Minutes

The real magic happened when they used the AI to design a magnet.

  • The Goal: Find the smallest, most efficient magnet that creates a magnetic field of 16 Tesla (super strong!) without wasting too much material.
  • The Process: Instead of waiting weeks for the slow calculator to test thousands of designs, the AI tested them all in 3 minutes.
  • The Outcome: It found the perfect design. When the engineers double-checked this "AI-designed" magnet with the slow, old calculator, the results matched almost perfectly.
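The search above can be sketched as a brute-force sweep over candidate designs. The field and cost formulas here are made up purely for illustration; the point is that with a millisecond-fast surrogate, exhaustively checking thousands of designs against a field target is trivial, where the same sweep with FEM calls would take weeks.

```python
# Hypothetical surrogate: predicted central field (tesla) for a design
# described by (inner radius in metres, number of turns). Toy scaling
# law, not real magnet physics.
def predicted_field(radius, turns):
    return 0.02 * turns / radius

def material_cost(radius, turns):
    # Toy proxy for conductor usage.
    return radius * turns

TARGET_FIELD = 16.0  # tesla

# Sweep ~170,000 candidate designs and keep the cheapest one that still
# reaches the target field.
best = None
for turns in range(100, 2001):
    for radius in [r / 100 for r in range(10, 101)]:  # 0.10 m .. 1.00 m
        if predicted_field(radius, turns) >= TARGET_FIELD:
            cost = material_cost(radius, turns)
            if best is None or cost < best[0]:
                best = (cost, radius, turns)

cost, radius, turns = best
```

In the paper the final step is the same as described above: the winning surrogate-picked design is re-checked once with the full FEM solver, so the expensive tool is used only to verify, not to search.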

The Takeaway

This paper shows that we can stop waiting weeks for magnet designs. By training a smart AI on a smaller set of data, we can create a "surrogate" that acts like a super-fast, highly accurate oracle. It allows engineers to explore "what-if" scenarios and design massive, powerful magnets for fusion energy and advanced motors in the blink of an eye, rather than over a cup of coffee that has gone cold.