General Explicit Network (GEN): A novel deep learning architecture for solving partial differential equations

This paper proposes the General Explicit Network (GEN), a novel deep learning architecture that improves upon traditional PINNs by utilizing basis functions to implement point-to-function PDE solving, thereby achieving solutions with superior robustness and extensibility.

Genwei Ma, Ting Luo, Ping Yang, Xing Zhao

Published 2026-04-07

The Big Picture: Solving the "Unsolvable" Equations

Imagine you are trying to predict how a drop of ink spreads in water, how a guitar string vibrates, or how heat moves through a metal rod. These are all described by Partial Differential Equations (PDEs). For decades, scientists have used two main ways to solve these:

  1. Old School Math: Like a super-precise calculator that takes tiny steps. It's accurate but slow, and it scales poorly as problems grow in size and dimension (the "curse of dimensionality").
  2. Modern AI (PINNs): Like a student who memorizes the answers to a specific practice test. It's fast, but if you ask a slightly different question (extrapolation), it often fails or gives nonsense answers because it only learned the "look" of the data, not the underlying rules.

The Problem: PINNs are "black boxes"—you put data in, and a solution comes out, but you don't know why it worked. Because they fit the look of the data rather than the logic behind it, changing the problem even slightly confuses them, and they often break down when you look outside the training area.

The New Idea: GEN (The "Lego" Architect)

The authors propose a new architecture called GEN (General Explicit Network). Instead of trying to guess the whole answer from scratch, GEN builds the solution like a master architect building a house with specific, pre-chosen bricks.

Here is the core concept broken down:

1. The "Point-to-Point" vs. "Point-to-Function" Analogy

  • Old AI (PINN): Imagine trying to draw a smooth curve by connecting a million dots. If you miss a dot, the line wobbles. If you try to draw the line beyond the dots you have, you have no idea where to go. This is Point-to-Point fitting.
  • New AI (GEN): Imagine you know the curve is a wave. Instead of guessing every dot, you say, "I will build this wave using sine waves." You pick a few "wave bricks" (basis functions) and mix them together. Even if you haven't seen the end of the wave, you know exactly how the wave continues because the "brick" (the sine wave) has a built-in rule for how it behaves. This is Point-to-Function fitting.

2. The Secret Sauce: "Basis Functions"

In the GEN model, the AI doesn't just learn random numbers. It learns how to mix specific, pre-defined shapes called Basis Functions.

  • Trigonometric Functions (Sine/Cosine): These are like musical notes. Any sound can be described by mixing specific notes together in the right proportions. These are great for things that repeat or oscillate (like waves).
  • Gaussian Functions: These are like bell curves or hills. They are great for things that peak in the middle and fade out, like heat spreading or a localized bump.

The AI's job isn't to invent the shape; its job is to figure out which shapes to use and how much of each to mix together to match the real-world physics.
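The mixing idea above can be sketched in a few lines. This is a hedged illustration, not the paper's architecture: in GEN the mixing coefficients come from a trained network, while here a plain least-squares solve stands in for that learning step, and the specific frequencies, centers, and widths are arbitrary choices.

```python
import numpy as np

def basis_matrix(x, freqs=(1, 2, 3), centers=(0.25, 0.5, 0.75), width=0.1):
    """Evaluate the 'bricks' at points x: sine columns plus Gaussian columns."""
    cols = [np.sin(2 * np.pi * f * x) for f in freqs]                      # oscillatory bricks
    cols += [np.exp(-((x - c) ** 2) / (2 * width ** 2)) for c in centers]  # bump bricks
    return np.stack(cols, axis=1)

# Target signal: one wave plus one localized bump (both lie in the basis span).
x_train = np.linspace(0.0, 1.0, 200)
u_train = (np.sin(2 * np.pi * x_train)
           + 0.5 * np.exp(-((x_train - 0.5) ** 2) / (2 * 0.1 ** 2)))

# "Learning" step (stand-in): solve for the mixing coefficients.
Phi = basis_matrix(x_train)
coeffs, *_ = np.linalg.lstsq(Phi, u_train, rcond=None)

def u_hat(x):
    """The solution is now an explicit function of x, valid at ANY point."""
    return basis_matrix(np.atleast_1d(x)) @ coeffs
```

Because the target lies in the span of the chosen bricks, the fit is essentially exact, and `coeffs` is directly readable: each entry says how much of a named brick (a specific sine frequency or bump location) the solution contains—this is the explainability point made later in the article.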

How It Works in Real Life (The Experiments)

The authors tested this on three classic physics problems:

  1. The Heat Equation (Cooling Coffee):

    • The Test: Predicting how hot coffee cools down over time.
    • The Result: The old AI (PINN) got the temperature right while the coffee was hot (the training zone), but when asked what happens after the coffee has cooled for a long time (the extrapolation zone), its prediction broke down—claiming the coffee would heat back up or freeze instantly.
    • The GEN Win: Because GEN used "decay" bricks (functions that naturally get smaller over time), it correctly predicted the coffee would cool down smoothly, even far into the future.
  2. The Wave Equation (Vibrating String):

    • The Test: Predicting how a wave travels along a string.
    • The Result: Waves repeat. The old AI forgot the pattern once it left the training area. The GEN, using "sine wave" bricks, naturally kept the wave repeating correctly, just like a real string would.
  3. Burgers' Equation (Traffic Jams):

    • The Test: Modeling how traffic slows down and speeds up (shockwaves).
    • The Result: GEN showed that if you use enough "bricks," you can capture the sharp, sudden changes in traffic flow with incredible precision, whereas the old AI smoothed them out or got them wrong.
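The heat-equation extrapolation failure described above can be reproduced in miniature. This is a hedged toy, not the paper's experiment: a single exponential "decay brick" fitted to Newton's-law cooling data stands in for GEN, and an unconstrained cubic polynomial stands in for point-to-point fitting.

```python
import numpy as np

# Toy cooling data on the "training zone" t in [0, 2].
t_train = np.linspace(0.0, 2.0, 50)
T_train = np.exp(-t_train)              # true cooling curve T(t) = exp(-t)

# Point-to-function: one decay brick exp(-t); learn only its coefficient.
phi = np.exp(-t_train)[:, None]
c, *_ = np.linalg.lstsq(phi, T_train, rcond=None)

# Point-to-point stand-in: a cubic polynomial fitted to the same data.
p = np.polyfit(t_train, T_train, deg=3)

# Extrapolate far beyond the training zone.
t_far = 6.0
true_far = np.exp(-t_far)
decay_pred = float(c[0]) * np.exp(-t_far)   # decay brick: keeps cooling correctly
poly_pred = float(np.polyval(p, t_far))     # polynomial: diverges to nonphysical values
```

Inside [0, 2] both fits match the data closely; at t = 6 the decay brick is still essentially exact while the polynomial's built-in cubic growth takes over—the "built-in rule" of the brick is what carries the solution outside the training zone.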

Why This Matters (The "Aha!" Moment)

The paper argues that we need to stop treating AI like a magic black box.

  • Robustness: GEN is like a carpenter who knows how wood behaves. If you ask them to build a chair for a giant, they know the legs need to be thicker. The old AI is like someone who builds a chair for a human and tries to stretch it for a giant, causing it to collapse.
  • Explainability: With GEN, you can look at the solution and say, "Ah, this part of the answer comes from the 'sine wave' brick, and this part comes from the 'bell curve' brick." You understand the solution.
  • Extensibility: Because the "bricks" have built-in mathematical rules, the solution works even outside the area where the AI was trained.

The Catch (Limitations)

The authors are honest about the flaws:

  1. You need to know your stuff: You have to tell the AI which bricks to use (e.g., "Use sine waves for this problem"). If you pick the wrong bricks, it won't work well. The AI doesn't magically know the physics yet; the human still needs to guide it.
  2. It takes time: Training this new system takes longer than the old methods because it's doing more complex math.
  3. The "Author's Note": The paper ends with a humble confession. The author admits they aren't a deep PDE expert and that the specific "bricks" they chose might not be the best ones. They are essentially saying: "Here is a new, better way to build the house. I've laid the foundation and shown it works, but I hope real architects (PDE experts) come along to pick the perfect bricks and build the skyscrapers."

Summary

The General Explicit Network (GEN) is a smarter way to use AI for physics. Instead of blindly memorizing data, it builds solutions by mixing known, mathematically sound shapes (like sine waves and bell curves). This makes the AI more reliable, more accurate outside of its training data, and easier to understand, bridging the gap between raw data and physical laws.
