Data-Driven Control of a Magnetically Actuated Fish-Like Robot

This paper proposes and validates a data-driven control framework for magnetically actuated fish-like robots that combines a neural network-based forward dynamics model with gradient-based model predictive control and imitation learning to achieve precise path following without relying on analytical modeling.

Akiyuki Koyama, Hiroaki Kawashima

Published 2026-03-06

Imagine you have a tiny, robotic fish swimming in a bowl of water. This isn't a fish with a battery and a motor in its tail; instead, it's a soft, flexible creature that moves because a giant magnet outside the bowl pulls and pushes it. It's like a puppet, but the puppeteer is an invisible magnetic force.

The problem? Controlling this puppet is incredibly hard. Water is messy and unpredictable; the fish's tail is floppy and doesn't snap back perfectly every time (it has "hysteresis," or memory); and the commands you send to the magnet can last for different amounts of time. If you tell the magnet to turn on for 200 milliseconds, the fish moves a little. If you tell it to turn on for 1,000 milliseconds, the fish moves a lot. But because the water fights back differently every time, predicting exactly where the fish will end up is like trying to guess where a leaf will land in a storm.

This paper is about teaching a computer how to control this tricky robotic fish without needing a physics textbook. Here is how they did it, broken down into three simple steps:

1. The "Crystal Ball" (The Forward Dynamics Model)

First, the researchers needed to understand how the fish moves. Instead of trying to write complex math equations about water pressure and magnetic fields (which is nearly impossible for a floppy tail), they just let the fish swim around and recorded what happened.

Think of this as training a Crystal Ball. They showed the computer thousands of examples: "When the fish was here, and we pulled the magnet for this long, it ended up there." They used a Neural Network (a type of AI brain) to memorize these patterns. Now, this AI "Crystal Ball" can look at the fish's current position and say, "If you pull the magnet for 500 milliseconds, the fish will likely end up right here." It learned the rules of the water by watching, not by calculating.
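The idea can be sketched in a few lines. This is a deliberately tiny stand-in, not the paper's setup: the "fish" is one-dimensional, the dynamics function `true_step`, the coefficients, and the noise level are all invented for illustration, and a hand-rolled two-layer network plays the role of the forward dynamics model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 1-D stand-in for the real fish: the next position depends
# nonlinearly on the current position and on how long the magnet pulls
# (the dynamics and all coefficients here are illustrative).
def true_step(pos, duration):
    return pos + 0.02 * duration * np.cos(pos) + 0.01 * rng.normal()

# 1) Let the "fish" swim and record (position, command, next position).
X, y = [], []
pos = 0.0
for _ in range(2000):
    duration = rng.uniform(0.2, 1.0)          # seconds of magnetic pull
    nxt = true_step(pos, duration)
    X.append([pos, duration])
    y.append(nxt)
    pos = nxt
X, y = np.array(X), np.array(y)

# 2) Fit a tiny one-hidden-layer network by hand-rolled gradient
#    descent -- a minimal sketch of a learned forward dynamics model.
W1 = rng.normal(0.0, 0.5, (2, 16)); b1 = np.zeros(16)
W2 = rng.normal(0.0, 0.5, (16, 1)); b2 = np.zeros(1)
lr = 1e-2
for _ in range(4000):
    h = np.tanh(X @ W1 + b1)                      # hidden activations
    pred = (h @ W2 + b2).ravel()
    g_pred = 2.0 * (pred - y)[:, None] / len(X)   # d(MSE)/d(pred)
    gW2 = h.T @ g_pred; gb2 = g_pred.sum(0)
    g_h = (g_pred @ W2.T) * (1.0 - h ** 2)        # backprop through tanh
    gW1 = X.T @ g_h; gb1 = g_h.sum(0)
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2

def crystal_ball(pos, duration):
    """Predict where the fish ends up after one magnetic pull."""
    h = np.tanh(np.array([pos, duration]) @ W1 + b1)
    return float(h @ W2 + b2)
```

After training, `crystal_ball` answers "if you pull for this long from here, where does the fish end up?" purely from recorded data, which is the whole point: no fluid dynamics or magnetics equations appear anywhere.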

2. The "Chess Grandmaster" (Gradient-Based MPC)

Once the computer had its Crystal Ball, they needed a strategy to make the fish follow a specific path, like a winding river. They used a system called Model Predictive Control (MPC).

Imagine you are playing chess. A grandmaster doesn't just think one move ahead; they think ten moves ahead. They ask, "If I move here, my opponent moves there, then I move there..." to see the best path to victory.

The researchers' computer did the same thing. It used the Crystal Ball to simulate the next 10 steps of the fish's journey. It asked, "If I send this command, where will I be? If I send that one, where will I be?" It kept adjusting its plan until it found the perfect sequence of magnetic pulls to guide the fish along the line. This is the "Grandmaster" strategy. It's very smart, but it's also very slow because it has to do all that thinking in real-time.
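The loop above can be sketched concretely. Everything here is a toy: the one-pulse model `f` is an invented stand-in for the learned network, the waypoints are a straight line, and a finite-difference gradient keeps the sketch dependency-free where the paper backpropagates through its neural model; only the shape of the algorithm (roll out a 10-step plan, score it, nudge the commands downhill, repeat) matches the text.

```python
import numpy as np

# Toy one-pulse forward model standing in for the learned network
# (the coefficient and the cosine are illustrative, not the paper's).
def f(pos, u):
    return pos + 0.05 * u * np.cos(pos)

def rollout_cost(u_seq, pos0, targets):
    """Simulate the whole command sequence with the model and score
    how far the predicted positions stray from the desired path."""
    pos, cost = pos0, 0.0
    for u, tgt in zip(u_seq, targets):
        pos = f(pos, u)
        cost += (pos - tgt) ** 2
    return cost

def mpc_plan(pos0, targets, iters=500, lr=5.0, eps=1e-5):
    """Gradient-based MPC sketch: start from a guess and repeatedly
    nudge the 10 commands in the direction that lowers the cost."""
    u = np.full(len(targets), 0.5)
    for _ in range(iters):
        base = rollout_cost(u, pos0, targets)
        grad = np.zeros_like(u)
        for i in range(len(u)):            # finite-difference gradient
            bumped = u.copy()
            bumped[i] += eps
            grad[i] = (rollout_cost(bumped, pos0, targets) - base) / eps
        u = np.clip(u - lr * grad, 0.0, 2.0)   # keep durations valid
    return u

# Ten waypoints along a straight "river" ahead of the fish.
targets = np.linspace(0.05, 0.5, 10)
plan = mpc_plan(0.0, targets)
```

Note how much work one decision costs: hundreds of refinement passes, each simulating the full horizon. That expense is exactly why the next section introduces the Apprentice.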

3. The "Apprentice" (Imitation Learning)

The "Grandmaster" strategy is too slow for a real-time robot. By the time the computer finishes its 10-step simulation, the fish has already moved, and the plan is outdated.

So, they created an Apprentice. They took the "Grandmaster" (the slow, perfect planner) and let it run thousands of simulations offline. Then, they trained a second, simpler AI (the Apprentice) to watch the Grandmaster and copy its moves.

Think of it like a student watching a master chef. The student doesn't need to understand the chemistry of the ingredients; they just need to learn that "when the sauce looks like this, add a pinch of salt." The Apprentice learned to look at the fish's position and immediately say, "Do this!" without doing the slow, heavy math. It's fast, it's light, and it mimics the perfect plan.
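The distillation step can be sketched as follows. Again everything is illustrative: a toy one-pulse model, a brute-force grid search playing the slow expert (where the paper uses its gradient-based MPC), and ordinary least squares on hand-picked features playing the lightweight apprentice network. The shape is what matters: run the expert offline, record its choices, fit a fast function that copies them.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy model of one magnetic pulse (illustrative, not the paper's).
def f(pos, u):
    return pos + 0.05 * u * np.cos(pos)

# "Grandmaster": a slow expert that searches exhaustively for the pull
# duration landing the fish on the next waypoint. A dense grid search
# stands in here for the paper's gradient-based MPC planner.
def expert(pos, tgt):
    grid = np.linspace(0.0, 2.0, 2001)
    return grid[np.argmin((f(pos, grid) - tgt) ** 2)]

# 1) Run the slow expert offline on thousands of situations and
#    record which command it chose in each one.
pos = rng.uniform(0.0, 1.0, 5000)
tgt = pos + rng.uniform(0.0, 0.05, 5000)        # nearby waypoints
actions = np.array([expert(p, t) for p, t in zip(pos, tgt)])

# 2) Train the "apprentice" to copy those choices. Least squares on
#    simple features stands in for the paper's second, lighter network.
d = tgt - pos
feats = np.column_stack([d, d * pos, d * pos ** 2, np.ones_like(d)])
w, *_ = np.linalg.lstsq(feats, actions, rcond=None)

def apprentice(pos, tgt):
    """Fast policy: one dot product instead of a whole search."""
    d = tgt - pos
    return float(np.array([d, d * pos, d * pos ** 2, 1.0]) @ w)
```

Calling `apprentice(pos, tgt)` returns almost the same command the expert would pick, but in constant time: the expensive search happened once, offline, and only its "reflexes" survive into the controller that runs on the robot.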

The Result

When they tested this in a computer simulation:

  • The Grandmaster (MPC) was able to guide the fish to the target line with incredible precision, missing by less than a centimeter.
  • The Apprentice (Imitation Learning) watched the Grandmaster and learned to do the exact same thing, but much faster, with almost the same level of precision.

Why This Matters

This is a big deal because it proves you don't need to be a genius physicist to control a complex, floppy robot in water. You just need to let the robot swim, record the data, and let AI learn the "feel" of the water. This opens the door for tiny, cable-free robots that can swim through coral reefs, inspect pipes, or monitor oceans, guided by a brain that learned to swim just like a real fish.