Transpiling quantum circuits by a transformers-based algorithm

This paper presents a transformer-based algorithm that efficiently transpiles quantum circuits from QASM to IonQ's native gate set with over 99.98% accuracy for circuits of up to five qubits, with complexity that scales polynomially, making it suitable for training on high-performance computing infrastructure.

Michele Banfi, Paolo Zentilini, Sebastiano Corli, Enrico Prati

Published 2026-03-06

Imagine you have a very complex recipe written in Italian (let's say, for a specific type of pasta). You want to cook this exact same dish, but your kitchen only has ingredients and tools that work with Japanese cooking methods. You can't just serve the Italian recipe to a Japanese chef; the ingredients won't match, and the tools won't work. You need a translator who understands both languages perfectly and can rewrite the recipe so the Japanese chef can cook the exact same dish using only Japanese tools.

This is exactly what the researchers in this paper have done, but instead of cooking, they are dealing with Quantum Computers.

The Problem: Quantum "Dialects"

Quantum computers are like different brands of smartphones.

  • IBM's quantum computers speak one "language" (a specific set of instructions called gates).
  • IonQ's quantum computers speak a different "language."

If you write a program (a quantum circuit) for IBM, it won't run on IonQ. It's like trying to plug a US charger into a UK socket. You need a Transpiler (a translator) to convert the code so it works on the new hardware without changing the actual result of the computation.

The Solution: The "AI Chef" (The Transformer)

Usually, this translation is done by rigid, rule-based software. But the authors asked: What if we used an AI that learns like a human?

They used a Transformer, the same type of AI architecture that powers tools like ChatGPT.

  • How it works: Just as a language model learns that "The cat sat on the..." is usually followed by "mat," this model learns that a specific sequence of IBM quantum instructions is usually followed by a specific sequence of IonQ instructions.
  • The Magic: Instead of hard-coding rules, the AI "reads" the IBM code and "predicts" the IonQ code, token by token, just like predicting the next word in a sentence.
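To make the token-by-token prediction concrete, here is a toy greedy-decoding loop. `EchoModel` is a stand-in for the trained transformer, and its `predict_next` method is a hypothetical interface invented for this sketch, not the paper's actual code:

```python
BOS, EOS = 1, 2  # special start-of-sequence / end-of-sequence tokens

class EchoModel:
    """Stand-in for a trained transformer. A real model would run
    attention over the source circuit and the partial output to score
    every token in the vocabulary; this toy just copies the source."""
    def predict_next(self, src, out):
        i = len(out) - 1                  # position we are about to fill
        return src[i] if i < len(src) else EOS

def greedy_decode(model, src, max_len=768):
    """Autoregressive decoding: emit the target one token at a time,
    feeding each prediction back in, until EOS or the window is full."""
    out = [BOS]
    for _ in range(max_len):
        tok = model.predict_next(src, out)
        out.append(tok)
        if tok == EOS:
            break
    return out
```

Run on a source sequence `[5, 7, 9]`, this returns `[BOS, 5, 7, 9, EOS]`; the real model replaces the copy rule with learned attention over IBM-to-IonQ gate patterns.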

The Training: Teaching the AI

To teach this AI, the researchers created a massive library of "sentence pairs."

  1. They generated thousands of random quantum circuits.
  2. They wrote them in IBM's language.
  3. They used existing tools to translate them into IonQ's language (the "correct" answer).
  4. They fed these pairs to the AI, letting it learn the patterns.
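The four steps above can be sketched as a data-generation loop. Everything here is a toy stand-in: the gate vocabulary is simplified, and the `reference_translate` mapping is a placeholder for the real rule-based transpiler, not the actual IonQ gate decompositions:

```python
import random

# Toy "IBM-style" gate vocabulary (illustrative, not the paper's full set).
IBM_GATES = ["h", "x", "rz", "cx"]

def random_circuit(n_qubits: int, depth: int, rng: random.Random) -> list:
    """Generate a random source circuit as a list of instruction strings."""
    ops = []
    for _ in range(depth):
        gate = rng.choice(IBM_GATES)
        if gate == "cx":
            q1, q2 = rng.sample(range(n_qubits), 2)
            ops.append(f"cx q{q1} q{q2}")
        elif gate == "rz":
            ops.append(f"rz({rng.uniform(0, 6.283):.3f}) q{rng.randrange(n_qubits)}")
        else:
            ops.append(f"{gate} q{rng.randrange(n_qubits)}")
    return ops

def reference_translate(ops: list) -> list:
    """Stand-in for the classical transpiler that supplies the 'correct
    answer'. The gate mappings below are placeholders, NOT real physics."""
    out = []
    for op in ops:
        if op.startswith("cx"):
            out.append(op.replace("cx", "ms"))          # placeholder mapping
        else:
            out.append("gpi2 " + op.split(" ", 1)[1])   # placeholder mapping
    return out

# Build the library of "sentence pairs" the model trains on.
rng = random.Random(0)
dataset = []
for _ in range(1000):
    src = random_circuit(n_qubits=3, depth=8, rng=rng)
    dataset.append((src, reference_translate(src)))
```

Each `(source, target)` pair plays the role of a translated sentence pair in ordinary machine translation.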

The "Token" Trick:
Quantum code often involves continuous numbers (like rotation angles). Transformers, however, operate on a discrete vocabulary of tokens, so a value like 3.14159... can't be fed in directly. The researchers turned these numbers into "buckets" or "tokens."

  • Analogy: Instead of saying "rotate by 3.14159 radians," the AI learns to say "Rotate by Bucket #64." It's like rounding a recipe to "a pinch of salt" rather than "0.004 grams," which makes the pattern far easier for the AI to learn.
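A minimal sketch of this angle-bucketing idea (the bucket count of 128 is an illustrative assumption, not the paper's actual vocabulary size):

```python
import math

def angle_to_token(angle: float, n_buckets: int = 128) -> int:
    """Map a rotation angle to a discrete bucket index in [0, n_buckets)."""
    angle = angle % (2 * math.pi)               # wrap into one period
    return int(angle / (2 * math.pi) * n_buckets) % n_buckets

def token_to_angle(token: int, n_buckets: int = 128) -> float:
    """Recover the bucket's centre angle (lossy by design: the rounding
    error is at most half a bucket width)."""
    return (token + 0.5) * 2 * math.pi / n_buckets
```

With 128 buckets, an angle of pi lands in bucket 64, echoing the analogy above; the price of discretisation is a bounded rounding error of at most pi/128 radians per angle.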

The Results: A Near-Perfect Translator

The results were impressive:

  • Accuracy: The AI successfully translated circuits with 99.98% accuracy. It got the "grammar" right almost every time.
  • Scale: It maintained this accuracy for circuits with up to 5 qubits (qubits being the quantum equivalent of bits).
  • Speed: The computational cost of the model grew in a manageable way (polynomially) as the circuits got bigger. This means larger models can be trained on supercomputers to handle even more complex circuits.

The Catch: The "Solovay-Kitaev" Bottleneck

The researchers also tried a harder test. They took the IBM code, broke it down into its absolute simplest, most basic building blocks (using a mathematical method called the Solovay-Kitaev algorithm), and then tried to translate it.

  • The Problem: Breaking the code down into these tiny blocks made the "sentences" incredibly long.
  • The Limit: The AI has a "memory window" (a context length of 768 tokens; it can only attend to that many tokens at once). When the translated sequences got too long, the AI couldn't hold the whole picture in its mind.
  • The Result: It worked great for small circuits (1 or 2 qubits) but failed for larger ones because the "sentence" was too long for the AI's short-term memory.
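A back-of-envelope check makes the bottleneck tangible. The expansion factor and tokens-per-gate figures below are illustrative assumptions, not numbers from the paper; only the 768-token window comes from the text:

```python
CONTEXT_WINDOW = 768   # the model's context length, from the paper
TOKENS_PER_GATE = 3    # assumed cost: gate name + angle + qubit label

def fits_in_window(n_gates: int, expansion_factor: int) -> bool:
    """Does a circuit still fit after Solovay-Kitaev expands each gate
    into `expansion_factor` basic gates?"""
    return n_gates * expansion_factor * TOKENS_PER_GATE <= CONTEXT_WINDOW

# A 20-gate circuit that expands 10x needs 600 tokens and still fits;
# at 30 gates the same expansion needs 900 tokens and overflows.
fits_in_window(20, 10)
fits_in_window(30, 10)
```

This is why small 1- and 2-qubit circuits translate fine while larger ones overflow: the decomposed "sentence" grows multiplicatively, but the window is fixed.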

Why This Matters

This paper is a major step forward because it proves that AI can act as a universal translator for quantum computers.

  • Before: Translating code was a rigid, manual, and error-prone engineering task.
  • Now: We have a flexible, learning system that can adapt to new hardware.

The Big Picture: As we build more different types of quantum computers (some using trapped ions, some using superconducting loops, some using light), we won't need to rewrite our software for every new machine. We can just train a "Transformer" to translate our code on the fly, making the quantum future much more accessible.

In short: They built an AI that speaks "IBM-Quantum" and "IonQ-Quantum" fluently, allowing us to write code once and run it anywhere, with almost zero errors.