torch-projectors: A High-Performance Differentiable Projection Library for PyTorch

This paper introduces torch-projectors, a high-performance, differentiable library for Fourier-space projections in PyTorch. By outperforming existing solutions by 1–2 orders of magnitude across CPU, Apple Silicon, and CUDA devices, it significantly accelerates electron microscopy workflows.

Original authors: Tegunov, D.

Published 2026-03-10

This is an AI-generated explanation of a preprint that has not been peer-reviewed. It is not medical advice. Do not make health decisions based on this content.

Imagine you are trying to solve a giant, 3D jigsaw puzzle, but you can only see the puzzle pieces as flat, 2D shadows cast on a wall from different angles. This is essentially what scientists do in Cryo-Electron Microscopy (Cryo-EM) to figure out the shape of tiny proteins. They take thousands of 2D "shadows" (projections) and use math to reconstruct the original 3D object.

For a long time, doing this reconstruction with modern AI (Machine Learning) was like trying to run a Formula 1 race on a bicycle. The math required to turn those 2D shadows back into 3D shapes was too slow and clunky for computers to handle efficiently while learning.

Enter torch-projectors. Think of this paper as the unveiling of a brand-new, high-speed engine for these computers.

Here is the breakdown of what this paper is about, using some everyday analogies:

1. The Problem: The "Slow Shadow"

In the world of electron microscopy, there is a rule called the Fourier Slice Theorem. In simple terms, it says: the frequency map of the 2D "shadow" you see from a given angle is a flat slice through the object's invisible 3D frequency map.

To train AI models to solve these puzzles, the computer needs to do this "shadow casting" and "shadow reading" millions of times, adjusting its guesses based on errors.

  • The Old Way: Stock PyTorch operations were like using a spoon to dig a tunnel. They worked, but they were painfully slow and ate up so much of the computer's memory (RAM) that jobs would crash.
  • The Result: Scientists couldn't use powerful AI because the math was too heavy.
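The Fourier Slice Theorem above can be checked in a few lines of stock PyTorch. This is a toy real-space sketch, not the torch-projectors API: it "casts a shadow" of a random volume by summing along one axis, then confirms that the shadow's 2D Fourier transform matches the central slice of the volume's 3D Fourier transform.

```python
import torch

# Toy numerical check of the Fourier Slice Theorem (illustrative only,
# not the torch-projectors API).
torch.manual_seed(0)
volume = torch.rand(16, 16, 16)

# "Shadow casting": project the volume along the z axis in real space.
projection = volume.sum(dim=0)

# Central slice: the k_z = 0 plane of the volume's 3D Fourier transform.
slice_2d = torch.fft.fftn(volume)[0]

# The shadow's 2D Fourier transform equals that central slice
# (up to floating-point error).
print(torch.allclose(torch.fft.fft2(projection), slice_2d, atol=1e-4))  # True
```

This equivalence is exactly what lets the library do projection as cheap slice extraction in Fourier space instead of expensive ray summation in real space.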

2. The Solution: The "Magic Lens" (torch-projectors)

The author, Dimitry Tegunov, built a new library called torch-projectors. Think of this as replacing that spoon with a laser drill.

  • Speed: It is 10 to 100 times faster than the previous best tools. It's so fast that on powerful graphics cards (GPUs), it can process thousands of these "shadows" in a single second.
  • Memory: It's incredibly efficient. Instead of saving every intermediate step (which fills up the computer's desk with clutter), it does the calculation in one smooth motion, keeping the workspace clean.
  • Smart Math: It uses a technique called Cubic Interpolation. Imagine trying to guess the color of a pixel between two known pixels.
    • Linear (Old way): Just drawing a straight line between them. It's okay, but can look blocky.
    • Cubic (New way): Drawing a smooth, curved line that follows the data far more closely. This gives a much sharper, more accurate picture without needing to zoom in (oversample) and waste memory.
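The gap between the two can be seen on a smooth 1D signal. The snippet below is illustrative only, using a hand-rolled Catmull-Rom cubic rather than the library's actual kernels: both methods estimate the signal halfway between two samples, and the cubic estimate lands much closer to the truth.

```python
import torch

# Linear vs cubic (Catmull-Rom) interpolation on a smooth signal;
# a toy comparison, not the torch-projectors interpolation kernels.
xs = torch.arange(8, dtype=torch.float64)
ys = torch.sin(xs)           # samples of a smooth function
# Query point x = 3.5, halfway between samples 3 and 4.

# Linear: straight line between the two nearest samples.
linear = 0.5 * (ys[3] + ys[4])

# Cubic Catmull-Rom: smooth curve through the four nearest samples.
t = 0.5
p0, p1, p2, p3 = ys[2], ys[3], ys[4], ys[5]
cubic = 0.5 * ((2 * p1) + (-p0 + p2) * t
               + (2 * p0 - 5 * p1 + 4 * p2 - p3) * t ** 2
               + (-p0 + 3 * p1 - 3 * p2 + p3) * t ** 3)

truth = torch.sin(torch.tensor(3.5, dtype=torch.float64))
print(abs(cubic - truth) < abs(linear - truth))  # cubic is closer
```

For sin(x) at x = 3.5 the linear error is roughly 0.04 while the cubic error is under 0.01, which is why cubic interpolation can match the accuracy of oversampled linear schemes without the extra memory.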

3. How It Works: The "Shadow Puppet" Analogy

The library handles two main tasks, which the paper calls "Forward" and "Backward" projections.

  • Forward Projection (The Shadow Puppet): You have a 3D puppet (the protein model). You shine a light from a specific angle, and the library instantly calculates what the 2D shadow on the wall looks like.
  • Backward Projection (The Reverse Shadow): You have a 2D shadow on the wall. The library figures out how to "scatter" that shadow back into 3D space to update the puppet's shape.

The magic of torch-projectors is that it can do both of these quickly, and if the AI makes a mistake, it can calculate exactly how to fix the puppet's shape (this is called "differentiation" or "backpropagation").
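As a sketch of what differentiability buys you, here is a toy real-space stand-in (summing along an axis) for the library's Fourier-space forward projector; this is not the torch-projectors API, but it shows how PyTorch's autograd carries the error on a 2D "shadow" back to every voxel of the 3D "puppet".

```python
import torch

# Toy differentiable forward projection: a stand-in for the library's
# Fourier-space operators, using a plain axis-sum as the projector.
volume = torch.zeros(8, 8, 8, requires_grad=True)  # the 3D "puppet"
target = torch.ones(8, 8)          # hypothetical observed "shadow"

projection = volume.sum(dim=0)     # forward projection (shadow casting)
loss = ((projection - target) ** 2).mean()
loss.backward()                    # backpropagation through the projector

# The gradient tells us how to nudge every voxel to fix the shadow.
print(volume.grad.shape)           # torch.Size([8, 8, 8])
```

Because the projector is just another differentiable operation, it can sit in the middle of any neural network and be trained end-to-end; torch-projectors makes this step fast instead of being the bottleneck.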

4. The "Friedel Symmetry" Trick

One of the tricky parts of this math is that because the original images are real-valued, their frequency maps are "symmetric" (like a mirror image). If you count both sides of the mirror, you double-count the information.

  • The Paper's Fix: The library has a built-in "traffic cop" that knows exactly how to handle these mirror images so the computer doesn't get confused or double-count data. This keeps the math, and the gradients, correct even when things get complicated.
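Friedel symmetry itself is easy to verify numerically with plain torch.fft (again, not the torch-projectors API): the Fourier transform of real-valued data equals its own frequency-mirrored, complex-conjugated copy, which is why half the coefficients carry all the information.

```python
import torch

# Numerical check of Friedel (Hermitian) symmetry for real-valued data.
torch.manual_seed(0)
image = torch.rand(6, 6)           # real-valued data
F = torch.fft.fft2(image)

# Build F(-k): reverse both frequency axes, keeping index 0 (DC) fixed.
mirrored = torch.roll(torch.flip(F, dims=(0, 1)), shifts=(1, 1), dims=(0, 1))

# F(-k) == conj(F(k)): the mirror half is redundant.
print(torch.allclose(mirrored, F.conj(), atol=1e-5))  # True
```

Storing only the non-redundant half saves memory, but every operation must then weight the boundary coefficients correctly; that bookkeeping is the "traffic cop" described above.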

5. Why Should You Care?

Before this library, training AI to understand protein structures was like trying to drive a car with the parking brake on. It was possible, but you could only go very slowly.

torch-projectors takes the parking brake off.

  • For Scientists: It means they can now use powerful AI to solve protein structures faster and more accurately than ever before.
  • For Medicine: Faster protein understanding leads to faster drug discovery. If we can understand how a virus works faster, we can build vaccines faster.
  • For AI: It proves that we can build highly specialized, super-fast math tools that fit perfectly into the AI ecosystem, opening the door for more complex scientific simulations.

The Bottom Line

This paper introduces a super-charged tool that makes the heavy math of 3D protein reconstruction fast, light, and accurate. It turns a slow, memory-hungry process into a sleek, high-speed operation, allowing scientists to use the full power of modern AI to unlock the secrets of life at the atomic level.

Where to find it: It's free and open-source, ready for anyone to download and use on their computers (whether they have a standard PC, a Mac, or a powerful server).
