Classifying Novel 3D-Printed Objects without Retraining: Towards Post-Production Automation in Additive Manufacturing

This paper introduces the ThingiPrint dataset and a contrastive fine-tuning approach that enables the classification of novel 3D-printed objects using their CAD models without requiring model retraining, thereby addressing a critical bottleneck in automating industrial post-production workflows.

Fanis Mathioulakis, Gorjan Radevski, Silke GC Cleuren, Michel Janssens, Brecht Das, Koen Schauwaert, Tinne Tuytelaars

Published 2026-03-10

Imagine you run a busy 3D printing factory. Every day, your machines churn out hundreds of unique objects: a custom gear, a fancy vase, a specific drone part. Once they are printed, they all get dumped into a big, messy bin.

The Problem: The "Lost in the Bin" Dilemma
When a worker needs to grab a specific part from that bin, they have to look at it, figure out what it is, and check it off a list. If the factory prints a new type of object tomorrow, the worker has to learn what it looks like all over again.

Currently, if you want a computer to do this job, you'd usually have to "teach" it by showing it thousands of photos of that specific new object. But in a fast-paced factory, you can't stop the line every day to retrain the computer. You need a system that can look at a brand-new object it has never seen before and say, "Ah, I know what that is!" without needing a lesson.

The Solution: The "Blueprint" Trick
The authors of this paper came up with a clever solution. They realized that for every 3D-printed object, there is already a perfect digital "blueprint" (a CAD model) sitting in the computer.

Instead of teaching the computer by showing it photos of the real object, they taught it to recognize the object by looking at its blueprint.

  • The Analogy: Imagine you have a friend who has never seen a real, physical chair. But you show them a perfect 3D drawing of the chair from every angle. Later, when they see a real chair in a store, they recognize it because they know exactly what the "ideal" chair looks like from the drawing.

The New Dataset: "ThingiPrint"
To prove this works, the researchers built a new "training gym" called ThingiPrint.

  • They picked 100 random 3D models (like a toy car, a keychain, a gear).
  • They 3D printed them in the real world.
  • They took photos of the real objects while spinning them around (just like a worker holding them).
  • They paired these real photos with the original digital blueprints.

This dataset is like a dictionary that links the "digital ideal" to the "messy reality" of a printed object.
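The pairing step can be sketched in a few lines of Python. Everything here is illustrative: the file-naming scheme (`<object_id>_<view>.png`) and directory layout are our assumptions, not ThingiPrint's actual structure. The point is simply that each object ID links a set of CAD renders to the photos of its printed counterpart.

```python
from pathlib import Path
from collections import defaultdict

def build_pairs(render_dir: str, photo_dir: str):
    """Group CAD renders and real photos by object ID.

    Assumes files are named '<object_id>_<view>.png' -- a
    hypothetical layout, not the dataset's actual one.
    """
    pairs = defaultdict(lambda: {"renders": [], "photos": []})
    for p in sorted(Path(render_dir).glob("*.png")):
        pairs[p.stem.split("_")[0]]["renders"].append(p.name)
    for p in sorted(Path(photo_dir).glob("*.png")):
        pairs[p.stem.split("_")[0]]["photos"].append(p.name)
    # Keep only objects that have both a blueprint and real photos.
    return {k: v for k, v in pairs.items() if v["renders"] and v["photos"]}
```

The resulting dictionary is exactly the "digital ideal ↔ messy reality" lookup described above: one entry per object, renders on one side, photos on the other.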

The Magic Ingredient: The "Rotation-Invariant" Brain
The biggest challenge is that a worker might hold a part upside down, sideways, or tilted. A standard computer vision model might get confused if it only knows what a gear looks like from the top.

The researchers taught their AI a special trick called Contrastive Fine-Tuning with Rotation Invariance.

  • The Analogy: Think of a child learning what a dog is. If you only show them a dog sitting, they might think a standing dog is a different animal. But if you show them the same dog running, sleeping, and jumping, they learn that no matter how the dog moves, it's still the same dog.
  • The researchers did this for the AI. They showed it the same 3D object from dozens of different angles and told the AI, "These are all the same thing." This made the AI robust. It stopped caring about how the object was held and started focusing on what the object actually was.

The Results: A Super-Worker
When they tested this system:

  1. Standard AI models (like the ones that recognize cats and dogs on your phone) struggled badly, getting only about 27–60% right.
  2. The new "Blueprint" AI got it right 76.5% of the time.
  3. Even if they printed the same object on a different machine (which changes the texture slightly), the AI still recognized it.

Why This Matters
This research is a game-changer for "Post-Production Automation." It means factories can finally automate the boring, messy job of sorting printed parts.

  • No Retraining: If you introduce a new product tomorrow, you don't need to take photos and retrain the AI. You just feed the computer the new digital blueprint, and the AI is ready to go.
  • Wearable Tech: The system is designed to work with "smart glasses." A worker can pick up a part, look at it, and the glasses will instantly whisper, "That's Part #405," saving hours of manual checking.
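The "no retraining" workflow boils down to nearest-neighbor retrieval in the shared embedding space. Here is a minimal sketch, assuming an embedding model already maps both CAD renders and photos into that space; the function and variable names are illustrative, not the paper's API.

```python
import numpy as np

def classify_photo(photo_emb, gallery_embs, gallery_labels):
    """Match a photo to the most similar CAD-render embedding.

    gallery_embs holds one embedding per blueprint view; adding a new
    part tomorrow just means appending its render embeddings here --
    no retraining step. An illustrative sketch, not the paper's code.
    """
    q = photo_emb / np.linalg.norm(photo_emb)
    g = gallery_embs / np.linalg.norm(gallery_embs, axis=1, keepdims=True)
    sims = g @ q                       # cosine similarity to every blueprint view
    return gallery_labels[int(np.argmax(sims))]
```

This is why introducing a new product is cheap: the classifier itself never changes, only the gallery of blueprint embeddings grows.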

In a Nutshell
The paper solves the problem of "How do we teach a robot to recognize a million different 3D printed objects without teaching it one by one?" The answer is: Don't teach it the objects; teach it the blueprints, and let the robot learn to ignore the weird angles and messy lighting.