Intracoronary Optical Coherence Tomography Image Processing and Vessel Classification Using Machine Learning

This paper presents a fully automated machine learning pipeline that integrates image preprocessing, guidewire artifact removal, and K-means clustering with Logistic Regression and SVM classifiers to achieve highly accurate (99.68%) vessel segmentation and classification in intracoronary Optical Coherence Tomography images with minimal manual annotation.

Amal Lahchim, Lambros Athanasiou

Published 2026-02-20

Imagine your heart's arteries are like tiny, winding garden hoses buried deep underground. Sometimes, these hoses get clogged with gunk (plaque) or damaged. To fix them, doctors need to look inside, but the hoses are so small that regular cameras can't see the details.

Enter OCT (Optical Coherence Tomography). Think of this as a super-powered, high-definition flashlight that can see inside the hose with incredible clarity. It takes thousands of tiny slices of the artery to build a 3D picture.

The Problem:
While the OCT camera is amazing, the pictures it takes are messy. They are often:

  • Noisy: Like a radio with static.
  • Distorted: The camera sits on a spinning wire, so the raw images come out swirled, like looking down into a tornado (polar coordinates), rather than as a straight line.
  • Obscured: A tiny metal wire (the guidewire) used to hold the camera often casts a shadow, blocking the view of the artery wall, just like a finger blocking a flashlight beam.

Reading these messy images by hand is like trying to find a specific grain of sand on a beach while wearing foggy glasses. It takes a long time and is prone to human error.

The Solution: A Digital "Smart Assistant"
The authors of this paper built a computer program (a pipeline) that acts like a smart assistant to clean up these images and automatically tell the doctor: "Here is the healthy wall of the artery, and here is the empty space inside."

Here is how their "Smart Assistant" works, step-by-step:

1. The Cleanup Crew (Preprocessing)

Before the computer can analyze the image, it has to clean it up.

  • Noise Reduction: Imagine the image is a photo full of dust specks. The program uses a "median filter" (think of it as a smart eraser) to wipe away the random noise while keeping the important edges sharp.
  • Shadow Removal: Remember the guidewire shadow? The program finds the darkest vertical strip (the shadow), cuts it out, and then uses a "seamless blend" technique to stitch the two sides of the artery back together. It's like using Photoshop to remove a person from a photo so the background looks natural.
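If you're curious what this cleanup step might look like in code, here is a tiny Python sketch of the first two ideas: median filtering for noise, and finding the dark shadow strip. The filter size and the "darkest columns" heuristic are our illustrative assumptions (and we skip the seamless blending), not the paper's exact recipe:

```python
import numpy as np
from scipy.ndimage import median_filter

def preprocess(image):
    """Denoise an OCT frame and locate the guidewire shadow.

    Illustrative sketch: a 3x3 median filter plus a simple
    "darkest column" heuristic for the shadow.
    """
    # Median filtering wipes out speckle noise but keeps edges sharp.
    denoised = median_filter(image, size=3)

    # The guidewire shadow shows up as a dark vertical strip:
    # pick the column with the lowest total brightness.
    shadow_col = int(np.argmin(denoised.sum(axis=0)))
    return denoised, shadow_col

# Toy frame: bright tissue with a dark vertical shadow at columns 5-8.
frame = np.full((32, 32), 200.0)
frame[:, 5:9] = 10.0  # simulated guidewire shadow
rng = np.random.default_rng(0)
frame += rng.normal(0, 5, frame.shape)  # sprinkle in some noise

clean, col = preprocess(frame)
print(col)  # lands somewhere inside the simulated shadow (columns 5-8)
```

In a real pipeline the strip would then be cut out and the two sides blended back together, as the paper describes.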

2. Unrolling the Carpet (Polar to Cartesian)

Because the camera spins, the raw image looks like a spiral or a donut slice. To make it easy to analyze, the program "unrolls" this spiral into a flat, rectangular image.

  • Analogy: Imagine a roll of wrapping paper with a pattern on it. It's hard to see the whole pattern while it's rolled up. The program unrolls it flat on a table so you can see the whole picture at once.
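The "unrolling" itself is just resampling: walk outward along every radius, at every angle, and lay the samples out as rows and columns. Here is a minimal NumPy/SciPy sketch (the grid sizes are illustrative, not the paper's settings):

```python
import numpy as np
from scipy.ndimage import map_coordinates

def unroll(cartesian, n_radii=64, n_angles=128):
    """Sample a ring-shaped (Cartesian) image along radii and angles,
    producing a flat rectangle: rows = depth, columns = angle."""
    h, w = cartesian.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    radii = np.linspace(0, min(cy, cx), n_radii)
    angles = np.linspace(0, 2 * np.pi, n_angles, endpoint=False)
    r, a = np.meshgrid(radii, angles, indexing="ij")
    rows = cy + r * np.sin(a)
    cols = cx + r * np.cos(a)
    # Bilinearly interpolate the image at each (radius, angle) sample.
    return map_coordinates(cartesian, [rows, cols], order=1)

# A bright ring (the "donut") becomes a bright horizontal band.
yy, xx = np.mgrid[0:65, 0:65]
dist = np.hypot(yy - 32, xx - 32)
ring = ((dist > 20) & (dist < 28)).astype(float)

flat = unroll(ring)
print(flat.shape)  # (64, 128): 64 depths, 128 angles
```

After unrolling, the circular artery wall appears as a roughly horizontal stripe, which is much easier for the later steps to analyze.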

3. The "Sorter" (K-Means Clustering)

Now that the image is flat and clean, the program needs to guess what is "artery wall" and what is "background." It uses a technique called K-Means Clustering.

  • Analogy: Imagine you have a bag of mixed red and blue marbles. You don't know which is which, but you can sort them by color. The computer looks at the brightness of every pixel and says, "These bright pixels look like the wall, and these dark pixels look like the background." It groups them into two piles automatically, without needing a human to label them first.
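The marble-sorting idea takes only a few lines with scikit-learn. This toy sketch clusters pixels of a made-up "unrolled" frame by brightness alone (the synthetic data and two-cluster setup are ours, for illustration):

```python
import numpy as np
from sklearn.cluster import KMeans

# Toy unrolled frame: a bright tissue band over a dark background.
rng = np.random.default_rng(1)
frame = rng.normal(30, 5, (40, 60))             # dark background
frame[10:25, :] = rng.normal(180, 5, (15, 60))  # bright vessel wall

# K-Means sorts every pixel into two "piles" by brightness,
# with no human labels needed.
pixels = frame.reshape(-1, 1)
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(pixels)
labels = kmeans.labels_.reshape(frame.shape)

# Make "True" always mean the brighter pile (the tissue).
bright = int(np.argmax(kmeans.cluster_centers_.ravel()))
tissue_mask = labels == bright
print(tissue_mask[15, 0], tissue_mask[0, 0])  # True False
```

The resulting mask is a rough first guess at "wall vs. background" that the later, smarter steps refine.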

4. The Detective Work (Feature Extraction)

Now the program wants to be sure. It zooms in on every single pixel and asks, "What does your neighborhood look like?" It looks at a small square patch (11x11 pixels) around each dot and calculates 7 clues about things like:

  • Brightness: Is it light or dark?
  • Texture: Is it smooth or bumpy?
  • Edges: Are there sharp lines nearby?
  • Analogy: It's like a detective looking at a suspect's neighborhood. If the area is smooth and uniform, it's probably the background. If it's bumpy and has sharp edges, it's probably the artery wall.
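Here is what "7 clues from an 11x11 neighborhood" could look like in code. The exact feature set below (means, extremes, gradient-based edge scores) is our illustrative guess, not the paper's published definition:

```python
import numpy as np

def patch_features(image, row, col, half=5):
    """Compute 7 simple clues from the 11x11 patch around a pixel.
    Illustrative feature set, not the paper's exact one."""
    patch = image[row - half:row + half + 1,
                  col - half:col + half + 1].astype(float)
    gy, gx = np.gradient(patch)          # brightness changes
    edge_strength = np.hypot(gx, gy)     # how "edgy" each spot is
    return np.array([
        patch.mean(),          # brightness: average intensity
        patch.std(),           # texture: how bumpy the patch is
        patch.min(),           # darkest pixel nearby
        patch.max(),           # brightest pixel nearby
        np.median(patch),      # robust "typical" brightness
        edge_strength.mean(),  # average edge strength
        edge_strength.max(),   # strongest single edge
    ])

# A flat region vs. a patch straddling a sharp boundary.
image = np.zeros((30, 30))
image[:, 15:] = 100.0  # sharp vertical edge at column 15

flat_feats = patch_features(image, 10, 5)   # smooth, uniform area
edge_feats = patch_features(image, 10, 15)  # straddles the edge
print(edge_feats[5] > flat_feats[5])  # True: the edge patch is "edgier"
```

Each pixel ends up described by a short list of numbers, which is exactly what the classifiers in the next step need.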

5. The Final Verdict (Machine Learning)

Finally, the program uses two "brainy" models (Logistic Regression and SVM) to make the final decision based on those 7 clues.

  • The Result: These models are incredibly accurate. They got it right 99.68% of the time.
  • The Magic: In the test, the computer drew the outline of the artery wall almost perfectly, closely matching what a human expert would draw, but doing it in a fraction of a second.
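Training and comparing the two models is routine with scikit-learn. The sketch below uses synthetic 7-feature vectors as a stand-in for the real per-pixel features (the data, split, and SVM kernel choice are our assumptions):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

# Synthetic stand-in for the per-pixel feature table:
# 7 clues per pixel, label 1 = vessel wall, 0 = background.
rng = np.random.default_rng(2)
wall = rng.normal(1.0, 0.3, (500, 7))
background = rng.normal(-1.0, 0.3, (500, 7))
X = np.vstack([wall, background])
y = np.array([1] * 500 + [0] * 500)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0, stratify=y)

# The two "brainy" models from the paper, each giving a verdict
# for every pixel based on its 7 clues.
logreg = LogisticRegression(max_iter=1000).fit(X_train, y_train)
svm = SVC(kernel="rbf").fit(X_train, y_train)

print(logreg.score(X_test, y_test), svm.score(X_test, y_test))
```

On cleanly separated toy data like this, both models score near 100%; the paper's 99.68% was achieved on real OCT pixels, which are far messier.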

Why Does This Matter?

Currently, doctors have to stare at these messy, swirling images for hours to find the blockages. This new method is like giving them a GPS for their eyes. It automatically cleans the map, unrolls the road, and highlights the destination.

In short: This paper teaches a computer how to clean up a messy, spinning photo of a heart artery, unroll it, and automatically draw a perfect line around the healthy tissue, saving doctors time and helping them treat patients faster and more accurately.
