Bioresorbable Scaffold Visualization in IVOCT Images Using CNNs and Weakly Supervised Localization

10/22/2018, by Nils Gessert, et al.

Bioresorbable scaffolds have become a popular choice for treatment of coronary heart disease, replacing traditional metal stents. Often, intravascular optical coherence tomography is used to assess potential malapposition after implantation and for follow-up examinations later on. Typically, the scaffold is manually reviewed by an expert, analyzing each of the hundreds of image slices. As this is time-consuming, automatic stent detection and visualization approaches have been proposed, mostly for metal stent detection based on classic image processing. As bioresorbable scaffolds are harder to detect, recent approaches have used feature extraction and machine learning methods for automatic detection. However, these methods require detailed, pixel-level labels in each image slice and extensive feature engineering for the particular stent type, which might limit the approaches' generalization capabilities. Therefore, we propose a deep learning-based method for bioresorbable scaffold visualization using only image-level labels. A convolutional neural network is trained to predict whether an image slice contains a metal stent, a bioresorbable scaffold, or no device. Then, we derive local stent strut information by employing weakly supervised localization using saliency maps with guided backpropagation. As saliency maps are generally diffuse and noisy, we propose a novel patch-based method with image shifting which allows for high resolution stent visualization. Our convolutional neural network model achieves a classification accuracy of 99.0 %, and the derived saliency maps can be used for both high quality in-slice stent visualization and 3D rendering of the stent structure.


1 Introduction

Coronary heart disease is one of the most frequent causes of death despite being treatable. To treat the obstructive plaques, stenting is commonly used, with mostly metallic stents being employed in the past. As metal stents come with the risk of late stent thrombosis and in-stent restenosis [1], bioresorbable scaffolds such as bioresorbable vascular scaffolds (BVS) have gained popularity recently. After implantation and in later follow-up examinations, the stents have to be assessed by a medical expert in order to detect malapposition or assess endothelialisation. Typically, intravascular optical coherence tomography (IVOCT) is used as an imaging modality for stent analysis [2], as it provides high resolution images of the lumen and vessel walls. As a single IVOCT pullback contains hundreds of image slices to be assessed, manual evaluation is labor-intensive and time-consuming. Therefore, automatic stent detection and visualization methods have been proposed, mostly for metallic stents [3, 4, 5]. These methods largely rely on classic image processing to detect the high-intensity metal stent struts. For bioresorbable scaffolds, a classic approach has also been proposed [6]. However, since these struts are less pronounced in IVOCT images and different types of scaffolds show different characteristics, recent approaches have used machine learning methods combined with handcrafted feature extraction for detection and visualization [7]. While showing promising results, these methods require pixel-level image annotations to learn local detection of stent struts within the slices. This is, once again, time-consuming and limits the potential dataset size and variability. Moreover, features need to be engineered for a specific stent type and might not be suitable for future stent variants, a limitation that already became evident during the transition from metal stents to bioresorbable scaffolds [6, 7].

For this reason, we propose a novel deep learning-based method for stent visualization and potential detection using only image-level label annotations. A convolutional neural network (CNN) is trained to classify an IVOCT slice into the categories "metal stent", "bioresorbable scaffold", and "no device". This way of image-level labeling has been successful for IVOCT-based deep learning [8], as it is fast and thus allows for larger datasets. Moreover, it is easily extensible to new stent types, as a new class simply needs to be added to the learning problem and no new feature engineering is required. After training the model, we employ the concept of weakly supervised localization [9] to derive local stent strut information from the model. In particular, we compute saliency maps with guided backpropagation [10], which can be interpreted as gradient images showing the regions that were most important for the model's prediction. In our case, the trained network should have learned to focus on stent struts. However, saliency maps are generally diffuse, and high quality localization from global information only is very challenging [9]. For this reason, methods such as SmoothGrad [11] have been proposed that target improved saliency map quality. As we found this method to be insufficient for the problem at hand, we propose a new patch-based approach with image shifting which leads to high resolution, high quality saliency maps that can serve as a visualization. The approach is used for regularization during model training and also for the generation of stitched, averaged, smooth saliency maps.
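The key mechanic behind guided backpropagation is a modified ReLU backward pass: in addition to the standard rule of passing gradients only where the forward input was positive, negative gradients are zeroed out as well, so only positive evidence for the class score flows back to the input. A minimal sketch of the two backward rules (illustrative only, not the authors' implementation):

```python
import numpy as np

def relu_backward_standard(grad_out, x):
    """Standard ReLU backward pass: gradients pass only where the
    forward input x was positive."""
    return grad_out * (x > 0)

def relu_backward_guided(grad_out, x):
    """Guided backpropagation: additionally zero out negative incoming
    gradients, keeping only positive contributions to the class score."""
    return grad_out * (x > 0) * (grad_out > 0)

x = np.array([-1.0, 2.0, 3.0])   # forward-pass inputs to the ReLU
g = np.array([0.5, -0.5, 1.0])   # gradients arriving from above
std = relu_backward_standard(g, x)   # negative gradient survives
gui = relu_backward_guided(g, x)     # negative gradient is suppressed
```

In a framework, the same effect is achieved by overriding the ReLU gradient; the saliency map is then the gradient of the predicted class score with respect to the input image.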

In this paper, we introduce our method for BVS visualization. We show that it is effective both when visualizing stents for assessment directly after implantation, with struts at the tissue surface, and for follow-up review of, e.g., endothelialisation, where struts are starting to decay within the vessel tissue. Moreover, we show visualization of classic metal stents and the very recent Fantom Encore bioresorbable scaffold.

2 Methods and Materials

2.1 Dataset

The dataset we use consists of clinical 2D IVOCT images in polar representation acquired with a St. Jude Medical Ilumien OPTIS system. The set contains images with a metal stent, images with a BVS, and images without any device, extracted from clinical pullbacks. Each image is assigned the label "metal stent", "bioresorbable scaffold", or "no device". Note that we include metal stents for the sake of completeness; the main focus is on the more challenging bioresorbable scaffolds, which have gained a lot more relevance in today's clinical routine. We split the dataset into a training set and a test set, separated by pullbacks, i.e., all images from a pullback are either entirely in the training set or entirely in the test set. In addition to this dataset, we consider several 2D image slices of the recent Fantom Encore bioresorbable scaffold for evaluation. We do not train on any images containing this stent and use these slices to show the generalization of our method. We use the IVOCT images in their polar representation, i.e., the image axes are the depth r and the angle of acquisition θ. For visualization, we transform the images to cartesian space using x = r cos(θ) and y = r sin(θ).
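The polar-to-cartesian transform used for visualization can be sketched as an inverse lookup: for every cartesian output pixel, compute its radius and angle relative to the image center and sample the polar image there. A minimal nearest-neighbour version (function name and output size are illustrative, not from the paper):

```python
import numpy as np

def polar_to_cartesian(img_polar, out_size=256):
    """Resample a polar IVOCT image (rows = depth r, cols = angle theta)
    into a cartesian image via x = r*cos(theta), y = r*sin(theta),
    implemented as an inverse nearest-neighbour lookup."""
    n_r, n_theta = img_polar.shape
    c = out_size // 2  # image center
    ys, xs = np.mgrid[0:out_size, 0:out_size]
    dx, dy = xs - c, ys - c
    # radius and angle of each cartesian pixel, scaled to polar indices
    r = np.sqrt(dx ** 2 + dy ** 2) / c * (n_r - 1)
    theta = (np.arctan2(dy, dx) % (2 * np.pi)) / (2 * np.pi) * (n_theta - 1)
    r_idx = np.clip(np.round(r).astype(int), 0, n_r - 1)
    t_idx = np.clip(np.round(theta).astype(int), 0, n_theta - 1)
    out = img_polar[r_idx, t_idx]
    out[r > n_r - 1] = 0  # pixels beyond the acquired depth range
    return out
```

In practice, bilinear interpolation (or a library routine such as OpenCV's `warpPolar`) gives smoother results; nearest-neighbour keeps the sketch short.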

2.2 Model Training

We use a 2D CNN which takes the polar 2D IVOCT images as its input and outputs one of the three classes. The model is a ResNet50 that was pretrained on the ImageNet dataset [12]. We employ a preprocessing scheme that matches the evaluation strategy later used for saliency map generation. First, we cut off the deepest pixels along the depth dimension of the original polar images, as they mostly contain noise. Then, instead of downsampling the images, as is typically done, we use small crops out of the full-size image. In this way, we maintain OCT's original high resolution. This method also induces a regularizing effect, as each crop contains only a few stent struts and never the entire structure. This ensures that the network does not fit to only a single strut, which might otherwise be sufficient to identify the stent type. For further data augmentation, we randomly shift the polar image along the angle dimension, which ensures that information at the image borders is not neglected by the model.
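The described augmentation, a circular shift along the angle axis (a rotation in cartesian space) followed by a random high-resolution crop, can be sketched as follows. Function name, axis convention (columns = angle), and parameters are illustrative assumptions, not the authors' code:

```python
import numpy as np

def augment_polar(img, patch_h, patch_w, rng):
    """Randomly shift a polar image along the angle axis (equivalent to a
    rotation in cartesian space), then sample a random crop at full
    resolution. Assumes rows = depth, columns = angle."""
    shift = int(rng.integers(0, img.shape[1]))
    img = np.roll(img, shift, axis=1)  # circular shift: no border is lost
    top = int(rng.integers(0, img.shape[0] - patch_h + 1))
    left = int(rng.integers(0, img.shape[1] - patch_w + 1))
    return img[top:top + patch_h, left:left + patch_w]

rng = np.random.default_rng(42)
image = np.random.rand(496, 360)       # hypothetical polar IVOCT slice
patch = augment_polar(image, 128, 128, rng)
```

Because the shift is circular (`np.roll`), struts near the angular image border reappear in the interior of some augmented samples, which is exactly the border effect the paper addresses.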

As the dataset exhibits class imbalance, we weight the cross-entropy loss with the normalized inverse class frequency of each class, i.e., the loss for metal stent examples is weighted higher than the loss for normal images. We train the model using the Adam optimizer [13]. We implement preprocessing, model training, and saliency map generation using Tensorflow [14].
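Weighting the cross-entropy loss by normalized inverse class frequency means rare classes contribute more to the loss per example. A small numpy sketch of one plausible reading of this scheme (the exact normalization the authors use is not stated):

```python
import numpy as np

def class_weights(labels, n_classes):
    """Inverse class frequencies, normalized so the weights average to 1."""
    counts = np.bincount(labels, minlength=n_classes).astype(float)
    w = 1.0 / counts
    return w / w.sum() * n_classes

def weighted_cross_entropy(probs, labels, weights):
    """Mean cross-entropy where each example's loss is scaled by the
    weight of its true class. probs: (N, C) softmax outputs."""
    eps = 1e-12  # numerical safety for log(0)
    ce = -np.log(probs[np.arange(len(labels)), labels] + eps)
    return float(np.mean(weights[labels] * ce))
```

With labels `[0, 0, 0, 1]` and two classes, the weights come out as `[0.5, 1.5]`: the minority class is weighted three times higher, matching its 3:1 underrepresentation. In TensorFlow the same effect is typically obtained by passing per-example weights to the loss function.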

2.3 Saliency Map Construction

Figure 1: The proposed evaluation strategy for saliency map generation. For training, patches are sampled similarly and shifting is used as a data augmentation technique. Note that shifting in polar coordinate space corresponds to rotation in cartesian space. For details, see Section 2.3.

After training, we use the model to construct saliency maps for stent visualization. The generation process is depicted in Figure 1. First, overlapping patches are cropped from the original image. Then, predictions are obtained from the normal network for each patch. These predictions are averaged into a global image prediction with respect to the stent type. In addition, the patches are used for saliency map generation using guided backpropagation [10], which requires a modification of the ReLU backward pass. Then, the saliency map patches are assembled into an entire image once again. During this process, only patches whose prediction matches the global, averaged prediction for the current image are considered. This avoids noise being added to the image in unwanted locations. Moreover, we cut off the borders of the saliency patches, as we observed noise and artifacts in those regions. Afterwards, the process is repeated several times with versions of the original polar image shifted along the angle dimension. This shift corresponds to a rotation of the image in cartesian space. The saliency maps are then averaged into a final saliency map. This process smooths the maps and makes sure that stent struts at the image borders are captured as well. As convolutions are translation-equivariant, we found a small number of shifts to be sufficient to take care of border effects. For comparison, we also consider a naive approach where the model is trained on whole, downsampled images and the saliency maps are generated by a single pass over the entire image. For visualization, we use the negative saliency map unless indicated otherwise, i.e., we only plot values from the saliency map that are negative, as we found this variant to highlight the stent struts effectively.
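The shift-and-average step can be expressed independently of the underlying saliency method: compute a saliency map on each angle-shifted copy of the polar image, undo the shift, and average. A minimal sketch, where `saliency_fn` stands in for the patch-wise guided backpropagation pipeline (the name and the number of shifts are illustrative):

```python
import numpy as np

def shift_averaged_saliency(img, saliency_fn, n_shifts=4):
    """Average saliency maps computed on angle-shifted copies of a polar
    image. saliency_fn maps an image to a same-sized saliency map (e.g.
    the patch-wise guided backpropagation described above). Shifting
    smooths the result and captures struts at the angular borders."""
    h, w = img.shape
    acc = np.zeros_like(img, dtype=float)
    for k in range(n_shifts):
        s = k * w // n_shifts
        shifted = np.roll(img, s, axis=1)   # rotate in cartesian space
        sal = saliency_fn(shifted)
        acc += np.roll(sal, -s, axis=1)     # undo the shift before averaging
    return acc / n_shifts
```

With a perfectly shift-equivariant `saliency_fn`, all shifted maps agree after unshifting; in practice the patch grid breaks this symmetry at patch borders, and averaging over shifts is what smooths those seams away.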

3 Results

              Full Images    Patch-Based (smaller patches)    Patch-Based (larger patches)
Accuracy
AUC
F1-Score

Table 1: Stent classification results for the three approaches. Accuracy and area-under-curve (AUC) represent the mean value over all classes.

First, we report classification results for several model variations in Table 1. We consider two variants of our approach with different patch sizes. Moreover, we provide results for the whole-image approach, where the entire image is downsampled instead of using high resolution patches. There is only a small difference between the models, with the patch-based approaches performing slightly better. Overall, the stent classification accuracy is very high.

Figure 2: Example for our saliency map-based visualization for decaying stent struts. Top, the visualization within the slice is shown. Left, the original IVOCT image in polar representation is shown. In the center, the generated saliency map is shown. Right, the overlaid image is shown. The saliency map is shown in red. The original IVOCT image is shown in cyan. Bottom, a 3D rendering of our visualization technique is shown. Left, the normal rendered pullback is shown. Right, the stacked, rendered saliency maps are shown.

Next, Figure 2 shows the visualization of an IVOCT image, its saliency map and the overlaid image for the case of struts being covered by tissue in a follow-up examination. The saliency map clearly delineates both the lumen and the stent struts. Furthermore, the figure shows a 3D visualization of a part of a pullback. Here, the IVOCT images were transformed to cartesian space and stacked next to each other. Our visualization clearly shows the stent's grid structure across slices.

Figure 3: Example for our saliency map-based visualization for stent struts at the surface. Top, the visualization within the slice is shown. Left, the original IVOCT image in polar representation is shown. In the center, the generated saliency map is shown. Right, the overlaid image is shown. The saliency map is shown in red. The original IVOCT image is shown in cyan. Bottom, a 3D rendering of our visualization technique is shown. Left, the normal rendered pullback is shown. Right, the stacked, rendered saliency maps are shown.

In addition, Figure 3 shows the same visualization for the case of stent struts at the tissue surface, which is commonly the case directly after implantation. Again, the saliency maps provide a meaningful visualization both within the slice and also in a 3D rendering.

Figure 4: Comparison of saliency maps between the full image-based approach and the cropped strategy. Left, the full image-based approach is shown. In the center, the patch-based approach with the smaller patch size is shown. Right, the patch-based approach with the larger patch size is shown. The images were transformed to cartesian space for visualization. The saliency map is colored in red. The original IVOCT image is colored in cyan.

Furthermore, we present several visualizations from variations of our approach. Figure 4 shows the resulting saliency maps for the three different training scenarios. Clearly, our approach substantially improves saliency map quality with clear outlines of the stent struts and also the lumen border.

Figure 5: Comparison of saliency maps between no stent (left), a metal stent (center) and the Fantom Encore bioresorbable scaffold (right). Note that for the metal stent visualization we used maximum saliency maps instead of minimum. The images were transformed to cartesian space for visualization. The saliency map is colored in red. The original IVOCT image is colored in cyan.

Last, we show a comparison between visualizations without any stent, with a metal stent, and with the new Fantom Encore bioresorbable scaffold in Figure 5. The figure shows that our method also works well for metal stents and even generalizes to a new stent model whose struts are significantly thinner than those of the BVS model in the training set. Also, it is notable that our method achieves a smooth lumen segmentation when no stent is present in the image.

4 Discussion and Conclusion

In this paper, we present a novel deep learning-based method for bioresorbable scaffold visualization. The method does not require any pixel-level annotations; local image information is derived from a global image label only. In particular, we train a convolutional neural network that predicts the type of stent that is generally visible in the IVOCT image. Then, we follow the concept of weakly supervised localization [15] and derive fine-grained image information from saliency maps with guided backpropagation [10]. As saliency maps are generally diffuse, we propose a patch-based method with image shifting for smoothed saliency maps. The concept is also useful for training, where it acts as a regularization technique that slightly improves stent classification accuracy, see Table 1. In general, the classification performance is very high, which indicates that the learning problem is well defined and sufficiently solved by our model. The resulting visualizations in Figure 2 and Figure 3 show that our method is able to capture bioresorbable stent struts both for detection at the surface and in later stages of decay. The former is clinically relevant as malapposition needs to be detected immediately after implantation, when struts are at the surface. The latter is important for follow-up examinations where, e.g., endothelialisation needs to be confirmed. Also, it is notable that the 3D visualization of our saliency maps accurately reproduces the expected stent grid structure, demonstrating that the visualization is consistent across slices. In addition, we showed that our new method significantly improves saliency map quality compared to a standard saliency map approach with full images as the network input. Last, we found that our method also works well for metal stents and, notably, for the new Fantom Encore bioresorbable scaffold. The visualization also works for the latter despite the stent struts being very small.
Also, we did not include any stent images of this type during training, which shows that our method is able to generalize to new, similar stent types. Even if an entirely new stent type were added to our method, adjustment would be very easy, as only an additional class needs to be added to the learning problem. No manual feature engineering is required. For future work, our successful visualization method can be extended to quantitative analysis, e.g., for automatic stent malapposition detection. This extension should be feasible as the model also appears to have learned a reasonable lumen segmentation, see Figure 5. Adding an explicit differentiation between stents and the lumen border could allow for measurement of the distance between struts and the lumen. Furthermore, our method could be extended with a larger dataset and more stent types.

References

  • [1] Cutlip, D. E., Baim, D. S., Ho, K. K., Popma, J. J., Lansky, A. J., Cohen, D. J., Carrozza Jr, J. P., Chauhan, M. S., Kuntz, R. E., et al., “Stent thrombosis in the modern era,” Circulation (2001).
  • [2] Tearney, G. J., Regar, E., Akasaka, T., Adriaenssens, T., Barlis, P., Bezerra, H. G., Bouma, B., Bruining, N., Cho, J., Chowdhary, S., et al., “Consensus standards for acquisition, measurement, and reporting of intravascular optical coherence tomography studies,” Journal of the American College of Cardiology 59(12), 1058–1072 (2012).
  • [3] Wang, A., Eggermont, J., Dekker, N., Garcia-Garcia, H. M., Pawar, R., Reiber, J. H., and Dijkstra, J., “Automatic stent strut detection in intravascular optical coherence tomographic pullback runs,” The international journal of cardiovascular imaging 29(1), 29–38 (2013).
  • [4] Ughi, G. J., Adriaenssens, T., Onsea, K., Kayaert, P., Dubois, C., Sinnaeve, P., Coosemans, M., Desmet, W., and D’hooge, J., “Automatic segmentation of in-vivo intra-coronary optical coherence tomography images to assess stent strut apposition and coverage,” The International Journal of Cardiovascular Imaging 28(2), 229–241 (2012).
  • [5] Gurmeric, S., Isguder, G. G., Carlier, S., and Unal, G., “A new 3-d automated computational method to evaluate in-stent neointimal hyperplasia in in-vivo intravascular optical coherence tomography pullbacks,” in [International Conference on Medical Image Computing and Computer-Assisted Intervention ], 776–785, Springer (2009).
  • [6] Wang, A., Nakatani, S., Eggermont, J., Onuma, Y., Garcia-Garcia, H. M., Serruys, P. W., Reiber, J. H., and Dijkstra, J., “Automatic detection of bioresorbable vascular scaffold struts in intravascular optical coherence tomography pullback runs,” Biomedical optics express 5(10), 3589–3602 (2014).
  • [7] Cao, Y., Jin, Q., Lu, Y., Jing, J., Chen, Y., Yin, Q., Qin, X., Li, J., Zhu, R., and Zhao, W., “Automatic analysis of bioresorbable vascular scaffolds in intravascular optical coherence tomography images,” Biomedical Optics Express 9(6), 2495–2510 (2018).
  • [8] Gessert, N., Heyder, M., Latus, S., Lutz, M., and Schlaefer, A., “Plaque classification in coronary arteries from ivoct images using convolutional neural networks and transfer learning,” in [Computer Assisted Radiology and Surgery Proceedings of the 32nd International Congress and Exhibition ], Springer (2018).
  • [9] Oquab, M., Bottou, L., Laptev, I., and Sivic, J., “Is object localization for free? - weakly-supervised learning with convolutional neural networks,” in [Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition ], 685–694 (2015).
  • [10] Springenberg, J. T., Dosovitskiy, A., Brox, T., and Riedmiller, M., “Striving for simplicity: The all convolutional net,” in [ICLR 2015 Workshop ], (2014).
  • [11] Smilkov, D., Thorat, N., Kim, B., Viégas, F., and Wattenberg, M., “Smoothgrad: removing noise by adding noise,” arXiv preprint arXiv:1706.03825 (2017).
  • [12] He, K., Zhang, X., Ren, S., and Sun, J., “Deep residual learning for image recognition,” in [Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition ], 770–778 (2016).
  • [13] Kingma, D. and Ba, J., “Adam: A method for stochastic optimization,” in [ICLR ], (2014).
  • [14] Abadi, M., Agarwal, A., Barham, P., Brevdo, E., Chen, Z., Citro, C., Corrado, G. S., Davis, A., Dean, J., and Devin, M., “Tensorflow: Large-scale machine learning on heterogeneous distributed systems,” arXiv preprint arXiv:1603.04467 (2016).
  • [15] Oquab, M., Bottou, L., Laptev, I., and Sivic, J., “Learning and transferring mid-level image representations using convolutional neural networks,” in [Proceedings of the IEEE CVPR ], 1717–1724 (2014).