Development of an algorithm for medical image segmentation of bone tissue in interaction with metallic implants

04/22/2022
by   Fernando García-Torres, et al.

This preliminary study focuses on the development of an artificial-intelligence-based medical image segmentation algorithm for calculating bone growth in contact with metallic implants, estimating the growth of new bone tissue in images affected by various types of distortions and errors, known as artifacts. Two databases of computed microtomography images were used throughout this work: 100 images for training and 196 images for testing. Both bone and implant tissue were manually segmented in the training data set. The network follows the U-Net architecture, a convolutional neural network designed specifically for medical image segmentation. In terms of accuracy, the model reached around 98%. Once the prediction was obtained on the new data set (test set), the total number of pixels belonging to bone tissue was calculated. This volume is around 15% of the volume estimated by conventional techniques, which usually overestimate it. The method has shown good performance and results, although there is a wide margin for improvement, for example by modifying network parameters or by using larger databases to improve training.


1 Introduction

One of the most widely used non-destructive medical imaging techniques is computed tomography (CT). Computed microtomography is a CT variant with micrometer resolution. One of the uses of these techniques is to evaluate the performance of surgical implants in the body. The images obtained usually present various artifacts such as noise, low contrast or blurred areas, and techniques are often applied to reduce these effects [Boas and Fleischmann(2012)]. When evaluating the interaction of body tissues with metallic implants, the semi-automated methods of commercial medical imaging software currently overestimate the amount of tissue because of image defects and artifacts [Ripley et al.(2017)Ripley, Levin, Kelil, Hermsen, Kim, Maki, and Wilson]. In addition, progress in machine learning and artificial intelligence has led to new techniques that have been shown to play an important role in medical image processing and segmentation [Zhang and Xing(2018)].

The purpose of this work is to assess the feasibility of using convolutional neural networks (CNN) to improve on the results obtained with the current CT image reconstruction and segmentation software, Mimics Innovation Suite (Materialise, Belgium), in CT images of cylindrical metal implants inserted in rabbit distal femoral condyles. This program performs a 3D reconstruction from 2D images and carries out various calculations, such as the volume of regenerated bone around a prosthesis, but it overestimates this volume because of the artifacts caused by metal implants.

2 Methods

ImageJ software was used for the segmentation of the image set. The process consisted of image preprocessing to improve viewing conditions, manual segmentation, and review and correction. The images were segmented into three classes: image background (pixel value 0), bone (pixel value 1) and implant (pixel value 2).
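As an illustration of this labelling scheme, the snippet below is a minimal sketch (not code from the paper) of how such a three-class mask can be converted into the one-hot representation expected by a categorical cross-entropy loss; the helper name encode_mask and the toy data are ours.

```python
# Minimal sketch: one-hot encoding a mask labelled 0 (background), 1 (bone),
# 2 (implant). Helper name and toy data are illustrative, not from the paper.
import numpy as np
from tensorflow.keras.utils import to_categorical

def encode_mask(mask, n_classes=3):
    """2D array of integer labels -> (H, W, n_classes) one-hot array."""
    return to_categorical(mask, num_classes=n_classes)

toy_mask = np.array([[0, 1],
                     [2, 1]])          # tiny 2x2 labelled mask
print(encode_mask(toy_mask).shape)     # (2, 2, 3)
```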

The dataset consisted of 100 grayscale images of 2016x2016 pixels, which were downscaled to 512x512 pixels. The bone and implant masks were transformed from grayscale to binary images. The dataset was divided into 95 training and five validation images. A new database of 196 images, belonging to a completely new sample, was used as the test dataset. From the predictions made on this second database, the bone volume was calculated and later compared with the results obtained by Mimics.
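A minimal preprocessing sketch along these lines is shown below, assuming the slices are already loaded as NumPy arrays; the function name and the placeholder data are hypothetical, not taken from the paper's code.

```python
# Illustrative preprocessing: downscale 2016x2016 grayscale slices to 512x512
# and split them into 95 training and 5 validation images, as described above.
import numpy as np
import tensorflow as tf

def downscale(image, size=(512, 512)):
    """Resize one grayscale slice; masks would instead use method="nearest"
    so that the integer class labels are preserved."""
    image = image.astype("float32")[..., np.newaxis]   # add channel axis
    return tf.image.resize(image, size).numpy()

slices = np.random.rand(100, 2016, 2016)               # placeholder data
data = np.stack([downscale(s) for s in slices])
train_x, val_x = data[:95], data[95:]                  # 95 / 5 split
```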

The network, designed with version 2.6.0 of Keras [Chollet et al.(2015)], followed the structure of a U-Net, which consists of a contraction path with 2D convolutional layers and max-pooling filters, and an expansion path with transposed convolution layers. Two convolutions were carried out at each step, increasing the depth of the images from 1 channel to 64. After each convolution, a ReLU activation function was applied. Max-pooling was then applied, reducing the image size by half. These steps were repeated until the last step (without max-pooling), where the expansion path began in order to recover the image at its original size. The last layer of the network had a sigmoid activation. The loss function used was categorical cross-entropy and the selected optimizer was Adam.
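For concreteness, the following is a condensed Keras sketch of a U-Net of this kind, mirroring the choices described above (two convolutions with ReLU per step, max-pooling, transposed convolutions, a sigmoid output layer, categorical cross-entropy and Adam); the depth and filter counts shown are illustrative, not the exact configuration used in the study.

```python
# Condensed U-Net sketch in Keras; depth and filter counts are illustrative.
from tensorflow.keras import layers, Model

def conv_block(x, filters):
    """Two 3x3 convolutions with ReLU, as in each U-Net step."""
    x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    return x

inputs = layers.Input(shape=(512, 512, 1))

# Contraction path: two convolutions per step, then max-pooling halves the size
c1 = conv_block(inputs, 64)
p1 = layers.MaxPooling2D(2)(c1)
c2 = conv_block(p1, 128)
p2 = layers.MaxPooling2D(2)(c2)

# Bottleneck (last step, without max-pooling)
b = conv_block(p2, 256)

# Expansion path: transposed convolutions restore the original resolution
u2 = layers.Conv2DTranspose(128, 2, strides=2, padding="same")(b)
c3 = conv_block(layers.concatenate([u2, c2]), 128)
u1 = layers.Conv2DTranspose(64, 2, strides=2, padding="same")(c3)
c4 = conv_block(layers.concatenate([u1, c1]), 64)

# Three output channels (background, bone, implant) with sigmoid activation
outputs = layers.Conv2D(3, 1, activation="sigmoid")(c4)

model = Model(inputs, outputs)
model.compile(optimizer="adam", loss="categorical_crossentropy",
              metrics=["accuracy"])
```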

Bone volume was estimated from the number of pixels assigned to the bone class in the test-set predictions. This value was then compared with the pixel count calculated by the Mimics software, taking the ratio between the parameters obtained from the CNN prediction and those obtained with Mimics.
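A hedged sketch of this comparison is given below: the bone pixels are counted in the predicted label maps and related to the count obtained with Mimics. The variable names are ours, and the pixel size is assumed to cancel in the ratio because both segmentations refer to the same image resolution.

```python
# Sketch of the volume comparison; names and the cancellation assumption are ours.
import numpy as np

BONE_CLASS = 1                                    # class index used for bone

def bone_pixel_count(predictions):
    """predictions: (N, H, W, 3) class maps -> total number of bone pixels."""
    labels = np.argmax(predictions, axis=-1)      # per-pixel class decision
    return int(np.sum(labels == BONE_CLASS))

def relative_volume(cnn_pixels, mimics_pixels):
    """Ratio of the CNN bone-volume estimate to the Mimics estimate."""
    return cnn_pixels / mimics_pixels
```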

3 Results

The network, trained with the 95 training images for 50 epochs, achieved an accuracy of 98% (Fig. 1) and reached a negligible level of loss. The total number of pixels predicted as bone was considerably lower than the count obtained with Mimics; expressed as a volume, the bone volume provided by the network is approximately 15% of the volume calculated with Mimics.

4 Discussion

Despite the small CT dataset, the CNN achieved acceptable performance. The CNN reduced the bone volume estimate considerably compared with the estimate from the Mimics software. At a qualitative level, the segmentation performed by Mimics appeared noisier, more blurred and with lower contrast (Fig. 1). It should be noted that the model cannot completely remove the effect of metallic artifacts, as it was not trained on images with significant artifacts, which may lead to an overestimation of the bone volume. A larger number of training images covering a wider range of artifact typologies would be necessary to make the network more robust.

Figure 1: Results obtained from the CNN. (a) Model's accuracy; (b) comparison of performances.

5 Conclusions

Despite the limited number of training images, the network performs acceptably, eliminating much of the noise produced by the interaction of the radiation with the metal implant and overestimating the bone volume to a far lesser extent than the commercial software.

References

  • [Boas and Fleischmann(2012)] F. Edward Boas and Dominik Fleischmann. CT artifacts: causes and reduction techniques. Imaging in Medicine, 4(2):229–240, 2012. ISSN 1755-5191, 1755-5205. doi: 10.2217/iim.12.13.
  • [Chollet et al.(2015)] François Chollet et al. Keras: Deep learning for humans, 2015. URL https://github.com/keras-team/keras.
  • [Ripley et al.(2017)Ripley, Levin, Kelil, Hermsen, Kim, Maki, and Wilson] Beth Ripley, Dmitry Levin, Tatiana Kelil, Joshua L. Hermsen, Sooah Kim, Jeffrey H. Maki, and Gregory J. Wilson. 3D printing from MRI data: Harnessing strengths and minimizing weaknesses. Journal of Magnetic Resonance Imaging, 45(3):635–645, 2017. ISSN 1522-2586. doi: 10.1002/jmri.25526.
  • [Zhang and Xing(2018)] C. Zhang and Y. Xing. CT artifact reduction via U-Net CNN. In Medical Imaging 2018: Image Processing, volume 10574, page 105741R. International Society for Optics and Photonics, 2018. doi: 10.1117/12.2293903.