One of the most widely used non-destructive medical imaging techniques is computed tomography (CT); computed microtomography is a CT variant that reaches micrometer-scale resolution. One application of these techniques is evaluating the performance of surgical implants in the body. The images obtained usually present artifacts such as noise, low contrast or blurred areas, and various techniques are applied to reduce these effects [Boas and Fleischmann(2012)]. When evaluating the interaction of body tissues with metallic implants, the semi-automated methods of current commercial medical imaging software overestimate the amount of tissue because of these image defects and artifacts [Ripley et al.(2017)Ripley, Levin, Kelil, Hermsen, Kim, Maki, and Wilson]. In addition, progress in machine learning and artificial intelligence has led to new techniques that have been shown to play an important role in medical image processing and segmentation [Zhang and Xing(2018)]. The purpose of this work is to assess the feasibility of using convolutional neural networks (CNNs) to improve the results obtained by a current CT image reconstruction and segmentation software, Mimics Innovation Suite (Materialise, Belgium), on CT images of cylindrical metal implants inserted in rabbit distal femoral condyles. This program performs a 3D reconstruction from 2D images and various calculations, such as the volume of regenerated bone around a prosthesis, but it overestimates these quantities due to the artifacts caused by metal implants.
ImageJ software was used to segment the image set. The process consisted of image preprocessing to improve viewing conditions, manual segmentation, and review and correction. The images were segmented into three classes: image background (pixel value 0), bone (pixel value 1) and implant (pixel value 2).
The dataset consisted of 100 grayscale images of 2016×2016 pixels, which were downscaled to 512×512 pixels. Bone and implant masks were transformed from grayscale to binary images. The dataset was divided into 95 training and five validation images. A second database of 196 images, belonging to a completely new sample, was used as the test dataset. From the predictions made on this second database, the bone volume was calculated and then compared with the results obtained by Mimics.
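The preparation steps above can be sketched as follows. This is a minimal illustration with synthetic arrays, not the authors' pipeline: the nearest-neighbour downscaling, the random split, and all names are assumptions, since the text does not specify the interpolation method or how the 95/5 split was drawn.

```python
import numpy as np

def resize_nearest(img, size=(512, 512)):
    # Nearest-neighbour downscaling; a stand-in for whatever
    # interpolation was actually used (not stated in the text).
    rows = (np.arange(size[0]) * img.shape[0] / size[0]).astype(int)
    cols = (np.arange(size[1]) * img.shape[1] / size[1]).astype(int)
    return img[np.ix_(rows, cols)]

def binarize(mask, label):
    # Extract one class (bone = 1, implant = 2) as a 0/1 binary mask.
    return (mask == label).astype(np.uint8)

# Synthetic stand-ins for one 2016x2016 grayscale slice and its label mask.
rng = np.random.default_rng(0)
image = rng.integers(0, 256, size=(2016, 2016), dtype=np.uint8)
mask = rng.integers(0, 3, size=(2016, 2016), dtype=np.uint8)  # labels 0/1/2

small = resize_nearest(image)
bone = binarize(resize_nearest(mask), label=1)

# 95/5 train-validation split of the 100-slice stack (indices only).
indices = rng.permutation(100)
train_idx, val_idx = indices[:95], indices[95:]
```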
The network, designed with version 2.6.0 of Keras [Chollet et al.(2015)], followed the structure of a U-Net, which consists of a contraction path with 2D convolutional layers and max-pooling filters, and an expansion path with transposed convolution layers. Two convolutions were carried out at each step, increasing the depth of the feature maps from 1 channel to 64. A ReLU activation function was applied after each convolution, and max-pooling was then applied, halving the image size. These steps were repeated until the last step (without max-pooling), where the expansion path began, recovering the image at its original size. The last layer of the network had a sigmoid activation. The loss function used was categorical cross-entropy and the optimizer selected was Adam.
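The architecture described above can be sketched in Keras as follows. This is a plausible reconstruction, not the authors' exact model: the number of resolution levels (`depth=4`), the kernel sizes, and the skip connections via concatenation are standard U-Net assumptions that the text does not spell out; only the double convolutions, ReLU, 64 starting filters, max-pooling, transposed convolutions, sigmoid output, categorical cross-entropy and Adam come from the description.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

def conv_block(x, filters):
    # The paper's double-convolution step: two convolutions, each
    # followed by a ReLU activation.
    x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    return x

def build_unet(input_shape=(512, 512, 1), n_classes=3, depth=4, base=64):
    inputs = layers.Input(input_shape)
    skips, x, f = [], inputs, base
    for _ in range(depth):                # contraction path
        x = conv_block(x, f)              # 1 channel -> 64 at the first step
        skips.append(x)
        x = layers.MaxPooling2D(2)(x)     # halve the image size
        f *= 2
    x = conv_block(x, f)                  # last step, without max-pooling
    for skip in reversed(skips):          # expansion path
        f //= 2
        x = layers.Conv2DTranspose(f, 2, strides=2, padding="same")(x)
        x = layers.concatenate([x, skip])  # skip connection (assumed)
        x = conv_block(x, f)
    # Sigmoid output layer, as stated in the text.
    outputs = layers.Conv2D(n_classes, 1, activation="sigmoid")(x)
    model = models.Model(inputs, outputs)
    model.compile(optimizer="adam", loss="categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```

With the default arguments the model takes a 512×512 single-channel slice and returns a 512×512×3 map, one channel per class (background, bone, implant).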
Bone volume was estimated from the number of pixels belonging to the bone class in the predictions on the test dataset. This parameter was compared with the pixel count calculated in the Mimics software through the ratio

r = V_CNN / V_Mimics = N_CNN / N_Mimics, (1)

where N_CNN and V_CNN are the parameters calculated from the CNN prediction, and N_Mimics and V_Mimics those from the Mimics software.
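A pixel-counting volume estimate of this kind can be sketched as below. The per-voxel dimensions are placeholders (the text does not give the pixel spacing or slice thickness), and the function names are illustrative only.

```python
import numpy as np

# Hypothetical voxel geometry: pixel spacing and slice thickness are NOT
# given in the text, so these values are placeholders for illustration.
PIXEL_AREA_MM2 = 0.02 * 0.02
SLICE_THICKNESS_MM = 0.02

def bone_volume(masks):
    # masks: stack of predicted label slices (0 background, 1 bone, 2 implant).
    # Volume = number of bone pixels x area per pixel x slice thickness.
    n_bone = int(np.sum(masks == 1))
    return n_bone * PIXEL_AREA_MM2 * SLICE_THICKNESS_MM

def volume_ratio(v_cnn, v_mimics):
    # Ratio of the CNN estimate to the Mimics estimate.
    return v_cnn / v_mimics

# Two tiny 2x2 slices standing in for the 196 test predictions.
pred = np.array([[[0, 1], [1, 2]],
                 [[1, 1], [0, 0]]])
v = bone_volume(pred)  # 4 bone pixels x voxel volume
```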
The network trained with 95 images for 50 epochs achieved an accuracy of 98% (fig:accuracy) and reached a negligible level of loss. The total pixel count predicted was , and the total pixel count in Mimics was pixels, which is equivalent to . Therefore, the volume of bone provided by the network is , approximately the volume calculated with Mimics.
Despite the small CT dataset, the CNN achieved acceptable performance and reduced the bone volume estimate considerably compared with the Mimics software. On a qualitative level, the segmentation performed by Mimics appeared noisier, more blurred and lower in contrast (fig:implantes). It should be noted that the model cannot completely remove the effect of metallic artifacts, since it was not trained on images with significant artifacts, which may lead to an overestimation of the bone volume. A larger number of training images, covering more artifact typologies, would be necessary to make the network more robust.
Despite the limited number of training images, the network performs acceptably, eliminating much of the noise produced by the radiation on the metal implant and overestimating the bone volume to a lesser extent than the commercial software.
- [Boas and Fleischmann(2012)] F. Edward Boas and Dominik Fleischmann. CT artifacts: causes and reduction techniques. Imaging in Medicine, 4(2):229–240, 2012. ISSN 1755-5191, 1755-5205. doi: 10.2217/iim.12.13.
- [Chollet et al.(2015)] François Chollet et al. Keras: Deep learning for humans, 2015. URL https://github.com/keras-team/keras.
- [Ripley et al.(2017)Ripley, Levin, Kelil, Hermsen, Kim, Maki, and Wilson] Beth Ripley, Dmitry Levin, Tatiana Kelil, Joshua L. Hermsen, Sooah Kim, Jeffrey H. Maki, and Gregory J. Wilson. 3D printing from MRI data: Harnessing strengths and minimizing weaknesses. Journal of Magnetic Resonance Imaging, 45(3):635–645, 2017. ISSN 1522-2586. doi: 10.1002/jmri.25526.
- [Zhang and Xing(2018)] C. Zhang and Y. Xing. CT artifact reduction via u-net CNN. In Medical Imaging 2018: Image Processing, volume 10574, page 105741R. International Society for Optics and Photonics, 2018. doi: 10.1117/12.2293903.