Automated SSIM Regression for Detection and Quantification of Motion Artefacts in Brain MR Images

by Alessandro Sciarra, et al.

Motion artefacts in magnetic resonance (MR) brain images are a crucial issue, and the assessment of MR image quality is fundamental before proceeding with a clinical diagnosis. If motion artefacts prevent a correct delineation of the structures and substructures of the brain, lesions, tumours and so on, the patient needs to be re-scanned; otherwise, neuro-radiologists could report an inaccurate or incorrect diagnosis. The first step right after scanning a patient is therefore image quality assessment, to decide whether the acquired images are diagnostically acceptable. Here, an automated image quality assessment based on regression of the structural similarity index (SSIM) through a residual neural network is proposed, which can also classify images into different groups by subdividing the SSIM range. The method predicts the SSIM value of an input image in the absence of a reference ground-truth image. The networks were able to detect motion artefacts, and the best performance for both the regression and the classification task was always achieved by ResNet-18 with contrast augmentation. The mean and standard deviation of the residuals' distribution were μ=-0.0009 and σ=0.0139, respectively, while for the classification task with 3, 5 and 10 classes, the best accuracies were 97, 95 and 89%, respectively. The obtained results show that the proposed method could be a tool to support neuro-radiologists and radiographers in evaluating image quality before diagnosis.





1 Introduction

Image quality assessment (IQA) is a fundamental apparatus for evaluating MR images [19, 8, 27]. The main purpose of this process is to determine whether the image quality is diagnostically reliable and free from artefacts, in order to avoid a possibly unreliable diagnosis [5, 18]. The evaluation process often requires time and is also subjectively dependent on the observer in charge of carrying it out [26]. Furthermore, the different levels of expertise and experience of the readers (the experts designated to perform the IQA) can lead to non-matching assessments. Another intrinsic issue of IQA for MR images is the absence of a reference image. No-reference IQA techniques, with and without the support of machine and deep learning, have been proposed in recent years for the evaluation of visual image quality [5, 9, 27, 36, 10, 23, 22, 11, 12, 4]. These techniques can detect and quantify the level of blurriness or corruption with different levels of accuracy and precision. However, there are many factors to take into consideration when choosing which technique to apply, the most important being [14, 13, 2]: data requirements - deep learning requires a large dataset, while traditional (non deep learning based) machine learning techniques can be trained on less data; accuracy - deep learning generally provides higher accuracy than traditional machine learning; training time - deep learning takes longer to train than traditional machine learning; and hyperparameter tuning - deep learning can be tuned in many different ways, and it is not always possible to find the best parameters, while traditional machine learning offers limited tuning capabilities. In addition, when choosing traditional machine learning techniques, the fundamental step of feature extraction must be considered. Although the list of traditional machine learning and deep learning techniques used for regression and classification tasks is constantly updated [32, 25, 35, 24], there is still no gold-standard IQA for MR images [8].

The aim of this work is to create an automated IQA tool that is able to detect the presence of motion artefacts and quantify the level of corruption or distortion with respect to an "artefact-free" counterpart, based on regression of the structural similarity index (SSIM) [38]. The tool has been designed to work for a large variety of MR image contrasts, such as T1-, T2-, PD- and FLAIR-weighted images, independently of the resolution and orientation of the considered image. Additionally, a contrast augmentation step has been introduced to increase the range of variability of the weighting. In practice, when MR images are acquired and contain artefacts, no "artefact-free" counterpart is available to compare against for quality assessment, yet the SSIM calculation always requires two images (corrupted vs motion-artefact-free). For this reason, in this work the corrupted images were created artificially using two different algorithms - one implemented by Shaw et al. [34] (part of the TorchIO library [30]) and a second algorithm developed in-house [7]. Furthermore, training a neural network model in a fully-supervised manner, as in this case, typically requires a large amount of labelled or annotated data [3]. In this work, the regression labels for training were created by comparing the artificially corrupted images against the original artefact-free images using the SSIM, and those SSIM values were then used as the regression labels.
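The label-creation step can be sketched with scikit-image's `structural_similarity`; the synthetic slice and noise-based corruption below are illustrative stand-ins for real MR slices and the motion-corruption algorithms of [34, 7]:

```python
import numpy as np
from skimage.metrics import structural_similarity as ssim

rng = np.random.default_rng(0)

# Stand-in "artefact-free" slice: a smooth synthetic 2D image.
clean = np.outer(np.hanning(256), np.hanning(256)).astype(np.float64)

# Stand-in corruption (the paper uses TorchIO's motion transform or an
# in-house k-space algorithm; Gaussian noise is used here only for brevity).
corrupted = clean + 0.05 * rng.standard_normal(clean.shape)

# The SSIM between the clean and corrupted slice becomes the regression label.
label = ssim(clean, corrupted, data_range=clean.max() - clean.min())
print(round(label, 4))
```

At training time only `corrupted` and `label` are given to the network; the clean image is needed solely to compute the label.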

2 Methodology

The proposed automatic IQA tool relies on residual neural networks (ResNet) [15, 28]. Two different versions of ResNet were used, with 18 (ResNet-18) and 101 (ResNet-101) layers. Each model was trained twice - with and without the contrast augmentation step. The following steps are executed during training (Figure 1):

  1. Given a 3D input volume, one random slice (2D image) is selected from one of the possible orientations - axial, sagittal, or coronal. In case of an anisotropic volume, the slice is selected only along the original acquisition orientation.

  2. If contrast augmentation is enabled, one of the following contrast augmentation algorithms is randomly selected and applied to the input image:

    • Random gamma adjustment [21]

    • Random logarithmic adjustment [17]

    • Random sigmoid adjustment [6]

    • Random adaptive histogram adjustment [31]

  3. Motion corruption is applied on the 2D image using one of these two options:

    • TorchIO [34, 30], Figure 2 (a)

    • ”in-house” algorithm, Figure 2 (b)

  4. The SSIM is calculated between the 2D input image and the corresponding corrupted one.

  5. The calculated SSIM value and the corrupted image are passed to the chosen model for training.

Figure 1: Graphical illustration of all steps for the training as explained in Section 2.
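A minimal sketch of this sampling pipeline, assuming gamma adjustment as the randomly chosen augmentation and additive noise in place of the TorchIO / in-house motion corruption (both simplifications made for brevity):

```python
import numpy as np
from skimage.metrics import structural_similarity as ssim

rng = np.random.default_rng(42)

def make_training_sample(volume):
    # 1. Select one random slice from a random orientation of the 3D volume.
    axis = int(rng.integers(0, 3))
    idx = int(rng.integers(0, volume.shape[axis]))
    img = np.take(volume, idx, axis=axis).astype(np.float64)
    img = (img - img.min()) / (np.ptp(img) + 1e-8)   # normalise to [0, 1]
    # 2. Random contrast augmentation (here: random gamma adjustment).
    img = img ** rng.uniform(0.5, 2.0)
    # 3. Artificial corruption (noise stands in for the motion transforms).
    corrupted = np.clip(img + 0.1 * rng.standard_normal(img.shape), 0.0, 1.0)
    # 4. SSIM between the augmented slice and its corrupted version...
    label = ssim(img, corrupted, data_range=1.0)
    # 5. ...is the regression target the network is trained on.
    return corrupted, label

vol = rng.random((32, 32, 32))
x, y = make_training_sample(vol)
print(x.shape, round(y, 3))
```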

Three datasets - train, validation, and test sets - were used for this work (Table 1). For training, 200 volumes were used, while 50 were used for validation and 50 for testing. The first group of 68 volumes was selected from the public IXI dataset, the second group (Table 1, Site-A) of 114 volumes was acquired with a 3T scanner, the third group (Table 1, Site-B) of 93 volumes was acquired at 7T, and a final group (Table 1, Site-C) of 25 volumes was acquired with different scanners (1.5 and 3T). The volumes from IXI, Site-A, and Site-B were resampled to an isotropic resolution of 1.00 mm.

The loss during training was calculated as the mean squared error (MSE) [1] and optimised using the Adam optimiser [20] with a learning rate of and a batch size of 100 for 2000 epochs. All images (during training, validation, and testing) were normalised and resized or padded to a 2D matrix size of 256x256.
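The optimisation set-up (MSE loss minimised with Adam) can be illustrated on a toy linear model; the hand-rolled Adam update below follows the standard algorithm of Kingma and Ba [20], while the linear model, data, and learning rate are stand-ins chosen for this sketch:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for the training set-up: a linear model fitted with the MSE
# loss and a hand-rolled Adam update (the paper trains ResNet-18/101 in
# PyTorch, but the optimiser arithmetic is identical).
X = rng.standard_normal((100, 4))         # one batch of 100 samples
true_w = np.array([0.3, -0.2, 0.5, 0.1])
y = X @ true_w                            # targets in place of SSIM labels

w = np.zeros(4)
m, v = np.zeros(4), np.zeros(4)
lr, b1, b2, eps = 1e-2, 0.9, 0.999, 1e-8  # illustrative learning rate

for t in range(1, 2001):                  # 2000 update steps ("epochs")
    grad = 2.0 * X.T @ (X @ w - y) / len(y)   # gradient of the MSE loss
    m = b1 * m + (1 - b1) * grad              # first-moment estimate
    v = b2 * v + (1 - b2) * grad ** 2         # second-moment estimate
    m_hat = m / (1 - b1 ** t)                 # bias corrections
    v_hat = v / (1 - b2 ** t)
    w -= lr * m_hat / (np.sqrt(v_hat) + eps)  # Adam parameter update

mse = float(np.mean((X @ w - y) ** 2))
print(round(mse, 8))
```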

Figure 2: Samples of artificially corrupted images. Left column: original images; right column: corrupted ones. (a): image corrupted using the TorchIO library; (b): image corrupted using the in-house algorithm.
Data Weighting Volumes Matrix Size Resolution (mm)
m(M) x m(M) x m(M)† m(M) x m(M) x m(M)†
Training set (200 volumes):
IXI T1,T2,PD 15,15,15 230(240)x230(240)x134(162) 1.00 isotropic
Site-A T1,T2,PD,FLAIR 20,20,20,20 168(168)x224(224)x143(144) 1.00 isotropic
Site-B T1,T2,FLAIR 20,20,20 156(156)x224(224)x100(100) 1.00 isotropic
Site-C T1 3 192(512)x256(512)x36(256) 0.45(1.00)x0.45(0.98)x0.98(4.40)
Site-C T2 11 192(640)x192(640)x32(160) 0.42(1.09)x0.42(1.09)x1.00(4.40)
Site-C FLAIR 1 320x320x34 0.72x0.72x4.40
Validation set (50 volumes):
IXI T1,T2,PD 1,5,7 230(240)x230(240)x134(162) 1.00 isotropic
Site-A T1,T2,PD,FLAIR 4,4,4,4 168(168)x224(224)x143(144) 1.00 isotropic
Site-B T1,T2,FLAIR 6,6,4 156(156)x224(224)x100(100) 1.00 isotropic
Site-C T1 3 176(240)x240(256)x118(256) 1.00 isotropic
Site-C T2 1 240x320x80 0.80x0.80x2.00
Site-C PD 1 240x320x80 0.80x0.80x2.00
Test set (50 volumes):
IXI T1,T2,PD 2,4,4 230(240)x230(240)x134(162) 1.00 isotropic
Site-A T1,T2,PD,FLAIR 6,4,4,4 168(168)x224(224)x143(144) 1.00 isotropic
Site-B T1,T2,FLAIR 6,6,5 156(156)x224(224)x100(100) 1.00 isotropic
Site-C T1 2 288(320)x288(320)x35(46) 0.72(0.87)x0.72(0.87)x3.00(4.40)
Site-C T2 2 320(512)x320(512)x34(34) 0.44(0.72)x0.45(0.72)x4.40(4.40)
Site-C FLAIR 1 320x320x35 0.70x0.70x4.40
†: "m" indicates the minimum value while "M" the maximum.
Table 1: Data for training (200 volumes), validation (50) and testing (50).

For testing, a total of 10000 images were randomly selected (with repetition) from the 50 volumes of the test dataset and then corrupted - applying the random orientation selection, the contrast augmentation, and finally the corruption - as performed during the training stage.
In order to evaluate the performance of the trained models, the predicted SSIM values were first plotted against the ground-truth SSIM values, as shown in Figure 3; next, the residuals (the differences between ground-truth and predicted SSIM values) were calculated (Figure 4).
The predicted SSIM value of an image can be considered equivalent to a measure of the distortion or corruption level of the image. However, when applying this approach to a real clinical case, it is challenging to compare this value with a subjective assessment. To get around this problem, the regression task was simplified into a classification task. To this end, three different experiments were performed, with 3, 5 and 10 classes. In each case, the SSIM range [0-1] was divided into equal sub-ranges. For instance, in the case of 3 classes, there were three sub-ranges: class-1 [0.00-0.33], class-2 [0.34-0.66] and class-3 [0.67-1.00]. The same procedure was used to create the 5 and 10 classes.
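A hypothetical helper for this binning step (equal sub-ranges of [0, 1] with 1-based class labels, as described in the text; the function name is illustrative):

```python
# Map a predicted SSIM value to one of n equal sub-ranges of [0, 1],
# as used for the 3-, 5- and 10-class experiments.
def ssim_to_class(ssim_value, n_classes):
    ssim_value = min(max(ssim_value, 0.0), 1.0)   # clamp to the valid range
    # Floor into one of n equal bins; SSIM == 1.0 falls into the last bin.
    return min(int(ssim_value * n_classes), n_classes - 1) + 1  # 1-based

print(ssim_to_class(0.20, 3))   # class 1, range [0.00-0.33]
print(ssim_to_class(0.50, 3))   # class 2, range [0.34-0.66]
print(ssim_to_class(0.95, 10))  # class 10, range [0.90-1.00]
```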
A second dataset was also used for testing the trained models - comprised of randomly selected images from clinical acquisitions. This dataset contained five subjects, each with a different number of scans, as shown in Table 2. In this case, there were no ground-truth reference images, and for this reason the images were also subjectively evaluated by one expert using the following classification scheme: class 1 - images of good to high quality, which may have minor motion artefacts that do not alter structures and substructures of the brain (SSIM range between 0.85 and 1.00); class 2 - images of sufficient to good quality, which may have motion artefacts that prevent a correct delineation of the brain structures, substructures or lesions (SSIM range between 0.60 and 0.85); and class 3 - images of insufficient quality, for which a re-scan would be required (SSIM range between 0.00 and 0.60). Additionally, this dataset contained contrasts not included in the training, such as diffusion-weighted images (DWI).

Data Weighting Volumes Matrix Size Resolution (mm)
m(M) x m(M) x m(M)† m(M) x m(M) x m(M)†
Subj. 1 T1,T2,FLAIR 1,4,2 130(560)x256(560)x26(256) 0.42(1.00)x0.42(0.94)x0.93(4.40)
Subj. 2 T2 3 288(320)x288(320)x28(28) 0.76(0.81)x0.76(0.81)x5.50(5.50)
Subj. 3 T1,T2,FLAIR,DWI,(§) 1,2,1,4,1 256(640)x256(640)x32(150) 0.42(0.90)x0.42(0.90)x0.45(4.40)
Subj. 4 T2, FLAIR, DWI 1,2,6 144(512)x144(512)x20(34) 0.45(1.40)x0.45(1.40)x2.00(4.40)
Subj. 5 T2, FLAIR, DWI 3,1,4 256(640)x256(640)x28(42) 0.40(1.09)x0.40(1.09)x3.30(6.20)
†: "m" indicates the minimum value while "M" the maximum.
Table 2: Clinical data

3 Results

The results for the regression task are presented in Figures 3 and 4. Figure 3 shows a scatter plot in which the predicted SSIM values are compared against the ground-truth values, together with the linear fit performed for each trained model and the distributions of the ground-truth and predicted SSIM values. Figure 3 allows a general comparison across all the trained models and their qualitative dispersion levels, where dispersion indicates how much the predicted SSIM values differ from the ground truth. In Figure 4, the results are shown separately for each model, using scatter plots and the corresponding residual distributions introduced in Section 2. A normal distribution was additionally fitted to each residual distribution using the python package SciPy [37]; the calculated mean and standard deviation values are shown in Figure 4. According to this statistical analysis, the model with the smallest standard deviation (σ=0.0139) and the mean value closest to zero (μ=-0.0009) was ResNet-18 trained with contrast augmentation, while the model with the mean value farthest from zero and the largest standard deviation was ResNet-101 trained without contrast augmentation. A clear effect of contrast augmentation can be seen for both ResNet-18 and ResNet-101 - reflected as a reduction of the standard deviation values, which visually correlates with a lower dispersion level in the scatter plots.
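The normal-distribution fit of the residuals can be reproduced with SciPy's `norm.fit`; the residuals below are synthetic, drawn to mimic the statistics reported for ResNet-18 with contrast augmentation:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)

# Synthetic residuals (differences between ground-truth and predicted SSIM),
# drawn to mimic the reported ResNet-18 + contrast-augmentation statistics.
residuals = rng.normal(loc=-0.0009, scale=0.0139, size=10000)

# Maximum-likelihood fit of a normal distribution to the residuals.
mu, sigma = norm.fit(residuals)
print(round(mu, 4), round(sigma, 4))
```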

Figure 3: Scatter plot for the regression task, including the linear fits for each group of data. Top: ground-truth SSIM value distribution; right side: predicted SSIM value distributions for each group of data.
Figure 4: Scatter plot SSIM predicted against ground truth values and Residuals distribution for (a) ResNet-18 without contrast augmentation, (b) ResNet-18 with contrast augmentation, (c) ResNet-101 without contrast augmentation and (d) ResNet-101 with contrast augmentation.

The results for the classification task are shown in Figure 5 and Table 3. Figure 5 shows the logarithmic confusion matrices obtained for the classification task. From the matrices, it can be noted that all the trained models performed well and in a similar way. In particular, none of the matrices presents non-zero elements far from the diagonal, only in the neighbouring cells - as commonly expected from such a classification task. Table 3 is complementary to Figure 5: it shows the class-wise, macro-average and weighted-average precision, recall, and f1-score for all the trained models, as well as the accuracy. For all three scenarios - 3, 5 and 10 classes, as presented in Section 2 - the model with the best results is once again ResNet-18 trained with contrast augmentation. This model always obtained the highest accuracy - 97, 95 and 89% for the 3-, 5-, and 10-class scenarios, respectively. Even though ResNet-18 with contrast augmentation performed better than the other models, no substantial differences can be discerned from the tabular data. But once again, an improvement in performance is observed when contrast augmentation is applied.

Figure 5: Confusion matrices for the classification task. First row: 3-class case; second row: 5-class case; third row: 10-class case. The columns correspond to (a) ResNet-18 without contrast augmentation, (b) ResNet-18 with contrast augmentation, (c) ResNet-101 without contrast augmentation, (d) ResNet-101 with contrast augmentation, respectively.
(a) (b) (c) (d)
Class (SSIM) Prec. Recall f1-score Prec. Recall f1-score Prec. Recall f1-score Prec. Recall f1-score Support
1 [0.00 - 0.33] 0.94 0.97 0.95 0.93 0.97 0.95 0.93 0.98 0.96 0.97 0.89 0.93 117
2 [0.33 - 0.66] 0.95 0.96 0.95 0.97 0.96 0.96 0.94 0.97 0.95 0.98 0.94 0.96 4307
3 [0.66 - 1.00] 0.97 0.96 0.97 0.97 0.98 0.97 0.98 0.95 0.96 0.95 0.99 0.97 5576
accuracy 0.96 0.97 0.96 0.96 10000
macro avg 0.95 0.95 0.96 0.96 0.97 0.96 0.95 0.97 0.96 0.97 0.94 0.95 10000
weight. avg 0.96 0.96 0.96 0.97 0.97 0.97 0.96 0.96 0.96 0.96 0.96 0.96 10000
1 [0.00 - 0.20] 0.97 0.91 0.94 0.93 0.79 0.85 0.94 0.97 0.96 0.85 0.88 0.87 33
2 [0.20 - 0.40] 0.86 0.89 0.88 0.85 0.90 0.87 0.83 0.91 0.87 0.93 0.77 0.84 262
3 [0.40 - 0.60] 0.91 0.92 0.91 0.93 0.92 0.93 0.89 0.94 0.91 0.94 0.90 0.92 2320
4 [0.60 - 0.80] 0.94 0.95 0.94 0.95 0.96 0.96 0.94 0.94 0.94 0.94 0.96 0.95 5021
5 [0.80 - 1.00] 0.96 0.93 0.95 0.96 0.96 0.96 0.97 0.92 0.95 0.95 0.96 0.96 2364
accuracy 0.93 0.95 0.93 0.94 10000
macro avg 0.93 0.92 0.92 0.93 0.91 0.91 0.91 0.93 0.92 0.92 0.89 0.91 10000
weight. avg 0.93 0.93 0.93 0.95 0.95 0.95 0.93 0.93 0.93 0.94 0.94 0.94 10000
1 [0.00 - 0.10] 1.00 0.50 0.67 1.00 0.62 0.77 1.00 0.62 0.77 1.00 0.75 0.86 8
2 [0.10 - 0.20] 0.81 0.88 0.85 0.78 0.72 0.75 0.83 0.96 0.89 0.75 0.84 0.79 25
3 [0.20 - 0.30] 0.90 0.90 0.90 0.81 0.84 0.83 0.87 0.89 0.88 0.91 0.79 0.84 62
4 [0.30 - 0.40] 0.81 0.84 0.83 0.80 0.85 0.83 0.76 0.85 0.80 0.88 0.71 0.79 200
5 [0.40 - 0.50] 0.82 0.86 0.84 0.86 0.87 0.87 0.79 0.87 0.83 0.86 0.83 0.84 689
6 [0.50 - 0.60] 0.84 0.84 0.84 0.89 0.87 0.88 0.83 0.86 0.84 0.89 0.84 0.86 1631
7 [0.60 - 0.70] 0.86 0.88 0.87 0.89 0.89 0.89 0.85 0.87 0.86 0.88 0.88 0.88 2706
8 [0.70 - 0.80] 0.87 0.87 0.87 0.89 0.90 0.89 0.88 0.84 0.86 0.86 0.90 0.88 2315
9 [0.80 - 0.90] 0.86 0.88 0.87 0.89 0.92 0.90 0.89 0.85 0.87 0.87 0.91 0.89 1456
10 [0.90 - 1.00] 0.97 0.86 0.91 0.97 0.91 0.94 0.96 0.88 0.91 0.95 0.93 0.94 908
accuracy 0.87 0.89 0.86 0.88 10000
macro avg 0.88 0.83 0.84 0.88 0.84 0.85 0.86 0.85 0.85 0.88 0.84 0.86 10000
weight. avg 0.87 0.87 0.87 0.89 0.89 0.89 0.86 0.86 0.86 0.88 0.88 0.88 10000
Table 3: Results for the classification task. The classification task has been performed three times, considering 3, 5 and 10 classes, respectively. "Prec." is the abbreviation of precision, while "macro avg" corresponds to the macro average and "weight. avg" to the weighted average, calculated using the python package scikit-learn [29]. (a) is for ResNet-18 without contrast augmentation, (b) for ResNet-18 with contrast augmentation, (c) for ResNet-101 without contrast augmentation, (d) for ResNet-101 with contrast augmentation.
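The per-class precision, recall and f1-score and the macro/weighted averages of Table 3 are the quantities produced by scikit-learn's `classification_report`; a toy example with purely illustrative labels (not the paper's data):

```python
from sklearn.metrics import classification_report

# Illustrative labels only: three classes, 8 images.
y_true = [1, 1, 2, 2, 3, 3, 3, 3]
y_pred = [1, 2, 2, 2, 3, 3, 3, 1]

report = classification_report(y_true, y_pred, output_dict=True)
print(round(report["accuracy"], 2))                 # overall accuracy
print(round(report["macro avg"]["precision"], 2))   # macro-average precision
```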
Figure 6: Evaluation on the clinical dataset. The curves represent the SSIM predictions obtained with the different trained models, while the coloured bars show the subjective classification performed by the expert. When a curve lies within the coloured bars there is agreement between the objective and subjective evaluation, and disagreement otherwise. The blue dashed lines indicate the separation between subjects.

The results for the clinical data samples are shown in Figure 6. The obtained SSIM predictions are shown for each model, overlaid with the subjective scores, in a per-slice manner grouped by subject. As introduced in Section 2, the subjective ratings for the clinical data samples were assigned to classes 1, 2 or 3 after a careful visual evaluation. If the predictions obtained with the different models fall within the class assigned by the subjective evaluation, there is agreement between the objective and subjective evaluations; when the objective prediction lies outside the class assigned by the expert, the two assessments disagree. The percentage of agreement between the subjective and objective analysis is (mean ± standard deviation), with a minimum value achieved by ResNet-101 without contrast augmentation and a maximum by ResNet-101 with contrast augmentation.

4 Discussion

The performances of the trained models on the regression task were very similar. However, both ResNet-18 and ResNet-101 showed a distinct improvement when coupled with contrast augmentation. Looking at the residual distributions of the errors, for both models, contrast augmentation brought the mean values closer to zero and the standard deviations decreased by factors of and for ResNet-18 and ResNet-101, respectively. The reduction of the standard deviations is also evident in the scatter plots, where the dispersion level is visibly lower when contrast augmentation is applied.

Considering the classification task, the first notable observation is that the accuracy decreases as the number of classes increases. This can be explained by the fact that, as the number of classes increases, it becomes harder for each model to classify an image into the correct pre-defined range of SSIM values. The confusion matrices confirm this behaviour through the increase of out-of-diagonal values: considering ResNet-18 without contrast augmentation, the maximum out-of-diagonal value is 0.04 for the three-class task (for class-2 and class-3), while for the ten-class task it is 0.50 (for class-1). This implies that ResNet-18 without contrast augmentation, when performing the 10-class classification task, incorrectly classifies 50% of the tested class-1 images. When contrast augmentation is applied, there is an apparent reduction of wrongly classified class-1 images. Although this is the general trend observed in Figure 5, there are also contradictory results: in the 5-class classification task, again comparing ResNet-18 without and with contrast augmentation, there is a net increase of erroneously classified class-1 images, from 9% to 21% of the tested images.

The final application on clinical data also provided satisfactory results, with a maximum agreement rate of between the objective and subjective assessments. A direct comparison with the previous three-class classification task is not possible due to the different subjective scheme used (Section 2). Although there is a visible reduction in performance when the trained models are applied to clinical data, this can be explained by several factors. First, the clinical data sample included types of image data, such as diffusion acquisitions and derived diffusion maps, which were never seen by the models during training; secondly, the artificially created motion artefacts cannot cover the infinite variety of motion artefacts that can appear in a truly motion-corrupted MR image. A possible improvement could be obtained by introducing new contrasts, different resolutions and orientations into the training set; oblique acquisitions, for example, were not considered in this work. In addition, the artificial corruption methods used for this work can be further improved, e.g., by including corruption algorithms based on motion log information recorded by a tracking device, as commonly used for prospective motion correction [16, 39, 33]. However, this would require the availability of raw MR data, and the computational time to de-correct the images, considerably longer than with the current approaches, must also be taken into account. Another point to consider in the subjective assessment is the bias introduced by each expert while evaluating image quality. In this work, the expert's perception of image quality is emulated with good accuracy, but a single expert cannot be considered a standard reference.
Although the subjective assessment could be repeated with the help of several experts, there will always be differences between them, e.g., in years of experience or in sensitivity to the presence of motion artefacts in the assessed image. It is also noteworthy that the SSIM ranges defined for the three classes could be re-defined following a different scheme. In the scenario explored in this paper, the scheme was defined using the artificially corrupted images and the ground-truth images - this allowed an exact calculation of the SSIM values and made it simple to define ranges that visually agree with the scheme defined in Section 2.

5 Conclusion

This research presents an SSIM-regression-based IQA technique using ResNet models, coupled with contrast augmentation to make them robust against changes in image contrast in clinical scenarios. The method managed to predict the SSIM values of artificially motion-corrupted images, without the ground-truth (motion-free) images, with high accuracy (residual SSIMs as low as ). Moreover, the motion classes obtained from the predicted SSIMs were very close to the true ones, achieving a maximum accuracy of 89% in the ten-class scenario and 97% when the number of classes was three (Table 3). Considering the complexity of quantifying the image degradation level due to motion artefacts, together with the variability of contrast, resolution, etc., the obtained results are very promising. Further evaluations, including multiple subjective evaluations, will be performed on clinical data to judge the method's clinical applicability and robustness against changes in real-world scenarios. In addition, further trainings will be carried out with a larger variety of images, including common clinical routine acquisitions such as diffusion-weighted imaging and Time-of-Flight imaging. Furthermore, it would be beneficial to also include images acquired at lower magnetic field strengths ( T). Considering the results obtained by the ResNet models in this work, it is reasonable to expect that future work can also target different anatomical regions, focusing, for instance, on abdominal or cardiac imaging. However, the reproduction of realistic-looking motion artefacts plays a key role in the performance of deep learning models trained as reference-less image quality assessment tools.


  • [1] D. M. Allen (1971) Mean square error of prediction as a criterion for selecting variables. Technometrics 13 (3), pp. 469–475. Cited by: §2.
  • [2] T. Amr (2020) Hands-on machine learning with scikit-learn and scientific python toolkits: a practical guide to implementing supervised and unsupervised machine learning algorithms in python. Packt Publishing Ltd. Cited by: §1.
  • [3] A. S. Atukorale and T. Downs (2001) Using labeled and unlabeled data for training. Cited by: §1.
  • [4] L. L. Backhausen, M. M. Herting, J. Buse, V. Roessner, M. N. Smolka, and N. C. Vetter (2016) Quality control of structural mri images applied using freesurfer—a hands-on workflow to rate motion artifacts. Frontiers in neuroscience 10, pp. 558. Cited by: §1.
  • [5] P. Bourel, D. Gibon, E. Coste, V. Daanen, and J. Rousseau (1999) Automatic quality assessment protocol for mri equipment. Medical physics 26 (12), pp. 2693–2700. Cited by: §1.
  • [6] G. J. Braun and M. D. Fairchild (1999) Image lightness rescaling using sigmoidal contrast enhancement functions. Journal of Electronic Imaging 8 (4), pp. 380–393. Cited by: 3rd item.
  • [7] S. Chatterjee, A. Sciarra, M. Dünnwald, S. Oeltze-Jafra, A. Nürnberger, and O. Speck (2020) Retrospective motion correction of mr images using prior-assisted deep learning. arXiv preprint arXiv:2011.14134. Cited by: §1.
  • [8] L. S. Chow and R. Paramesran (2016) Review of medical image quality assessment. Biomedical signal processing and control 27, pp. 145–154. Cited by: §1.
  • [9] L. S. Chow and H. Rajagopal (2017) Modified-brisque as no reference image quality assessment for structural mr images. Magnetic resonance imaging 43, pp. 74–87. Cited by: §1.
  • [10] O. Esteban, D. Birman, M. Schaer, O. O. Koyejo, R. A. Poldrack, and K. J. Gorgolewski (2017) MRIQC: advancing the automatic prediction of image quality in mri from unseen sites. PloS one 12 (9), pp. e0184661. Cited by: §1.
  • [11] I. Fantini, L. Rittner, C. Yasuda, and R. Lotufo (2018) Automatic detection of motion artifacts on mri using deep cnn. In 2018 International Workshop on Pattern Recognition in Neuroimaging (PRNI), pp. 1–4. Cited by: §1.
  • [12] I. Fantini, C. Yasuda, M. Bento, L. Rittner, F. Cendes, and R. Lotufo (2021) Automatic mr image quality evaluation using a deep cnn: a reference-free method to rate motion artifacts in neuroimaging. Computerized Medical Imaging and Graphics 90, pp. 101897. Cited by: §1.
  • [13] A. Géron (2019) Hands-on machine learning with scikit-learn, keras, and tensorflow: concepts, tools, and techniques to build intelligent systems. O'Reilly Media, Inc. Cited by: §1.
  • [14] I. Goodfellow, Y. Bengio, and A. Courville (2016) Deep learning. MIT press. Cited by: §1.
  • [15] K. He, X. Zhang, S. Ren, and J. Sun (2016) Deep residual learning for image recognition. In 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 770–778. Cited by: §2.
  • [16] M. Herbst, J. Maclaren, C. Lovell-Smith, R. Sostheim, K. Egger, A. Harloff, J. Korvink, J. Hennig, and M. Zaitsev (2014) Reproduction of motion artifacts for performance analysis of prospective motion correction in mri. Magnetic Resonance in Medicine 71 (1), pp. 182–190. Cited by: §4.
  • [17] R. Jain, R. Kasturi, and B. Schunck (1995) Machine Vision. McGraw-Hill International Editions, New York. Cited by: 2nd item.
  • [18] P. Jezzard (2009) The physical basis of spatial distortions in magnetic resonance images. Cited by: §1.
  • [19] M. Khosravy, N. Patel, N. Gupta, and I. K. Sethi (2019) Image quality assessment: a review to full reference indexes. Recent trends in communication, computing, and electronics, pp. 279–288. Cited by: §1.
  • [20] D. P. Kingma and J. Ba (2014) Adam: a method for stochastic optimization. arXiv preprint arXiv:1412.6980. Cited by: §2.
  • [21] W. Kubinger, M. Vincze, and M. Ayromlou (1998) The role of gamma correction in colour image processing. In 9th European Signal Processing Conference (EUSIPCO 1998), pp. 1–4. Cited by: 1st item.
  • [22] T. Küstner, S. Gatidis, A. Liebgott, M. Schwartz, L. Mauch, P. Martirosian, H. Schmidt, N. F. Schwenzer, K. Nikolaou, F. Bamberg, et al. (2018) A machine-learning framework for automatic reference-free quality assessment in mri. Magnetic Resonance Imaging 53, pp. 134–147. Cited by: §1.
  • [23] T. Küstner, A. Liebgott, L. Mauch, P. Martirosian, F. Bamberg, K. Nikolaou, B. Yang, F. Schick, and S. Gatidis (2018) Automated reference-free detection of motion artifacts in magnetic resonance images. Magnetic Resonance Materials in Physics, Biology and Medicine 31 (2), pp. 243–256. Cited by: §1.
  • [24] P. Langley et al. (2011) The changing science of machine learning. Machine learning 82 (3), pp. 275–279. Cited by: §1.
  • [25] Y. Li (2022) Research and application of deep learning in image recognition. In 2022 IEEE 2nd International Conference on Power, Electronics and Computer Applications (ICPECA), pp. 994–999. Cited by: §1.
  • [26] J. J. Ma, U. Nakarmi, C. Y. S. Kin, C. M. Sandino, J. Y. Cheng, A. B. Syed, P. Wei, J. M. Pauly, and S. S. Vasanawala (2020) Diagnostic image quality assessment and classification in medical imaging: opportunities and challenges. In 2020 IEEE 17th International Symposium on Biomedical Imaging (ISBI), pp. 337–340. Cited by: §1.
  • [27] B. Mortamet, M. A. Bernstein, C. R. Jack Jr, J. L. Gunter, C. Ward, P. J. Britson, R. Meuli, J. Thiran, and G. Krueger (2009) Automatic quality assessment in structural brain magnetic resonance imaging. Magnetic Resonance in Medicine 62 (2), pp. 365–372. Cited by: §1.
  • [28] A. Paszke, S. Gross, F. Massa, A. Lerer, J. Bradbury, G. Chanan, T. Killeen, Z. Lin, N. Gimelshein, L. Antiga, A. Desmaison, A. Kopf, E. Yang, Z. DeVito, M. Raison, A. Tejani, S. Chilamkurthy, B. Steiner, L. Fang, J. Bai, and S. Chintala (2019) PyTorch: an imperative style, high-performance deep learning library. In Advances in Neural Information Processing Systems 32, pp. 8024–8035. External Links: Link Cited by: §2.
  • [29] F. Pedregosa, G. Varoquaux, A. Gramfort, V. Michel, B. Thirion, O. Grisel, M. Blondel, P. Prettenhofer, R. Weiss, V. Dubourg, J. Vanderplas, A. Passos, D. Cournapeau, M. Brucher, M. Perrot, and E. Duchesnay (2011) Scikit-learn: machine learning in Python. Journal of Machine Learning Research 12, pp. 2825–2830. Cited by: Table 3.
  • [30] F. Pérez-García, R. Sparks, and S. Ourselin (2021) TorchIO: a python library for efficient loading, preprocessing, augmentation and patch-based sampling of medical images in deep learning. Computer Methods and Programs in Biomedicine 208, pp. 106236. Cited by: §1, 1st item.
  • [31] S. M. Pizer, E. P. Amburn, J. D. Austin, R. Cromartie, A. Geselowitz, T. Greer, B. ter Haar Romeny, J. B. Zimmerman, and K. Zuiderveld (1987) Adaptive histogram equalization and its variations. Computer Vision, Graphics, and Image Processing 39 (3), pp. 355–368. Cited by: 4th item.
  • [32] W. Rawat and Z. Wang (2017) Deep convolutional neural networks for image classification: a comprehensive review. Neural Computation 29 (9), pp. 2352–2449. Cited by: §1.
  • [33] A. Sciarra, H. Mattern, R. Yakupov, S. Chatterjee, D. Stucht, S. Oeltze-Jafra, F. Godenschweger, and O. Speck (2022) Quantitative evaluation of prospective motion correction in healthy subjects at 7T MRI. Magnetic Resonance in Medicine 87 (2), pp. 646–657. Cited by: §4.
  • [34] R. Shaw, C. Sudre, S. Ourselin, and M. J. Cardoso (2018) MRI k-space motion artefact augmentation: model robustness and task-specific uncertainty. In International Conference on Medical Imaging with Deep Learning–Full Paper Track, Cited by: §1, 1st item.
  • [35] V. E. Staartjes and J. M. Kernbach (2022) Foundations of machine learning-based clinical prediction modeling: part v—a practical approach to regression problems. In Machine Learning in Clinical Neuroscience, pp. 43–50. Cited by: §1.
  • [36] S. J. Sujit, R. E. Gabr, I. Coronado, M. Robinson, S. Datta, and P. A. Narayana (2018) Automated image quality evaluation of structural brain magnetic resonance images using deep convolutional neural networks. In 2018 9th Cairo International Biomedical Engineering Conference (CIBEC), pp. 33–36. Cited by: §1.
  • [37] P. Virtanen, R. Gommers, T. E. Oliphant, M. Haberland, T. Reddy, D. Cournapeau, E. Burovski, P. Peterson, W. Weckesser, J. Bright, S. J. van der Walt, M. Brett, J. Wilson, K. J. Millman, N. Mayorov, A. R. J. Nelson, E. Jones, R. Kern, E. Larson, C. J. Carey, İ. Polat, Y. Feng, E. W. Moore, J. VanderPlas, D. Laxalde, J. Perktold, R. Cimrman, I. Henriksen, E. A. Quintero, C. R. Harris, A. M. Archibald, A. H. Ribeiro, F. Pedregosa, P. van Mulbregt, and SciPy 1.0 Contributors (2020) SciPy 1.0: Fundamental Algorithms for Scientific Computing in Python. Nature Methods 17, pp. 261–272. External Links: Document Cited by: §3.
  • [38] Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli (2004) Image quality assessment: from error visibility to structural similarity. IEEE Transactions on Image Processing 13 (4), pp. 600–612. Cited by: §1.
  • [39] B. Zahneisen, B. Keating, A. Singh, M. Herbst, and T. Ernst (2016) Reverse retrospective motion correction. Magnetic Resonance in Medicine 75 (6), pp. 2341–2349. Cited by: §4.