
# Localized adversarial artifacts for compressed sensing MRI

As interest in deep neural networks (DNNs) for image reconstruction tasks grows, their reliability has been called into question (Antun et al., 2020; Gottschling et al., 2020). However, recent work has shown that, compared to total variation (TV) minimization, they exhibit similar robustness to adversarial noise in terms of ℓ^2-reconstruction error (Genzel et al., 2022). We consider a different notion of robustness, using the ℓ^∞-norm, and argue that localized reconstruction artifacts are a more relevant defect than the ℓ^2-error. We create adversarial perturbations to undersampled MRI measurements which induce severe localized artifacts in the TV-regularized reconstruction. The same attack method is not as effective against DNN-based reconstruction. Finally, we show that this phenomenon is inherent to reconstruction methods for which exact recovery can be guaranteed, as is the case for compressed sensing reconstructions with ℓ^1- or TV-minimization.


## 1 Introduction

Following the success of deep learning in computer vision, deep neural networks (DNNs) have now found their way to a wide range of imaging inverse problems [mccann2017review, AMO19, ongie2020deep]. In some applications, learning the distribution of images from data is the only option. In others, existing methods based on hand-crafted priors are well established. Magnetic resonance imaging (MRI) reconstruction, for which sparsity-based methods have been highly successful, is an example of the latter [lu2008csmri]. However, recent work suggests that image quality can be improved and computation times shortened significantly by the use of DNNs in MRI reconstruction [chen2022ai].

At the same time, it is well known that DNNs trained for image classification admit so-called adversarial examples – images that have been altered in minor but very specific ways to change the label predicted by the network [biggio2013evasion, szegedy2013intriguing]. In [antun2020instabilities], it was discovered that DNNs used in inverse problems (MRI and CT) exhibit similar behaviour. Namely, the authors show that perturbing the measurements slightly can lead to undesirable artifacts in the image reconstructed by the network, and that the same perturbations do not cause problems for state-of-the-art compressed sensing methods. On the other hand, [genzel2020solving] shows quantitatively that DNNs can be made robust, to a level comparable with total variation (TV) minimization, by injecting statistical noise into the measurement data during training. Here, robustness is measured by the mean relative reconstruction error as a function of the relative (adversarial or statistical) noise level, where both are defined by the ℓ^2-norm. Although DNNs and TV-regularized reconstruction respond similarly to adversarial perturbations from this quantitative perspective, the reconstruction artifacts are qualitatively very different. TV minimization suffers from global degradation of image quality (due to staircasing effects), while DNNs tend to introduce new meaningful features to the image. The latter type of artifact is arguably worse, but it is only severe at relatively high noise levels, and the introduced features can already be seen embedded in the adversarial noise and are therefore not created by the network.

In the current work, we focus on the two-dimensional compressed sensing MRI problem [lu2008csmri] and aim to create adversarial perturbations for TV-regularization that result in more localized reconstruction artifacts, as displayed in Figure 1. To this end, we simply modify the attack method used in [genzel2020solving] by replacing the loss function with a weighted seminorm, with a weight vector supported on a targeted location. Several locations are targeted, and the one on which the largest artifact appears is selected. Our experimental results show that such localized adversarial perturbations for TV-regularization do exist, and that their effects on DNNs are milder. The resulting artifacts often manifest as isolated spikes in the image, and in the penultimate section we provide a mathematical justification for the appearance of these artifacts, based on the theory of compressed sensing. Curiously enough, the same positive compressed sensing results on undersampled MRI that guarantee exact recovery of sparse signals have a negative counterpart regarding the appearance of sparse artifacts. While the analysis we provide is in the context of MRI, the reasons behind this phenomenon are more general: they stem mainly from the fact that the forward operator in undersampled MRI has a nontrivial kernel (see also [gottschling2020troublesome]), as is the case in many inverse imaging problems.

We note that in regularized reconstruction for imaging applications, the standard benchmark for stability is, and always has been, a quantitative one, described by an estimate in some appropriately chosen norm. At the same time, it is interesting to see the influence of adversarial attacks research on DNN-based image reconstruction. It is a valid point that suitable attacks can lead to artifacts that are qualitatively relevant: after all, the objects of interest are images. To us, this also opens a discussion about benchmarks for classical regularization algorithms. One might argue that in that case, too, qualitative benchmarks (which are of course harder to define) should play a relevant role.

The paper is structured as follows: In Section 2 we introduce the compressed sensing MRI model, in Section 3 we formulate our adversarial attack method, in Section 4 we present the results of our numerical experiments, in Section 5 we give a mathematical explanation for our observations, and in Section 6 we make concluding remarks.

The code for the experimental part of the paper builds on that of [genzel2020solving], and is available at https://gitlab.math.ethz.ch/tandrig/localadvmri.

## 2 Compressed sensing MRI

The goal of magnetic resonance imaging (MRI) is to recover an object’s density from its Fourier coefficients. In the fully sampled case, this problem is readily solved by applying the inverse Fourier transform. However, acquisition times can be reduced significantly by undersampling along non-Cartesian trajectories in the frequency domain. This leads to an underdetermined linear system, and it is clear that some additional assumptions need to be made on the object density in order to get a good reconstruction [lu2008csmri].

Let us model the cross section of an object by an image x∈Cn×n of resolution n×n for some n∈N. Let F:Cn×n→Cn×n be the two-dimensional discrete Fourier transform, and let PΩ be a projection onto a set of indices Ω, with |Ω|=m. The forward operator of the subsampled MRI problem is A=PΩF, and the measured values y∈Cm corresponding to an image x are

 y=Ax+ϵ, (1)

where ϵ∈Cm is zero-mean random noise which we assume is bounded in ℓ^2-norm by some constant η>0. In order to recover x from the measurement vector y, we must search for an image z with

 ∥Az−y∥2≤η.

Since m<n², the equation Az=y does not have a unique solution. The pseudoinverse A†y certainly provides a solution, but the resulting image may be of low quality or exhibit severe aliasing artifacts (depending on Ω), and need not represent the desired object’s density at all (see Figure 2). Instead, we must impose some conditions on z based on our a priori knowledge of x.
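As a concrete illustration (our sketch, not the paper’s code), the forward operator A=PΩF and the zero-filling pseudoinverse can be written in a few lines of NumPy; the random Bernoulli mask and the image size below are hypothetical stand-ins for the radial-line masks used in the experiments.

```python
import numpy as np

def forward(x, mask):
    """A x = P_Omega F x: unitary 2D FFT followed by sampling on the mask."""
    return np.fft.fft2(x, norm="ortho")[mask]

def pseudoinverse(y, mask, shape):
    """A^dagger y: zero-fill the unobserved frequencies, then invert the FFT."""
    k = np.zeros(shape, dtype=complex)
    k[mask] = y
    return np.fft.ifft2(k, norm="ortho")

rng = np.random.default_rng(0)
n = 32
x = rng.standard_normal((n, n))
mask = rng.random((n, n)) < 0.3          # illustrative mask: keep ~30% of coefficients
y = forward(x, mask)
x_zf = pseudoinverse(y, mask, (n, n))    # zero-filled reconstruction A^dagger y
```

Since the rows of A are orthonormal here, A A†y = y holds exactly, but x_zf generally differs from x: the missing frequencies are simply set to zero, which is what produces the aliasing artifacts mentioned above.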

Compressed sensing (CS) refers to the approach of favoring solutions that are sparse under some given transform Ψ. Under certain conditions, the sparsest solution is also the one with the smallest ℓ^1-norm ∥Ψz∥1 [foucart2013invitation], and hence we are left with the following convex optimization problem:

 minz∈Cn×n∥Ψz∥1 subject to ∥Az−y∥2≤η. (2)

Several choices of the transform Ψ can be found in the CS-MRI literature, including the identity, the wavelet transform, and the image gradient [lu2008csmri]. In this work, we focus on the last of these choices. More precisely, we choose Ψ=∇, where ∇ is the two-dimensional finite difference operator on Cn×n with periodic boundary conditions. The quantity ∥∇z∥1 is known as the total variation (TV) of the image z. Minimization of the total variation promotes sparse gradients, and hence piecewise constant solutions. For computational efficiency, we solve the unconstrained formulation of (2):

 recTV(y;η)=argminz∈Cn×n∥Az−y∥22+λη∥∇z∥1, (3)

where λη>0 is a regularization parameter. An appropriate choice of λη ensures that recTV(y;η) is a solution to (2) [foucart2013invitation].
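The paper solves (3) with ADMM (see Section 4). As a self-contained toy stand-in, one can minimize a smoothed surrogate of the same objective, replacing |·| in the TV term by √(|·|²+ε²) and running plain gradient descent. All parameter values here are illustrative, not the tuned values of the experiments.

```python
import numpy as np

def tv_grad(z):
    """Periodic forward differences along both axes, stacked: shape (2, n, n)."""
    return np.stack([np.roll(z, -1, axis=0) - z, np.roll(z, -1, axis=1) - z])

def tv_grad_adj(p):
    """Adjoint of tv_grad (a negative periodic divergence)."""
    return (np.roll(p[0], 1, axis=0) - p[0]) + (np.roll(p[1], 1, axis=1) - p[1])

def objective(z, y, mask, lam=0.05, eps=0.1):
    """Smoothed version of (3): ||Az - y||_2^2 + lam * sum sqrt(|grad z|^2 + eps^2)."""
    Az = np.fft.fft2(z, norm="ortho")[mask]
    d = tv_grad(z)
    return np.sum(np.abs(Az - y) ** 2) + lam * np.sum(np.sqrt(np.abs(d) ** 2 + eps ** 2))

def rec_tv_smoothed(y, mask, shape, lam=0.05, eps=0.1, steps=400, lr=0.2):
    """Gradient descent on the smoothed objective, starting from z = 0."""
    z = np.zeros(shape, dtype=complex)
    for _ in range(steps):
        resid = np.zeros(shape, dtype=complex)
        resid[mask] = np.fft.fft2(z, norm="ortho")[mask] - y
        g = 2 * np.fft.ifft2(resid, norm="ortho")            # gradient of the data term
        d = tv_grad(z)
        g = g + lam * tv_grad_adj(d / np.sqrt(np.abs(d) ** 2 + eps ** 2))
        z = z - lr * g
    return z

# toy example: piecewise-constant phantom, ~60% random sampling
rng = np.random.default_rng(1)
n = 16
mask = rng.random((n, n)) < 0.6
x = np.zeros((n, n))
x[4:12, 4:12] = 1.0
y = np.fft.fft2(x, norm="ortho")[mask]
z_rec = rec_tv_smoothed(y, mask, (n, n))
```

With the step size chosen below the inverse smoothness constant (roughly 2 + 8λ/ε here), the iteration decreases the objective monotonically; for small ε the minimizer approaches that of (3).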

### 2.1 Deep neural networks as an alternative to CS

Another approach to the MRI problem is to learn the inverse mapping from fully sampled data, i.e. to replace recTV by a neural network. Several different strategies appear in the literature. Fully learned networks learn the entire inversion without any knowledge of the forward model at all [zhu2018image], while others apply a linear operator, such as the pseudoinverse A†, as a first layer, so that only the post-processing step is learned [wang2016accelerating]. The forward model can also be incorporated at several stages in the network, as in networks that are based on unrolled iterative optimization algorithms [sun2016deep].

For our experiments, we use two networks from [genzel2020solving], both based on the fully convolutional Tiramisu architecture [jegou2017one]. The difference between the two lies in that one is a fully learned network mapping measurements directly to images, while the other applies the pseudoinverse to the measurements first. Both networks are trained using the mean squared error as a loss function, and Gaussian noise is added to the input as a means of regularization.

## 3 Adversarial perturbations

The study of adversarial examples originates in image classification. In that context, given an image and a classifier, an adversarial perturbation is one that is imperceptible when added to the image but changes the output of the classifier. The perturbed image is called an adversarial example. The imperceptibility of the perturbation is difficult to define, so this requirement is commonly replaced by a bound on the norm of the perturbation. A popular choice is the ℓ^∞-norm [goodfellow2014explaining, madry2017towards], since if each pixel in the image is changed only by a small value, then one can be sure that the semantic meaning of the image stays the same. In contrast, the meaning of an image may be changed (say, a handwritten 1 to a 7) by introducing a localized perturbation with a small ℓ^2-norm. It should be noted that there exist other adversarial image transformations that are not small in any ℓ^p-norm, such as rotations, translations, or smooth deformations [alaifari2019adef].

Two difficulties arise when adapting the notion of adversarial examples to MRI. Firstly, since measurements are in the frequency domain, the imperceptibility of perturbations is not very meaningful. Secondly, since the output of the reconstruction method is a continuous variable (as opposed to discrete labels in classification), a notion of severity is needed to quantify the effect of an adversarial perturbation.

It is natural to tackle the first problem by referring to the noise model, i.e. a perturbation e is “imperceptible” if it is small in ℓ^2-norm, as is done in both [antun2020instabilities] and [genzel2020solving]. For the second problem, [antun2020instabilities] inspects the visual quality of the reconstructed images, while [genzel2020solving] uses the ℓ^2-error for quantitative analysis of stability to perturbations. Thus, for given measurements y and noise level η, [genzel2020solving] defines an adversarial perturbation e by

 e=argmaxe∈Cm∥recTV(y+e;η)−recTV(y;η)∥2 such that ∥e∥2≤η. (4)

(and in fact, [antun2020instabilities] solves a similar unconstrained optimization problem). We refer to

 ρ=recTV(y+e;η)−recTV(y;η) (5)

as the reconstruction artifact induced by the perturbation e.

In this work, we argue that the reconstruction ℓ^2-error does not sufficiently capture the most harmful reconstruction artifacts. In the medical setting, a misdiagnosis might be based on a localized anomaly in the image rather than an overall poor quality of reconstruction (see Figure 1). Thus we aim to create perturbations that cause localized reconstruction artifacts. We replace the Euclidean norm in the objective of (4) with a weighted seminorm:

 e=argmaxe∈Cm∥ϕ⊙(recTV(y+e;η)−recTV(y;η))∥2 such that ∥e∥2≤η, (6)

where ⊙ denotes the entrywise product and ϕ∈Rn×n is a weight vector. To promote localized artifacts, we select a weight vector that is the (discrete) indicator function of a disk of radius σ>0, centered at μ=(μ1,μ2). In other words, we let ϕ=ϕμ,σ with

 ϕμ,σij={1if (i−μ1)2+(j−μ2)2≤σ2,0otherwise,

for 0≤i,j≤n−1.
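The weight vector is straightforward to construct; a brief NumPy sketch (sizes are illustrative):

```python
import numpy as np

def disk_weight(n, mu, sigma):
    """Indicator of the disk of radius sigma centered at mu, as an n-by-n array."""
    i, j = np.ogrid[:n, :n]
    return ((i - mu[0]) ** 2 + (j - mu[1]) ** 2 <= sigma ** 2).astype(float)

phi = disk_weight(64, (32, 32), 3.0)   # weight supported on a small central disk
```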

Although we target a specific location in the image, we are interested in any large artifacts that may appear in the reconstruction. Therefore, we judge the severity of the reconstruction artifact by its ℓ^∞-norm. We solve (6) for all locations μ on a regular grid, and select the perturbation eμ that maximizes

 ∥recTV(y+eμ;η)−recTV(y;η)∥∞

as our adversarial perturbation. The radius σ is fixed at a small value relative to n. Note that for a large enough radius, (6) is equivalent to (4).
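The paper’s attack optimizes (6) through the nonlinear map recTV. As a simplified illustration of the structure of the problem, note that if the reconstruction map is linear (e.g. the pseudoinverse), the maximizer of (6) is the leading right singular vector of the weighted operator e ↦ ϕ⊙A†e, scaled to norm η, which power iteration delivers. Everything below (the 1D mask, interval weight, and sizes) is an illustrative toy, not the experimental setup.

```python
import numpy as np

def attack_linear(apply_R, apply_Rt, m, eta, iters=300, seed=0):
    """Power iteration on R*R: returns the perturbation of l2-norm eta that
    maximizes ||R e||_2, for a linear weighted reconstruction map R."""
    rng = np.random.default_rng(seed)
    e = rng.standard_normal(m) + 1j * rng.standard_normal(m)
    for _ in range(iters):
        e = apply_Rt(apply_R(e))
        e = e / np.linalg.norm(e)
    return eta * e

# 1D toy problem: random frequency mask, interval ("disk") weight at the center
n = 64
rng = np.random.default_rng(0)
idx = np.sort(rng.choice(n, size=n // 2, replace=False))
phi = (np.abs(np.arange(n) - n // 2) <= 4).astype(float)

def pinv(e):                     # A^dagger e: zero-fill, then inverse FFT
    k = np.zeros(n, dtype=complex)
    k[idx] = e
    return np.fft.ifft(k, norm="ortho")

def apply_R(e):                  # R e = phi ⊙ A^dagger e
    return phi * pinv(e)

def apply_Rt(v):                 # adjoint: R* v = A (phi ⊙ v), since A^dagger = A*
    return np.fft.fft(phi * v, norm="ortho")[idx]

e_adv = attack_linear(apply_R, apply_Rt, len(idx), eta=1.0)
```

The returned perturbation attains (numerically) the largest singular value of R within the ℓ^2-budget η; the genuinely nonlinear case requires gradient-based optimization instead.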

### 3.1 Visualizing perturbations

While we search for reconstruction artifacts by perturbing the measurement vector y, it is important to consider whether the artifact is in some sense already encoded in the perturbed measurements y+e, rather than “invented” by the reconstruction method. Indeed, in some cases shown in [genzel2020solving] (see for example Fig. 7 therein), the same perturbation induces similar artifacts for both TV-regularized reconstruction and DNN-based reconstruction, indicating that these artifacts are present in the perturbation itself and are not created in the process of reconstruction.

To see if the perturbation encodes an artifact in this way, we visualize it by an image perturbation r such that Ar=e. We select r=A†e, as this perturbation is orthogonal to the kernel of A, which means that it has minimal ℓ^2-norm among all solutions of Ar=e. Then we can compare the perturbed image x+r with the reconstruction recTV(y+e;η), both visually and in terms of the ℓ^∞-norm.
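In code (toy mask and sizes, as before), the identity A(x+r)=y+e is immediate, and since the rows of A are orthonormal, the image perturbation also preserves the ℓ^2-norm of e:

```python
import numpy as np

n = 32
rng = np.random.default_rng(0)
mask = rng.random((n, n)) < 0.3
m = int(mask.sum())
x = rng.standard_normal((n, n))
e = rng.standard_normal(m) + 1j * rng.standard_normal(m)   # a measurement perturbation

def A(z):
    return np.fft.fft2(z, norm="ortho")[mask]

def A_dag(w):
    """Minimum-norm preimage: zero-fill the spectrum, then invert the FFT."""
    k = np.zeros((n, n), dtype=complex)
    k[mask] = w
    return np.fft.ifft2(k, norm="ortho")

r = A_dag(e)   # image-domain visualization of the perturbation: A(x + r) = Ax + e
```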

## 4 Results

We demonstrate the strategy described above by creating adversarial perturbations for synthetically generated images of phantom ellipses (as in [genzel2020solving]) with pixel values ranging from 0 to 1. To generate the measurement vectors, we apply the MRI forward operator with a sampling mask defined by 25, 40, or 80 lines through the origin (see Figure 2), corresponding to increasing fractions of the sampled Fourier coefficients. We train the two neural networks only on the data sampled on 40 lines. Moreover, all the figures in this paper are based on measurements using that same mask.

The reconstruction map recTV in (3) is realized by the alternating direction method of multipliers (ADMM) [boyd2011distributed], and the implementation is taken from [genzel2020solving]. For each noise level η, the parameter λη is selected by grid search (along with the one free parameter of ADMM), averaged over 50 samples of the ellipse data.

We create adversarial perturbations for 50 ellipse images according to (6) over a range of relative noise levels ∥e∥2/∥y∥2. For each noise level, and each image, we search for the best position μ of the localization weight vector ϕμ,σ on a regular grid. The radius σ of the localization disk is fixed at a small value relative to the image size. Experimentation showed that similar reconstruction artifacts appear for other values of σ, as long as it is small. For larger σ, the artifacts are not well localized and resemble those of [genzel2020solving].

Figure 3 shows an example image x, a perturbed image x+r (where r is a visual representation of the perturbation e), and the reconstruction from perturbed measurements recTV(y+e;η). The output of the reconstruction method is a vector in Cn×n, which we visualize by taking the entrywise modulus. We now list some observations based on this quantitative experiment and visual inspection of examples.

TV-regularized reconstruction amplifies artifacts already present in the perturbation: We observe that the adversarial perturbations indeed induce localized reconstruction artifacts in the form of a spike. A spike is also noticeably present in the image perturbation r itself. The image perturbation is therefore not imperceptible in the ℓ^∞-sense, except at very low noise levels. However, in ℓ^∞-norm, the reconstruction artifact ρ from (5) is much larger than the perturbation. We consider the amplification factor

 α=∥ρ∥∞/∥r∥∞

to quantify this effect. In Figure 4, we observe a large α across noise levels, but visually, this phenomenon is especially conspicuous at higher noise levels.

The amplification factor is roughly equal to the subsampling factor: Table 1 shows the average amplification factor of the 50 ellipse samples for each noise level. Although the amplification factor varies more at low noise levels, for TV-regularization its average does not seem to depend on the noise level. There is, however, a clear dependence on the number of coefficients m in the measurements. In fact, for reasons that will become clear in the next section, the average amplification factor is approximately the subsampling factor n²/m associated with each of the 25, 40, and 80 line sampling masks.

The amplification factor is smaller for DNNs: Applying the attack strategy (6) to the two neural networks does produce reconstruction artifacts. However, except at low noise levels, the attack is far less effective than for recTV. Figure 5 shows a typical artifact created for a DNN, which has a much lower ℓ^∞-norm than those created for recTV.

Perturbations made for DNNs transfer to recTV, but not vice versa: In Figure 6, we see that a perturbation created for a DNN can transfer to all three reconstruction methods, with recTV showing the most severe artifact. On the other hand, using the DNNs to reconstruct from y+e, where e is a perturbation created for TV-regularized reconstruction, produces images of good quality with no visible artifacts (Figure 7). In fact, the peak of the image perturbation is dampened by the DNN reconstruction.

## 5 Explaining localized artifacts

Many results exist that guarantee exact recovery of sparse vectors from partial Fourier measurements (1), depending on properties of the underlying signal, the sampling mask Ω, and the reconstruction method. The reconstruction artifacts seen in Figures 3 and 4 consist mainly of a single pixel spike. We now show how such exactness guarantees can imply the existence of perturbations of low ℓ^2-norm which give rise to precisely that type of artifact.

For simplicity, we consider the noiseless one-dimensional setting. Let n be an even integer and let Fn denote the one-dimensional discrete Fourier transform,

 (Fnz)k=(1/√n)∑j=0n−1 zj e−2πikj/n, 0≤k≤n−1.

Let Ωn⊆{0,…,n−1} be a set of indices, and define the operator An=PΩnFn, where PΩn is the projection onto the index set Ωn. Consider a signal xn∈Cn with measurements yn=An(xn). We wish to perturb the measurement vector such that a spike appears in the reconstruction. Without loss of generality, let δn=(1,0,…,0)∈Cn be our desired single spike artifact, and simply define the perturbation

 en=An(δn)=(1/√n)k∈Ωn

(which, incidentally, is an ℓ^∞-minimal perturbation given an ℓ^2-budget). The corresponding “image perturbation” is

 rn=A†n(en)=((1/n)∑k∈Ωn e2πikj/n)j=0n−1.

If we can guarantee the exact recovery of xn from yn, and of xn+δn from its measurements yn+en (see Theorems 5.1 and 5.2 below),

 An(xn+δn)=yn+en=An(xn+rn),

then a reconstruction artifact δn is created, with amplification factor αn=∥δn∥∞/∥rn∥∞. The ℓ^∞-norm of rn can easily be bounded from above:

 ∥rn∥∞≤(1/n)∑k∈Ωn1=mn/n,

where mn=|Ωn|, so that

 αn≥n/mn.

In other words, the amplification factor is at least as big as the subsampling factor, which is in accordance with Table 1. If the size mn of the sampling mask grows slowly with the dimension n, then the amplification factor can be made arbitrarily large simply by increasing the resolution. In fact, since

 ∥rn∥2=∥en∥2=√(mn/n), ∥δn∥2=1,

the same is true for the amplification in the ℓ^2-norm, although the growth is at a slower rate.
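This construction is easy to check numerically. With the unitary FFT convention above (random Ωn and illustrative sizes), the spike perturbation attains its maximum at j=0, so the ℓ^∞-amplification factor comes out exactly n/mn here:

```python
import numpy as np

n, m = 256, 32
rng = np.random.default_rng(0)
omega = np.sort(rng.choice(n, size=m, replace=False))   # random sampling mask Omega_n

delta = np.zeros(n)
delta[0] = 1.0                                          # desired single-spike artifact
e = np.fft.fft(delta, norm="ortho")[omega]              # e_n = A_n(delta_n)

k = np.zeros(n, dtype=complex)
k[omega] = e
r = np.fft.ifft(k, norm="ortho")                        # r_n = A_n^dagger(e_n)

alpha = np.abs(delta).max() / np.abs(r).max()           # l-infinity amplification factor
```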

For demonstration purposes, we now cite a known recovery guarantee for ℓ^1-minimization, which can then be translated to results on TV-minimization by a simple argument. The number of measurements required depends on the sparsity of the signal, which we shall assume does not vary with the dimension. This is true if xn is a finite spike train, and thus suitable for ℓ^1-minimization. We employ TV-minimization if xn is a piecewise constant function, in which case the sparsity of ∇xn is constant with respect to n. Moreover, if xn is s-sparse, then xn+δn is (s+1)-sparse. Similarly, if ∇xn is s-sparse, then ∇(xn+δn) is (s+2)-sparse (where ∇ is understood as the periodic finite difference operator). Hence, exactness results hold for xn as well as xn+δn.

###### Theorem 5.1 ([candes2011probabilistic, Theorem 1.1] as stated in [foucart2013invitation, Theorem 12.20]).

Let x∈Cn be s-sparse, ε>0, and suppose that the mn indices of Ωn are chosen uniformly at random from {0,…,n−1}. Then, there exists a constant C>0 such that if

 mn≥Cslog(n)log(ε−1),

then x is the unique solution to

 minz∈Cn∥z∥1 subject to Anz=Anx,

with probability at least 1−ε.

Keeping ε at a small but constant value, and using the minimal number of measurements, means that mn≍log(n). Applying Theorem 5.1 to xn+δn, we see that ℓ^1-minimization combined with this uniformly random sampling scheme leads to an amplification of

 αn≳n/log(n)→∞ as n→∞,

with high probability. In the same way, the corresponding TV-minimization result implies that TV-minimization for gradient sparse signals also leads to an amplification of order n/log(n).

###### Theorem 5.2.

Let x∈Cn be such that ∇x is s-sparse, ε>0, and suppose that Ωn={0}∪Ω′n, where Ω′n contains mn indices chosen uniformly at random from {0,…,n−1}. Then, there exists a constant C>0 such that if

 mn≥Cslog(n)log(ε−1),

then x is the unique solution to

 minz∈Cn∥∇z∥1 subject to Anz=Anx,

with probability at least 1−ε.

###### Proof.

(The following argument can be used to translate results on recovery from Fourier measurements by ℓ^1-minimization to TV-minimization, and appears, for example, in [candes2006robust].) First, note that for any z∈Cn and 0≤k≤n−1, we have

 (Fn(∇z))k =(1/√n)∑j=0n−1(zj−z(j−1)modn)e−2πikj/n
 =(1/√n)∑j=0n−1 zj e−2πikj/n −(1/√n)∑j=0n−1 zj e−2πik(j+1)/n
 =(1−e−2πik/n)(Fnz)k,

and therefore

 Anz=Anx⟺An(∇z)=An(∇x) and 1⋅z=1⋅x

(where 1=(1,…,1)∈Cn and the condition 1⋅z=1⋅x comes from the assumption that 0∈Ωn).

By successively loosening the constraints, we see that

 minz∈Cn∥∇z∥1 subject to Anz=Anx
 = minz∈Cn∥∇z∥1 subject to An(∇z)=An(∇x) and 1⋅z=1⋅x
 ≥ minz∈Cn∥∇z∥1 subject to An(∇z)=An(∇x)
 ≥ minz∈Cn∥z∥1 subject to Anz=An(∇x).

By Theorem 5.1, ∇x is the unique solution to the last of these minimization problems with high probability, and therefore x also solves the first one. Uniqueness follows from the fact that the only z which satisfies both ∇z=∇x and 1⋅z=1⋅x is z=x. ∎
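The Fourier-domain identity at the heart of the proof can be sanity-checked numerically with the unitary FFT convention (sizes are illustrative):

```python
import numpy as np

n = 64
rng = np.random.default_rng(0)
z = rng.standard_normal(n) + 1j * rng.standard_normal(n)

grad_z = z - np.roll(z, 1)        # (grad z)_j = z_j - z_{(j-1) mod n}, periodic
lhs = np.fft.fft(grad_z, norm="ortho")
k = np.arange(n)
rhs = (1 - np.exp(-2j * np.pi * k / n)) * np.fft.fft(z, norm="ortho")
```

The two sides agree to machine precision, and at k=0 the factor 1−e^{−2πik/n} vanishes, which is exactly why the DC measurement must be handled by the separate condition 1⋅z=1⋅x.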

Of course, other sampling schemes exist, and if additional information on the signal is utilized, this result can be improved upon. For example, if the minimal distance between two non-zero entries of x is sufficiently large relative to n, then ℓ^1-minimization recovers x exactly as long as the lowest frequencies are included in the sampling mask [candes2014towards]. Likewise, if x is non-negative and s-sparse, then it suffices to sample the frequencies {0,…,2s} [de2012exact]. In these cases mn is constant, and therefore αn≳n. However, we cannot apply these specialized results to our example with TV-minimization, since ∇(xn+δn) is neither non-negative nor well separated. It is also worth mentioning the main result of [poon2015role] on TV-minimization, which uses a random sampling mask that concentrates on the lower frequencies.

The single pixel reconstruction artifact may not be considered a meaningful feature, and can be disregarded by a practitioner. However, the artifacts seen in the experiments are more diverse, and they act as a proof of concept for localized adversarial artifacts for TV-regularization. Other criteria for artifact severity may lead to different results. The spike is only an idealization of the observed artifacts, one which maximizes the ℓ^∞-norm within a fixed ℓ^2-budget for the perturbation. A similar argument can be made for the existence of different types of sparse artifacts, but with a lower ℓ^∞-amplification factor. For any artifact ρn, the corresponding image perturbation rn=A†n(An(ρn)) can be bounded by the triangle inequality:

 ∥rn∥∞≤(mn/n)∥ρn∥1,

so that an amplification of

 αn≥(∥ρn∥∞/∥ρn∥1)⋅(n/mn)

can be attained. Depending on the sparsity properties of ρn, exact recovery of xn+ρn can be guaranteed by the theorems above. For example, if ρn is a discrete rectangular function, then ∇ρn is 2-sparse and Theorem 5.2 can be applied.

## 6 Conclusion

In this work, we have studied localized adversarial perturbations in the context of MRI, both with TV-based and DNN-based reconstruction methods. In the case of TV-regularization, we have been successful in creating such localized reconstruction artifacts according to our criterion of large amplification of ℓ^∞-norms. Only at low noise levels does the same hold for the DNNs considered. We analysed the artifacts arising with TV, and showed that they are inherent to the compressed sensing MRI problem. While exact recovery of sparse signals from incomplete Fourier data is generally considered a positive result, we offer the perspective that it also guarantees the existence of adversarial perturbations. Keeping the ℓ^2-norm of the image perturbation constant, the ℓ^∞-norm of the resulting reconstruction artifact is proportional to the subsampling factor. Thus the effect becomes more pronounced in high dimensions, where proportionally fewer measurements may be used.

The results presented in this paper suggest that the ℓ^2-norm of the reconstruction error may not be a faithful measure of the severity of an artifact, since localized artifacts have a very small ℓ^2-norm but a large ℓ^∞-norm, thereby yielding perceptible defects in the image. It is natural to wonder whether it is possible to design a measure of the perturbations that is both quantitatively computable and qualitatively meaningful. Furthermore, it would be interesting to generalize the experimental results of Section 4 and the mathematical insights of Section 5 to different kinds of artifacts and to more general inverse problems.

## Acknowledgments

This material is based upon work supported by the Air Force Office of Scientific Research under award number FA8655-20-1-7027. GSA is a member of the “Gruppo Nazionale per l’Analisi Matematica, la Probabilità e le loro Applicazioni”, of the “Istituto Nazionale di Alta Matematica”.