1 Introduction
MRI is a widely used imaging modality in routine clinical practice owing to its noninvasive, nonionizing nature and excellent soft-tissue contrast. However, it has traditionally been limited by slow data acquisition in many applications, and reducing imaging time has become a research hotspot. In 2006, a high-profile method termed Compressed Sensing (CS) was proposed (Candes2006Robust; Donoho2006Compressed), which shows theoretically that sparse signals can be exactly recovered from incomplete measurements under certain conditions. Leveraging the sparsity of MR images under certain transforms or dictionaries, CS has been successfully applied to accelerated MRI (CS-MRI) (Lustig2007Sparse). However, further acceleration is limited by the capacity of the CS-MRI model. Inspired by the tremendous success of deep learning (DL), many researchers have applied DL to MR reconstruction and obtained significant performance gains (7493320; yang2016deep; zhu2018Image; 8756028; 8962949; Huang2021Deep; 9481093; 9481108). Nevertheless, most of these methods require large fully sampled training datasets, which are often impractical to acquire in the clinic.
Untrained neural networks (UNNs) sit at the intersection of DL and CS: they inherit the powerful representation ability of deep neural networks while, like CS, requiring no additional training data (Ulyanov_2018_CVPR). However, a suitable network architecture is difficult to design and significantly impacts the performance of UNNs. More specifically, instead of relying on sparsity, a UNN captures the prior of the sought solution by parameterizing it through a carefully designed deep neural network. The solution represented by the network is then required to satisfy data consistency, transforming the sparsity-constrained optimization problem in CS into an unconstrained network fitting problem. In a nutshell, a UNN is regularized by its network architecture (Dittmer2020Regularization). With these advantages, UNNs have emerged as a competitive approach for solving inverse problems (8581448; 9442767; 9488215; qayyum2021untrained). Recently, an image-domain decoder-architecture-based UNN (I-UNN) was applied to MR image reconstruction and achieved satisfactory performance on 4x random sampling trajectories (9488215).
However, I-UNN does not perform well in other common sampling scenarios, such as partial Fourier and regular trajectories. The main reason is the underutilization of physical priors: in existing I-UNN methods, the network architecture design is borrowed mainly from computer vision and does not account for the physical priors of MR images. This leads not only to poor reconstruction quality but also to a lack of theoretical guarantees on reconstruction accuracy. It is worth noting that although heckel2020compressive pointed out that the reconstruction error of I-UNN can be effectively bounded if the encoding matrix satisfies a sub-Gaussian assumption, this assumption is rarely met in MRI.
1.1 Contributions and Observations
Motivated by the above problems, this paper tackles the difficulty of designing suitable network architectures by guiding the UNN with the physical priors of MR images. Our main contributions and observations are summarized as follows.

For the k-space interpolation problem, a tripled UNN architecture is proposed: one module for the sparsity prior of the MR image (equivalently, the linear predictability of k-space data), one for the smoothness prior of the MR image phase, and one for the smoothness prior of the coil sensitivities. It is worth mentioning that the proposed tripled UNN is a very flexible framework: if one prior is not satisfied in some case, the remaining two prior-based modules can form a double UNN architecture.

We prove that the k-space data interpolated by the proposed tripled UNN enjoys a tight complexity guarantee for approximating the fully sampled k-space data on random and deterministic (including regular and partial Fourier) sampling trajectories.

In terms of prior characterization, ablation experiments show that the proposed method characterizes the physical priors of MR images more accurately than traditional methods.

In terms of reconstruction accuracy, experiments on a series of commonly used sampling trajectories show that the proposed tripled UNN consistently outperforms existing UNN-based and traditional parallel imaging methods, and even outperforms state-of-the-art supervised DL-based methods in some cases.
The remainder of the paper is organized as follows. Section 2 reviews related work. Section 3 provides notations and preliminaries. Section 4 presents the proposed k-space interpolation method and its theoretical guarantees. Implementation details are given in Section 5. Experiments on several datasets are presented in Section 6, and discussions in Section 7. Section 8 gives concluding remarks. All proofs are presented in the Appendix.
2 Related Work
2.1 CS and UNN
Consider the following general inverse problem:
b = A x + n,
where A is the ill-conditioned encoding operator, b is the measurement data, and x is the variable to be solved. If x is sparse under a certain sparse transformation Ψ, CS can recover the unique true solution of the inverse problem by solving the following variational problem:
min_α ||A Ψ α − b||² + λ R(α),
where R is a sparsity-promoting regularizer. In particular, the unique true solution is x* = Ψ α*, where α* is the optimal solution of the above variational problem. Instead of relying on sparsity, a UNN captures the prior of the sought solution by parameterizing it through a carefully designed deep neural network G, i.e., x = G(θ, z), where z is a random low-dimensional vector. Then, CS can be generalized to
min_θ ||A G(θ, z) − b||².
In a nutshell, CS fixes the sparse transform and seeks a sparse coefficient to represent the desired solution, while UNN fixes a sparse (lowdimensional) coefficient and seeks an adaptive transformation to represent the desired solution. Namely, the sparse prior in CS is generalized to be implicitly extracted by UNN network architecture.
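As a toy numerical illustration of this contrast (a sketch under assumed toy dimensions, with a deliberately simplified linear "generator" standing in for a deep network), one can fix a low-dimensional input z and fit the adaptive transform to the measurements by gradient descent on the data-consistency objective:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, d = 64, 32, 8                    # signal dim, #measurements, latent dim

x_true = np.zeros(n)
x_true[rng.choice(n, 4, replace=False)] = 1.0        # sparse ground truth
A = rng.standard_normal((m, n)) / np.sqrt(m)         # encoding operator
b = A @ x_true                                       # measurements

z = rng.standard_normal(d)
z /= np.linalg.norm(z)               # fixed low-dimensional input (never optimized)
W = np.zeros((n, d))                 # the "adaptive transform" (here just a matrix)

def loss(W):
    r = A @ (W @ z) - b
    return 0.5 * float(r @ r)

lr = 0.1
for _ in range(3000):
    r = A @ (W @ z) - b
    W -= lr * np.outer(A.T @ r, z)   # gradient of 0.5 * ||A W z - b||^2 w.r.t. W

# gradient descent drives the data-consistency residual to (numerically) zero
```

Because the stand-in generator is linear, this example only shows the fitting mechanics of the UNN formulation, not the architectural regularization that a deep decoder would provide.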
Existing UNN methods are mainly oriented toward computer vision. Based on the observation that an encoder-decoder architecture has high impedance to noise and low impedance to the desired image, Ulyanov_2018_CVPR applied a UNN with an encoder-decoder architecture to various linear inverse problems, including denoising, inpainting, and super-resolution.
9488215 applied the decoder-architecture-based I-UNN to accelerated MRI. Due to the underutilization of the physical priors of MR images, I-UNN performs satisfactorily only on randomly sampled trajectories at 4x acceleration and poorly in other common sampling scenarios such as partial Fourier and regular trajectories. In addition, heckel2020compressive demonstrated that exact signal reconstruction with I-UNN requires the encoding matrix to satisfy a sub-Gaussian assumption, which is difficult to meet in most MRI settings, revealing the theoretical limitations of I-UNN.
2.2 K-Space Interpolation Methods
Compared with reconstruction methods in the image domain, k-space interpolation methods can avoid the basis mismatch between the singularities of the true continuous image and the discrete grid (Ongie2016off). Interpolating the missing data in k-space relies mainly on various physical priors of MR images, and many classic works have been proposed. First, through statistical observations or by transforming image-domain sparsity into the Fourier domain, it has been found that k-space data can be linearly predicted within a neighbourhood; the most prominent example of such a method is GRAPPA (Griswold2002Generalized). If one further assumes that the missing data can be linearly predicted from its entire neighbourhood across all channels, SPIRiT (Lustig2010spirit) is derived. The second class mainly capitalizes on the smoothness prior of coil sensitivities: by the dual relationship between smoothness in the image domain and the low-rankness of the Hankel matrix in k-space, missing k-space prediction can be realized through a low-rank regularization problem, for which the multichannel version of ALOHA (lee2016acceleration) is the corresponding state-of-the-art method. The third class mainly capitalizes on phase smoothness; as before, smoothness translates into low-rankness in k-space, and the corresponding state-of-the-art method is the single-channel version of LORAKS (6678771). However, estimating prediction kernels or null-space filters in these methods requires additional fully sampled calibration data, and the low-rank formulations are computationally inefficient. It is therefore desirable to couple the prior characterizations used in the above methods with a UNN to obtain a more efficient k-space interpolation method.
3 Notations
In this paper, when no ambiguity arises, matrices and vectors are represented by bold letters. A subscript on a matrix (vector) refers to the corresponding column (entry). A subset of indices Ω selects the subvector whose entries are listed in Ω. e_i denotes the i-th standard basis vector, equal to 1 in component i and 0 everywhere else. The superscript H on a vector or matrix denotes the Hermitian transpose. B(r) denotes a closed ball of radius r. For any function, supp(·) denotes its support set. The notation FFT or the hat superscript denotes the Fourier transform.
A variety of matrix norms will be used. The spectral norm of a matrix is denoted ||·||₂. The Euclidean inner product between two matrices is ⟨·, ·⟩, and the induced Euclidean norm, termed the Frobenius norm, is denoted ||·||_F. The maximum entry (in magnitude) of a matrix is denoted ||·||_∞. For vectors, ||·|| denotes the Euclidean norm. Now, let us review the matrix form of the convolution operation. For simplicity, we only consider the 1D case; the extension to higher dimensions is straightforward (ye2018deep). A single-output convolution of the input x with the flipped filter h̄ (i.e., the indices of h are time-reversed) can be represented in matrix form:
x ⊛ h̄ = H(x) h,
where ⊛ denotes the convolution operator and H(x) denotes the wrap-around Hankel matrix.
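The wrap-around Hankel construction can be sketched and sanity-checked numerically (numpy, 1D, with assumed toy sizes; the FFT cross-check uses the correlation theorem):

```python
import numpy as np

def wrap_hankel(x, r):
    """Wrap-around Hankel matrix H(x) with r columns: H[i, j] = x[(i + j) mod n]."""
    n = len(x)
    return np.array([[x[(i + j) % n] for j in range(r)] for i in range(n)])

rng = np.random.default_rng(1)
x = rng.standard_normal(8)
h = rng.standard_normal(3)

# matrix form: single-output (circular) convolution with the flipped filter
y_mat = wrap_hankel(x, len(h)) @ h

# the same operation via the FFT correlation theorem, as a cross-check
h_pad = np.zeros(len(x))
h_pad[: len(h)] = h
y_fft = np.real(np.fft.ifft(np.fft.fft(x) * np.conj(np.fft.fft(h_pad))))
```

The two computations agree entry-by-entry, which is exactly the identity x ⊛ h̄ = H(x) h in the text.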
4 Methodology and Theory
This section presents our tripled UNN for k-space interpolation and then analyses its interpolation accuracy guarantees under several commonly used sampling trajectories.
4.1 The Tripled UNN
In MRI, the forward model of multichannel k-space data acquisition can be expressed as
(1) 
where F denotes the Fourier transform, N_c denotes the number of channels, X denotes the true fully sampled k-space data, Y is the undersampled data, n is the noise, and M denotes the sampling trajectory. In particular, each column of X corresponds to one channel, and M is a subsampling mask with sampling set Ω, so that the j-th diagonal entry of M is 1 if j ∈ Ω and zero otherwise. The resulting encoding matrix does not satisfy the sub-Gaussian condition.
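A minimal numpy sketch of this forward model (toy sizes; the random "sensitivities" here ignore the smoothness prior and only illustrate the operator structure):

```python
import numpy as np

rng = np.random.default_rng(2)
N, nc = 32, 4                                   # image size, number of coils

img = rng.standard_normal((N, N))               # toy underlying image
sens = rng.standard_normal((nc, N, N))          # toy coil sensitivities
mask = rng.random((N, N)) < 0.3                 # sampling trajectory: keep ~30% of k-space

def forward(img, sens, mask):
    """Undersampled multichannel acquisition: Y_i = M * F(s_i * img)."""
    return np.stack([mask * np.fft.fft2(s * img) for s in sens])

Y = forward(img, sens, mask)                    # noise omitted for clarity
```

Every unsampled k-space location is exactly zeroed by the mask, which is what makes the interpolation problem ill-posed without priors.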
Recall that in MRI, each coil image is obtained by multiplying the desired image by the corresponding coil sensitivity. As is well known, the coil sensitivities are smooth, the phase is smooth, and the image is sparse. Therefore, the MR reconstruction problem reduces to seeking a set of smooth coil sensitivities, a smooth phase, and a sparse image that together represent the multichannel MR image. Taking the image, phase, and coil sensitivities as variables, the MR reconstruction model reads:
where the second term in the objective follows from the phase representation, and the superscript H denotes the Hermitian transpose. Given the dual relationships between sparsity in the image domain and low-rankness in k-space, and between smoothness in the image domain and compact support in k-space, the above problem can be reformulated as
(2) 
Relying on the low-rankness of the k-space data, ye2018deep showed that it can be represented by a deep convolutional framelet. Following (8756028), we use a decoder-architecture-based UNN to generalize the framelet representation. For the compact-support prior, we also use decoder-architecture-based UNNs with small output sizes to represent the phase and coil sensitivities, whose inputs are low-dimensional random variables and whose parameters are CNN network weights. Since the network modules can characterize all the above constraints, the constrained optimization problem (2) can be transformed into an unconstrained one by substituting the network-represented variables into the objective function. Further, for convenience, we rewrite it in the following compact form:
(3)
where the three network modules and their parameters are collected into a single generator; specifically, this generator is the proposed tripled UNN that produces the k-space data. The illustration of the proposed tripled UNN architecture is shown in Figure 1.
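As a structural sketch of the generator, the following numpy code uses stand-ins for the three decoder CNNs: `smooth_map` mimics the compact-support-in-k-space parameterization of the phase and sensitivity modules, and all names and sizes are illustrative assumptions rather than the paper's ConvDecoder modules.

```python
import numpy as np

N, nc = 32, 4

def smooth_map(kernel, N):
    """Embed a small k-space kernel (compact support) and return the smooth image-domain map."""
    K = np.zeros((N, N), dtype=complex)
    s = kernel.shape[0]
    K[:s, :s] = kernel
    return np.fft.ifft2(K)

def tripled_generator(theta_img, theta_phase, theta_sens, mask):
    """Assemble undersampled multichannel k-space from the three prior modules."""
    u = theta_img                                              # sparse-image module output
    phase = np.exp(1j * np.angle(smooth_map(theta_phase, N)))  # smooth, unit-modulus phase
    coils = [smooth_map(t, N) for t in theta_sens]             # smooth coil sensitivities
    return np.stack([mask * np.fft.fft2(s * phase * u) for s in coils])

rng = np.random.default_rng(3)
mask = rng.random((N, N)) < 0.4
Y = tripled_generator(rng.standard_normal((N, N)),
                      rng.standard_normal((7, 7)),
                      [rng.standard_normal((7, 7)) for _ in range(nc)],
                      mask)
```

Fitting this generator to measured data by minimizing the masked data-consistency loss is the unconstrained problem (3); here only the forward assembly is shown.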
4.2 Theoretical Results
In MRI, k-space data are usually acquired with random or deterministic sampling (including regular, partial Fourier, etc.), so we analyze the interpolation accuracy for these two cases.
4.2.1 Theoretical guarantee for random sampling
Before proving our main result, we suppose that the generator derived from (3) satisfies the following condition:
Assumption 1
Let G be a component of the optimal generator derived from (3). For any input z in the ball, the wrap-around Hankel matrix of the generated k-space data is rank-deficient, i.e.,
Remark 4.1
Since the network G is Lipschitz continuous (pmlrv70bora17a), if the radius of the ball is small enough, the structure of the images generated by G changes little, so the generated image is likely to remain sparse. By Prony's result (1003065), there is then at least one nonzero filter annihilating the corresponding k-space data, and the above assumption holds.
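This assumption can be checked numerically in a toy 1D setting: the k-space of an s-sparse image is a sum of s complex exponentials, so its wrap-around Hankel matrix has rank exactly s and is annihilated by a nonzero filter (sizes here are illustrative):

```python
import numpy as np

N = 64
img = np.zeros(N)
img[3], img[11] = 1.0, -0.5                     # 2-sparse image
kspace = np.fft.fft(img)                        # sum of 2 complex exponentials

# wrap-around Hankel matrix of the k-space data: H[i, j] = kspace[(i + j) mod N]
H = np.array([[kspace[(i + j) % N] for j in range(8)] for i in range(N)])

# rank deficiency: the rank equals the sparsity level, so a nonzero
# annihilating filter h with H @ h = 0 exists (Prony)
rank = np.linalg.matrix_rank(H, tol=1e-8)
```

With a 2-sparse image the 8-column Hankel matrix has rank 2, leaving a 6-dimensional null space of annihilating filters.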
Based on the above assumption, we have the following result:
Theorem 4.1
Let G be the optimal UNN derived from (3). Suppose Assumption 1 holds and the trajectory is generated independently and uniformly at random with nonzero locations. For any input and observation, there exists a constant such that the reconstruction satisfies:
with probability at least
provided that , where is short for , and . The proof is presented in Appendix 8.1.
Remark 4.2
YAROTSKY2017103 showed that deep neural networks can characterize Sobolev spaces, so it is reasonable to believe that this term can be tightly bounded if the generator network is deep enough.
4.2.2 Theoretical guarantee for deterministic sampling
In practice, k-space data are sometimes acquired by deterministic sampling. However, to the best of our knowledge, there is currently no theoretical guarantee on the accuracy of UNN methods for reconstructing MR images or interpolating k-space data under deterministic sampling. We now bridge this theoretical gap.
Theorem 4.2
Let G be the optimal UNN derived from (3). Suppose G is bounded over the input ball and the trajectory is generated by deterministic sampling. For any input and observation, there exists a constant such that the reconstruction satisfies:
where is short for , and .
The proof is presented in Appendix 8.2.
Remark 4.3
The boundedness assumption can be derived from the Lipschitz continuity of the network (pmlrv70bora17a), i.e., , where is the Lipschitz constant.
5 Implementation
The evaluation was performed on knee and brain MR data with various k-space sampling trajectories, including random and deterministic (variable-density regular, partial Fourier) cases. The details of the MR data are as follows.
5.1 Data Acquisition
5.1.1 Knee data
Firstly, we tested our proposed method on knee k-space data ^{1}^{1}1http://mridata.org/. The raw data were acquired on a 3T Siemens scanner with 15 coils using a 2D Cartesian turbo spin echo (TSE) protocol. The acquisition parameters were as follows: the repetition time (TR) was 2800 ms, the echo time (TE) was 22 ms, the matrix size was , and the field of view (FOV) was . The readout oversampling was removed by transforming k-space to the image domain and cropping the centre region. Our proposed method does not require any additional training set; for the trained comparison methods, we selected data from seven subjects (227 slices) as the training set.
5.1.2 Brain data
The raw data were acquired on a 3T Siemens scanner from a healthy female volunteer, with 32 coils and a Cartesian 2D gradient echo (GRE) protocol. Imaging parameters included: imaging resolution = mm², TR/TE = 250/5 ms, flip angle (FA) = 70°, slice thickness = 5 mm. The maximum amplitude of the wave gradient was 3.72 mT/m with cycle = 8 and readout oversampling (OS) ratio = 2, FOV .
5.1.3 Sampling trajectories
Three undersampling trajectories were considered: random and deterministic (variable-density (VD) regular and partial Fourier (PF)) trajectories. A visualization of these sampling trajectories is depicted in Figure 2.
5.2 Network Architecture and Training
The schematic diagram of the proposed tripled UNN architecture is illustrated in Figure 1. For the three CNN modules, we used ConvDecoder^{2}^{2}2https://github.com/MLIlab/ConvDecoder with {#layers, #channels, #output size} = {10, 256, 384×384}, {5, 64, 11×11}, and {5, 64, 11×11}, respectively.
For the proposed tripled UNN, the ADAM (kingma2014adam) optimizer is chosen to minimize the loss function (3). The number of iterations is 1000, and the learning rate is set to . The models were implemented on an Ubuntu 20.04 operating system equipped with an NVIDIA A6000 GPU (48 GB memory) in the open PyTorch 1.10 framework (paszke2019pytorch) with CUDA 11.3 and CUDNN support. For each slice, the proposed tripled UNN takes about 60 s to run.
5.3 Performance Evaluation
In this study, the quantitative evaluations were all computed in the image domain. The reconstructed and reference images were derived by an inverse Fourier transform followed by an element-wise square root of sum of squares (SSoS) combination over the coils. For quantitative evaluation, the peak signal-to-noise ratio (PSNR), normalized mean square error (NMSE), and structural similarity (SSIM) index (1284395) were adopted.
6 Experimental Results
6.1 Ablation Studies
To verify that the UNN in k-space (termed K-UNN) characterizes the physical priors of MR images more accurately than traditional methods, we designed the following ablation experiments. First, for multichannel parallel imaging, the phase module was removed from the tripled K-UNN (3) and compared with L1-SPIRiT (Lustig2010spirit) to assess the accuracy of K-UNN in characterizing the sensitivity prior. Then, for single-channel partial Fourier imaging, the coil sensitivity module was removed from the tripled K-UNN (3) and compared with the data-fitting and k-space convolution (KCOV) method (huang2009partial) to assess the accuracy of K-UNN in characterizing the phase prior. Finally, for multichannel parallel imaging, L1-SENSE-LORAKS (kim2017loraks) (for which a wavelet-based L1 penalty is added to its publicly available MATLAB code ^{3}^{3}3https://mr.usc.edu/download/LORAKS2/) was compared to verify the joint characterization ability of the proposed tripled K-UNN for the sparsity, coil sensitivity smoothness, and phase smoothness priors. Our code is available at ^{4}^{4}4https://github.com/ZhuoxuCui/K_UNN.
6.1.1 Characterization of sensitivity prior
In this section, we eliminate the effect of phase and verify the ability of K-UNN to characterize the coil sensitivity prior. In particular, by eliminating the second term in the objective function (2), the k-space data is generated by the remaining modules only. As shown in Figure 3, when ACS data are abundant (16 lines), both our K-UNN and L1-SPIRiT reconstruct the image satisfactorily; however, a closer look shows that a few artifacts remain in the L1-SPIRiT reconstruction in the region of interest of the knee cartilage. When the ACS data are insufficient (6 lines), the L1-SPIRiT reconstruction exhibits artifacts, while the quality of the K-UNN reconstruction deteriorates negligibly. These results verify the superiority of the proposed K-UNN in characterizing the coil sensitivity prior.
6.1.2 Characterization of phase prior
In this section, we eliminate the effect of coil sensitivities and verify the ability of K-UNN to characterize the phase prior. Specifically, we first merged the knee data into a single channel. For single-channel partial Fourier imaging, the ablation experiment is performed directly by removing the coil sensitivity module from model (3). As shown in Figure 4, both the proposed K-UNN and KCOV reconstruct the image satisfactorily when ACS data are abundant. However, the phase error shows that KCOV biases the phase reconstruction in the interior of the image (indicated by the red arrow). When the ACS data are insufficient, our method clearly outperforms KCOV in both image and phase reconstruction quality. This experiment verifies the superiority of the proposed K-UNN in characterizing the phase prior.
6.1.3 Joint characterization of physical priors
In this section, we verify the joint characterization ability of the proposed K-UNN for the sparsity, coil sensitivity smoothness, and phase smoothness priors. As shown in Figure 5, when ACS data are abundant (10 lines), both our K-UNN and L1-SENSE-LORAKS reconstruct the image satisfactorily. When the ACS data are insufficient (8 lines), the reconstruction quality of L1-SENSE-LORAKS drops sharply due to inaccurate coil sensitivity estimation, while the quality of the K-UNN reconstruction deteriorates only slightly. This experiment verifies the superiority of the proposed K-UNN in the joint characterization of the three priors.
6.2 Comparative Studies
In this section, to demonstrate the effectiveness of our K-UNN, a series of extensive comparative experiments was conducted. In particular, we compared the traditional k-space parallel imaging methods L1-SPIRiT (Lustig2010spirit) and L1-SENSE-LORAKS (kim2017loraks), and the I-UNN (9488215), which does not utilize physical priors. To further verify the superiority of the proposed method, we also compared it to the SOTA supervised k-space DL method HDSLR (9159672), for which we developed a PyTorch implementation based on its publicly available TensorFlow code ^{5}^{5}5https://github.com/anikpram/DeepSLR.
6.2.1 Experiments on random sampling trajectory
In this section, we test the performance of the proposed K-UNN and comparison methods under random sampling. Figure 6 shows the reconstruction results of the knee data in the case of random sampling with an acceleration factor of 5. For L1-SPIRiT, the aliasing pattern remains in the reconstructed images. For L1-SENSE-LORAKS, artifacts remain in the region of interest, as seen in the enlarged view. For I-UNN, although it has been empirically noted (Lustig2007Sparse) that random-sampling composite Fourier encoding can approximate the sub-Gaussian condition, the acceleration here exceeds its limit, so artifacts remain in the reconstructed images. For the SOTA trained k-space DL method HDSLR, noise is effectively suppressed, but the reconstruction seriously loses high-frequency details in the upper right of the enlarged view. It is not difficult to see that our K-UNN effectively suppresses artifacts and recovers high-frequency details better.
The quantitative results of the above methods are shown in Table 1. Our method consistently outperforms the traditional methods L1-SPIRiT and L1-SENSE-LORAKS, the image-domain I-UNN, and the SOTA trained HDSLR, as characterized by visual and quantitative evaluations. These experiments confirm the competitiveness of our method under a random sampling trajectory.
Table 1: Quantitative evaluation on the knee data.

Dataset | Method | NMSE | PSNR (dB) | SSIM
Random () | L1-SPIRiT | 0.0055 | 34.1344 | 0.8588
Random () | L1-SENSE-LORAKS | 0.0066 | 33.3665 | 0.6967
Random () | HDSLR | 0.0044 | 35.4245 | 0.8625
Random () | I-UNN | 0.0059 | 33.8467 | 0.8461
Random () | K-UNN | 0.0043 | 35.4811 | 0.8728
VD Regular () | L1-SPIRiT | 0.0033 | 36.3754 | 0.8940
VD Regular () | L1-SENSE-LORAKS | 0.0049 | 34.6815 | 0.7260
VD Regular () | HDSLR | 0.0026 | 37.3599 | 0.9045
VD Regular () | I-UNN | 0.0065 | 33.4455 | 0.8724
VD Regular () | K-UNN | 0.0034 | 36.3166 | 0.9005
PF () | L1-SPIRiT | 0.0138 | 30.1501 | 0.8582
PF () | L1-SENSE-LORAKS | 0.0047 | 34.8409 | 0.7558
PF () | HDSLR | 0.0059 | 33.8551 | 0.8976
PF () | I-UNN | 0.0052 | 34.3695 | 0.8783
PF () | K-UNN | 0.0037 | 35.9568 | 0.9015
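The SSoS combination and the PSNR metric behind Table 1 can be sketched as follows (numpy; NMSE and SSIM follow analogously, and the array shapes and noise level are illustrative assumptions):

```python
import numpy as np

def ssos(coil_imgs):
    """Element-wise square root of sum of squares over the coil axis."""
    return np.sqrt(np.sum(np.abs(coil_imgs) ** 2, axis=0))

def psnr(ref, rec):
    """Peak signal-to-noise ratio in dB, with the reference maximum as the peak."""
    mse = np.mean((ref - rec) ** 2)
    return 10 * np.log10(ref.max() ** 2 / mse)

rng = np.random.default_rng(4)
coils = rng.standard_normal((15, 32, 32)) + 1j * rng.standard_normal((15, 32, 32))
ref = ssos(coils)                                   # reference magnitude image
rec = ref + 0.01 * rng.standard_normal(ref.shape)   # a slightly perturbed "reconstruction"
```

The SSoS output is real and nonnegative by construction, so the image-domain metrics compare magnitude images rather than complex coil data.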
6.2.2 Experiments on deterministic sampling trajectories
In this section, we test the performance of the proposed K-UNN and comparison methods under deterministic sampling trajectories, including VD regular and PF trajectories. Figure 7 shows the reconstruction results of the knee data in the case of VD regular sampling with an acceleration factor of 5. As shown in Figure 7, for L1-SPIRiT and L1-SENSE-LORAKS, the aliasing pattern remains in the reconstructed images. Because VD regular sampling composite Fourier encoding seriously violates the sub-Gaussian assumption, serious artifacts remain in the I-UNN reconstructions, which conversely corroborates the theory. Although the proposed K-UNN is slightly inferior to HDSLR in quantitative metrics, visually it achieves better artifact suppression. In this experiment, the proposed untrained method achieved performance comparable to the SOTA trained method, a good indication of its superiority.
Figure 8 shows the reconstruction results of the knee data in the case of PF sampling with an acceleration factor of 5. As shown in Figure 8, due to the absence or underutilization of the phase prior, the L1-SPIRiT and HDSLR reconstructions are blurred. Because PF sampling composite Fourier encoding seriously violates the sub-Gaussian assumption, the I-UNN reconstruction exhibits artifacts. Although L1-SENSE-LORAKS reconstructs the image well, a closer look reveals two inconspicuous artifacts at the arrows. Quantitative results are shown in Table 1; our proposed method achieves the best performance among all methods. Combining visual perception and quantitative metrics, we verify the effectiveness of the proposed K-UNN under PF sampling.
7 Discussion
In this study, we propose an MR-physical-priors-driven k-space interpolation method using UNNs, dubbed K-UNN. In theory, we analyze its interpolation accuracy bound, which is also verified by comparative experiments on MR image reconstruction quality. However, some aspects of the proposed model still merit further discussion or improvement.
7.1 Priors Mismatches
If, in practice, the image sparsity, phase smoothness, or coil sensitivity smoothness priors are not satisfied, the imaging model (2) or (3) can be flexibly recomposed from its modules, as done in the ablation experiments. It is worth mentioning that in our model, the image sparsity prior and the phase and coil sensitivity smoothness priors have been generalized to a learnable k-space framelet and small convolution kernels, respectively. The proposed model thus not only characterizes the priors more accurately (as verified by the ablation experiments) but also has a wider range of applicability. For example, the phase of images acquired with the GRE sequence is usually not particularly smooth. Figure 9 shows the reconstruction of the GRE data using K-UNN in the case of PF sampling with an acceleration factor of 5: not only is the image accurately reconstructed, but the jump boundaries indicated by the red arrows are also accurately recovered.
7.2 Further Improvements
In the proposed method, we do not explicitly utilize the ACS data. However, in some applications of MRI, ACS data can be acquired by additional fast scan sequences. Our future work will be on how to characterize the prior from the ACS data and couple it with the proposed model.
The proposed model only makes full use of the priors of a single image or a single k-space dataset, while in practice MRI can draw on correlations with other images. For example, in dynamic imaging, the current frame can be regarded as a deformation of the previous frame. Therefore, our future work will propose an untrained deformation operator to capitalize on inter-frame similarity priors and couple them into the proposed K-UNN.
Finally, for supervised deep learning methods, training is offline and the test (reconstruction) stage takes very little time. Our method needs to solve the optimization problem (3) individually for each slice (task), which severely limits imaging speed. Instead of optimizing (3) individually, a potential research direction is to shorten the imaging time via meta-learning from past experience by exploiting similarities across previous tasks (Vilalta2002a).
8 Conclusion
This paper proposed a k-space interpolation method for MRI using UNNs, dubbed K-UNN. Unlike existing UNNs, the proposed K-UNN capitalizes on three physical priors of the MR image (or k-space data): sparsity, coil sensitivity smoothness, and phase smoothness. Theoretically, we proved that the proposed method guarantees tight bounds on interpolated k-space data accuracy for a series of commonly used sampling trajectories. Experiments showed that the proposed method consistently outperforms traditional parallel imaging methods and I-UNN, and even outperforms SOTA supervised k-space deep learning methods in some cases. Our method could be a powerful framework for parallel MR imaging, and further development of this kind of method may enable even larger gains in the future.
Appendix
8.1 Proof of Theorem 4.1
Before the detailed proof, let us review some useful technical lemmas from matrix completion (candes2009exact; 6867345). Recall that any rank-r matrix can be decomposed via SVD as follows:
According to the left and right singular vectors, one can define the following subspaces:
(4)
The coherence between above subspaces and standard basis can be measured by the following definition:
Definition 8.0
Let U be a subspace of dimension r and P_U denote the orthogonal projection onto it. The coherence of U is defined as
μ(U) := (n/r) max_i ||P_U e_i||²,
where {e_i} is the standard basis.
Now, we will give a pivotal lemma for our method. By the architecture of the optimal UNN derived from (3), we have
With the corresponding abbreviations, the k-space data acquisition model (1) can be reformulated as
(5)  
Based on the above notations, we have:
Lemma 8.0
Let be the optimal UNN derived from (3). Suppose Assumption 1 holds and trajectory is generated independently and uniformly with nonzero locations. For any , , and , we have
(6) 
with probability at least provided that , where subspaces , and (see (4)) are spanned by the right and left singular vectors of and is a constant satisfying .
By Assumption 1, we have with . Following Theorem 6 of (Recht2011a), we know that the following inequality holds with probability at least and for the subspace spanned by the right and left singular vectors of :
where , which meets . The above inequality implies
Since and , we have
with . It is not difficult to verify that the above inequality also holds in the remaining case. Then, inequality (6) follows by applying equation (5) directly.
Now, let us start to give the concrete proof for Theorem 4.1.
Firstly, we define the following two subsets and such that
Using the inequality (6) directly, we have
where we make use of the notation for the last inequality. Then it holds that
Proof is completed.
8.2 Proof of Theorem 4.2
Before giving the concrete analysis for Theorem 4.2, we first give a useful lemma:
Lemma 8.0
For any , with , there exists a constant such that
Since , we have . Because is bounded, we have . Let , then the above inequality holds.
Now, we start to prove the result of Theorem 4.2.
Let , we define:
Let Ω denote the subset of indices sampled by the trajectory and Ω^c its complement; then we have
where the second equality is due to and , and the last inequality is due to . Switching the order on both sides of the above inequality, we have
If , then , which means that . There exists at least one pair such that . Then, it holds that
(7) 
If , since is bounded, by Lemma 8.5, there is a scalar such that
Choosing and , we have
With a simple derivation, it holds that
(8)  
Acknowledgments
This work was supported in part by the National Key R&D Program of China (2020YFA0712202, 2017YFC0108802, and 2017YFC0112903); the China Postdoctoral Science Foundation under Grant 2020M682990; the National Natural Science Foundation of China (61771463, 81830056, U1805261, 81971611, 61871373, 81729003, 81901736); the Natural Science Foundation of Guangdong Province (2018A0303130132); the Shenzhen Key Laboratory of Ultrasound Imaging and Therapy (ZDSYS20180206180631473); the Shenzhen Peacock Plan Team Program (KQTD20180413181834876); the Innovation and Technology Commission of the Government of Hong Kong SAR (MRP/001/18X); and the Strategic Priority Research Program of the Chinese Academy of Sciences (XDB25000000).