Pattern analysis and machine intelligence have focused predominantly on tasks that mimic perceptual problems. These are typically modelled as classification or regression tasks in which the reference stems from a human observer who defines the ground truth. As we have only a limited understanding of how these man-made classes emerge in the human mind, pattern recognition has traditionally relied on expert knowledge to design features suited to a particular recognition task. To alleviate the burden of feature design, researchers also started learning feature descriptors as part of the training procedure. Implementing such approaches on efficient hardware gave rise to the first models that significantly outperformed classical feature extraction methods and constituted one of the milestone works in the emerging field of deep learning.
With the rise of deep learning, researchers became aware that these methods of general function learning are applicable to a much wider range than mere perceptual tasks. Today, machine learning is applied in a much broader set of applications. Examples range from image super-resolution, over image denoising and inpainting, to computed tomography. In these fields, methods from deep learning are often directly applied and show performances that are on par with or even significantly better than results found with state-of-the-art methods. Yet, there are also reports that present surprising results in which parts of the image are hallucinated [8, 9]. In particular, Huang et al. demonstrate that mismatches between training and test data lead to dramatic changes in the produced result. Hence, blind deep learning methods have to be applied with care in order to be successful.
In this article, we explore the use of known operations within machine learning algorithms. First, we analyze the problem from a theoretical perspective and study the effect of prior knowledge in terms of maximal error bounds. This is followed by three applications in which we use prior operators and study their effect on the respective regression or classification problem. Lastly, we discuss our observations in relation to other works in the literature and give an outlook on future work. Note that some of the work presented here is based on prior conference publications [10, 11, 12, 13].
2 Known Operator Learning
The general idea of known operator learning is to embed entire operations into a learning problem. Figure 1 presents the idea graphically. We generally refer to the n-dimensional input of our trained algorithm as x. In order to increase readability, we use an extended version x̃ = (xᵀ, 1)ᵀ such that inner products with some weight vector w plus bias b can be conveniently written, i.e. w̃ᵀx̃ = wᵀx + b with w̃ = (wᵀ, b)ᵀ. Before looking into the properties of this approach and in particular maximal error bounds, we shortly summarize the Universal Approximation Theorem, as it is closely related to our analysis. Note that the supplementary material to this article contains all proofs for the theorems presented in this section.
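The extended-vector convention can be checked in a couple of lines (a minimal NumPy sketch with made-up numbers, not code from the paper):

```python
import numpy as np

# Extended ("homogeneous") input: append a constant 1 so that the bias
# can be absorbed into the weight vector.
x = np.array([2.0, -1.0, 0.5])
w = np.array([0.3, 0.8, -0.2])
b = 0.7

x_ext = np.append(x, 1.0)   # x~ = (x, 1)
w_ext = np.append(w, b)     # w~ = (w, b)

# Inner product with the extended vectors equals w.x + b
assert np.isclose(w_ext @ x_ext, w @ x + b)
```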
2.1 Universal Approximation
Theorem 1 (Universal Approximation Theorem).
Let σ be a non-constant, bounded, and continuous function and f(x) be a continuous function on a compact set Ω. Then, for any ε > 0, there exist an integer N and weights v_i, w̃_i that form an approximation

f̂(x) = Σ_{i=1}^{N} v_i σ(w̃_iᵀ x̃)

such that the inequality

|f(x) − f̂(x)| < ε

holds for all x ∈ Ω.
Theorem 1 states that for any continuous function f, an approximation f̂ can be found such that the difference between the true function and the approximation is bounded by ε. With an increasing number of nodes N, ε will decrease. In the literature, this result is often referred to as the Universal Approximation Theorem [14, 15] and forms the fundamental result that neural networks with just a single hidden layer are general function approximators. Yet, this type of approximation may result in a very high requirement for the choice of N, which is the reason why stacked layers of different types are known to be more successful.
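A quick numerical illustration of Theorem 1 (our own sketch: random sigmoid features with output weights fitted by least squares, standing in for actual training):

```python
import numpy as np

rng = np.random.default_rng(0)

def sigma(t):
    # bounded, non-constant, continuous activation
    return 1.0 / (1.0 + np.exp(-t))

# Target function sampled on a compact set
x = np.linspace(0.0, 2.0 * np.pi, 400)
f = np.sin(x)

# Single hidden layer with N nodes; random inner weights and biases,
# output weights fitted by least squares (a simple stand-in for training)
N = 50
w = rng.normal(scale=2.0, size=N)
b = rng.normal(scale=2.0, size=N)
H = sigma(np.outer(x, w) + b)            # hidden activations, shape (400, N)
u, *_ = np.linalg.lstsq(H, f, rcond=None)

# Empirical sup-norm error on the sampling grid
eps = np.max(np.abs(H @ u - f))
```

With N = 50 nodes the empirical error on the grid is already small; reducing N makes the achievable ε grow, in line with the remark above.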
2.2 Known Operator Error Bounds
Knowing the limits of general function approximation, we are now interested in finding limits for mixing known and approximated operators. As previously mentioned, deep networks are never constructed out of a single layer, but rather take the form of the configuration shown in Figure 1. Hence, we need to consider layered networks to analyze the maximal error bounds. Instead of investigating entire networks, we choose to simplify our theoretical analysis to the special case

f(x) = g(h(x))

with continuous functions g and h and compact sets Ω and Ω_h. Note that this simplification does not limit the generality of our analysis, as we can map any knowledge on the structure of the network architecture either onto the output function g, the intermediate function h, or directly as a transform of the inputs x. Generalisation to m-dimensional functions is also possible following the idea shown in Eq. 3.
The previous definition of f allows us to investigate different forms of approximation. In particular, we are able to introduce approximations ĝ and ĥ following Theorem 1:

f̂(x) = ĝ(ĥ(x)) with e_g = g − ĝ and e_h = h − ĥ.

Here, e_f, e_g, and e_h denote the errors that are introduced by the respective approximations of f, g, and h.

Next, we are interested in finding bounds on e_f = f(x) − f̂(x) using the above approximations. For the case of known h, we can substitute x' = h(x), as h is a fixed function. In this case, Theorem 1 directly applies and a bound on e_f is found as |e_f| < ε with ε > 0. If we knew g in addition, e_f would be 0 and the bound would shrink to the case of equality.
Theorem 2 (Known Output Operator Theorem).
Let σ be a non-constant, bounded, and continuous function and h(x) be a continuous function on Ω. Further, let g be a Lipschitz-continuous function with Lipschitz constant l_g with f(x) = g(h(x)) on Ω, and let

ĥ(x) = Σ_{i=1}^{N} v_i σ(w̃_iᵀ x̃)

be a general function approximator of h with integer N and weights v_i, w̃_i. Then, e_f with f̂(x) = g(ĥ(x)) — as g is known — is generally bounded for all x ∈ Ω by

|e_f| ≤ l_g ‖e_h‖

with e_f = f(x) − f̂(x) and component-wise approximation errors e_h = h(x) − ĥ(x).
The bound for e_f is found using a Lipschitz constant on g, which implies that the theorem only holds if Lipschitz-bounded functions are used for g. Analysis of Eq. 8 reveals that knowing h in this case would imply e_h = 0, which also yields equality on both sides.
We further explore this idea in Theorem 3, which describes a bound for the case that both g and h are approximated.
Theorem 3 (Unknown Operator Theorem).
Let σ be a non-constant, bounded, and continuous function with Lipschitz-bound l_σ and f(x) = g(h(x)) be a continuous function on Ω. Further, let ĝ(x) and ĥ(x) be general function approximators of g and h following Theorem 1 with integers N_g and N_h, weights u_j, ṽ_j and v_i, w̃_i, and compact sets Ω and Ω_h. Then, e_f with f̂(x) = ĝ(ĥ(x)) is generally bounded for all x ∈ Ω by

|e_f| ≤ |e_g| + l_ĝ ‖e_h‖,

where e_f = f(x) − f̂(x), l_ĝ denotes the Lipschitz constant of ĝ, and e_h is the vector of errors introduced by the components of ĥ.
The bound comprises two terms in an additive relation:

|e_f| ≤ |e_g| + l_ĝ ‖e_h‖,

where the first term vanishes if g is known, as e_g = 0, and the second term vanishes for known h, as e_h = 0. Hence, for all of the considered cases, knowing g or h is beneficial and allows us to shrink the maximal training error bounds.
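The additive structure of this bound is easy to verify numerically on a toy decomposition (all functions below are our own assumptions, chosen only so that the Lipschitz constant is known):

```python
import numpy as np

# Toy decomposition f = g(h(x)) with a Lipschitz-continuous outer function
L_g = 2.0
g = lambda u: L_g * np.sin(u)           # |g'| <= L_g everywhere
h = lambda x: x ** 2

# Perturbed "approximations" standing in for trained sub-networks,
# with known maximal errors e_g_mag and e_h_mag
e_h_mag, e_g_mag = 0.05, 0.03
g_hat = lambda u: g(u) + e_g_mag * np.cos(u)
h_hat = lambda x: h(x) + e_h_mag * np.sin(3 * x)

x = np.linspace(-1.0, 1.0, 1000)
err = np.abs(g(h(x)) - g_hat(h_hat(x)))

# Triangle inequality: |e_f| <= L_g * |e_h| + |e_g|
bound = L_g * e_h_mag + e_g_mag
assert np.all(err <= bound + 1e-12)
```

If either perturbation is switched off (the corresponding part is "known"), the respective term drops out of the bound, mirroring the statement above.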
Given the previous observations, we can now also explore deeper networks that try to mimic the structure of the original function. This gives rise to Theorem 4.
Theorem 4 (Unknown Operators in Deep Networks).
Let f(x) be a continuous function with Lipschitz-bound l_f on a compact set Ω, composed of L layers / function blocks with integer L, defined as the recursion f_j(x) = g_j(f_{j−1}(x)) with f_0(x) = x, where each g_j is continuous on a compact set and bounded by Lipschitz constant l_j with j ∈ {1, …, L}, and f(x) = f_L(x). The recursive function f̂_j(x) = ĝ_j(f̂_{j−1}(x)) with f̂_0(x) = x is then an approximation of f. Then, e_f = f(x) − f̂_L(x) is generally bounded for all x ∈ Ω and for all layers in each component by

|e_f| ≤ Σ_{j=1}^{L} ( Π_{i=j+1}^{L} l_i ) ‖e_j‖,

where e_j is the vector of errors introduced by ĝ_j.
If we investigate Theorem 4 closely, we identify similar properties to Theorem 3. The errors of each layer / function block are additive. If a layer is known, the respective error vector vanishes and the respective part of the bound cancels out. Furthermore, later layers have a multiplier effect on the error, as their Lipschitz constants amplify the errors e_j of earlier layers. Note that this relation is shown in the supplementary material. A large advantage of Theorem 4 over Theorem 3 is that the Lipschitz constants that appear in the error term are those of the true function f. Therefore, the amplification effects depend only on the structure of the true function and are independent of the actual choice of the universal function approximator. The approximator only influences the actual errors e_j.
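The layer-wise amplification can likewise be illustrated with a toy three-layer chain (again an assumed example; the Lipschitz constants are known by construction):

```python
import numpy as np

# Toy three-layer composition mimicking the recursion of Theorem 4
f1 = lambda x: 1.5 * np.tanh(x)   # Lipschitz constant L1 = 1.5
f2 = lambda x: 2.0 * np.sin(x)    # L2 = 2.0
f3 = lambda x: 0.8 * x            # L3 = 0.8

# Perturbed layers: each approximation is off by at most e_j
e1, e2, e3 = 0.02, 0.05, 0.01
g1 = lambda x: f1(x) + e1 * np.cos(x)
g2 = lambda x: f2(x) + e2 * np.sin(7 * x)
g3 = lambda x: f3(x) + e3

x = np.linspace(-2.0, 2.0, 2000)
err = np.abs(f3(f2(f1(x))) - g3(g2(g1(x))))

# Each layer's error is amplified by the Lipschitz constants of the LATER
# layers only: |e_total| <= L3*L2*e1 + L3*e2 + e3
bound = 0.8 * 2.0 * e1 + 0.8 * e2 + e3
assert np.all(err <= bound + 1e-12)
```

Note that the first layer's error is multiplied by both downstream constants, while the last layer's error enters unamplified, exactly the asymmetry described above.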
The above observations pave the way to incorporating prior operators into different architectures. In the following, we highlight several applications in which we explore blending deep learning with prior operators.
3 Application Examples
We believe that known operators have a wide range of applications in physics and signal processing. Here, we highlight three approaches to use such operators. All three applications are from the domain of medical imaging, yet the method is applicable to many more disciplines to be discovered in the future. The results presented here are based on conference contributions [10, 11, 13]. Note that the supplementary material contains descriptions of experiments, data, and additional figures that were omitted here for brevity.
3.1 Deep Learning Computed Tomography
In computed tomography, we are interested in computing a reconstruction x from a set of projection images p. Both are related by the X-ray transform A:

p = A x.

Solving for x requires inversion of the above formula. The Moore-Penrose inverse of A yields the following solution:

x = (AᵀA)⁻¹ Aᵀ p.
This type of inversion gives rise to the class of filtered back-projection methods, as it can be shown that the filtering operation takes the form of a circulant matrix K, i.e. K = F⁻¹ C F, where F denotes the Fourier transform, F⁻¹ its inverse, and C a diagonal matrix that corresponds to the Fourier transform of the filter kernel. As K is typically associated with a large receptive field, it is typically implemented in Fourier space. In order to be applicable to other geometries, such as fan-beam reconstruction, additional Parker and cosine weights have to be incorporated, which can elegantly be summarised in an additional diagonal matrix W to yield

x = ReLU(Aᵀ F⁻¹ C F W p),

where the ReLU suppresses negative values as the final reconstruction algorithm.
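The core identity behind implementing the filtering step in Fourier space, namely that a circulant matrix is diagonalized by the DFT, can be verified directly (a toy kernel of our choosing, not the actual ramp filter):

```python
import numpy as np

# A circulant matrix K applies a circular convolution, so K x can be
# evaluated as F^-1 diag(F k) F x instead of a dense matrix product.
n = 64
k = np.zeros(n)
k[:3] = [0.5, 0.25, 0.25]   # arbitrary small convolution kernel

# Explicit circulant matrix: column j is the kernel rolled by j
K = np.stack([np.roll(k, j) for j in range(n)], axis=1)

x = np.random.default_rng(1).normal(size=n)

# Same operation carried out entirely in Fourier space
y_fourier = np.real(np.fft.ifft(np.fft.fft(k) * np.fft.fft(x)))
assert np.allclose(K @ x, y_fourier)
```

For a reconstruction filter, only the diagonal Fourier-domain coefficients would need to be stored or trained, rather than the full n-by-n matrix.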
Following the paradigm of known operator learning, Eq. 14 can also be interpreted as a neural network structure, as it only contains diagonal, circulant, and fully connected matrix operators, as displayed in Figure 2. A practical limitation of A is that it typically is a very large and sparse matrix. In practice, it is therefore never instantiated, but only evaluated on the fly using fast ray-tracing methods. For 3-D problems, the full matrix size is far beyond the memory restrictions of today's compute systems. Furthermore, none of the parameters need to be trained, as all of them are known for complete data acquisitions.
Incomplete data cannot be reconstructed with this type of algorithm and would lead to strong artifacts. We can still tackle limited-data problems if we apply additional training of our network. As A is large, we treat it as fixed during the training and only allow modification of the filter and weighting matrices. Results and experimental details are given in the supplementary material. Training of both matrices clearly improves the image reconstruction result. In particular, the trained algorithm learns to compensate for the loss of mass in areas of the reconstruction in which rays are missing.
(c) expresses significant similarity to (b), which is also able to compensate for the loss of mass. While (b) was only arrived at heuristically, (c) can be shown to be data-optimal here.
As the trained algorithm is mathematically equivalent to the original filtered back-projection method, we are able to map the trained weights back onto their original interpretation, which allows comparison to state-of-the-art weights. In Figure 3, we can see that the trained weights show similarity with the approach published by Schäfer et al. In contrast to Schäfer et al., who arrived at their weights following intuition, our approach is optimal with respect to our training data. In our present model, we have to re-train the algorithm for every new geometry. This could be avoided by modelling the weights using a continuous function which is sampled by the reconstruction network.
3.2 Learning from Heuristic Algorithms
Incorporating known operators generally allows blending of deep learning methods with traditional image processing approaches. In particular, we are able to choose heuristic methods that are well understood and augment them with neural network layers.
One example of such a heuristic method is Frangi's vesselness. The vesselness value for dark tubes is calculated using the following formula:

V = 0 if λ₂ ≤ 0, and V = exp(−R_B² / (2β²)) · (1 − exp(−S² / (2c²))) otherwise,

where λ₁ and λ₂ (with |λ₁| ≤ |λ₂|) are the eigenvalues of the Hessian, S = √(λ₁² + λ₂²) is the second-order structureness, R_B = λ₁/λ₂ is the blobness measure, β and c are image-dependent parameters for the blobness and structureness terms, and V stands for the vesselness value.
The entire multi-scale framework of the Frangi filter can be mapped onto a neural network architecture. In Frangi-Net, each step of the Frangi filter is replaced with a network counterpart, and data normalization layers are added to enable end-to-end training. Multi-scale analysis is formed as a series of trainable filters, followed by eigenvalue computation in specialized fixed-function network blocks. This is followed by another fixed function – the actual vesselness measure as described in Eq. 15.
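A single-scale sketch of the vesselness computation might look as follows (a simplified, assumed implementation: finite-difference Hessian, one scale, no Gaussian smoothing, parameters β and c chosen arbitrarily; the real filter operates multi-scale on smoothed derivatives):

```python
import numpy as np

def frangi_2d(img, beta=0.5, c=15.0):
    """Simplified single-scale 2-D vesselness for dark tubes."""
    # Hessian via finite differences
    gy, gx = np.gradient(img)
    hyy, hyx = np.gradient(gy)
    hxy, hxx = np.gradient(gx)
    # Eigenvalues of the symmetric 2x2 Hessian, ordered so |l1| <= |l2|
    tmp = np.sqrt((hxx - hyy) ** 2 + 4 * hxy ** 2)
    l1 = 0.5 * (hxx + hyy - tmp)
    l2 = 0.5 * (hxx + hyy + tmp)
    swap = np.abs(l1) > np.abs(l2)
    l1, l2 = np.where(swap, l2, l1), np.where(swap, l1, l2)
    rb = np.abs(l1) / (np.abs(l2) + 1e-12)   # blobness measure
    s = np.sqrt(l1 ** 2 + l2 ** 2)           # second-order structureness
    v = np.exp(-rb ** 2 / (2 * beta ** 2)) * (1 - np.exp(-s ** 2 / (2 * c ** 2)))
    v[l2 <= 0] = 0.0                         # dark tubes require l2 > 0
    return v

# Synthetic test image: a dark horizontal line on a bright background
img = np.full((32, 32), 100.0)
img[16, :] = 0.0
v = frangi_2d(img)
```

On this synthetic image, the vesselness response is high along the dark line and essentially zero in the flat background, which is the behaviour the fixed-function blocks of Frangi-Net reproduce.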
We compare the segmentation result of the proposed Frangi-Net with the original Frangi filter and show that Frangi-Net outperforms the Frangi filter with respect to all evaluation metrics. In comparison to the state-of-the-art image segmentation model U-Net, Frangi-Net contains less than 6 % of the number of trainable parameters, while achieving an AUC score around 0.960, which is only about 1 % below that of the U-Net. Adding a trainable guided filter before Frangi-Net as a preprocessing step yields an AUC of 0.972 with only 8.5 % of the trainable parameters of U-Net, which is statistically not distinguishable from U-Net's AUC of 0.974.
Hence, using our approach of known operators, we are able to augment heuristic methods by blending them with methods of deep learning, saving many trainable parameters.
3.3 Deriving Networks
A third application of known operator learning that we would like to highlight in this paper is the derivation of new network architectures from the mathematical relations of the signal processing problem at hand. In the following, we are interested in hybrid imaging using magnetic resonance imaging (MRI) and X-ray imaging simultaneously. One major problem is that MRI k-space acquisitions typically allow parallel projection geometries, i.e. a line through the center of k-space, while X-rays are associated with divergent geometries such as fan- or cone-beam geometries. Both modalities allow different contrast mechanisms, and simultaneous acquisition and overlay in the same image would be highly desirable for improved interventional guidance.
In the following, we assume to have sampled MRI projections in k-space. By inverse Fourier transform F⁻¹, they can be transformed into parallel projections p_p. Both parallel and cone-beam projections are related to the volume x under consideration by the associated projection operators A_p and A_c:

p_p = A_p x,
p_c = A_c x.

As x appears in both relations, we can solve Eq. 16 for x using the Moore-Penrose pseudoinverse:

x = A_p^† p_p.

Next, we can use x in Eq. 17 to yield

p_c = A_c A_p^† p_p.
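The pseudoinverse step can be illustrated on a toy overdetermined system (assumed sizes; in practice the projection operators are far too large to instantiate):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear system p = A x with more measurements than unknowns;
# a random 8x5 matrix has full column rank almost surely.
A = rng.normal(size=(8, 5))
x_true = rng.normal(size=5)
p = A @ x_true

# Moore-Penrose pseudoinverse recovers x exactly in the noise-free,
# full-column-rank case (and gives the least-squares solution otherwise)
x_rec = np.linalg.pinv(A) @ p
assert np.allclose(x_rec, x_true)
```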
Note that all operations on the path from k-space to p_c are known. Yet, A_p^† is expensive to determine and may need significant amounts of memory. As we know from reconstruction theory, this matrix often takes the form of a circulant matrix, i.e. a convolution. As such, we can approximate it with the chain of operations A_pᵀ F⁻¹ D F, where D is a diagonal matrix, i.e. a filter in Fourier space. In order to add a few more degrees of freedom, we further add another diagonal operator W in the spatial domain to yield

p_c = A_c W A_pᵀ F⁻¹ D F p_p

as the parallel-to-cone rebinning formula. In this formulation, only D and W are unknown and need to be trained. By design, both matrices are diagonal and therewith only have few unknown parameters.
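The idea of learning only a diagonal Fourier-domain operator can be sketched as follows (a one-dimensional toy problem of our own design, solved in closed form per frequency bin rather than by gradient descent, and without the additional spatial weighting):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 32

# Ground-truth operation: a circular convolution, unknown to the learner
true_kernel = rng.normal(size=n) / n
apply_true = lambda x: np.real(
    np.fft.ifft(np.fft.fft(true_kernel) * np.fft.fft(x)))

# Model: x -> F^-1 diag(d) F x, i.e. only n Fourier coefficients are trained
X = rng.normal(size=(200, n))                # training signals
Y = np.stack([apply_true(x) for x in X])     # corresponding targets

# Least-squares fit of d, independently per Fourier bin
Xf = np.fft.fft(X, axis=1)
Yf = np.fft.fft(Y, axis=1)
d = np.sum(np.conj(Xf) * Yf, axis=0) / np.sum(np.abs(Xf) ** 2, axis=0)

# The learned diagonal reproduces the true operation on unseen data
x_test = rng.normal(size=n)
y_hat = np.real(np.fft.ifft(d * np.fft.fft(x_test)))
assert np.allclose(y_hat, apply_true(x_test), atol=1e-8)
```

Because the hypothesized structure (circulant, i.e. diagonal in Fourier space) matches the true operator, only n parameters suffice to recover it exactly; this is the same structural prior exploited in the rebinning formula above.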
Even though the training was conducted merely on numerical phantoms, we can apply the learned algorithm to data acquired with a real MRI system without any loss of generality. Using only 15 parallel-beam MR projections, we were able to compute a stacked fan-beam projection with both approaches. In Figure 5, the results of the analytical and learned algorithms are shown. The result of the learned algorithm gives a much sharper visual impression compared to the analytical approach, which intrinsically suffers from ray-by-ray interpolation and thus from a blurring effect. Note that additional smoothing could be incorporated into the network by regularization of the filter or additional hard-coded filter steps if required.
For many applications, we do not know which operations are required in the ideal processing pipeline. Most machine learning tasks focus either on perceptual problems or man-made classes. Therefore, we only have limited knowledge of the ideal processing chain. In many cases, the human brain seems to have identified suitable solutions. Yet, our knowledge of the human brain is incomplete, and the search for well-suited deep architectures is a process of trial and error. Still, deep learning has been shown to be able to solve tasks that were deemed hard or close to impossible.
Now that deep learning is also starting to address fields of physics and classical signal processing, we are entering areas in which we have a much better understanding of the underlying processes and therefore know what kind of mathematical operations need to be present in order to solve specific problems. Yet, during the derivation of our mathematical models, we often introduce simplifications that allow more compact descriptions and a more elegant solution. Still, these simplifications introduce slight errors along the way and are often compensated for using heuristic correction methods.
In this paper, we have shown that the inclusion of known operators is beneficial in terms of maximal error bounds. We demonstrated that in all cases in which we are able to use partial knowledge of the function at hand, the maximal errors that may remain after training of the network are reduced, even for networks of arbitrary depth. Note that in the future, tighter error bounds than the ones described in this work might be identified that are independent of the use of known operators. Yet, our error analysis is still useful, as with an increasing number of known operations in the network, the magnitude of the bound shrinks, up to the point of equality if all operations are known. To the knowledge of the authors, this is the first paper to attempt such a theoretical analysis of the use of known operators in neural network training.
In our experiments with CT reconstruction, we demonstrated that we are able to tackle limited-angle reconstructions using a standard filtered back-projection type of algorithm. In fact, we only adapted weights, while run-time, behaviour, and computational complexity remained unchanged. As we can map the trained algorithm back onto its original interpretation, we could also investigate the shape and function of the learned weights. They demonstrated similarity to a heuristic method that could previously only be explained by intuition rather than by showing optimality. For the case of our trained weights, we can demonstrate that they are optimal with respect to the training data.
Based on Frangi’s vesselness, we could develop a trainable network for vessel detection. In our experiments, we could demonstrate that training of this net already yields improved filters for vessel detection that are close in performance to a much more complex U-Net. Further inclusion of a trainable denoising step yielded an accuracy that is statistically not distinguishable from that of U-Net.
As a last application of our approach, we investigated rebinning of MR data to a divergent beam geometry. For this kind of rebinning procedure, a fast convolution-based algorithm was previously unknown. Prior approaches relied on ray-by-ray interpolation, which typically introduces blurring. With our hypothesis that the inverse matrix operator takes the form of a circulant matrix in the spatial domain, in combination with an additional multiplicative weight, we could successfully train a new algorithm. The new approach is not just computationally efficient; it also produces images with a degree of sharpness that was previously not reported in the literature.
Although only applications from the medical domain are shown in this paper, this does not limit the generality of our theoretical analysis. Similar problems are found in many fields, e.g. computer vision, image super-resolution, or audio signal processing.
Obviously, known operators have been embedded into neural networks for a long time. LeCun et al. already suggested convolution and pooling operators. Jaderberg et al. introduced differentiable spatial transformations and their resampling into deep learning. Lin et al. use this for image generation. Kulkarni et al. developed an entire deep convolutional graphics pipeline. Zhu et al. include differentiable projectors to disentangle 3D shape information from appearance. Tewari et al. integrate a differentiable model-based image generator to synthesize facial images. Adler et al. show an approach to partially learn the solution for ill-posed inverse problems. Ye et al. introduced the wavelet transform as a multi-scale operator, Hammernik et al. mapped entire energy minimization problems onto networks, and Wu et al. even included the guided filter as a layer in a network. As this list could be continued with many more references, we see this as an emerging trend in deep learning. In fact, any operation that allows the computation of a sub-gradient is suited to be used in combination with the back-propagation algorithm. In order to integrate a new operator, only the partial derivatives / sub-gradients with respect to its inputs and its parameters have to be implemented. This allows inclusion of a wide range of operations. To the best of our knowledge, this is the first paper giving a general argument for the effectiveness of such approaches.
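The integration requirement, implementing the forward pass and the partial derivatives with respect to the operator's inputs, can be sketched for a fixed linear operator (our own minimal example, checked against a numerical gradient):

```python
import numpy as np

# A "known operator" layer: forward y = A x with a FIXED matrix A.
# To use it in back-propagation, only the partial derivative with respect
# to its input is needed; for a linear operator that is A^T applied to
# the incoming gradient. A itself carries no trainable parameters.
rng = np.random.default_rng(0)
A = rng.normal(size=(3, 4))

def forward(x):
    return A @ x

def backward(grad_y):
    return A.T @ grad_y          # dL/dx = A^T dL/dy

# Check against a numerical gradient of the loss L(x) = 0.5 * ||A x||^2,
# for which dL/dy = y, hence dL/dx = A^T A x.
x = rng.normal(size=4)
grad_analytic = backward(forward(x))

eps = 1e-6
grad_numeric = np.empty_like(x)
for i in range(4):
    e = np.zeros(4)
    e[i] = eps
    Lp = 0.5 * np.sum(forward(x + e) ** 2)
    Lm = 0.5 * np.sum(forward(x - e) ** 2)
    grad_numeric[i] = (Lp - Lm) / (2 * eps)

assert np.allclose(grad_analytic, grad_numeric, atol=1e-5)
```

Deep learning frameworks automate exactly this pattern: once forward and backward are supplied, the known operator can sit anywhere in a trainable computation graph.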
Next, the introduction of a known operator is also associated with a reduction of trainable parameters, as demonstrated in all of our experiments. This allows us to work with much less training data and helps us create models that can be transferred from synthetic training data to real measured data. Zarei et al. take this approach so far that they are able to train user-dependent image denoising filters using only a few clicks from alternative forced-choice experiments. Thus, we believe that known operators may be a suitable approach for problems for which only limited data is available.
At present, we are unaware how to predict the benefit of using known operators before the actual experiment. Our analysis only focuses on maximum error bounds. Therefore, investigation of expected errors, following for example the approach of Barron, seems interesting for future work. Analysis of the bias-variance trade-off also seems interesting: in [37, Chapter 9], Duda and Hart already hinted at the positive effect of prior knowledge on this trade-off.
Lastly, we believe that known operators may be key to gaining a better understanding of deep networks. Similar to our experiments with Frangi-Net, we can start replacing layers with known operations and observe the effect on the performance of the network. From our theoretical analysis, we expect that inclusion of a known operation will not, or only insignificantly, reduce the system's performance. This may allow us to find configurations for networks that have only few unknown operations while large parts are explainable and understood. Figure 6 shows a variant of this process that is inspired by Szegedy et al. Here, we offer a set of known operations in parallel and determine their optimal superposition by training the network. In a second step, connections with low weights can be removed to iteratively determine the optimal sequence of operations. Furthermore, any known operator sequence can also be regarded as a hypothesis for a suitable algorithm for the problem at hand. By training, we are able to validate or falsify our hypothesis, similar to our example of the derivation of a new network architecture.
We believe that the use of known operators in deep networks is a promising method. In this paper, we demonstrated that the use of such operators reduces maximal error bounds and experimentally showed a reduction in the number of trainable parameters. Furthermore, we applied this to the case of learning CT reconstruction, yielding networks that are interpretable and that can be analysed with classical signal processing tools. Mixing deep and known operator learning is also beneficial, as it allows us to build smaller networks with only 6 % of the parameters of a competing U-Net while being close to it in performance. Lastly, known operators can also be found by mathematical derivation of networks. While keeping large parts of the mathematical operations, we only replace inefficient or unknown operations with deep learning techniques to find entirely new imaging formulas. While all of the applications shown in this paper stem from the medical domain, we believe that this approach is applicable to all fields of physics and signal processing, which is the focus of our future work.
The research leading to these results has received funding from the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation program (ERC Grant No. 810316).
Andreas Maier is the main author of the paper and is responsible for the writing of the manuscript, theoretical analysis, and experimental design. Christopher Syben and Bernhard Stimpel contributed to the writing of Section 3.3 and the supporting experiments. Tobias Würfl and Mathis Hoffmann supported writing Section 3.1 and performed the experiments reported in this section. Frank Schebesch contributed to the mathematical analysis and the writing thereof. Weilin Fu conducted the experiments supporting Section 3.2 and contributed to their description. Leonid Mill, Lasse Kling, and Silke Christiansen contributed to the experimental design and the writing of the manuscript.
Code and Data Availability Statement
The code and data for this article, along with an accompanying computational environment, are available and executable online as Code Ocean capsules. Experiments in Section 3.1 can be found at https://doi.org/10.24433/CO.2164960.v1. The code on learning vesselness in Section 3.2 is published at https://doi.org/10.24433/CO.5016803.v2. Code for Section 3.3 is available at https://doi.org/10.24433/CO.8086142.v2. The code capsules for the experiments in Section 3.1 and Section 3.3 were implemented using the open-source framework PYRO-NN.
-  Niemann, H. Pattern Analysis and Understanding, vol. 4 (Springer Science & Business Media, 2013).
-  LeCun, Y. & Bengio, Y. Convolutional networks for images, speech, and time series. The handbook of brain theory and neural networks 3361, 1995 (1995).
-  Krizhevsky, A., Sutskever, I. & Hinton, G. E. Imagenet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems, 1097–1105 (2012).
-  LeCun, Y., Bengio, Y. & Hinton, G. Deep learning. Nature 521, 436 (2015).
-  Dong, C., Loy, C. C., He, K. & Tang, X. Learning a deep convolutional network for image super-resolution. In European Conference on Computer Vision, 184–199 (Springer, 2014).
-  Xie, J., Xu, L. & Chen, E. Image denoising and inpainting with deep neural networks. In Advances in Neural Information Processing Systems, 341–349 (2012).
-  Wang, G., Ye, J. C., Mueller, K. & Fessler, J. A. Image reconstruction is a new frontier of machine learning. IEEE Transactions on Medical Imaging 37, 1289–1296 (2018).
-  Cohen, J. P., Luck, M. & Honari, S. Distribution matching losses can hallucinate features in medical image translation. In Frangi, A. F., Schnabel, J. A., Davatzikos, C., Alberola-López, C. & Fichtinger, G. (eds.) Medical Image Computing and Computer Assisted Intervention – MICCAI 2018, 529–536 (Springer International Publishing, Cham, 2018).
-  Huang, Y. et al. Some investigations on robustness of deep learning in limited angle tomography. In Frangi, A. F., Schnabel, J. A., Davatzikos, C., Alberola-López, C. & Fichtinger, G. (eds.) Medical Image Computing and Computer Assisted Intervention – MICCAI 2018, 145–153 (Springer International Publishing, Cham, 2018).
-  Würfl, T., Ghesu, F. C., Christlein, V. & Maier, A. Deep learning computed tomography. In International Conference on Medical Image Computing and Computer-Assisted Intervention, 432–440 (Springer, 2016).
-  Fu, W. et al. Frangi-Net: A Neural Network Approach to Vessel Segmentation. In Maier, A. et al. (eds.) Bildverarbeitung für die Medizin 2018, 341–346 (2018).
-  Maier, A. et al. Precision Learning: Towards Use of Known Operators in Neural Networks. In Tan, J. K. T. (ed.) 2018 24th International Conference on Pattern Recognition (ICPR), 183–188 (2018). URL https://www5.informatik.uni-erlangen.de/Forschung/Publikationen/2018/Maier18-PLT.pdf.
-  Syben, C. et al. Deriving neural network architectures using precision learning: Parallel-to-fan beam conversion. In German Conference on Pattern Recognition (GCPR) (2018).
-  Cybenko, G. Approximation by superpositions of a sigmoidal function. Mathematics of Control, Signals and Systems 2, 303–314 (1989).
-  Hornik, K. Approximation capabilities of multilayer feedforward networks. Neural networks 4, 251–257 (1991).
-  Maier, A., Syben, C., Lasser, T. & Riess, C. A gentle introduction to deep learning in medical image processing. Zeitschrift für Medizinische Physik 29, 86–101 (2019).
-  Parker, D. L. Optimal short scan convolution reconstruction for fan beam ct. Medical Physics 9, 254–257 (1982).
-  Schäfer, D., van de Haar, P. & Grass, M. Modified parker weights for super short scan cone beam ct. In Proc. 14th Int. Meeting Fully Three-Dimensional Image Reconstruction Radiol. Nucl. Med., 49–52 (2017).
-  Frangi, A. F., Niessen, W. J., Vincken, K. L. & Viergever, M. A. Multiscale vessel enhancement filtering. In International Conference on Medical Image Computing and Computer-Assisted Intervention, 130–137 (Springer, 1998).
-  Silver, D. et al. Mastering the game of go with deep neural networks and tree search. Nature 529, 484 (2016).
-  Zhu, B., Liu, J. Z., Cauley, S. F., Rosen, B. R. & Rosen, M. S. Image reconstruction by domain-transform manifold learning. Nature 555, 487 (2018).
-  Fürsattel, P., Plank, C., Maier, A. & Riess, C. Accurate Laser Scanner to Camera Calibration with Application to Range Sensor Evaluation. IPSJ Transactions on Computer Vision and Applications 9 (2017). URL https://link.springer.com/article/10.1186/s41074-017-0032-5.
-  Köhler, T. et al. Robust Multiframe Super-Resolution Employing Iteratively Re-Weighted Minimization. IEEE Transactions on Computational Imaging 2, 42–58 (2016). URL https://www5.informatik.uni-erlangen.de/Forschung/Publikationen/2016/Kohler16-RMS.pdf.
-  Aubreville, M. et al. Deep Denoising for Hearing Aid Applications. In IEEE (ed.) 16th International Workshop on Acoustic Signal Enhancement (IWAENC), 361–365 (2018).
-  Jaderberg, M., Simonyan, K., Zisserman, A. & Kavukcuoglu, K. Spatial transformer networks. In Advances in Neural Information Processing Systems, 2017–2025 (2015).
-  Lin, C.-H., Yumer, E., Wang, O., Shechtman, E. & Lucey, S. St-gan: Spatial transformer generative adversarial networks for image compositing. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2018).
-  Kulkarni, T. D., Whitney, W. F., Kohli, P. & Tenenbaum, J. Deep convolutional inverse graphics network. In Advances in Neural Information Processing Systems, 2539–2547 (2015).
-  Zhu, J.-Y. et al. Visual object networks: Image generation with disentangled 3d representations. In Advances in Neural Information Processing Systems, 118–129 (2018).
-  Tewari, A. et al. MoFA: Model-based deep convolutional face autoencoder for unsupervised monocular reconstruction. In The IEEE International Conference on Computer Vision (ICCV) Workshops (2017).
-  Adler, J. & Öktem, O. Solving ill-posed inverse problems using iterative deep neural networks. Inverse Problems 33, 124007 (2017).
-  Ye, J. C., Han, Y. & Cha, E. Deep convolutional framelets: A general deep learning framework for inverse problems. SIAM Journal on Imaging Sciences 11, 991–1048 (2018).
-  Hammernik, K. et al. Learning a variational network for reconstruction of accelerated mri data. Magnetic Resonance in Medicine 79, 3055–3071 (2018).
-  Wu, H., Zheng, S., Zhang, J. & Huang, K. Fast end-to-end trainable guided filter. CoRR abs/1803.05619 (2018). URL http://arxiv.org/abs/1803.05619. 1803.05619.
-  Rockafellar, R. Convex Analysis. Princeton landmarks in mathematics and physics (Princeton University Press, 1970). URL https://books.google.de/books?id=1TiOka9bx3sC.
-  Zarei, S., Stimpel, B., Syben, C. & Maier, A. User Loss A Forced-Choice-Inspired Approach to Train Neural Networks Directly by User Interaction. In Bildverarbeitung für die Medizin 2019, Informatik aktuell, 92–97 (2019). URL https://www5.informatik.uni-erlangen.de/Forschung/Publikationen/2019/Zarei19-ULA.pdf.
-  Barron, A. R. Approximation and estimation bounds for artificial neural networks. Machine learning 14, 115–133 (1994).
-  Duda, R. O., Hart, P. E. & Stork, D. G. Pattern classification (John Wiley & Sons, 2012).
-  Szegedy, C. et al. Going deeper with convolutions. In 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 1–9 (2015).
-  McCollough, C. Tu-fg-207a-04: Overview of the low dose ct grand challenge. Medical Physics 43, 3759–3760 (2016).
-  Staal, J., Abràmoff, M. D., Niemeijer, M., Viergever, M. A. & Van Ginneken, B. Ridge-based vessel segmentation in color images of the retina. IEEE Transactions on Medical Imaging 23, 501–509 (2004).
-  Fu, W. Frangi-net on high-resolution fundus (HRF) image database. Code Ocean (2019). https://doi.org/10.24433/CO.5016803.v2.
-  Syben, C. & Hoffmann, M. Learning CT reconstruction. Code Ocean (2019). https://doi.org/10.24433/CO.2164960.v1.
-  Syben, C. Deriving neural networks. Code Ocean (2019). https://doi.org/10.24433/CO.8086142.v2.
-  Syben, C. et al. PYRO-NN: Python reconstruction operators in neural networks. arXiv preprint arXiv:1904.13342 (2019).