Related papers:
- Regularization of Inverse Problems by Neural Networks
- Can stable and accurate neural networks be computed? – On the barriers of deep learning and Smale's 18th problem
- A hierarchical approach to deep learning and its application to tomographic reconstruction
- Deep Learning Methods for Solving Linear Inverse Problems: Research Directions and Paradigms
- When to Use Convolutional Neural Networks for Inverse Problems
- Deep synthesis regularization of inverse problems
- A Mathematical Framework for Deep Learning in Elastic Source Imaging
The troublesome kernel: why deep learning for inverse problems is typically unstable
There is overwhelming empirical evidence that Deep Learning (DL) leads to unstable methods in applications ranging from image classification and computer vision to voice recognition and automated diagnosis in medicine. Recently, a similar instability phenomenon has been discovered when DL is used to solve certain problems in computational science, namely, inverse problems in imaging. In this paper we present a comprehensive mathematical analysis explaining the many facets of the instability phenomenon in DL for inverse problems. Our main results not only explain why this phenomenon occurs, they also shed light on why finding a cure for instabilities is so difficult in practice. Additionally, these theorems show that instabilities are typically not rare events; rather, they can occur even when the measurements are subject to completely random noise, and, consequently, how easily certain trained neural networks can be destabilised. We also examine the delicate balance between reconstruction performance and stability, and in particular, how DL methods may outperform state-of-the-art sparse regularization methods, but at the cost of instability. Finally, we demonstrate a counterintuitive phenomenon: training a neural network may generically not yield an optimal reconstruction method for an inverse problem.
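The instability the abstract describes can be made concrete by probing the local Lipschitz behaviour of a reconstruction map: feed it measurements perturbed by small noise and record the worst-case amplification of the output. The toy sketch below is purely illustrative and not from the paper; the forward model, the contrived "unstable" reconstruction (a minimum-norm solver plus a sharp, data-dependent correction that mimics an overfitted network), and all names are hypothetical assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy undersampled forward model: y = A x with m < n (hypothetical sizes).
n, m = 16, 8
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = rng.standard_normal(n)
y = A @ x_true

def recon_pinv(yy):
    # Stable baseline: minimum-norm (pseudo-inverse) reconstruction.
    return np.linalg.pinv(A) @ yy

def recon_unstable(yy, y0=y, x0=x_true):
    # Mimics an overfitted learned map: exact on the training datum y0,
    # but the correction term decays sharply away from y0, which blows up
    # the local Lipschitz constant around y0.
    base = np.linalg.pinv(A) @ yy
    w = np.exp(-np.linalg.norm(yy - y0) ** 2 / 1e-4)
    return base + w * (x0 - np.linalg.pinv(A) @ y0)

def worst_case_ratio(f, yy, eps=1e-2, trials=2000):
    """Estimate max ||f(y+e) - f(y)|| / ||e|| over random ||e|| = eps."""
    fy = f(yy)
    best = 0.0
    for _ in range(trials):
        e = rng.standard_normal(m)
        e *= eps / np.linalg.norm(e)
        best = max(best, np.linalg.norm(f(yy + e) - fy) / eps)
    return best

r_stable = worst_case_ratio(recon_pinv, y)      # bounded by ||pinv(A)||
r_unstable = worst_case_ratio(recon_unstable, y)  # far larger: instability
print(r_stable, r_unstable)
```

The point of the sketch is the asymmetry: the unstable map reconstructs the training datum perfectly, yet completely random small perturbations of the measurements already trigger large output changes, echoing the abstract's claim that instabilities need not be rare adversarial events.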