Can stable and accurate neural networks be computed? – On the barriers of deep learning and Smale's 18th problem
Deep learning (DL) has had unprecedented success and is now entering scientific computing with full force. However, DL suffers from a universal phenomenon: instability, despite universal approximation properties that often guarantee the existence of stable neural networks (NNs). We show the following paradox: there are basic, well-conditioned problems in scientific computing for which one can prove the existence of NNs with excellent approximation qualities, yet no algorithm, even a randomised one, can train (or compute) such a NN. Indeed, for any positive integers K > 2 and L, there are cases where simultaneously: (a) no randomised algorithm can compute a NN correct to K digits with probability greater than 1/2; (b) there exists a deterministic algorithm that computes a NN with K-1 correct digits, but any such algorithm (even a randomised one) needs arbitrarily many training data; (c) there exists a deterministic algorithm that computes a NN with K-2 correct digits using no more than L training samples. These results provide basic foundations for Smale's 18th problem and imply a potentially vast, and crucial, classification theory describing the conditions under which (stable) NNs with a given accuracy can be computed by an algorithm. We begin this theory by initiating a unified theory for compressed sensing and DL, leading to sufficient conditions for the existence of algorithms that compute stable NNs in inverse problems. We introduce Fast Iterative REstarted NETworks (FIRENETs), which we prove, and numerically verify, are stable. Moreover, we prove that only 𝒪(|log(ϵ)|) layers are needed for an ϵ-accurate solution to the inverse problem (exponential convergence), and that the inner dimensions of the layers do not exceed the dimension of the inverse problem. Thus, FIRENETs are computationally very efficient.
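As a rough illustration of the layer-count claim (not the paper's proof), suppose each layer, viewed as one step of an iterative scheme, contracts the reconstruction error by a fixed factor ν ∈ (0, 1); ν is an assumed, purely illustrative quantity here. Exponential convergence then yields the 𝒪(|log(ϵ)|) bound:

% Sketch under the assumed per-layer contraction factor \nu \in (0,1):
\|x_{n} - x^{*}\| \le \nu^{n}\,\|x_{0} - x^{*}\|
\quad\Longrightarrow\quad
n \ge \frac{\log\!\big(\|x_{0} - x^{*}\|/\epsilon\big)}{\log(1/\nu)} = \mathcal{O}\big(|\log(\epsilon)|\big).

FIRENETs are built by unrolling restarted iterations for compressed-sensing-type inverse problems into network layers. The Python sketch below is a generic unrolled solver (plain ISTA for y = Ax with sparse x), not the FIRENET construction itself; it only illustrates how an unrolled network keeps its inner dimensions no larger than the dimension of the inverse problem. The names soft_threshold, unrolled_ista, and lam are illustrative, not from the paper.

import numpy as np

def soft_threshold(z, tau):
    # Proximal map of tau * ||.||_1 (soft thresholding).
    return np.sign(z) * np.maximum(np.abs(z) - tau, 0.0)

def unrolled_ista(A, y, n_layers, lam):
    # Each "layer" is one proximal-gradient step for
    #   min_x 0.5 * ||A x - y||_2^2 + lam * ||x||_1.
    # The state x never grows beyond the dimension of the unknown,
    # mirroring the bounded inner dimensions mentioned in the abstract.
    step = 1.0 / (np.linalg.norm(A, 2) ** 2)   # 1 / Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_layers):
        grad = A.T @ (A @ x - y)               # gradient of the data-fidelity term
        x = soft_threshold(x - step * grad, step * lam)
    return x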