Transgressing the boundaries: towards a rigorous understanding of deep learning and its (non-)robustness

07/05/2023
by Carsten Hartmann et al.

The recent advances in machine learning in various fields of applications can be largely attributed to the rise of deep learning (DL) methods and architectures. Despite being a key technology behind autonomous cars, image processing, speech recognition, etc., a notorious problem remains the lack of theoretical understanding of DL and the related issues of interpretability and (adversarial) robustness. Understanding the specifics of DL, as compared to, say, other forms of nonlinear regression or statistical learning, is interesting from a mathematical perspective, but at the same time it is of crucial importance in practice: treating neural networks as mere black boxes may be sufficient in certain cases, but many applications require watertight performance guarantees and a deeper understanding of what could go wrong and why. It is probably fair to say that, despite being mathematically well founded as a method for approximating complicated functions, DL still resembles modern alchemy and remains firmly in the hands of engineers and computer scientists. Nevertheless, it is evident that those specifics of DL that could explain its success in applications demand systematic mathematical treatment. In this work, we review robustness issues of DL and, in particular, bridge concerns and approaches from approximation theory to statistical learning theory. Further, we review Bayesian Deep Learning as a means for uncertainty quantification and rigorous explainability.
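To make the abstract's notion of adversarial (non-)robustness concrete, the following is a minimal, self-contained sketch, not taken from the paper: a fast-gradient-sign-style perturbation of a toy logistic-regression classifier, showing how a small input change, invisible at the level of individual features, can flip a confident prediction. The model, weights, and epsilon are illustrative assumptions, not the authors' construction.

```python
# Illustrative sketch (assumption, not the paper's method): an FGSM-style
# perturbation of a toy logistic-regression "network" demonstrating
# adversarial non-robustness.
import numpy as np

rng = np.random.default_rng(0)
d = 20

# "Trained" linear classifier: p(y=1 | x) = sigmoid(w @ x + b).
w = rng.normal(size=d)
b = 0.0

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def predict_proba(x):
    return sigmoid(w @ x + b)

def input_gradient(x, y):
    # Gradient of the cross-entropy loss with respect to the *input* x:
    # d/dx [-y log p - (1 - y) log(1 - p)] = (p - y) * w
    return (predict_proba(x) - y) * w

# Construct a clean input whose logit is exactly +2, i.e. confidently class 1.
x_clean = rng.normal(size=d)
x_clean += (2.0 - b - w @ x_clean) / (w @ w) * w
y_true = 1.0

# FGSM-style attack: nudge every coordinate by eps in the direction that
# increases the loss. A small L_inf budget suffices to flip the decision.
eps = 0.3
x_adv = x_clean + eps * np.sign(input_gradient(x_clean, y_true))

print(f"clean prediction       p(y=1|x) = {predict_proba(x_clean):.3f}")
print(f"adversarial prediction p(y=1|x) = {predict_proba(x_adv):.3f}")
print(f"L_inf perturbation size          = {np.max(np.abs(x_adv - x_clean)):.3f}")
```

Even for this linear toy model, the signed-gradient perturbation moves every coordinate by only eps while shifting the logit by eps times the L1 norm of the weights, which is why a confident prediction can be overturned by a perturbation that is small in every individual feature; deep networks exhibit the same effect in far higher dimensions.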

