Topological Uncertainty: Monitoring trained neural networks through persistence of activation graphs

05/07/2021
by Théo Lacombe et al.
Although neural networks are capable of reaching astonishing performance in a wide variety of contexts, properly training networks on complicated tasks requires expertise and can be expensive from a computational perspective. In industrial applications, data coming from an open-world setting might differ widely from the benchmark datasets on which a network was trained. Being able to monitor the presence of such variations without retraining the network is of crucial importance. In this article, we develop a method to monitor trained neural networks based on the topological properties of their activation graphs. To each new observation, we assign a Topological Uncertainty, a score that aims to assess the reliability of the predictions by investigating the whole network instead of only its final layer, as practitioners typically do. Our approach works entirely at a post-training level and does not require any assumption on the network architecture, the optimization scheme, or the use of data augmentation or auxiliary datasets; it can be faithfully applied to a large range of network architectures and data types. We showcase experimentally the potential of Topological Uncertainty in the context of trained-network selection, out-of-distribution detection, and shift detection, on both synthetic and real datasets of images and graphs.
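To make the idea concrete, the sketch below computes a zeroth-dimensional persistence summary of one layer's activation graph. This is an illustrative approximation, not the authors' implementation: the edge weighting |w_ji * a_i| and the Kruskal-style sweep (in the spirit of the Neural Persistence line of work cited below) are assumptions, and the function name is hypothetical.

```python
import numpy as np

def activation_graph_persistence(weights, activations):
    """Sketch: 0-dim persistence pairs of a layer's bipartite activation
    graph. Edge (i, j) between input unit i and output unit j is weighted
    by |w[j, i] * a[i]| (an assumed weighting). Pairs come from a
    Kruskal-style sweep over edges in decreasing weight order: every time
    two connected components merge, one component "dies" at that weight."""
    n_in = len(activations)
    n_out = weights.shape[0]
    # Bipartite edges: input node i <-> output node n_in + j.
    edges = []
    for j in range(n_out):
        for i in range(n_in):
            edges.append((abs(weights[j, i] * activations[i]), i, n_in + j))
    edges.sort(reverse=True)  # process strongest edges first

    parent = list(range(n_in + n_out))  # union-find forest

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path compression
            x = parent[x]
        return x

    pairs = []  # (birth, death) pairs of connected components
    w_max = edges[0][0] if edges else 0.0
    for w, u, v in edges:
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv           # merge: one component dies at w
            pairs.append((w_max, w))  # all components are born at w_max
    return pairs
```

A per-input score in the spirit of the paper could then compare such diagrams, layer by layer, against average diagrams collected on training data; large deviations flag unreliable predictions.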

Related research

11/22/2018 - Towards Robust Neural Networks with Lipschitz Continuity
Deep neural networks have shown remarkable performance across a wide ran...

10/19/2021 - Activation Landscapes as a Topological Summary of Neural Network Performance
We use topological data analysis (TDA) to study how data transforms as i...

10/25/2020 - Towards Interaction Detection Using Topological Analysis on Neural Networks
Detecting statistical interactions between input features is a crucial a...

12/23/2018 - Neural Persistence: A Complexity Measure for Deep Neural Networks Using Algebraic Topology
While many approaches to make neural networks more fathomable have been ...

06/08/2017 - Enhancing The Reliability of Out-of-distribution Image Detection in Neural Networks
We consider the problem of detecting out-of-distribution images in neura...

11/18/2018 - Enhancing the Robustness of Prior Network in Out-of-Distribution Detection
With the recent surge of interests in deep neural networks, more real-wo...

11/27/2021 - Nonparametric Topological Layers in Neural Networks
Various topological techniques and tools have been applied to neural net...
