
Performance Bounds for Neural Network Estimators: Applications in Fault Detection

03/22/2021
by Navid Hashemi, et al.
Johns Hopkins University

We exploit recent results on quantifying the robustness of neural networks to input variations to construct and tune a model-based anomaly detector, where the data-driven estimator is an autoregressive neural network. In tuning, we specifically provide upper bounds on the rate of false alarms expected under normal operation. To accomplish this, we extend existing theory to allow the propagation of multiple confidence ellipsoids through a neural network. The ellipsoid that bounds the network's output under the input variation informs the sensitivity, and thus the threshold tuning, of the detector. We demonstrate this approach on a linear and a nonlinear dynamical system.
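As a rough illustration of the idea, the sketch below propagates an input confidence ellipsoid through a small feed-forward estimator using a crude layer-wise Lipschitz bound, then places the detection threshold just above the resulting output bound. The network weights, ellipsoid shape, noise margin, and the Lipschitz-based propagation itself are illustrative assumptions standing in for the paper's tighter ellipsoid-propagation result; the sketch only conveys how an output bound translates into a false-alarm-aware threshold.

```python
# Minimal sketch (NumPy only): bound how far the neural-network estimate can
# move while the input stays inside a confidence ellipsoid, then place the
# detector threshold above that bound. Everything below is illustrative,
# not the paper's construction.
import numpy as np

rng = np.random.default_rng(0)

# Toy autoregressive estimator: predicts x[k] from [x[k-1], x[k-2]].
W1, b1 = 0.3 * rng.normal(size=(16, 2)), np.zeros(16)
W2, b2 = 0.3 * rng.normal(size=(1, 16)), np.zeros(1)

def estimator(z):
    """One-step prediction from the two most recent samples."""
    return W2 @ np.tanh(W1 @ z + b1) + b2

# Input confidence ellipsoid {z : (z - c)^T P^{-1} (z - c) <= 1}
# describing normal-operation variation around an operating point c.
c = np.array([0.5, 0.4])
P = np.diag([0.05, 0.05])

# Conservative stand-in for ellipsoid propagation: the product of layer
# spectral norms upper-bounds the network's Lipschitz constant
# (tanh is 1-Lipschitz), so the output moves at most L * r_in.
L = np.linalg.norm(W1, 2) * np.linalg.norm(W2, 2)
r_in = np.sqrt(np.linalg.eigvalsh(P).max())   # largest semi-axis of the ellipsoid
r_out = L * r_in                               # induced bound on output variation

# Threshold tuning: under normal operation the residual stays within the
# output bound plus whatever margin is allotted to measurement noise, so
# false alarms are limited to the noise exceeding its assumed margin.
noise_margin = 0.05      # assumed bound/quantile on measurement noise
threshold = r_out + noise_margin

def detect(y_measured, z):
    """Flag an anomaly when the prediction residual exceeds the threshold."""
    residual = np.abs(y_measured - estimator(z)).item()
    return residual > threshold

z_normal = c + np.array([0.1, -0.1])                 # inside the ellipsoid
print(detect(estimator(z_normal) + 0.01, z_normal))  # expected: False (no alarm)
print(detect(estimator(z_normal) + 1.0, z_normal))   # expected: True (fault flagged)
```

A tighter propagation, for example an ellipsoidal output bound in place of the scalar r_out used here, would yield a less conservative threshold for the same false-alarm guarantee.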

Related research

08/08/2022 · Robust Training and Verification of Implicit Neural Networks: A Non-Euclidean Contractive Approach
This paper proposes a theoretical and computational framework for traini...

09/22/2019 · HAWKEYE: Adversarial Example Detector for Deep Neural Networks
Adversarial examples (AEs) are images that can mislead deep neural netwo...

05/29/2022 · The Berry-Esséen Upper Bounds of Vasicek Model Estimators
The Berry-Esséen upper bounds of moment estimators and least squares est...

05/13/2022 · Data-Driven Upper Bounds on Channel Capacity
We consider the problem of estimating an upper bound on the capacity of ...

09/23/2022 · A Robust and Explainable Data-Driven Anomaly Detection Approach For Power Electronics
Timely and accurate detection of anomalies in power electronics is becom...

06/03/2019 · Correctness Verification of Neural Networks
We present the first verification that a neural network produces a corre...

04/07/2023 · A modular framework for stabilizing deep reinforcement learning control
We propose a framework for the design of feedback controllers that combi...