
Performance Bounds for Neural Network Estimators: Applications in Fault Detection

by Navid Hashemi, et al.
Johns Hopkins University

We exploit recent results in quantifying the robustness of neural networks to input variations to construct and tune a model-based anomaly detector, where the data-driven estimator model is provided by an autoregressive neural network. In tuning, we specifically provide upper bounds on the rate of false alarms expected under normal operation. To accomplish this, we extend existing theory to allow for the propagation of multiple confidence ellipsoids through a neural network. The ellipsoid bounding the network's output under input variation informs the sensitivity, and thus the threshold tuning, of the detector. We demonstrate this approach on both a linear and a nonlinear dynamical system.
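The idea of bounding a network's output over an input confidence ellipsoid, then setting the alarm threshold from that bound, can be illustrated with a deliberately crude sketch. The paper's propagation machinery is more refined; here we substitute a simple Lipschitz outer bound (product of layer spectral norms for a ReLU network), and all network weights and ellipsoid parameters below are hypothetical placeholders:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical two-layer ReLU network standing in for the
# autoregressive estimator (weights are random placeholders).
W1, b1 = rng.standard_normal((8, 3)), rng.standard_normal(8)
W2, b2 = rng.standard_normal((2, 8)), rng.standard_normal(2)

def net(x):
    return W2 @ np.maximum(W1 @ x + b1, 0.0) + b2

# Input confidence ellipsoid {c + P_sqrt @ u : ||u|| <= 1}.
c = np.array([0.5, -0.2, 0.1])
P_sqrt = 0.1 * np.eye(3)

# Crude outer bound: ReLU is 1-Lipschitz, so the network's Lipschitz
# constant is at most the product of the layers' spectral norms.
L = np.linalg.norm(W1, 2) * np.linalg.norm(W2, 2)

# Every ellipsoid point maps into a ball of this radius around net(c),
# so a residual threshold at ||net(c)|| + radius yields no false alarms
# for inputs inside the ellipsoid (under this simplified model).
radius = L * np.linalg.norm(P_sqrt, 2)
threshold = np.linalg.norm(net(c)) + radius
```

A tighter bound (e.g. the ellipsoidal propagation the paper develops) would shrink `radius` and hence the threshold, reducing missed detections while preserving the false-alarm guarantee.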


