Examining Redundancy in the Context of Safe Machine Learning

07/03/2020
by   Hans Dermot Doran, et al.

This paper describes a set of experiments with neural network classifiers on the MNIST database of digits. The purpose is to investigate naïve implementations of redundant architectures as a first step towards safe and dependable machine learning. We report a set of measurements on the MNIST database that ultimately underline the expected difficulties in using NN classifiers in safe and dependable systems.
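The full paper is not reproduced here, but as a rough illustration of what a naïve redundant classifier architecture could look like, the sketch below applies 2-out-of-3 majority voting over the per-image predictions of three independently trained MNIST classifier replicas and flags samples on which the replicas disagree. This is an assumption about one plausible redundancy scheme, not the authors' code; the function name vote_2oo3 and the toy prediction arrays are hypothetical.

import numpy as np

def vote_2oo3(preds_a, preds_b, preds_c):
    """2-out-of-3 majority vote over per-sample class predictions.

    Returns (voted_labels, disagreement_mask). Samples on which all three
    replicas differ are marked unresolved (-1) so a safety monitor can
    reject them instead of guessing.
    """
    preds = np.stack([preds_a, preds_b, preds_c], axis=0)  # shape (3, n)
    voted = np.full(preds.shape[1], -1, dtype=int)
    disagreement = np.zeros(preds.shape[1], dtype=bool)
    for i in range(preds.shape[1]):
        labels, counts = np.unique(preds[:, i], return_counts=True)
        if counts.max() >= 2:                    # at least two replicas agree
            voted[i] = labels[counts.argmax()]
            disagreement[i] = counts.max() < 3   # agreement, but not unanimous
        else:                                    # all three replicas differ
            disagreement[i] = True
    return voted, disagreement

if __name__ == "__main__":
    # Toy stand-ins for the digit predictions of three classifier replicas.
    a = np.array([7, 2, 1, 0, 4])
    b = np.array([7, 2, 1, 9, 4])
    c = np.array([7, 2, 8, 5, 4])
    labels, flags = vote_2oo3(a, b, c)
    print(labels)  # [ 7  2  1 -1  4]
    print(flags)   # [False False  True  True False]

In a safety context, the disagreement mask matters as much as the voted label: it is the hook where a dependable system would refuse the classification rather than silently accept a majority that masks replica divergence.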


