Safety and Trustworthiness of Deep Neural Networks: A Survey

by   Xiaowei Huang, et al.

In the past few years, significant progress has been made on deep neural networks (DNNs), which have achieved human-level performance on several long-standing tasks such as image classification, natural language processing, and the ancient game of Go. With the broader deployment of DNNs in various applications, concerns about their safety and trustworthiness have been raised, particularly after fatal incidents involving self-driving cars. Research addressing these concerns is very active, with many papers released in the past few years, so it is infeasible, if not impossible, to cover all of this activity. This survey reviews current research efforts on making DNNs safe and trustworthy, focusing on works aligned with our humble vision of the safety and trustworthiness of DNNs. In total, we surveyed 178 papers, most of which were published in the most recent two years, i.e., 2017 and 2018.
