Is It Time to Redefine the Classification Task for Deep Neural Networks?

10/11/2020
by Keji Han, et al.

Deep neural networks (DNNs) have been shown to be vulnerable to adversarial examples, which are generated by adding small adversarial perturbations to legitimate examples in order to cause DNNs to produce wrong outputs. Most existing work focuses on the robustness of the deep model itself, while little attention has been paid to the robustness of the learning task defined on that model. We therefore reframe the issue as the robustness of the deep learning system as a whole. A deep learning system consists of a deep model and the learning task defined on it; the deep model is usually a deep neural network, comprising the model architecture, the data, the training loss, and the training algorithm. We conjecture that the vulnerability of deep learning systems is also rooted in the learning task itself. This paper defines the interval-label classification task for deep classification systems, in which each label is a predefined non-overlapping interval rather than a fixed value (hard label) or a probability vector (soft label). Experimental results demonstrate that the interval-label classification task is more robust than the traditional classification task while retaining comparable accuracy.
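To make the idea concrete, here is a minimal PyTorch sketch of one plausible reading of the interval-label task: each of the K classes is assigned a disjoint interval on a one-dimensional regression output, the loss is zero anywhere inside the target's interval, and a prediction is scored by which interval the output falls into. The abstract does not specify the exact formulation, so the helper names (class_interval, interval_label_loss, predict) and the constants INTERVAL_WIDTH and MARGIN are illustrative assumptions, not the authors' implementation.

import torch
import torch.nn.functional as F

NUM_CLASSES = 10
INTERVAL_WIDTH = 1.0   # width of each class interval (assumed)
MARGIN = 0.25          # gap separating neighbouring intervals (assumed)

def class_interval(y):
    # Map integer class y to its non-overlapping interval [lo, hi].
    lo = y * (INTERVAL_WIDTH + MARGIN)
    return lo, lo + INTERVAL_WIDTH

def interval_label_loss(output, target):
    # Zero loss anywhere inside the target interval; hinge penalty outside it.
    lo = target.float() * (INTERVAL_WIDTH + MARGIN)
    hi = lo + INTERVAL_WIDTH
    below = F.relu(lo - output)   # penalised only when output < lo
    above = F.relu(output - hi)   # penalised only when output > hi
    return (below + above).mean()

def predict(output):
    # Assign each output to the class whose interval centre is nearest.
    centers = torch.arange(NUM_CLASSES) * (INTERVAL_WIDTH + MARGIN) + INTERVAL_WIDTH / 2
    return torch.argmin((output.unsqueeze(-1) - centers).abs(), dim=-1)

Under this reading, any value inside an interval counts as correct, so an adversarial perturbation must push the output across the margin and into a different interval before the prediction changes, which matches the intuition behind the robustness claim above.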

