Estimating informativeness of samples with Smooth Unique Information
We define a notion of information that an individual sample provides to the training of a neural network, and we specialize it to measure both how much a sample informs the final weights and how much it informs the function computed by the weights. Though related, we show that these quantities have a qualitatively different behavior. We give efficient approximations of these quantities using a linearized network and demonstrate empirically that the approximation is accurate for real-world architectures, such as pre-trained ResNets. We apply these measures to several problems, such as dataset summarization, analysis of under-sampled classes, comparison of informativeness of different data sources, and detection of adversarial and corrupted examples. Our work generalizes existing frameworks but enjoys better computational properties for heavily over-parametrized models, which makes it possible to apply it to real-world networks.
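The abstract's key computational idea is that, for a linearized network, training reduces to (ridge-regularized) linear regression, so the effect of removing one sample on the learned weights can be computed in closed form. The sketch below illustrates this leave-one-out weight-change proxy on a toy linear model; it is a simplification under stated assumptions, not the paper's exact estimator: `linearized_features` stands in for the network Jacobians at the pre-trained weights, and `sample_informativeness` is a hypothetical name for the weight-space score.

```python
import numpy as np

def linearized_features(X):
    # In the paper's setting these would be the Jacobians of the network
    # outputs at the pre-trained weights; here the "network" is already
    # linear, so the raw inputs serve as features.
    return X

def ridge_fit(Phi, y, lam=1e-2):
    # Closed-form ridge regression: training a linearized model
    # with an L2 penalty.
    d = Phi.shape[1]
    return np.linalg.solve(Phi.T @ Phi + lam * np.eye(d), Phi.T @ y)

def sample_informativeness(X, y, lam=1e-2):
    """Leave-one-out proxy for per-sample informativeness: squared
    change in the fitted weights when sample i is removed (a stand-in
    for the weight-space quantity described in the abstract)."""
    Phi = linearized_features(X)
    w_full = ridge_fit(Phi, y, lam)
    n = len(y)
    scores = np.empty(n)
    for i in range(n):
        mask = np.arange(n) != i
        w_i = ridge_fit(Phi[mask], y[mask], lam)
        scores[i] = np.sum((w_full - w_i) ** 2)
    return scores

# Toy data: one corrupted label should receive the highest score,
# matching the abstract's use case of detecting corrupted examples.
rng = np.random.default_rng(0)
X = rng.normal(size=(20, 3))
w_true = np.array([1.0, -2.0, 0.5])
y = X @ w_true + 0.01 * rng.normal(size=20)
y[7] += 5.0  # corrupt one label
scores = sample_informativeness(X, y)
print(int(np.argmax(scores)))  # the corrupted sample stands out
```

For real architectures the closed-form leave-one-out step would be applied in the linearized (tangent-kernel) feature space rather than by refitting per sample, which is what makes the approximation efficient for over-parametrized models.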