
Estimating informativeness of samples with Smooth Unique Information
We define a notion of the information that an individual sample provides to the training of a neural network, and we specialize it to measure both how much a sample informs the final weights and how much it informs the function computed by those weights. Though related, we show that these two quantities behave qualitatively differently. We give efficient approximations of these quantities using a linearized network and demonstrate empirically that the approximation is accurate for real-world architectures, such as pre-trained ResNets. We apply these measures to several problems, such as dataset summarization, analysis of under-sampled classes, comparison of the informativeness of different data sources, and detection of adversarial and corrupted examples. Our work generalizes existing frameworks but enjoys better computational properties for heavily over-parametrized models, which makes it possible to apply it to real-world networks.
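The abstract's distinction between weight-level and function-level information can be illustrated with a small sketch. The code below is not the paper's method; it is a hypothetical leave-one-out proxy on a linearized (ridge-regression) model, where the feature matrix `Phi` stands in for per-sample gradients of a pre-trained network. Removing a sample and measuring the shift in the weights gives a weight-space score, while measuring the shift in predictions on held-out probe points gives a function-space score; the two can rank samples differently, echoing the qualitative gap the abstract describes.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linearized model: rows of Phi stand in for per-sample gradients of a
# pre-trained network (NTK-style linearization). All names here are
# illustrative, not the paper's notation.
n, d = 40, 5
Phi = rng.normal(size=(n, d))           # "gradient" features of training samples
w_true = rng.normal(size=d)
y = Phi @ w_true + 0.1 * rng.normal(size=n)
lam = 1e-2                              # ridge regularizer

def ridge(Phi, y, lam):
    """Closed-form ridge solution w = (Phi^T Phi + lam I)^{-1} Phi^T y."""
    d = Phi.shape[1]
    return np.linalg.solve(Phi.T @ Phi + lam * np.eye(d), Phi.T @ y)

w_full = ridge(Phi, y, lam)

# Held-out probe points: these measure the *function* the weights compute.
Phi_test = rng.normal(size=(20, d))

def sample_informativeness(i):
    """Leave-one-out proxies for weight- and function-space information."""
    mask = np.arange(n) != i
    w_loo = ridge(Phi[mask], y[mask], lam)
    weight_info = np.linalg.norm(w_full - w_loo) ** 2          # shift in weights
    func_info = np.mean((Phi_test @ (w_full - w_loo)) ** 2)    # shift in outputs
    return weight_info, func_info

scores = np.array([sample_informativeness(i) for i in range(n)])
```

A sample that moves the weights a lot but in a direction the probe points barely see scores high on the first measure and low on the second, which is why the two rankings need not agree.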