Fantastic DNN Classifiers and How to Identify them without Data
Current algorithms and architectures can create excellent DNN classifier models from example data. In general, larger training datasets yield better model estimates, which improve test performance. Existing methods for predicting generalization performance rely on hold-out test examples. To the best of our knowledge, no method currently exists that can estimate the quality of a trained DNN classifier without test data. In this paper, we show that the quality of a trained DNN classifier can be assessed without any example data. We consider a DNN to be composed of a feature extractor and a feature classifier; the feature extractor's output is fed to the classifier. The proposed method iteratively creates a prototype in the input space for each class by minimizing a cross-entropy loss function at the output of the network. We use these prototypes and their feature relationships to reveal the quality of the classifier. We have developed two metrics: one using the features of the prototypes and the other using adversarial examples corresponding to each prototype. Empirical evaluations show that accuracy obtained from test examples is directly proportional to the quality measures obtained from the proposed metrics. We report our observations for ResNet18 on the Tiny ImageNet, CIFAR100, and CIFAR10 datasets. The proposed metrics can be used to compare the performance of two or more classifiers without test examples.
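The following is a minimal sketch, in PyTorch, of the prototype-generation idea summarized above: for each class, an input-space tensor is optimized so that the network's cross-entropy loss for that class label is minimized. This is not the authors' released code; the model, input shape, optimizer, and hyperparameters (`resnet18`, 32x32 inputs, Adam, 500 steps) are illustrative assumptions.

```python
# Sketch of input-space class-prototype generation (illustrative assumptions only).
import torch
import torch.nn.functional as F
from torchvision.models import resnet18


def make_class_prototypes(model, num_classes, input_shape=(3, 32, 32),
                          steps=500, lr=0.1, device="cpu"):
    """Create one input-space prototype per class by minimizing cross-entropy."""
    model = model.to(device).eval()
    prototypes = []
    for c in range(num_classes):
        # Start from random noise in the input space and treat it as the optimization variable.
        x = torch.randn(1, *input_shape, device=device, requires_grad=True)
        target = torch.tensor([c], device=device)
        optimizer = torch.optim.Adam([x], lr=lr)
        for _ in range(steps):
            optimizer.zero_grad()
            logits = model(x)
            # Drive the network output toward class c.
            loss = F.cross_entropy(logits, target)
            loss.backward()
            optimizer.step()
        prototypes.append(x.detach())
    return torch.cat(prototypes, dim=0)  # one prototype image per class


# Example: prototypes for a 10-class ResNet18 (untrained here, for illustration).
model = resnet18(num_classes=10)
protos = make_class_prototypes(model, num_classes=10)
print(protos.shape)  # torch.Size([10, 3, 32, 32])
```

The resulting prototypes (and, per the abstract, their feature-space relationships and corresponding adversarial examples) are what the proposed metrics operate on; the specific metric computations are described in the full paper.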