Evaluating natural language processing models with generalization metrics that do not need access to any training or testing data

02/06/2022
by Yaoqing Yang, et al.

The search for effective and robust generalization metrics has been the focus of recent theoretical and empirical work. In this paper, we discuss the performance of natural language processing (NLP) models, and we evaluate various existing and novel generalization metrics. Compared to prior studies, we (i) focus on NLP instead of computer vision (CV), (ii) focus on generalization metrics that predict test error instead of the generalization gap, (iii) focus on generalization metrics that do not need access to data, and (iv) focus on the heavy-tail (HT) phenomenon, which has received comparatively less attention in the study of deep neural networks (NNs). We extend recent HT-based work, which focuses on power law (PL) distributions, and we study exponential (EXP) and exponentially truncated power law (E-TPL) fits to the empirical spectral densities (ESDs) of weight matrices. Our detailed empirical studies show that (i) shape metrics, i.e., metrics obtained from fitting the shape of the ESDs, perform uniformly better at predicting generalization performance than the scale metrics commonly studied in the literature, as measured by the average rank correlation with generalization performance across all of our experiments; (ii) among the forty generalization metrics studied in our paper, a new shape metric introduced in this paper, which measures the distance between the empirical eigenvalues of weight matrices and those of randomly initialized weight matrices, achieves the highest worst-case rank correlation with generalization performance under a variety of training settings; and (iii) among the three HT distributions considered in our paper, the E-TPL fit of ESDs performs the most robustly.
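To make these quantities concrete, below is a minimal sketch (not the authors' implementation) of how one might compute a weight matrix's ESD, fit a PL tail exponent as a shape metric, and measure a simple distance to the ESD of a randomly initialized matrix of the same shape. The xmin choice, the Hill-type estimator, and the distance used here are illustrative simplifications, not the exact metrics from the paper.

```python
# Hedged sketch: ESD of a weight matrix, a PL tail-exponent shape metric,
# and a toy distance to the spectrum of a randomly initialized matrix.
import numpy as np

def esd(W):
    """Eigenvalues of the correlation matrix W^T W, i.e. the ESD support."""
    _, s, _ = np.linalg.svd(W, full_matrices=False)
    return np.sort(s ** 2)

def pl_alpha(eigs, xmin=None):
    """Hill-type maximum-likelihood estimate of a power-law tail exponent."""
    if xmin is None:
        xmin = np.median(eigs)  # crude xmin choice, for illustration only
    tail = eigs[eigs >= xmin]
    return 1.0 + len(tail) / np.sum(np.log(tail / xmin))

def rand_esd_distance(W, seed=0):
    """Toy 'distance from random initialization': compare sorted eigenvalues
    of W against those of a Gaussian matrix with matched shape and scale."""
    rng = np.random.default_rng(seed)
    W_rand = rng.normal(0.0, W.std(), size=W.shape)
    return float(np.mean(np.abs(esd(W) - esd(W_rand))))

if __name__ == "__main__":
    W = np.random.standard_normal((512, 256)) / np.sqrt(256)
    eigs = esd(W)
    print("PL tail exponent alpha:", pl_alpha(eigs))
    print("distance from random-init ESD:", rand_esd_distance(W))
```

Note that both quantities are computed from the trained weights alone, which is what allows such metrics to be evaluated without access to any training or testing data.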

