Maximum Entropy Models for Fast Adaptation

06/30/2020
by Samarth Sinha, et al.

Deep neural networks have shown great promise on a variety of downstream tasks, but their ability to adapt to new data and tasks remains a challenging problem. A model's ability to perform few-shot adaptation to a novel task is important for the scalability and deployment of machine learning models. Recent work has shown that the learned features in a neural network follow a normal distribution [41], which results in a strong prior on the downstream task. This implicit overfitting to data from the training tasks limits the model's ability to generalize and adapt to unseen tasks at test time, and it highlights the importance of learning task-agnostic representations. In this paper, we propose a regularization scheme that places a max-entropy prior on the learned features of a neural network, so that the extracted features make minimal assumptions about the training data. We evaluate adaptation to unseen tasks by performing experiments in four distinct settings, and we find that our method compares favourably against multiple strong baselines across all of these experiments.
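
The abstract does not specify how the max-entropy prior is imposed, so the following is a minimal PyTorch sketch of one plausible reading: adding a penalty that maximizes the entropy of softmax-normalized features, discouraging the encoder from committing to a strong, training-task-specific prior. The names max_entropy_regularizer, encoder, and lambda_reg are illustrative assumptions, not identifiers from the paper.

```python
import torch
import torch.nn.functional as F

def max_entropy_regularizer(features: torch.Tensor) -> torch.Tensor:
    """Penalty that is minimized when the (softmax-normalized) feature
    distribution has maximal entropy, i.e. is closest to uniform.

    Hypothetical illustration of a max-entropy feature prior; the
    paper's exact formulation may differ.
    """
    probs = F.softmax(features, dim=-1)                 # treat features as a distribution
    log_probs = probs.clamp_min(1e-8).log()             # clamp for numerical stability
    entropy = -(probs * log_probs).sum(dim=-1)          # per-example Shannon entropy
    return -entropy.mean()                              # negate: minimizing maximizes entropy

# Illustrative usage inside a training step, where task_loss is the
# standard objective and lambda_reg trades it off against the prior:
#   loss = task_loss + lambda_reg * max_entropy_regularizer(encoder(x))
```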
