Class Conditional Gaussians for Continual Learning
Dealing with representation shift is one of the main problems in online continual learning. Current methods mainly address this by reducing representation shift, but they leave the classifier on top of the representation to adapt slowly, over many update steps, to the remaining shift, which increases forgetting. We propose DeepCCG, an empirical Bayesian approach to this problem. DeepCCG works by updating the posterior of a class conditional Gaussian classifier, so that the classifier adapts instantly to representation shift. The class conditional Gaussian classifier also enables DeepCCG to update the representation using a log conditional marginal likelihood loss, which can be seen as a new form of replay. To perform the updates to the classifier and the representation, DeepCCG maintains a fixed number of examples in memory; a key part of the method is therefore selecting which examples to store, choosing the subset that minimises the KL divergence between the true posterior and the posterior induced by the subset. We demonstrate DeepCCG's performance on a range of settings, including those with overlapping tasks, which have thus far been under-explored. In our experiments, DeepCCG outperforms all other methods, demonstrating its potential.
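To illustrate the class conditional Gaussian component, the sketch below implements a conjugate Bayesian update of per-class mean parameters and the resulting posterior-predictive classifier in NumPy. The zero-mean prior, unit observation noise, shared identity covariance, uniform class prior, and all function names here are assumptions made for illustration, not necessarily DeepCCG's exact modelling choices.

import numpy as np
from scipy.special import logsumexp

def class_mean_posteriors(z_mem, y_mem, n_classes, prior_var=1.0):
    # Conjugate update for each class mean under a N(0, prior_var * I)
    # prior and unit-variance Gaussian observations (assumptions for
    # this sketch). Returns per-class posterior means and per-dimension
    # posterior variances, computed in closed form from the embeddings
    # of the examples held in memory.
    dim = z_mem.shape[1]
    means = np.zeros((n_classes, dim))
    variances = np.zeros(n_classes)
    for c in range(n_classes):
        zc = z_mem[y_mem == c]
        precision = len(zc) + 1.0 / prior_var
        means[c] = zc.sum(axis=0) / precision
        variances[c] = 1.0 / precision
    return means, variances

def predict_log_probs(z_query, z_mem, y_mem, n_classes):
    # Posterior-predictive class log-probabilities for query embeddings,
    # assuming a uniform class prior. Because the posterior is recomputed
    # directly from the stored embeddings, the classifier tracks any
    # shift in the embedding function without gradient steps.
    means, variances = class_mean_posteriors(z_mem, y_mem, n_classes)
    dim = z_query.shape[1]
    log_liks = np.empty((len(z_query), n_classes))
    for c in range(n_classes):
        var = 1.0 + variances[c]  # predictive variance per dimension
        diff = z_query - means[c]
        log_liks[:, c] = (-0.5 * (diff ** 2).sum(axis=1) / var
                          - 0.5 * dim * np.log(2 * np.pi * var))
    return log_liks - logsumexp(log_liks, axis=1, keepdims=True)

Under these assumptions, summing predict_log_probs over the true labels of a batch of query embeddings, conditioned on the embeddings in memory, yields a log conditional marginal likelihood of the kind the abstract describes as a replay-style training signal for the representation.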