
Analyzing Monotonic Linear Interpolation in Neural Network Loss Landscapes
Linear interpolation between initial neural network parameters and conve...

Flexible Few-Shot Learning with Contextual Similarity
Existing approaches to few-shot learning deal with tasks that have persi...

Theoretical bounds on estimation error for meta-learning
Machine learning models have traditionally been developed under the assu...

Regularized linear autoencoders recover the principal components, eventually
Our understanding of learning input-output relationships with neural net...

Don't Blame the ELBO! A Linear VAE Perspective on Posterior Collapse
Posterior collapse in Variational Autoencoders (VAEs) arises when the va...

Preventing Gradient Attenuation in Lipschitz Constrained Convolutional Networks
Lipschitz constraints under L2 norm on deep neural networks are useful f...

Lookahead Optimizer: k steps forward, 1 step back
The vast majority of successful deep neural networks are trained using v...

Sorting out Lipschitz function approximation
Training neural networks subject to a Lipschitz constraint is useful for...

Adversarial Distillation of Bayesian Neural Network Posteriors
Bayesian neural networks (BNNs) allow us to reason about uncertainty in ...

Aggregated Momentum: Stability Through Passive Damping
Momentum is a simple and widely used trick which allows gradient-based o...
James Lucas