
Private and Efficient Meta-Learning with Low Rank and Sparse Decomposition

by Soumyabrata Pal et al.

Meta-learning is critical for a variety of practical ML systems – such as personalized recommendation systems – that must generalize to new tasks despite having only a small number of task-specific training points. Existing meta-learning techniques follow two complementary approaches: learning a low-dimensional representation of points shared across all tasks, or task-specific fine-tuning of a global model trained on all tasks. In this work, we propose a novel meta-learning framework that combines both techniques to handle a large number of data-starved tasks. Our framework models network weights as a sum of low-rank and sparse matrices: the low-rank part captures information shared across multiple domains, while the sparse part allows task-specific personalization. We instantiate and study the framework in the linear setting, where the problem reduces to estimating the sum of a rank-r matrix and a k-column-sparse matrix from a small number of linear measurements. We propose an alternating minimization method with hard thresholding – AMHT-LRS – to learn the low-rank and sparse parts effectively and efficiently. For the realizable, Gaussian-data setting, we show that AMHT-LRS solves the problem efficiently with a nearly optimal number of samples. We further extend AMHT-LRS to preserve the privacy of each individual user in the dataset while still ensuring strong generalization with a nearly optimal number of samples. Finally, on multiple datasets, we demonstrate that the framework allows personalized models to achieve superior performance in the data-scarce regime.



