Private and Efficient Meta-Learning with Low Rank and Sparse Decomposition

10/07/2022
by Soumyabrata Pal, et al.

Meta-learning is critical for a variety of practical ML systems, such as personalized recommendation systems, that must generalize to new tasks despite a small number of task-specific training points. Existing meta-learning techniques follow one of two complementary approaches: learning a low-dimensional representation of points shared across all tasks, or task-specific fine-tuning of a global model trained on all the tasks. In this work, we propose a novel meta-learning framework that combines both techniques to handle a large number of data-starved tasks. Our framework models network weights as a sum of low-rank and sparse matrices: the low-rank part captures information from multiple domains together, while the sparse part allows task-specific personalization. We instantiate and study the framework in the linear setting, where the problem reduces to estimating the sum of a rank-r matrix and a k-column-sparse matrix from a small number of linear measurements. We propose an alternating minimization method with hard thresholding, AMHT-LRS, that learns the low-rank and sparse parts effectively and efficiently. For the realizable setting with Gaussian data, we show that AMHT-LRS solves the problem efficiently with a nearly optimal number of samples. We further extend AMHT-LRS to preserve the privacy of each individual user in the dataset while still guaranteeing strong generalization with a nearly optimal number of samples. Finally, on multiple datasets, we demonstrate that the framework yields personalized models with superior performance in the data-scarce regime.
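To make the linear instantiation concrete, below is a minimal NumPy sketch of the alternating-minimization-with-hard-thresholding idea: each task's weight vector is modeled as w_i = U b_i + s_i, with a shared d x r factor U and a k-sparse correction s_i, and the updates alternate between least-squares fits of the low-rank coefficients and hard-thresholded gradient steps on the sparse parts. This is an illustrative toy, not the paper's actual AMHT-LRS algorithm (and it omits the differential-privacy extension); the function names, initialization, step size, and iteration counts are all assumptions.

```python
import numpy as np

def hard_threshold(v, k):
    """Keep the k largest-magnitude entries of v; zero out the rest."""
    out = np.zeros_like(v)
    if k > 0:
        top = np.argsort(np.abs(v))[-k:]
        out[top] = v[top]
    return out

def amht_lrs_sketch(Xs, ys, r, k, n_iters=100, step=0.5):
    """Toy alternating minimization with hard thresholding (hypothetical
    simplification of AMHT-LRS). Models task i's weights as U @ b_i + s_i,
    where U (d x r) is shared across tasks and each s_i is k-sparse."""
    d, t = Xs[0].shape[1], len(Xs)
    rng = np.random.default_rng(0)
    U = np.linalg.qr(rng.standard_normal((d, r)))[0]  # random orthonormal init
    B = np.zeros((r, t))                              # low-rank coefficients
    S = np.zeros((d, t))                              # k-sparse corrections
    for _ in range(n_iters):
        # (1) Fix U and S; per task, least squares for the coefficients b_i.
        for i, (X, y) in enumerate(zip(Xs, ys)):
            B[:, i] = np.linalg.lstsq(X @ U, y - X @ S[:, i], rcond=None)[0]
        # (2) Fix B and S; averaged gradient step on U, then re-orthonormalize.
        G = np.zeros_like(U)
        for i, (X, y) in enumerate(zip(Xs, ys)):
            err = X @ (U @ B[:, i] + S[:, i]) - y
            G += np.outer(X.T @ err, B[:, i]) / len(y)
        U = np.linalg.qr(U - step * G / t)[0]
        # (3) Fix the low-rank part; gradient step + hard thresholding on each s_i.
        for i, (X, y) in enumerate(zip(Xs, ys)):
            err = X @ (U @ B[:, i] + S[:, i]) - y
            S[:, i] = hard_threshold(S[:, i] - step * X.T @ err / len(y), k)
    return U, B, S

# Quick synthetic, realizable check (dimensions are illustrative).
d, t, r, k, n = 50, 20, 2, 3, 25
rng = np.random.default_rng(1)
U_star = np.linalg.qr(rng.standard_normal((d, r)))[0]
Xs, ys = [], []
for _ in range(t):
    w = U_star @ rng.standard_normal(r) + hard_threshold(rng.standard_normal(d), k)
    X = rng.standard_normal((n, d))
    Xs.append(X)
    ys.append(X @ w)  # noiseless labels from a rank-r-plus-sparse model
U_hat, B_hat, S_hat = amht_lrs_sketch(Xs, ys, r, k)
```

The alternation mirrors the framework's split: steps (1) and (2) refine the shared low-rank subspace, while step (3) personalizes each task with at most k nonzero corrections. The paper's actual algorithm and its privacy-preserving variant differ in details such as initialization, thresholding schedule, and the noise addition needed for differential privacy.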


Related Research

05/18/2021 · Sample Efficient Linear Meta-Learning by Alternating Minimization
Meta-learning synthesizes and leverages the knowledge from a given set o...

04/08/2021 · Support-Target Protocol for Meta-Learning
The support/query (S/Q) training protocol is widely used in meta-learnin...

10/12/2021 · Meta Learning Low Rank Covariance Factors for Energy-Based Deterministic Uncertainty
Numerous recent works utilize bi-Lipschitz regularization of neural netw...

02/14/2022 · Trace norm regularization for multi-task learning with scarce data
Multi-task learning leverages structural similarities between multiple t...

10/27/2019 · Spectral Algorithm for Low-rank Multitask Regression
Multitask learning, i.e. taking advantage of the relatedness of individu...

10/05/2020 · MetaPhys: Unsupervised Few-Shot Adaptation for Non-Contact Physiological Measurement
There are large individual differences in physiological processes, makin...

06/22/2023 · Generalized Low-Rank Update: Model Parameter Bounds for Low-Rank Training Data Modifications
In this study, we have developed an incremental machine learning (ML) me...
