Learning from Aggregated Data: Curated Bags versus Random Bags

05/16/2023
by Lin Chen, et al.

Protecting user privacy is a major concern for many machine learning systems that are deployed at scale and collect data from a diverse population. One way to address this concern is to collect and release data labels in an aggregated manner, so that information about any single user is combined with that of others. In this paper, we explore the possibility of training machine learning models with aggregated data labels rather than individual labels. Specifically, we consider two natural aggregation procedures suggested by practitioners: curated bags, where data points are grouped based on common features, and random bags, where data points are grouped randomly into bags of similar sizes. For the curated bag setting and for a broad range of loss functions, we show that gradient-based learning can be performed without any degradation in performance caused by aggregating the data. Our method is based on the observation that the sum of the gradients of the loss function over the individual examples in a curated bag can be computed from the aggregate label alone, without access to individual labels. For the random bag setting, we provide a generalization risk bound based on the Rademacher complexity of the hypothesis class, and we show how empirical risk minimization can be regularized to achieve the smallest such bound. As our bound indicates, the random bag setting exhibits a trade-off between bag size and achievable error rate. Finally, we conduct a careful empirical study to confirm our theoretical findings. In particular, our results suggest that aggregate learning can be an effective method for preserving user privacy while maintaining model accuracy.
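The key observation for curated bags can be illustrated with a minimal sketch. Assume squared loss and a bag whose points share a common feature vector x (one simple instance of grouping on common features); the gradient of the bag's total loss then depends on the individual labels only through their sum. The function names and setup below are illustrative, not from the paper:

```python
import numpy as np

def bag_gradient(w, x, label_sum, bag_size):
    """Gradient of sum_i 0.5 * (w.x - y_i)^2 over a curated bag,
    computed from the aggregate label sum alone.

    sum_i (w.x - y_i) * x  =  (bag_size * w.x - label_sum) * x
    """
    return (bag_size * np.dot(w, x) - label_sum) * x

# Sanity check against the individual-label gradient on toy data.
rng = np.random.default_rng(0)
w = rng.normal(size=3)
x = rng.normal(size=3)        # shared feature vector for the bag
y = rng.normal(size=5)        # individual labels; only their sum is used above

g_individual = sum((np.dot(w, x) - yi) * x for yi in y)
g_aggregate = bag_gradient(w, x, y.sum(), len(y))
assert np.allclose(g_individual, g_aggregate)
```

Because the two gradients coincide exactly, gradient descent on aggregate labels follows the same trajectory as on individual labels in this setting, which is why no accuracy is lost.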

Related research

- 02/24/2014 · On Learning from Label Proportions — "Learning from Label Proportions (LLP) is a learning setting, where the t..."
- 02/06/2023 · Easy Learning from Label Proportions — "We consider the problem of Learning from Label Proportions (LLP), a weak..."
- 03/04/2022 · Learning from Label Proportions by Learning with Label Noise — "Learning from label proportions (LLP) is a weakly supervised classificat..."
- 04/24/2023 · More Communication Does Not Result in Smaller Generalization Error in Federated Learning — "We study the generalization error of statistical learning models in a Fe..."
- 06/30/2016 · Ballpark Learning: Estimating Labels from Rough Group Comparisons — "We are interested in estimating individual labels given only coarse, agg..."
- 10/05/2020 · Learning by Minimizing the Sum of Ranked Range — "In forming learning objectives, one oftentimes needs to aggregate a set ..."
- 09/14/2020 · The Shooting Regressor; Randomized Gradient-Based Ensembles — "An ensemble method is introduced that utilizes randomization and loss fu..."
