Labels, Information, and Computation: Efficient, Privacy-Preserving Learning Using Sufficient Labels

04/19/2021
by Shiyu Duan, et al.

In supervised learning, obtaining a large set of fully-labeled training data is expensive. We show that full label information on every single training example is not always needed to train a competent classifier. Specifically, inspired by the principle of sufficiency in statistics, we present a statistic (a summary) of the fully-labeled training set that captures almost all the relevant information for classification but is at the same time easier to obtain directly. We call this statistic "sufficiently-labeled data" and prove its sufficiency and efficiency for finding the optimal hidden representations, on which competent classifier heads can be trained using as little as a single randomly-chosen fully-labeled example per class. Sufficiently-labeled data can be obtained from annotators directly, without first collecting fully-labeled data, and we prove that it is easier to obtain than fully-labeled data. Furthermore, sufficiently-labeled data naturally preserves user privacy by storing relative, instead of absolute, information. Extensive experimental results are provided to support our theory.
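To make the two-step recipe concrete, below is a minimal, hypothetical sketch. The synthetic data, the toy MLP encoder, the pairwise contrastive loss (a stand-in for the paper's actual training objective), and the nearest-prototype head are all assumptions, not the authors' method; only the data format, pairs (x, x') annotated with a relative same-class bit, and the use of a single fully-labeled example per class follow the abstract.

# Hypothetical sketch, not the paper's implementation.
# Assumptions: synthetic data, MLP encoder, a simple pairwise
# contrastive loss standing in for the paper's objective, and a
# nearest-prototype classifier head.
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)
NUM_CLASSES, DIM = 3, 8

# Synthetic inputs with known classes (used only to simulate annotators).
y = torch.randint(0, NUM_CLASSES, (512,))
x = F.one_hot(y, NUM_CLASSES).float() @ torch.randn(NUM_CLASSES, DIM)
x = x + 0.1 * torch.randn(512, DIM)

# "Sufficiently-labeled" data: random pairs plus a relative same-class bit.
i, j = torch.randint(0, 512, (2, 1024))
same = (y[i] == y[j]).float()

encoder = nn.Sequential(nn.Linear(DIM, 16), nn.ReLU(), nn.Linear(16, 4))
opt = torch.optim.Adam(encoder.parameters(), lr=1e-2)

# Step 1: learn the hidden representation from relative labels alone.
for _ in range(200):
    d = (encoder(x[i]) - encoder(x[j])).pow(2).sum(dim=1)
    # Pull same-class pairs together, push different-class pairs apart.
    loss = (same * d + (1.0 - same) * F.relu(1.0 - d)).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()

# Step 2: build a classifier head from ONE fully-labeled example per
# class, here by nearest-prototype matching in the learned space.
proto_idx = torch.stack([(y == c).nonzero()[0, 0] for c in range(NUM_CLASSES)])
with torch.no_grad():
    protos = encoder(x[proto_idx])            # one anchor per class
    preds = torch.cdist(encoder(x), protos).argmin(dim=1)
print(f"train accuracy: {(preds == y).float().mean().item():.2f}")

The nearest-prototype head is just one simple way to exploit a single labeled example per class; a classifier head trained on top of the frozen representation would serve the same role.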


