Preventing Adversarial Use of Datasets through Fair Core-Set Construction

10/24/2019
by Benjamin Spector, et al.

We propose improving the privacy properties of a dataset by publishing only a strategically chosen "core-set" of the data containing a subset of the instances. The core-set allows strong performance on primary tasks, but forces poor performance on unwanted tasks. We give methods for both linear models and neural networks and demonstrate their efficacy on data.
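The idea in the abstract can be illustrated with a toy sketch: score each instance by how informative it is about the unwanted label, and publish only the least informative ones, so that the primary task still trains well while the unwanted task degrades. This is a heuristic illustration under assumed synthetic data, not the paper's actual core-set construction; all names and the selection rule are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: feature 0 carries the primary label, feature 1 is a noisy
# proxy for the unwanted (e.g. sensitive) label.
n = 200
X = rng.normal(size=(n, 2))
y_primary = (X[:, 0] > 0).astype(int)
y_unwanted = (X[:, 1] + 0.8 * rng.normal(size=n) > 0).astype(int)

def fit_linear(X, y):
    """Least-squares linear classifier with a bias term."""
    A = np.hstack([X, np.ones((len(X), 1))])
    w, *_ = np.linalg.lstsq(A, 2.0 * y - 1.0, rcond=None)
    return w

def accuracy(w, X, y):
    A = np.hstack([X, np.ones((len(X), 1))])
    return float(np.mean((A @ w > 0) == y))

def select_core_set(X, y_unwanted, k):
    # Greedy heuristic (an assumption, not the paper's algorithm):
    # keep the k instances least informative about the unwanted label,
    # i.e. those with the smallest signed margin on the proxy feature,
    # so a model trained on the core-set cannot recover that label.
    margin = (2.0 * y_unwanted - 1.0) * X[:, 1]
    return np.argsort(margin)[:k]

idx = select_core_set(X, y_unwanted, k=50)
acc_primary = accuracy(fit_linear(X[idx], y_primary[idx]), X, y_primary)
acc_unwanted = accuracy(fit_linear(X[idx], y_unwanted[idx]), X, y_unwanted)
# Expect: the primary task survives the subsetting, while the
# unwanted task is hurt, since its informative instances were dropped.
```

Because the primary signal (feature 0) is independent of the selection criterion, the published core-set remains useful for the primary task; the unwanted task sees only uninformative or misleading instances.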


