Learning to Succeed while Teaching to Fail: Privacy in Closed Machine Learning Systems

05/23/2017
by   Jure Sokolic, et al.

Security, privacy, and fairness have become critical concerns in the era of data science and machine learning. Increasingly, we see that achieving universally secure, private, and fair systems is practically impossible. We have seen, for example, how generative adversarial networks can be used to learn about the expected private training data; how the exploitation of auxiliary data can reveal private information in the original data; and how seemingly unrelated features can be predictive of one another. Confronted with this challenge, in this paper we open a new line of research in which security, privacy, and fairness are learned and enforced in a closed environment. The goal is to ensure that a given entity (e.g., a company or a government), trusted to infer certain information from our data, is blocked from inferring protected information from it. For example, a hospital might be allowed to produce a diagnosis for a patient (the positive task) without being able to infer the gender of the subject (the negative task). Similarly, a company can guarantee that internally it is not using the provided data for any undesired task, an important goal that does not contradict the virtually impossible challenge of blocking everybody from the undesired task. We design a system that learns to succeed at the positive task while simultaneously failing at the negative one, and we illustrate this with challenging cases where the positive task is actually harder than the negative one being blocked. Fairness with respect to the information in the negative task is often obtained automatically as a byproduct of the proposed approach. The framework and examples presented open the door to security, privacy, and fairness in important closed scenarios, ranging from private data-accumulating companies such as social networks to law enforcement and hospitals.
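The succeed-while-failing idea described above can be sketched as adversarial multi-task training: a shared encoder is optimized to help a positive-task head while a gradient-reversal term pushes it to defeat an adversary head trained on the negative (protected) task. The sketch below is a minimal NumPy illustration under assumed synthetic data; the architecture, loss weights, and variable names (`W`, `w_pos`, `w_neg`, `lam`) are illustrative choices, not the paper's actual model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: feature 0 carries the positive label,
# feature 1 carries the protected (negative-task) label.
n, d, h = 500, 4, 8
y_pos = rng.integers(0, 2, n)
y_neg = rng.integers(0, 2, n)
X = rng.normal(size=(n, d))
X[:, 0] += 2.0 * y_pos
X[:, 1] += 2.0 * y_neg

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

W = rng.normal(scale=0.1, size=(d, h))   # shared encoder
w_pos = np.zeros(h)                      # positive-task head
w_neg = np.zeros(h)                      # adversary (negative-task) head
lr, lam = 0.1, 1.0                       # lam weights the gradient reversal

for step in range(2000):
    Z = np.tanh(X @ W)
    p_pos = sigmoid(Z @ w_pos)
    p_neg = sigmoid(Z @ w_neg)

    # Head gradients (binary cross-entropy on each task).
    g_pos = Z.T @ (p_pos - y_pos) / n
    g_neg = Z.T @ (p_neg - y_neg) / n

    # Encoder gradient: descend the positive loss but ASCEND the
    # adversary's loss (gradient reversal), erasing protected info.
    dZ = (np.outer(p_pos - y_pos, w_pos)
          - lam * np.outer(p_neg - y_neg, w_neg)) / n
    gW = X.T @ (dZ * (1.0 - Z**2))       # backprop through tanh

    w_pos -= lr * g_pos
    w_neg -= lr * g_neg                  # adversary keeps learning its best attack
    W -= lr * gW

Z = np.tanh(X @ W)
acc_pos = np.mean((sigmoid(Z @ w_pos) > 0.5) == y_pos)
acc_neg = np.mean((sigmoid(Z @ w_neg) > 0.5) == y_neg)
print(f"positive-task accuracy: {acc_pos:.2f}")  # stays well above chance
print(f"negative-task accuracy: {acc_neg:.2f}")  # driven toward chance (0.5)
```

Because the adversary head is trained to its best attack at every step while the encoder is trained against it, a low final negative-task accuracy certifies that the learned representation blocks the protected inference inside this closed loop, rather than merely hiding it from a weak classifier.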


Related research

- 09/12/2023 · Verifiable Fairness: Privacy-preserving Computation of Fairness for Machine Learning Systems — "Fair machine learning is a thriving and vibrant research topic. In this ..."
- 11/07/2020 · On the Privacy Risks of Algorithmic Fairness — "Algorithmic fairness and privacy are essential elements of trustworthy m..."
- 08/25/2018 · Formal Analysis of an E-Health Protocol — "Given the sensitive nature of health data, security and privacy in e-hea..."
- 06/11/2020 · A Variational Approach to Privacy and Fairness — "In this article, we propose a new variational approach to learn private ..."
- 12/06/2018 · Differentially Private Fair Learning — "We design two learning algorithms that simultaneously promise differenti..."
- 06/08/2022 · How unfair is private learning? — "As machine learning algorithms are deployed on sensitive data in critica..."
- 03/10/2021 · Fairness On The Ground: Applying Algorithmic Fairness Approaches to Production Systems — "Many technical approaches have been proposed for ensuring that decisions..."
