Input-agnostic Certified Group Fairness via Gaussian Parameter Smoothing

06/22/2022
by Jiayin Jin, et al.

Only recently have researchers attempted to equip classification algorithms with provable group fairness guarantees. Most of these algorithms are hampered by the requirement that the training and deployment data follow the same distribution. This paper proposes an input-agnostic certified group fairness algorithm, FairSmooth, that improves the fairness of classification models while maintaining high prediction accuracy. A Gaussian parameter smoothing method is developed to transform base classifiers into their smooth versions. An optimal individual smooth classifier is learned for each group using only that group's data, and an overall smooth classifier for all groups is obtained by averaging the parameters of the individual smooth classifiers. Leveraging the theory of nonlinear functional analysis, the smooth classifiers are reformulated as output functions of a Nemytskii operator. Theoretical analysis shows that the Nemytskii operator is smooth and induces a Fréchet differentiable smooth manifold. We further prove that the smooth manifold has a global Lipschitz constant that is independent of the domain of the input data, which yields the input-agnostic certified group fairness guarantee.
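
The sketch below illustrates the core idea described in the abstract: a Gaussian-parameter-smoothed classifier averages the base classifier's output over Gaussian perturbations of its parameters, one base classifier is fit per group, and an overall classifier is formed by averaging the per-group parameters. This is a minimal NumPy illustration, not the paper's implementation; the logistic base model, training loop, and hyperparameters (sigma, number of Monte Carlo samples) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def base_classifier(theta, X):
    """Base classifier f_theta: a simple logistic model (illustrative choice)."""
    return sigmoid(X @ theta)

def smooth_classifier(theta, X, sigma=0.2, n_samples=200):
    """Gaussian parameter smoothing: Monte Carlo estimate of
    E_{eps ~ N(0, sigma^2 I)}[ f_{theta + eps}(x) ]."""
    preds = np.zeros(X.shape[0])
    for _ in range(n_samples):
        eps = rng.normal(0.0, sigma, size=theta.shape)
        preds += base_classifier(theta + eps, X)
    return preds / n_samples

def fit_group_classifier(X, y, lr=0.1, epochs=200):
    """Fit one base classifier on a single group's data (plain gradient descent)."""
    theta = np.zeros(X.shape[1])
    for _ in range(epochs):
        p = base_classifier(theta, X)
        theta -= lr * X.T @ (p - y) / len(y)
    return theta

# Toy data: two groups with slightly shifted feature distributions (synthetic).
n, d = 200, 5
X0 = rng.normal(0.0, 1.0, size=(n, d))
X1 = rng.normal(0.5, 1.0, size=(n, d))
w_true = rng.normal(size=d)
y0 = (X0 @ w_true + 0.1 * rng.normal(size=n) > 0).astype(float)
y1 = (X1 @ w_true + 0.1 * rng.normal(size=n) > 0).astype(float)

# One classifier per group, trained only on that group's data ...
theta_groups = [fit_group_classifier(X0, y0), fit_group_classifier(X1, y1)]

# ... and an overall classifier obtained by averaging the per-group parameters.
theta_avg = np.mean(theta_groups, axis=0)

for name, X in [("group 0", X0), ("group 1", X1)]:
    p = smooth_classifier(theta_avg, X)
    print(f"{name}: mean positive rate under the smoothed classifier = {p.mean():.3f}")
```

Note that this sketch only shows the smoothing and parameter-averaging steps; the certified fairness guarantee in the paper comes from the domain-independent global Lipschitz constant of the induced smooth manifold, which is a theoretical property rather than something computed here.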


Related research

11/30/2020  Towards Auditability for Fairness in Deep Learning
Group fairness metrics can detect when a deep learning model behaves dif...

11/14/2017  Preventing Fairness Gerrymandering: Auditing and Learning for Subgroup Fairness
The most prevalent notions of fairness in machine learning are statistic...

07/11/2023  Towards A Scalable Solution for Improving Multi-Group Fairness in Compositional Classification
Despite the rich literature on machine learning fairness, relatively lit...

10/30/2020  All of the Fairness for Edge Prediction with Optimal Transport
Machine learning and data mining algorithms have been increasingly used ...

10/28/2022  Fairness Certificates for Differentially Private Classification
In this work, we theoretically study the impact of differential privacy ...

12/18/2020  Fair for All: Best-effort Fairness Guarantees for Classification
Standard approaches to group-based notions of fairness, such as parity a...

06/03/2021  Stein's method, smoothing and functional approximation
Stein's method for Gaussian process approximation can be used to bound t...
