Why Should I Trust a Model is Private? Using Shifts in Model Explanation for Evaluating Privacy-Preserving Emotion Recognition Model

04/18/2021
by Mimansa Jaiswal, et al.

Privacy preservation is a crucial component of any real-world application. Yet, in applications relying on machine learning backends, it is challenging because models often capture more than a designer may have envisioned, resulting in the potential leakage of sensitive information. For example, emotion recognition models are susceptible to learning patterns between the target variable and other sensitive variables, patterns that can be maliciously re-purposed to obtain protected information. In this paper, we concentrate on using interpretability methods to evaluate how effectively a model preserves privacy with respect to sensitive variables. We focus on saliency-based explanations, which highlight regions of the input text and allow us to understand how model explanations shift when models are trained to preserve privacy. We show that certain commonly used methods that seek to preserve privacy may not align with human perception of privacy preservation. We also show that some of these methods induce spurious correlations in the model between the input and both the primary and the secondary task, even when the improvement in the evaluation metric is significant. Such correlations can therefore lead to false assurances about a model's privacy, especially when the model is used in cross-corpus conditions. Finally, we conduct crowdsourcing experiments to evaluate annotators' inclination to choose a particular model for a given task when model explanations are provided, and find that the correlation of interpretation differences with sociolinguistic biases can be used as a proxy for user trust.
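The evaluation idea in the abstract, comparing where a model's saliency mass sits before and after privacy-preserving training, can be illustrated with a minimal sketch. Everything below is an assumption for illustration: the ToyTextClassifier, the toy token ids, gradient-times-input as the saliency method, and total variation distance as the shift measure. None of it is the authors' implementation.

import torch
import torch.nn as nn

class ToyTextClassifier(nn.Module):
    # Tiny bag-of-embeddings emotion classifier standing in for a real model.
    def __init__(self, vocab_size=1000, emb_dim=32, n_classes=4):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        self.head = nn.Linear(emb_dim, n_classes)

    def forward(self, token_ids):
        e = self.emb(token_ids)              # (seq_len, emb_dim)
        return self.head(e.mean(dim=0)), e   # class logits, token embeddings

def saliency_map(model, token_ids, target_class):
    # Gradient-times-input saliency: how much each token drives the target logit.
    logits, emb = model(token_ids)
    emb.retain_grad()                        # keep grads for a non-leaf tensor
    logits[target_class].backward()
    scores = (emb.grad * emb.detach()).norm(dim=-1)
    return scores / scores.sum()             # normalize to a distribution

baseline = ToyTextClassifier()               # trained only on the emotion task
private = ToyTextClassifier()                # e.g., trained with an added privacy loss

token_ids = torch.tensor([12, 45, 7, 301, 9])   # toy tokenized utterance
s_base = saliency_map(baseline, token_ids, target_class=2)
s_priv = saliency_map(private, token_ids, target_class=2)

# Explanation shift: total variation distance between the two saliency maps.
shift = 0.5 * (s_base - s_priv).abs().sum().item()
print(f"explanation shift between models: {shift:.3f}")

Normalizing each saliency map into a distribution makes the shift comparable across models and inputs. Note that a small shift does not by itself certify privacy; in the spirit of the paper, what matters is where the explanation mass moves, for example toward or away from demographically marked words, which indicates whether privacy training changed what the model attends to rather than merely its metric scores.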


Related research

10/29/2019
Privacy Enhanced Multimodal Neural Representations for Emotion Recognition
Many mobile applications and virtual conversational agents now aim to re...

01/12/2023
Fairly Private: Investigating The Fairness of Visual Privacy Preservation Algorithms
As the privacy risks posed by camera surveillance and facial recognition...

02/11/2021
Disentanglement for audio-visual emotion recognition using multitask setup
Deep learning models trained on audio-visual data have been successfully...

05/28/2019
Two-level Explanations in Music Emotion Recognition
Current ML models for music emotion recognition, while generally working...

09/06/2023
Implicit Design Choices and Their Impact on Emotion Recognition Model Development and Evaluation
Emotion recognition is a complex task due to the inherent subjectivity i...

03/06/2023
Crowdsourcing on Sensitive Data with Privacy-Preserving Text Rewriting
Most tasks in NLP require labeled data. Data labeling is often done on c...

11/05/2017
Distribution-Preserving k-Anonymity
Preserving the privacy of individuals by protecting their sensitive attr...
