Unfooling Perturbation-Based Post Hoc Explainers

05/29/2022
by Zachariah Carmichael, et al.

Monumental advancements in artificial intelligence (AI) have lured the interest of doctors, lenders, judges, and other professionals. While these high-stakes decision-makers are optimistic about the technology, those familiar with AI systems are wary about the lack of transparency in their decision-making processes. Perturbation-based post hoc explainers offer a model-agnostic means of interpreting these systems while only requiring query-level access. However, recent work demonstrates that these explainers can be fooled adversarially. This discovery has adverse implications for auditors, regulators, and other sentinels. With this in mind, several natural questions arise: How can we audit these black box systems? And how can we ascertain that the auditee is complying with the audit in good faith? In this work, we rigorously formalize this problem and devise a defense against adversarial attacks on perturbation-based explainers. We propose algorithms for the detection (CAD-Detect) and defense (CAD-Defend) of these attacks, which are aided by our novel conditional anomaly detection approach, KNN-CAD. We demonstrate that our approach successfully detects whether a black box system adversarially conceals its decision-making process and mitigates the adversarial attack on real-world data for the prevalent explainers, LIME and SHAP.
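To make the threat model concrete, the sketch below shows one way a deployed black box can conceal a biased model from a perturbation-based explainer: because LIME and SHAP probe the model with synthetic perturbations that tend to fall off the data manifold, an adversary can detect those probes with a simple k-nearest-neighbor distance test and answer them with an innocuous surrogate. This is an illustrative sketch only, not the authors' implementation; the class name, models, and threshold are assumptions made for the example.

    # Minimal sketch (assumed names, not the authors' code): a "scaffolded" black box
    # that hides a biased model from perturbation-based explainers such as LIME/SHAP.
    # Explainer queries are synthetic perturbations that tend to fall off the data
    # manifold, so a k-NN distance test can route them to an innocuous surrogate.
    import numpy as np
    from sklearn.neighbors import NearestNeighbors


    class ScaffoldedBlackBox:
        """Answers ordinary inputs with the biased model, explainer probes with a decoy."""

        def __init__(self, biased_model, innocuous_model, reference_data, k=5, margin=2.0):
            self.biased_model = biased_model        # model actually used in deployment
            self.innocuous_model = innocuous_model  # model shown to the auditor's explainer
            self.knn = NearestNeighbors(n_neighbors=k).fit(reference_data)
            # Calibrate an off-manifold cutoff from distances within the reference data.
            ref_dist, _ = self.knn.kneighbors(reference_data)
            self.cutoff = margin * ref_dist.mean()

        def _looks_like_perturbation(self, X):
            dist, _ = self.knn.kneighbors(X)
            return dist.mean(axis=1) > self.cutoff  # far from the data -> likely probe

        def predict(self, X):
            X = np.asarray(X, dtype=float)
            probe = self._looks_like_perturbation(X)
            out = np.asarray(self.biased_model.predict(X))
            if probe.any():
                out[probe] = self.innocuous_model.predict(X[probe])
            return out

An auditor with only query-level access would then recover explanations for the decoy rather than for the deployed model. CAD-Detect and CAD-Defend, aided by KNN-CAD, aim to flag and mitigate precisely this kind of concealment; their exact procedures are given in the full text and are not reproduced here.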


