Don't Lie to Me: Avoiding Malicious Explanations with STEALTH

01/25/2023
by Lauren Alvarez, et al.

STEALTH is a method for using an AI-generated model without suffering from malicious attacks (i.e., lying) or the associated unfairness issues. After recursively bi-clustering the data, STEALTH asks the AI model a limited number of queries about class labels. STEALTH asks so few queries (one per data cluster) that malicious algorithms (a) cannot detect its operation and (b) do not know when to lie.
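The abstract only sketches the procedure, so the following is a minimal, hypothetical Python sketch of that query strategy, not the paper's implementation. It assumes k-means as the bi-clustering step, a stand-in black-box function `black_box_predict` for the (possibly malicious) model, and the point nearest each leaf cluster's centroid as the single query per cluster; the leaf-size threshold `min_size` is likewise illustrative.

```python
import numpy as np
from sklearn.cluster import KMeans

def bicluster(X, idx, min_size=32):
    """Recursively split the rows indexed by `idx` in two; return the leaf index sets."""
    if len(idx) <= min_size:
        return [idx]
    halves = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X[idx])
    left, right = idx[halves == 0], idx[halves == 1]
    if len(left) == 0 or len(right) == 0:   # degenerate split: stop recursing
        return [idx]
    return bicluster(X, left, min_size) + bicluster(X, right, min_size)

def stealth_queries(X, black_box_predict, min_size=32):
    """Ask the (possibly malicious) model for one class label per leaf cluster."""
    leaves = bicluster(X, np.arange(len(X)), min_size)
    reps, labels = [], []
    for leaf in leaves:
        # Query only the point closest to the cluster centroid (an assumed choice).
        centroid = X[leaf].mean(axis=0)
        rep = leaf[np.argmin(np.linalg.norm(X[leaf] - centroid, axis=1))]
        reps.append(rep)
        labels.append(black_box_predict(X[rep].reshape(1, -1))[0])  # one query per cluster
    return np.array(reps), np.array(labels)
```

With one label requested per cluster, the query budget grows only with the number of leaf clusters, which is the property the abstract credits for keeping a lying model from detecting the audit or knowing when to lie.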

