Inverse Classification with Limited Budget and Maximum Number of Perturbed Samples

09/29/2020
by Jaehoon Koo, et al.

Most recent machine learning research focuses on developing new classifiers to improve classification accuracy. With many well-performing state-of-the-art classifiers available, there is a growing need to interpret a classifier's predictions for practical purposes, such as finding the best diet recommendation for a diabetes patient. Inverse classification is a post-modeling process that finds changes in the input features of samples that alter the initially predicted class. It is useful in many business applications for determining how to adjust a sample's input features so that the classifier predicts a desired class. In real-world applications, a budget on perturbations of samples corresponding to customers or patients is usually imposed, and in this setting the number of successfully perturbed samples is key to increasing benefits. In this study, we propose a new framework for inverse classification that maximizes the number of perturbed samples subject to per-feature budget limits and the requirement that perturbed samples fall into favorable classes. We design algorithms for this optimization problem based on gradient methods, stochastic processes, Lagrangian relaxation, and the Gumbel trick. In experiments, our algorithms based on stochastic processes exhibit excellent performance across different budget settings and scale well.
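To make the setting concrete, the sketch below illustrates one plausible instantiation of the framework's core idea: use projected gradient ascent on a smooth surrogate of the number of samples pushed into the desired class, while enforcing per-feature budget limits. This is a minimal illustration, not the authors' implementation; the logistic stand-in classifier, the sigmoid surrogate for the success count, the budget definition (total absolute change per feature across all samples), and all parameter values are assumptions made for the example.

```python
# Hypothetical sketch (not the authors' code): budget-constrained inverse
# classification via projected gradient ascent on a smooth surrogate of the
# number of successfully flipped samples.
import torch

torch.manual_seed(0)

n, d = 100, 5                        # samples, features
X = torch.randn(n, d)                # samples we want to push into class 1
budget = torch.full((d,), 2.0)       # assumed per-feature budget limits

# Stand-in classifier: a fixed logistic model; any differentiable,
# pretrained classifier f could be plugged in instead.
w, b = torch.randn(d), torch.tensor(0.5)
def f(x):                            # returns P(class = 1 | x)
    return torch.sigmoid(x @ w + b)

delta = torch.zeros(n, d, requires_grad=True)   # perturbations to learn
opt = torch.optim.Adam([delta], lr=0.05)

for step in range(500):
    opt.zero_grad()
    p = f(X + delta)
    # Smooth surrogate for "number of samples predicted as class 1".
    surrogate_count = torch.sigmoid(10.0 * (p - 0.5)).sum()
    (-surrogate_count).backward()
    opt.step()
    with torch.no_grad():
        # Project onto the assumed budget: the total absolute change in each
        # feature, summed over all samples, must stay within its limit.
        col_use = delta.abs().sum(dim=0)
        scale = torch.clamp(budget / (col_use + 1e-12), max=1.0)
        delta *= scale

with torch.no_grad():
    flipped = (f(X + delta) > 0.5).sum().item()
print(f"samples moved to the desired class: {flipped}/{n}")
```

In the paper's framework, the hard count of successfully perturbed samples is what is maximized; the smooth sigmoid surrogate above is only one way to make that count amenable to gradient methods, standing in for the stochastic, Lagrangian, and Gumbel-trick relaxations the abstract mentions.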

