
Strategic Classification in the Dark

by Ganesh Ghalme et al.

Strategic classification studies the interaction between a classification rule and the strategic agents it governs. Under the assumption that the classifier is known, rational agents respond to it by manipulating their features. In many real-life high-stakes classification scenarios (e.g., credit scoring), however, the classifier is not revealed to the agents, leading them to try to learn the classifier and game it as well. In this paper we generalize the strategic classification model to such scenarios. We define the price of opacity as the difference in prediction error between opaque and transparent strategy-robust classifiers, characterize it, and give a sufficient condition for this price to be strictly positive, in which case transparency is the recommended policy. Our experiments show how Hardt et al.'s robust classifier is affected by keeping agents in the dark.
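The price of opacity described above can be illustrated with a toy simulation; this is not the paper's model, just a hedged sketch under simplified assumptions: one-dimensional features, a fixed (non-robust) threshold classifier, agents who move just past the threshold they believe in when the move fits their budget, and, in the opaque case, noisy private estimates of that threshold. All numbers (threshold, budget, noise level) are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy population: 1-D features, ground truth is a fixed cutoff at 0.5.
n = 10_000
x = rng.normal(0.0, 1.0, n)
y = (x > 0.5).astype(int)

tau = 0.3      # deployed decision threshold (hypothetical)
budget = 0.4   # maximum feature manipulation an agent can afford

def respond(features, believed_tau, budget):
    """Agents move just past the threshold they believe in, if within budget."""
    believed = np.broadcast_to(believed_tau, features.shape)
    gap = believed - features
    return np.where((gap > 0) & (gap <= budget), believed + 1e-6, features)

# Transparent: every agent knows tau exactly and best-responds to it.
x_transparent = respond(x, tau, budget)
# Opaque: each agent best-responds to a noisy private estimate of tau.
x_opaque = respond(x, tau + rng.normal(0.0, 0.3, n), budget)

def err(gamed):
    """Prediction error of the fixed threshold rule on gamed features."""
    return np.mean((gamed > tau).astype(int) != y)

err_clean = err(x)  # baseline without any strategic behaviour
err_transparent = err(x_transparent)
err_opaque = err(x_opaque)
price_of_opacity = err_opaque - err_transparent

print(f"transparent error: {err_transparent:.3f}")
print(f"opaque error:      {err_opaque:.3f}")
print(f"price of opacity:  {price_of_opacity:+.3f}")
```

In this sketch the classifier is held fixed, whereas the paper compares strategy-robust classifiers; the simulation only shows the mechanism by which agents' uncertainty about the decision rule shifts prediction error, and the sign of the gap depends on the parameters, consistent with the paper giving only a sufficient condition for a strictly positive price.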




Related papers:

- Alternative Microfoundations for Strategic Classification
- PAC-Learning for Strategic Classification
- The Role of Randomness and Noise in Strategic Classification
- Strategic Classification from Revealed Preferences
- Information Discrepancy in Strategic Learning
- Performative Prediction in a Stateful World
- Learning Losses for Strategic Classification