Strategic Recourse in Linear Classification
In algorithmic decision making, recourse refers to an individual's ability to systematically reverse an unfavorable decision made by an algorithm. At the same time, individuals subjected to a classification mechanism are incentivized to behave strategically in order to gain the system's approval. However, not all strategic behavior leads to adverse outcomes: with appropriate mechanism design, strategic behavior can induce genuine improvement in an individual's qualifications. In this paper, we study how to design a classifier that achieves high accuracy while providing recourse to strategic individuals, so as to incentivize them to improve their features in non-manipulative ways. We capture these dynamics with a two-stage game: first, the mechanism designer publishes a classifier, aiming to optimize classification accuracy while providing recourse that incentivizes individuals' improvement. Then, agents respond by potentially modifying their input features to obtain a favorable decision from the classifier, while minimizing the cost of making such modifications. Under this model, we provide analytical results characterizing the equilibrium strategies of both the mechanism designer and the agents. Our empirical results demonstrate the effectiveness of our mechanism on three real-world datasets: compared with a baseline classifier that accounts for individuals' strategic behavior but does not explicitly incentivize improvement, our algorithm provides recourse in the direction of improvement to a much larger fraction of individuals while maintaining comparably high prediction accuracy. We also show that our algorithm can effectively mitigate disparities caused by differences in manipulation costs. Our results offer insights for designing machine learning models that focus not only on the current, static data distribution but also on encouraging future improvement.
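To make the second stage of the game concrete, the sketch below illustrates an agent's best response under one common modeling choice that is not specified in this abstract: a published linear classifier sign(w·x + b) and a quadratic modification cost with a fixed benefit from acceptance. The function names and the `cost_weight` and `benefit` parameters are illustrative assumptions, not the paper's implementation; the point is only that a rejected agent moves the minimum-cost distance needed to cross the decision boundary, and stays put if that cost exceeds the benefit.

```python
import numpy as np

def best_response(x, w, b, cost_weight=1.0, benefit=2.0):
    """Hypothetical agent best response to a published linear classifier.

    Assumes quadratic cost cost_weight * 0.5 * ||delta||^2 and a fixed
    benefit from receiving a positive decision.
    """
    score = w @ x + b
    if score >= 0:
        return x  # already accepted: no modification needed

    # Minimum-norm feature change that reaches the decision boundary
    # (orthogonal projection of x onto the hyperplane w.x + b = 0).
    delta = -(score / (w @ w)) * w
    cost = cost_weight * 0.5 * float(delta @ delta)

    # Modify features only if the benefit of acceptance outweighs the cost.
    return x + delta if cost <= benefit else x

# Example usage with made-up numbers:
w, b = np.array([1.0, 2.0]), -3.0
x = np.array([0.5, 0.5])
print(best_response(x, w, b))
```

In a Stackelberg-style setup like the one described above, the mechanism designer would anticipate this kind of response when choosing the classifier, which is what creates the opportunity to steer agents toward genuine improvement rather than manipulation.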