Generalized Strategic Classification and the Case of Aligned Incentives
Predictive machine learning models are frequently used by companies, institutions, and organizations to make decisions about humans. Strategic classification studies learning in settings where self-interested users can strategically modify their features to obtain favorable predictive outcomes. A key working assumption, however, is that 'favorable' always means 'positive'; this may be appropriate in some applications (e.g., loan approval, university admissions, and hiring), but it reduces to a fairly narrow view of what user interests can be. In this work we argue for a broader perspective on what counts as strategic user behavior, and propose and study a flexible model of generalized strategic classification. Our generalized model subsumes most current models, but also includes other, novel settings; among these, we identify and target one intriguing sub-class of problems in which the interests of users and the system are aligned. For this cooperative setting, we provide an in-depth analysis and propose a practical learning approach that is effective and efficient. We compare our approach to existing learning methods and show its statistical and optimization benefits. Returning to our fully generalized model, we show how our results and approach extend to the most general case. We conclude with a set of experiments that empirically demonstrate the utility of our approach.