Private Recommender Systems: How Can Users Build Their Own Fair Recommender Systems without Log Data?
Fairness is an important property in data-mining applications, including recommender systems. In this work, we investigate a case where the users of a recommender system need (or want) to be fair to a protected group of items. For example, in a job market, the user is a recruiter, an item is a job seeker, and the protected attribute is gender or race. Even if recruiters want to use a fair talent recommender system, the platform may not provide one, or recruiters may not be able to ascertain whether the platform's algorithm is fair. In such cases, recruiters either cannot use the recommender system or risk treating job seekers unfairly. In this work, we propose methods that enable users to build their own fair recommender systems, generating fair recommendations even when the platform does not (or cannot) provide them. The key challenge is that a user has access neither to the log data of other users nor to the latent representations of items; this restriction rules out existing methods, which are designed for platforms. The main idea is that a user does have access to the unfair recommendations provided by the platform, and our methods leverage these outputs to construct a new, fair recommender system. We empirically validate that our proposed methods substantially improve fairness while incurring little loss in the recommendation performance of the original unfair system.
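The abstract does not detail the algorithm, but the core setting is user-side post-processing of the platform's output. As a loose illustration only, the sketch below shows one way a user could re-rank the platform's (possibly unfair) recommendation list to guarantee a minimum presence of protected-group items in the top-k. The function fair_rerank, its quota heuristic, and the toy data are assumptions made for this sketch, not the authors' proposed method.

```python
# Hypothetical user-side re-ranking sketch (not the paper's algorithm).
def fair_rerank(ranked_items, groups, k, min_protected=None):
    """Re-rank the platform's (possibly unfair) list so that at least
    `min_protected` items from the protected group appear in the top-k.

    ranked_items  : item ids in the order returned by the platform.
    groups        : dict mapping item id -> 1 if protected, else 0.
    k             : length of the final recommendation list.
    min_protected : protected-group quota; defaults to the group's share
                    of the candidate pool (an assumed fairness criterion).
    """
    if min_protected is None:
        share = sum(groups[i] for i in ranked_items) / len(ranked_items)
        min_protected = round(share * k)

    rank = {item: r for r, item in enumerate(ranked_items)}
    protected = [i for i in ranked_items if groups[i] == 1]
    others = [i for i in ranked_items if groups[i] == 0]

    result, p, o = [], 0, 0
    for pos in range(k):
        slots_left = k - pos
        still_needed = min_protected - sum(groups[i] for i in result)
        # Force a protected item when the quota could otherwise not be met.
        if still_needed >= slots_left and p < len(protected):
            result.append(protected[p]); p += 1
        # Otherwise follow the platform's order: take whichever remaining
        # candidate the platform ranked higher.
        elif o < len(others) and (p >= len(protected) or rank[others[o]] < rank[protected[p]]):
            result.append(others[o]); o += 1
        elif p < len(protected):
            result.append(protected[p]); p += 1
        else:
            result.append(others[o]); o += 1
    return result


# Example: the platform's top-6 contains two protected items ("c", "e");
# requesting a top-4 list with a quota of two promotes "e" over "d".
platform_list = ["a", "b", "c", "d", "e", "f"]
group_labels = {"a": 0, "b": 0, "c": 1, "d": 0, "e": 1, "f": 0}
print(fair_rerank(platform_list, group_labels, k=4, min_protected=2))
# -> ['a', 'b', 'c', 'e']
```

This kind of greedy quota re-ranking only needs the platform's observable output and the items' protected attributes, which matches the access restrictions described in the abstract, but the paper's actual methods may differ.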