Providing Previously Unseen Users Fair Recommendations Using Variational Autoencoders

08/29/2023
by   Bjørnar Vassøy, et al.

An emerging definition of fairness in machine learning requires that models be oblivious to demographic user information; e.g., a user's gender or age should not influence the model. Personalized recommender systems are particularly prone to violating this definition because of their explicit user focus and user modelling. Explicit user modelling also makes many recommender systems incapable of providing recommendations to previously unseen users. We propose novel approaches for mitigating discrimination in Variational Autoencoder-based recommender systems by limiting the encoding of demographic information. The approaches are capable of, and evaluated on, providing fair recommendations to users who are not represented in the training data.
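The abstract does not specify how the encoding of demographic information is limited, so the sketch below only illustrates one common mechanism for this general idea: an adversarial penalty on the VAE's latent code. All names and values here (`W_adv`, `lam`, `beta`, the linear probe, the multinomial likelihood) are illustrative assumptions, not the paper's method. Because the encoder is amortized over interaction vectors, it can also embed users absent from the training data.

```python
import numpy as np

rng = np.random.default_rng(0)
n_items, d, batch = 20, 4, 8

# Toy data: binary implicit-feedback interaction vectors and a
# binary demographic attribute (hypothetical, for illustration only).
x = (rng.random((batch, n_items)) < 0.3).astype(float)
a = rng.integers(0, 2, size=batch).astype(float)

# Randomly initialized linear encoder, decoder, and adversary weights.
W_mu = rng.normal(0, 0.1, (n_items, d))
W_logvar = rng.normal(0, 0.1, (n_items, d))
W_dec = rng.normal(0, 0.1, (d, n_items))
W_adv = rng.normal(0, 0.1, (d, 1))

# Amortized encoding + reparameterization trick: works for any
# interaction vector, including users unseen during training.
mu, logvar = x @ W_mu, x @ W_logvar
z = mu + np.exp(0.5 * logvar) * rng.normal(size=mu.shape)

# Multinomial log-likelihood reconstruction and KL regularizer
# (the standard VAE-recommender objective terms).
logits = z @ W_dec
log_softmax = logits - np.log(np.exp(logits).sum(-1, keepdims=True))
recon = -(x * log_softmax).sum(-1)
kl = 0.5 * (np.exp(logvar) + mu**2 - 1.0 - logvar).sum(-1)

# A linear adversary tries to recover the attribute from z; the
# encoder is trained to *increase* its cross-entropy, discouraging
# z from carrying demographic information.
p = 1.0 / (1.0 + np.exp(-(z @ W_adv)[:, 0]))
adv_ce = -(a * np.log(p) + (1 - a) * np.log(1 - p))

lam, beta = 1.0, 0.2  # hypothetical trade-off weights
encoder_loss = (recon + beta * kl - lam * adv_ce).mean()
```

In a full training loop the adversary would be updated to minimize `adv_ce` while the encoder minimizes `encoder_loss`, yielding the usual min-max dynamic of adversarial debiasing.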
