Respect for Human Autonomy in Recommender Systems

09/05/2020 ∙ by Lav R. Varshney, et al.

Recommender systems can influence human behavior in significant ways, in some cases making people more machine-like. In this sense, recommender systems may be deleterious to notions of human autonomy. Many ethical systems point to respect for human autonomy as a key principle arising from human rights considerations, and several emerging frameworks for AI include this principle. Yet no specific formalization has been defined. Separately, self-determination theory shows that autonomy is an innate psychological need for people, and it offers a significant body of experimental work that formalizes and measures the level of human autonomy. In this position paper, we argue that there is a need to specifically operationalize respect for human autonomy in the context of recommender systems. Moreover, we argue that such an operational definition can be developed based on well-established approaches from experimental psychology, which can then be used to design future recommender systems that respect human autonomy.
