Bias Impact Analysis of AI in Consumer Mobile Health Technologies: Legal, Technical, and Policy
Today's large-scale deployment of algorithmic and automated decision-making systems threatens to exclude marginalized communities. The emergent danger lies in the effectiveness of such systems and in their propensity to replicate, reinforce, or amplify existing discriminatory practices. Algorithmic bias reflects a deeply entrenched encoding of unwanted biases that can have profound real-world effects, manifesting in domains from employment to housing to healthcare. The last decade of research on, and examples of, these effects further underscores the need to scrutinize any claim that a technology is value-neutral. This work examines algorithmic bias in consumer mobile health technologies (mHealth), a term describing mobile technology and associated sensors used to deliver healthcare solutions across patient journeys. Our scope also includes mental and behavioral health (both mental and physiological). Furthermore, we explore to what extent current mechanisms - legal, technical, and/or normative - help mitigate the risks associated with unwanted bias in the intelligent systems that make up the mHealth domain. We provide additional guidance on the roles and responsibilities of technologists and policymakers in ensuring that such systems empower patients equitably.