GreaseVision: Rewriting the Rules of the Interface

04/07/2022
by Siddhartha Datta et al.

Digital harms can manifest across any interface. Two key obstacles to addressing them are the highly individual nature of harms and the fast-changing nature of digital systems. As a result, we still lack a systematic approach to studying harms and producing interventions for end-users. We put forward GreaseVision, a new framework that enables end-users to collaboratively develop interventions against harms in software using a no-code approach and recent advances in few-shot machine learning. The framework and tool allow individual end-users to study their usage history and create personalized interventions, and enable researchers to study the distribution of harms and interventions at scale.
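To make the idea concrete, the sketch below shows what a single screen-overlay intervention could look like once an end-user has flagged a harmful interface element with a few example crops. It uses OpenCV template matching as a simple stand-in for the few-shot learned hooks described in the paper; the function name, file names, and matching threshold are illustrative assumptions, not part of GreaseVision itself.

# Minimal sketch of a screen-overlay intervention in the spirit of GreaseVision:
# mask occurrences of a user-flagged UI element in a screenshot. Plain template
# matching stands in for the paper's few-shot models; names and the threshold
# below are illustrative assumptions.
import cv2
import numpy as np

def mask_element(screenshot_path, example_paths, out_path, threshold=0.85):
    """Overlay a grey mask wherever any example crop matches the screenshot."""
    screen = cv2.imread(screenshot_path)          # full-screen capture (BGR)
    overlay = screen.copy()

    for path in example_paths:                    # a handful of user-provided crops
        template = cv2.imread(path)
        h, w = template.shape[:2]
        # Normalised cross-correlation between the crop and every window of the screen.
        scores = cv2.matchTemplate(screen, template, cv2.TM_CCOEFF_NORMED)
        ys, xs = np.where(scores >= threshold)    # all sufficiently close matches
        for x, y in zip(xs, ys):
            cv2.rectangle(overlay, (int(x), int(y)), (int(x) + w, int(y) + h),
                          (128, 128, 128), -1)    # filled grey box over the element

    cv2.imwrite(out_path, overlay)                # masked frame, ready to render

# Hypothetical usage: hide a "like counter" the user has flagged as a harm.
# mask_element("screen.png", ["like_counter_1.png", "like_counter_2.png"], "masked.png")

In the framework as described, such overlays would presumably be generated from a handful of user-annotated screenshots and rendered over the live interface rather than written to a file; the point here is only to illustrate the shape of a no-code, example-driven intervention.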

Related research

Cross-Reality Re-Rendering: Manipulating between Digital and Physical Realities (11/15/2022)
Designing Reinforcement Learning Algorithms for Digital Interventions: Pre-implementation Guidelines (06/08/2022)
Mind-proofing Your Phone: Navigating the Digital Minefield with GreaseTerminator (12/20/2021)
Personalized Interventions for Online Moderation (05/19/2022)
How can we combat online misinformation? A systematic overview of current interventions and their efficacy (12/22/2022)
Not Now, Ask Later: Users Weaken Their Behavior Change Regimen Over Time, But Expect To Re-Strengthen It Imminently (01/27/2021)
Learning to Intervene on Concept Bottlenecks (08/25/2023)
