Thread With Caution: Proactively Helping Users Assess and Deescalate Tension in Their Online Discussions

12/02/2022
by Jonathan P. Chang, et al.

Incivility remains a major challenge for online discussion platforms, to such an extent that even conversations between well-intentioned users often derail into uncivil behavior. Traditionally, platforms have relied on moderators who, with or without algorithmic assistance, take corrective actions such as removing comments or banning users. In this work, we propose a complementary paradigm that directly empowers users: it proactively enhances their awareness of existing tension in the conversation they are engaging in and guides them, as they draft their replies, to avoid further escalation. As a proof of concept for this paradigm, we design an algorithmic tool that provides such proactive information directly to users, and we conduct a user study on a popular discussion platform. Through a mixed-methods approach combining surveys with a randomized controlled experiment, we uncover qualitative and quantitative insights into how participants use and react to this information. Most participants report finding this proactive paradigm valuable, noting that it helps them identify tension they might otherwise have missed and prompts them to reflect further on their own replies and to revise them. These effects are corroborated by a comparison of how participants draft their replies when our tool warns them that their conversation is at risk of derailing into uncivil behavior versus in a control condition where the tool is disabled. These preliminary findings highlight the potential of this user-centered paradigm and point to concrete directions for future implementations.

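As a rough illustration of what such proactive assistance could look like in practice, the sketch below wires a derailment-risk scorer into a reply-drafting flow. Everything in it is an assumption made for illustration: the Comment type, the score_derailment_risk function, the 0.5 warning threshold, and the keyword heuristic, which merely stands in for a trained conversational forecasting model such as the one underlying the actual tool. This is not the paper's implementation.

```python
# Illustrative sketch only: the heuristic below stands in for a trained
# conversational forecasting model; it is NOT the tool evaluated in the paper.

from dataclasses import dataclass

# Words that loosely signal rising tension. A real system would instead use
# a learned model over the full conversational context.
TENSION_MARKERS = {"ridiculous", "nonsense", "idiot", "stupid", "liar"}

RISK_THRESHOLD = 0.5  # assumed cutoff for showing a warning


@dataclass
class Comment:
    author: str
    text: str


def score_derailment_risk(conversation: list[Comment], draft: str) -> float:
    """Return a 0-1 risk score for the conversation derailing.

    Stand-in heuristic: the fraction of recent comments (plus the draft)
    containing at least one tension marker.
    """
    recent = [c.text for c in conversation[-5:]] + [draft]
    flagged = sum(
        any(marker in text.lower() for marker in TENSION_MARKERS)
        for text in recent
    )
    return flagged / len(recent)


def maybe_warn(conversation: list[Comment], draft: str) -> None:
    """Surface a proactive warning while the user is drafting a reply."""
    risk = score_derailment_risk(conversation, draft)
    if risk >= RISK_THRESHOLD:
        print(f"Warning: this thread shows signs of tension (risk={risk:.2f}).")
        print("Consider rephrasing your reply to avoid further escalation.")
    else:
        print("No elevated tension detected in this thread.")


if __name__ == "__main__":
    thread = [
        Comment("A", "That argument is complete nonsense."),
        Comment("B", "You're the one being ridiculous here."),
    ]
    maybe_warn(thread, "Only an idiot would believe that.")
```

The point of the sketch is the interaction pattern rather than the scoring method: the risk is computed over both the existing thread and the in-progress draft, so the warning can update as the user revises their reply before posting.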

Related research

11/29/2022
Proactive Moderation of Online Discussions: Existing Practices and the Potential for Algorithmic Support
To address the widespread problem of uncivil behavior, many online discu...

09/03/2019
Trouble on the Horizon: Forecasting the Derailment of Online Conversations as they Develop
Online discussions often derail into toxic exchanges between participant...

09/18/2018
Improving Moderation of Online Discussions via Interpretable Neural Models
Growing amount of comments make online discussions difficult to moderate...

03/01/2022
Advancing an Interdisciplinary Science of Conversation: Insights from a Large Multimodal Corpus of Human Speech
People spend a substantial portion of their lives engaged in conversatio...

04/14/2021
To Trust or Not to Trust a Regressor: Estimating and Explaining Trustworthiness of Regression Predictions
In hybrid human-AI systems, users need to decide whether or not to trust...

11/28/2022
Representation with Incomplete Votes
Platforms for online civic participation rely heavily on methods for con...
