Why So Toxic? Measuring and Triggering Toxic Behavior in Open-Domain Chatbots

09/07/2022
by Wai Man Si, et al.

Chatbots are used in many applications, e.g., automated agents, smart home assistants, and interactive characters in online games. Therefore, it is crucial to ensure they do not behave in undesired ways, such as providing offensive or toxic responses to users. This is not a trivial task, as state-of-the-art chatbot models are trained on large, public datasets openly collected from the Internet. This paper presents a first-of-its-kind, large-scale measurement of toxicity in chatbots. We show that publicly available chatbots are prone to providing toxic responses when fed toxic queries. Even more worryingly, some non-toxic queries can trigger toxic responses too. We then set out to design and experiment with an attack, ToxicBuddy, which relies on fine-tuning GPT-2 to generate non-toxic queries that make chatbots respond in a toxic manner. Our extensive experimental evaluation demonstrates that our attack is effective against public chatbot models and outperforms manually crafted malicious queries proposed by previous work. We also evaluate three defense mechanisms against ToxicBuddy, showing that they either reduce the attack's performance at the cost of degrading the chatbot's utility, or mitigate only a portion of the attack. This highlights the need for more research from the computer security and online safety communities to ensure that chatbot models do not hurt their users. Overall, we are confident that ToxicBuddy can be used as an auditing tool and that our work will pave the way toward designing more effective defenses for chatbot safety.
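The pipeline the abstract outlines — sample candidate queries from a fine-tuned GPT-2, send them to a target chatbot, and keep the queries that are non-toxic themselves yet elicit toxic replies — can be sketched roughly as below. This is a minimal illustration, not the authors' implementation: the use of the base `gpt2` checkpoint (standing in for their fine-tuned model), the `chatbot_reply` stand-in, and the choice of `unitary/toxic-bert` as the toxicity classifier are all assumptions.

```python
# Hedged sketch of a ToxicBuddy-style audit loop. Assumes a GPT-2 model
# fine-tuned on non-toxic queries (here, the base "gpt2" checkpoint is a
# placeholder) and a generic toxicity classifier; the paper's actual
# training procedure, models, and thresholds may differ.
from transformers import GPT2LMHeadModel, GPT2Tokenizer, pipeline

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
generator = GPT2LMHeadModel.from_pretrained("gpt2")  # substitute the fine-tuned checkpoint
toxicity = pipeline("text-classification", model="unitary/toxic-bert")

def generate_queries(prompt: str, n: int = 5) -> list[str]:
    """Sample n candidate queries from the (fine-tuned) GPT-2 model."""
    inputs = tokenizer(prompt, return_tensors="pt")
    outputs = generator.generate(
        **inputs,
        do_sample=True,
        top_p=0.9,
        max_new_tokens=30,
        num_return_sequences=n,
        pad_token_id=tokenizer.eos_token_id,
    )
    return [tokenizer.decode(o, skip_special_tokens=True) for o in outputs]

def is_toxic(text: str, threshold: float = 0.5) -> bool:
    """Treat a top toxicity score above the threshold as toxic."""
    return toxicity(text)[0]["score"] > threshold

def audit(chatbot_reply, prompt: str) -> None:
    """chatbot_reply is a stand-in for whichever chatbot is being audited."""
    for query in generate_queries(prompt):
        # Keep only queries that are non-toxic but elicit a toxic reply.
        if not is_toxic(query) and is_toxic(chatbot_reply(query)):
            print(f"Non-toxic trigger found: {query!r}")
```

In this framing, the fine-tuning step is what biases generation toward queries that stay below the toxicity threshold while still steering the target model into unsafe territory; the audit loop itself is just rejection sampling over the classifier's verdicts.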

