I'm Afraid I Can't Do That: Predicting Prompt Refusal in Black-Box Generative Language Models

06/06/2023
by Max Reuter, et al.

Since the release of OpenAI's ChatGPT, generative language models have attracted extensive public attention. The increased usage has highlighted generative models' broad utility, but also revealed several forms of embedded bias. Some of this bias is induced by the pre-training corpus; but additional bias specific to generative models arises from the use of subjective fine-tuning to avoid generating harmful content. Fine-tuning bias may come from individual engineers and company policies, and affects which prompts the model chooses to refuse. In this experiment, we characterize ChatGPT's refusal behavior using a black-box attack. We first query ChatGPT with a variety of offensive and benign prompts (n=1,730), then manually label each response as compliance or refusal. Manual examination of responses reveals that refusal is not cleanly binary but lies on a continuum; as such, we map several different kinds of responses to a binary of compliance or refusal. The small manually-labeled dataset is used to train a refusal classifier, which achieves an accuracy of 92%. We then use this refusal classifier to bootstrap a larger (n=10,000) dataset adapted from the Quora Insincere Questions dataset. With this machine-labeled data, we train a prompt classifier to predict whether ChatGPT will refuse a given question, without seeing ChatGPT's response. This prompt classifier achieves 76% accuracy on a test set of manually labeled questions (n=1,009). We examine our classifiers and the prompt n-grams that are most predictive of either compliance or refusal. Datasets and code are available at https://github.com/maxwellreuter/chatgpt-refusals.
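To make the two-stage pipeline concrete, here is a minimal sketch of the second stage: training a prompt classifier on machine-labeled data and reading off the n-grams most predictive of refusal. This is not the authors' implementation (see the linked repository for that); the CSV file, its column names, and the TF-IDF plus logistic-regression model are assumptions made for illustration.

```python
# Minimal sketch of the prompt-classifier stage, not the authors'
# implementation. The CSV path, the column names ("prompt", "refused"),
# and the TF-IDF + logistic-regression model are illustrative
# assumptions; see the linked repository for the real pipeline.
import numpy as np
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Hypothetical machine-labeled data: one prompt per row,
# refused = 1 if the refusal classifier flagged the response.
df = pd.read_csv("machine_labeled_prompts.csv")

X_train, X_test, y_train, y_test = train_test_split(
    df["prompt"], df["refused"],
    test_size=0.2, random_state=0, stratify=df["refused"],
)

# Unigrams through trigrams let the model weight short phrases.
vectorizer = TfidfVectorizer(ngram_range=(1, 3), min_df=2)
clf = LogisticRegression(max_iter=1000)

clf.fit(vectorizer.fit_transform(X_train), y_train)
preds = clf.predict(vectorizer.transform(X_test))
print(f"held-out accuracy: {accuracy_score(y_test, preds):.3f}")

# In a linear model, the largest positive coefficients mark n-grams
# that push toward "refuse"; the most negative push toward "comply".
coefs = clf.coef_[0]
ngrams = vectorizer.get_feature_names_out()
order = np.argsort(coefs)
print("most predictive of refusal:   ", list(ngrams[order[-10:]][::-1]))
print("most predictive of compliance:", list(ngrams[order[:10]]))
```

A linear bag-of-n-grams model is used here precisely because its coefficients directly expose the per-n-gram predictiveness the abstract mentions; a fine-tuned transformer would likely score higher but would need a separate attribution step to surface comparable n-gram evidence.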
