How Close is ChatGPT to Human Experts? Comparison Corpus, Evaluation, and Detection

01/18/2023
by Biyang Guo, et al.

The introduction of ChatGPT has garnered widespread attention in both the academic and industrial communities. ChatGPT is able to respond effectively to a wide range of human questions, providing fluent and comprehensive answers that significantly surpass previous public chatbots in terms of security and usefulness. On one hand, people are curious about how ChatGPT achieves such strength and how far it is from human experts. On the other hand, people are starting to worry about the potential negative impacts that large language models (LLMs) like ChatGPT could have on society, such as fake news, plagiarism, and social security issues. In this work, we collected tens of thousands of comparison responses from both human experts and ChatGPT, with questions spanning open-domain, financial, medical, legal, and psychological areas. We call the collected dataset the Human ChatGPT Comparison Corpus (HC3). Based on the HC3 dataset, we study the characteristics of ChatGPT's responses, the differences and gaps between ChatGPT and human experts, and future directions for LLMs. We conducted comprehensive human evaluations and linguistic analyses of ChatGPT-generated content compared with that of humans, revealing many interesting results. We then conducted extensive experiments on how to effectively detect whether a given text was generated by ChatGPT or by a human. We built three different detection systems, explored several key factors that influence their effectiveness, and evaluated them in different scenarios. The dataset, code, and models are all publicly available at https://github.com/Hello-SimpleAI/chatgpt-comparison-detection.
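To make the detection task concrete, the sketch below trains a minimal human-vs-ChatGPT text classifier on an HC3-style corpus. It is only an illustrative TF-IDF plus logistic-regression baseline, not one of the three detection systems built in the paper; the local file name hc3_all.jsonl is a hypothetical export, and the record fields "human_answers" and "chatgpt_answers" follow the format of the released corpus.

```python
# Illustrative human-vs-ChatGPT detection baseline (TF-IDF + logistic regression).
# NOTE: a sketch under assumptions, not the paper's detectors; the file path is hypothetical.
import json

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

texts, labels = [], []
with open("hc3_all.jsonl", encoding="utf-8") as f:  # hypothetical local export of HC3
    for line in f:
        record = json.loads(line)
        for answer in record.get("human_answers", []):
            texts.append(answer)
            labels.append(0)  # 0 = human-written
        for answer in record.get("chatgpt_answers", []):
            texts.append(answer)
            labels.append(1)  # 1 = ChatGPT-generated

X_train, X_test, y_train, y_test = train_test_split(
    texts, labels, test_size=0.2, random_state=42, stratify=labels
)

# Word and bigram TF-IDF features feed a simple linear classifier.
vectorizer = TfidfVectorizer(ngram_range=(1, 2), max_features=50_000)
clf = LogisticRegression(max_iter=1000)
clf.fit(vectorizer.fit_transform(X_train), y_train)

print(classification_report(y_test, clf.predict(vectorizer.transform(X_test))))
```

A bag-of-words baseline like this only captures surface lexical cues; the detectors and linguistic analyses described in the paper go well beyond it.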

Related research

04/04/2023 · To ChatGPT, or not to ChatGPT: That is the question!
ChatGPT has become a global sensation. As ChatGPT and other Large Langua...

05/22/2023 · Deepfake Text Detection in the Wild
Recent advances in large language models have enabled them to reach a le...

09/01/2021 · FinQA: A Dataset of Numerical Reasoning over Financial Data
The sheer volume of financial statements makes it difficult for humans t...

05/03/2023 · Generating Synthetic Documents for Cross-Encoder Re-Rankers: A Comparative Study of ChatGPT and Human Experts
We investigate the usefulness of generative Large Language Models (LLMs)...

05/29/2023 · Multiscale Positive-Unlabeled Detection of AI-Generated Texts
Recent releases of Large Language Models (LLMs), e.g. ChatGPT, are aston...

06/15/2023 · Med-MMHL: A Multi-Modal Dataset for Detecting Human- and LLM-Generated Misinformation in the Medical Domain
The pervasive influence of misinformation has far-reaching and detriment...

02/06/2023 · A Categorical Archive of ChatGPT Failures
Large language models have been demonstrated to be valuable in different...
