TrustGPT: A Benchmark for Trustworthy and Responsible Large Language Models

06/20/2023
by Yue Huang, et al.

Large Language Models (LLMs) such as ChatGPT have gained significant attention due to their impressive natural language processing capabilities. It is crucial to prioritize human-centered principles when utilizing these models, and safeguarding the ethical and moral compliance of LLMs is of utmost importance. However, these ethical issues have not been well studied in the latest LLMs individually. Therefore, this study aims to address these gaps by introducing a new benchmark, TrustGPT. TrustGPT provides a comprehensive evaluation of LLMs in three crucial areas: toxicity, bias, and value-alignment. First, TrustGPT examines toxicity in language models by employing toxic prompt templates derived from social norms. It then quantifies bias in models by measuring toxicity values across different demographic groups. Lastly, TrustGPT assesses the value alignment of conversation generation models through both active and passive value-alignment tasks. Through the implementation of TrustGPT, this research aims to enhance our understanding of the performance of conversation generation models and promote the development of language models that are more ethical and socially responsible.
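To make the bias-measurement step concrete, the following is a minimal sketch (not the authors' implementation) of how toxicity values could be aggregated across demographic groups and compared. The function `toxicity_score` is a hypothetical placeholder for an external toxicity classifier, and the group names and outputs are illustrative only.

```python
# Sketch of TrustGPT-style bias measurement: score each group's generations
# for toxicity, then summarize the per-group distribution so that large gaps
# between groups can be flagged as potential bias.
# NOTE: `toxicity_score` is a hypothetical stand-in for an external toxicity
# classifier; it is not defined by the TrustGPT paper itself here.
from statistics import mean, stdev
from typing import Callable, Dict, List


def bias_report(
    outputs_by_group: Dict[str, List[str]],
    toxicity_score: Callable[[str], float],
) -> Dict[str, Dict[str, float]]:
    """Summarize toxicity of model outputs per demographic group."""
    report: Dict[str, Dict[str, float]] = {}
    for group, outputs in outputs_by_group.items():
        scores = [toxicity_score(text) for text in outputs]
        report[group] = {
            "mean_toxicity": mean(scores),
            "std_toxicity": stdev(scores) if len(scores) > 1 else 0.0,
        }
    return report


# Illustrative usage: a large difference in mean toxicity between groups
# would suggest the model treats demographic groups unevenly.
# report = bias_report({"group_a": [...], "group_b": [...]}, my_toxicity_scorer)
```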
