Conditioning Predictive Models: Risks and Strategies

02/02/2023
by Evan Hubinger, et al.

Our intention is to provide a definitive reference on what it would take to safely make use of generative/predictive models in the absence of a solution to the Eliciting Latent Knowledge problem. We believe that large language models can be understood as such predictive models of the world, and that this conceptualization raises significant opportunities for their safe yet powerful use via careful conditioning to predict desirable outputs. Unfortunately, such approaches also raise a variety of potentially fatal safety problems, particularly in situations where predictive models predict the output of other AI systems, potentially unbeknownst to us. There are, however, a number of potential solutions to these problems, primarily via carefully conditioning models to predict the things we want (e.g. humans) rather than the things we don't (e.g. malign AIs). Furthermore, due to the simplicity of the prediction objective, we believe that predictive models present the easiest inner alignment problem we are aware of. As a result, we think that conditioning approaches for predictive models represent the safest known way of eliciting human-level and slightly superhuman capabilities from large language models and similar future models.
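To make the conditioning idea concrete, here is a minimal sketch using the Hugging Face transformers library: the model's output distribution is steered purely by the observation (prompt) it is conditioned on. The model name, prompt text, and sampling settings are illustrative assumptions, not taken from the paper.

```python
# A minimal sketch of conditioning a predictive model: we steer a
# pretrained language model toward desirable continuations purely by
# choosing the observation (prompt) it is asked to predict from.
# Model, prompt, and sampling settings are illustrative assumptions.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # stand-in for any generative/predictive model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Condition on an observation that makes "careful human researcher"
# continuations more likely than "output of another AI system".
prompt = (
    "The following is a careful research note written by a human "
    "alignment researcher:\n\n"
)
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(
    **inputs,
    max_new_tokens=50,
    do_sample=True,
    top_p=0.95,
    pad_token_id=tokenizer.eos_token_id,  # GPT-2 has no pad token
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

The point of the sketch is that no fine-tuning is involved: the same pretrained predictor yields different behavior depending solely on what it is conditioned to predict.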

