Predictability and Surprise in Large Generative Models

by Deep Ganguli, et al.

Large-scale pre-training has recently emerged as a technique for creating capable, general-purpose generative models such as GPT-3, Megatron-Turing NLG, Gopher, and many others. In this paper, we highlight a counterintuitive property of such models and discuss its policy implications. Namely, these generative models have an unusual combination of predictable loss on a broad training distribution (as embodied in their "scaling laws") and unpredictable specific capabilities, inputs, and outputs. We believe that the high-level predictability and appearance of useful capabilities drive rapid development of such models, while the unpredictable qualities make it difficult to anticipate the consequences of model deployment. We show how this combination can lead to socially harmful behavior, drawing on examples from the literature and real-world observations, and we perform two novel experiments to illustrate the harms that can arise from unpredictability. Furthermore, we analyze how these conflicting properties combine to give model developers various motivations for deploying these models, as well as challenges that can hinder deployment. We conclude with a list of possible interventions the AI community may take to increase the chance of these models having a beneficial impact. We intend this paper to be useful to policymakers who want to understand and regulate AI systems, technologists who care about the potential policy impact of their work, and academics who want to analyze, critique, and potentially develop large generative models.
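The "predictable loss" half of this combination is worth making concrete. A scaling law expresses test loss as a smooth power law in a scale variable such as parameter count, so a handful of small-scale training runs can extrapolate the loss of a far larger model. The sketch below illustrates this idea with a hypothetical power-law form and illustrative constants (in the spirit of published language-model scaling laws, but not values from this paper):

```python
import numpy as np

# Hypothetical scaling law: loss falls as a power law in parameter count N,
#   L(N) = (N_c / N) ** alpha
# The constants N_c and alpha here are purely illustrative.
def scaling_law_loss(n_params, n_c=8.8e13, alpha=0.076):
    """Predicted test loss for a model with n_params parameters."""
    return (n_c / n_params) ** alpha

# Pretend we "measured" loss at two small scales (here, synthetic data from
# the law itself), then recover the exponent and extrapolate upward.
small_scales = np.array([1e7, 1e9])
observed = scaling_law_loss(small_scales)

# In log-log space the law is a straight line with slope -alpha.
slope = np.polyfit(np.log(small_scales), np.log(observed), 1)[0]
alpha_hat = -slope

# Extrapolated loss for a model 100x larger than anything "trained".
predicted_large = scaling_law_loss(1e11)
```

The key point is that the log-log fit recovers the exponent exactly from tiny runs, which is why aggregate loss feels so forecastable; nothing analogous exists for the specific capabilities that emerge at a given scale.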


