Llama 2: Open Foundation and Fine-Tuned Chat Models

07/18/2023 ∙ by Hugo Touvron, et al.
In this work, we develop and release Llama 2, a collection of pretrained and fine-tuned large language models (LLMs) ranging in scale from 7 billion to 70 billion parameters. Our fine-tuned LLMs, called Llama 2-Chat, are optimized for dialogue use cases. Our models outperform open-source chat models on most benchmarks we tested, and based on our human evaluations for helpfulness and safety, may be a suitable substitute for closed-source models. We provide a detailed description of our approach to fine-tuning and safety improvements of Llama 2-Chat in order to enable the community to build on our work and contribute to the responsible development of LLMs.
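
The released chat checkpoints are commonly loaded through the Hugging Face transformers library. As a minimal sketch (assuming the gated meta-llama/Llama-2-7b-chat-hf checkpoint on the Hugging Face Hub plus the transformers and accelerate packages, none of which this abstract specifies), a single-turn dialogue query looks like:

    # Minimal sketch: load the 7B chat-tuned checkpoint and generate a reply.
    # Assumes approved access to the gated meta-llama repo, plus the
    # transformers and accelerate packages installed (not stated in the paper).
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "meta-llama/Llama-2-7b-chat-hf"  # 7B dialogue-tuned variant
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

    # Llama 2-Chat expects its [INST] ... [/INST] prompt template.
    prompt = "[INST] What is the capital of France? [/INST]"
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    outputs = model.generate(**inputs, max_new_tokens=64)
    print(tokenizer.decode(outputs[0], skip_special_tokens=True))

The same pattern applies to the 13B and 70B chat variants by swapping the checkpoint name; only memory requirements change.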


Related research

∙ 08/14/2023 ∙ Platypus: Quick, Cheap, and Powerful Refinement of LLMs
  We present Platypus, a family of fine-tuned and merged Large Language Mo...

∙ 10/08/2022 ∙ Understanding HTML with Large Language Models
  Large language models (LLMs) have shown exceptional performance on a var...

∙ 09/12/2023 ∙ AstroLLaMA: Towards Specialized Foundation Models in Astronomy
  Large language models excel in many human-language tasks but often falte...

∙ 08/18/2023 ∙ Red-Teaming Large Language Models using Chain of Utterances for Safety-Alignment
  Larger language models (LLMs) have taken the world by storm with their m...

∙ 12/20/2022 ∙ Recycling diverse models for out-of-distribution generalization
  Foundation models are redefining how AI systems are built. Practitioners...

∙ 02/13/2023 ∙ Machine Learning Model Attribution Challenge
  We present the findings of the Machine Learning Model Attribution Challe...

∙ 04/14/2023 ∙ MedAlpaca – An Open-Source Collection of Medical Conversational AI Models and Training Data
  As large language models (LLMs) like OpenAI's GPT series continue to mak...
