Panda LLM: Training Data and Evaluation for Open-Sourced Chinese Instruction-Following Large Language Models

05/04/2023
by Fangkai Jiao, et al.

This project focuses on enhancing open-source large language models through instruction-tuning and on providing comprehensive evaluations of their performance. We explore how training-data factors such as quantity, quality, and linguistic distribution influence the performance of instruction-tuned models trained on publicly accessible, high-quality instruction datasets in both English and Chinese. Our goal is to supplement evaluation with quantitative analyses, providing valuable insights for the continued advancement of open-source chat models. Our model, data, and code are publicly available for others to use and build upon.
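As a rough illustration of the instruction-tuning procedure the abstract describes, the sketch below performs supervised fine-tuning of a causal language model on an instruction dataset using Hugging Face transformers. The base model name, dataset file, prompt template, and hyperparameters are illustrative placeholders, not the project's actual configuration.

```python
# Minimal sketch of supervised instruction-tuning with Hugging Face
# transformers. Model name, data file, and prompt format are assumptions
# for illustration, not the Panda project's actual settings.
import torch
from datasets import load_dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

MODEL_NAME = "huggyllama/llama-7b"  # placeholder base model

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
tokenizer.pad_token = tokenizer.eos_token  # LLaMA has no pad token by default
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME, torch_dtype=torch.float16)

def format_example(ex):
    # Concatenate instruction and response into one training sequence.
    prompt = (
        f"### Instruction:\n{ex['instruction']}\n\n"
        f"### Response:\n{ex['output']}"
    )
    return tokenizer(prompt, truncation=True, max_length=512)

# Hypothetical JSON file with 'instruction' and 'output' fields per record.
dataset = load_dataset("json", data_files="instructions.json")["train"]
tokenized = dataset.map(format_example, remove_columns=dataset.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="panda-sft",
        per_device_train_batch_size=4,
        gradient_accumulation_steps=8,
        num_train_epochs=3,
        learning_rate=2e-5,
        logging_steps=10,
    ),
    train_dataset=tokenized,
    # Causal-LM collator: labels are the input ids, shifted inside the model.
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```

This shows only the core supervised fine-tuning loop; training at the scale reported in such papers would typically involve distributed setups and the data-quantity, quality, and language-mixture choices the abstract studies.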
