Camoscio: an Italian Instruction-tuned LLaMA

07/31/2023
by   Andrea Santilli, et al.

In recent years, Large Language Models (LLMs) have advanced the state of the art on several natural language processing tasks. However, their accessibility is often limited to paid API services, which hinders researchers from conducting extensive investigations. Meanwhile, the open-source models proposed by the community are typically multilingual and not specifically tailored to the Italian language. In an effort to democratize the available and open resources for Italian, in this paper we introduce Camoscio: a language model specifically tuned to follow users' prompts in Italian. Specifically, we finetuned the smallest variant of LLaMA (7b) with LoRA on a corpus of instruction prompts translated into Italian via ChatGPT. Results indicate that the model's zero-shot performance on various downstream tasks in Italian competes favorably with existing models specifically finetuned for those tasks. All the artifacts (code, dataset, model) are released to the community at the following URL: https://github.com/teelinsan/camoscio
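The core idea behind the LoRA finetuning mentioned above can be sketched in a few lines: a frozen pretrained weight matrix W is augmented with a trainable low-rank update (the product of two small matrices), scaled by alpha / r, so only a tiny fraction of parameters is trained. The toy below is an illustrative pure-Python sketch of that forward pass, not the actual Camoscio training code (which adapts the attention weights inside LLaMA); all matrix shapes and names here are assumptions for illustration.

```python
# Illustrative sketch of the LoRA idea: y = x @ (W + (alpha / r) * (A @ B)),
# where W is frozen and only the low-rank factors A (d_in x r) and B (r x d_out)
# are trained. One factor starts at zero so the adapted model initially
# behaves exactly like the pretrained one.

def matmul(X, Y):
    """Plain-Python matrix multiply for small illustrative matrices."""
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*Y)]
            for row in X]

def lora_forward(x, W, A, B, alpha=16, r=2):
    """Compute the base (frozen) path and the low-rank path separately."""
    base = matmul(x, W)               # frozen pretrained projection
    delta = matmul(matmul(x, A), B)   # trainable low-rank update
    scale = alpha / r
    return [[b + scale * d for b, d in zip(brow, drow)]
            for brow, drow in zip(base, delta)]

# Toy shapes: d_in = d_out = 4, rank r = 2.
W = [[1.0 if i == j else 0.0 for j in range(4)] for i in range(4)]  # "frozen"
A = [[0.0] * 2 for _ in range(4)]   # zero-initialized: no change at start
B = [[0.1] * 4 for _ in range(2)]
x = [[1.0, 2.0, 3.0, 4.0]]

print(lora_forward(x, W, A, B))  # with A = 0 this equals x @ W
```

Because only A and B (here 16 values versus 16 in W, but in a 7b-parameter model a fraction of a percent) receive gradients, the approach makes instruction-tuning feasible on modest hardware.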

