Phoenix: Democratizing ChatGPT across Languages

04/20/2023
by Zhihong Chen, et al.

This paper presents our efforts to democratize ChatGPT across languages. We release a large language model, "Phoenix", which achieves competitive performance among open-source English and Chinese models while excelling in languages with limited resources (covering both Latin and non-Latin languages). We believe this work will help make ChatGPT more accessible, especially in countries where people cannot use ChatGPT due to restrictions imposed by OpenAI or by local governments. Our data, code, and models are available at https://github.com/FreedomIntelligence/LLMZoo.
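As a minimal sketch, the released checkpoints can be loaded with the Hugging Face transformers library roughly as follows; the checkpoint identifier and prompt format below are assumptions, so consult the LLMZoo repository for the exact names:

    # Minimal sketch: loading a Phoenix checkpoint with Hugging Face transformers.
    # The checkpoint identifier below is an assumption; see the LLMZoo repository
    # (https://github.com/FreedomIntelligence/LLMZoo) for the exact model names
    # and recommended prompt templates.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_name = "FreedomIntelligence/phoenix-inst-chat-7b"  # assumed identifier
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForCausalLM.from_pretrained(model_name)

    # Assumed instruction-style prompt; the actual template may differ.
    prompt = "Human: Translate to Spanish: The weather is nice today.\nAssistant: "
    inputs = tokenizer(prompt, return_tensors="pt")
    outputs = model.generate(**inputs, max_new_tokens=64)
    print(tokenizer.decode(outputs[0], skip_special_tokens=True))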
