Cross-Lingual Knowledge Editing in Large Language Models

09/16/2023
by   Jiaan Wang, et al.

Knowledge editing aims to change a language model's behavior on a set of target cases (i.e., the editing scope) by injecting the corresponding expected knowledge into the model. With the recent advances in large language models (LLMs), knowledge editing has emerged as a promising technique for adapting LLMs to new knowledge without retraining from scratch. However, most previous studies neglect the multilingual nature of mainstream LLMs (e.g., LLaMA, ChatGPT and GPT-4) and focus on monolingual scenarios, where LLMs are edited and evaluated in the same language. Consequently, the effect of editing in a source language on a different target language remains unknown. In this paper, we investigate this cross-lingual effect in knowledge editing. Specifically, we first build a large-scale cross-lingual synthetic dataset by translating ZsRE from English to Chinese. We then perform English editing with various knowledge editing methods covering different paradigms and evaluate their performance in Chinese, and vice versa. To analyze the cross-lingual effect in depth, the evaluation covers four aspects: reliability, generality, locality, and portability. Finally, we analyze the inconsistent behaviors of the edited models and discuss their specific challenges.
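The four evaluation aspects named in the abstract can be illustrated with a minimal sketch. Everything below is an assumption for illustration: the probe questions, the toy "edited model" (a lookup table standing in for an LLM after an edit), and the function names are hypothetical, not from the paper's codebase. The sketch shows an edit made in English being probed with a Chinese paraphrase, mirroring the paper's cross-lingual setup.

```python
# Hypothetical sketch of the four knowledge-editing metrics (reliability,
# generality, locality, portability) in a cross-lingual setting.
# All data and names are illustrative assumptions, not the paper's code.

def exact_match(model, probes):
    """Fraction of (question, expected_answer) probes answered correctly."""
    if not probes:
        return 0.0
    hits = sum(1 for question, answer in probes if model(question) == answer)
    return hits / len(probes)

def evaluate_edit(model, edit_cases, paraphrases, unrelated, hop_questions):
    return {
        # Reliability: the edited fact itself is recalled.
        "reliability": exact_match(model, edit_cases),
        # Generality: rephrasings (here, a Chinese translation) still elicit it.
        "generality": exact_match(model, paraphrases),
        # Locality: unrelated knowledge is left untouched.
        "locality": exact_match(model, unrelated),
        # Portability: the edit supports downstream reasoning (e.g. one hop).
        "portability": exact_match(model, hop_questions),
    }

# Toy "edited model": a dict lookup standing in for an LLM after editing.
edited = {
    "Who is the president of the USA?": "Joe Biden",   # edited fact (English)
    "美国总统是谁？": "Joe Biden",                        # Chinese paraphrase
    "What is the capital of France?": "Paris",          # unrelated fact
    "Which party does the US president belong to?": "Democratic",  # one-hop probe
}.get

scores = evaluate_edit(
    edited,
    edit_cases=[("Who is the president of the USA?", "Joe Biden")],
    paraphrases=[("美国总统是谁？", "Joe Biden")],
    unrelated=[("What is the capital of France?", "Paris")],
    hop_questions=[("Which party does the US president belong to?", "Democratic")],
)
print(scores)
```

In this toy run all four scores are 1.0; with a real edited LLM, the paper's finding is that cross-lingual probes (the generality and portability rows here) are exactly where scores drop.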

Related research

05/25/2022 - Language Anisotropic Cross-Lingual Model Editing
Pre-trained language models learn large amounts of knowledge from their ...

08/19/2023 - Eva-KELLM: A New Benchmark for Evaluating Knowledge Editing of LLMs
Large language models (LLMs) possess a wealth of knowledge encoded in th...

12/13/2021 - WECHSEL: Effective initialization of subword embeddings for cross-lingual transfer of monolingual language models
Recently, large pretrained language models (LMs) have gained popularity....

06/07/2021 - Investigating Transfer Learning in Multilingual Pre-trained Language Models through Chinese Natural Language Inference
Multilingual transformers (XLM, mT5) have been shown to have remarkable ...

03/23/2022 - A Survey on Cross-Lingual Summarization
Cross-lingual summarization is the task of generating a summary in one l...

03/11/2022 - Cross-lingual Inference with A Chinese Entailment Graph
Predicate entailment detection is a crucial task for question-answering ...

10/23/2020 - EventKG+Click: A Dataset of Language-specific Event-centric User Interaction Traces
An increasing need to analyse event-centric cross-lingual information ca...
