UniversalNER: Targeted Distillation from Large Language Models for Open Named Entity Recognition

08/07/2023
by Wenxuan Zhou, et al.

Large language models (LLMs) have demonstrated remarkable generalizability, such as understanding arbitrary entities and relations. Instruction tuning has proven effective for distilling LLMs into more cost-efficient models such as Alpaca and Vicuna. Yet such student models still trail the original LLMs by large margins in downstream applications. In this paper, we explore targeted distillation with mission-focused instruction tuning to train student models that can excel in a broad application class such as open information extraction. Using named entity recognition (NER) as a case study, we show how ChatGPT can be distilled into much smaller UniversalNER models for open NER. For evaluation, we assemble the largest NER benchmark to date, comprising 43 datasets across 9 diverse domains such as biomedicine, programming, social media, law, and finance. Without using any direct supervision, UniversalNER attains remarkable NER accuracy across tens of thousands of entity types, outperforming general instruction-tuned models such as Alpaca and Vicuna by over 30 absolute F1 points on average. With a tiny fraction of the parameters, UniversalNER not only acquires ChatGPT's ability to recognize arbitrary entity types, but also outperforms ChatGPT's NER accuracy by 7-9 absolute F1 points on average. Remarkably, UniversalNER even outperforms state-of-the-art multi-task instruction-tuned systems such as InstructUIE, which use supervised NER examples, by a large margin. We also conduct thorough ablation studies to assess the impact of various components in our distillation approach. We will release the distillation recipe, data, and UniversalNER models to facilitate future research on targeted distillation.
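To make the idea of mission-focused distillation concrete, the sketch below shows one plausible way such training data could be constructed: a teacher LLM (e.g., ChatGPT) annotates raw passages, and each (passage, entity type) pair becomes a query-response turn used to instruction-tune the student. This is a minimal illustration under assumed conventions; the conversation template, function names, and JSON output format here are hypothetical, not the authors' released recipe.

import json

def build_conversation(passage: str, teacher_entities: dict[str, list[str]]) -> list[dict]:
    """Turn teacher annotations ({entity_type: [mentions]}) into a
    multi-turn instruction-tuning conversation for the student model."""
    turns = [
        {"role": "user", "content": f"Text: {passage}"},
        {"role": "assistant", "content": "I've read this text."},
    ]
    for entity_type, mentions in teacher_entities.items():
        # One query per entity type keeps the task open-ended: the student
        # learns to answer for arbitrary types, not a fixed label set.
        turns.append({
            "role": "user",
            "content": f"What describes {entity_type} in the text?",
        })
        # The student is trained to emit the mentions as a JSON list.
        turns.append({"role": "assistant", "content": json.dumps(mentions)})
    return turns

# Example: a teacher-labeled passage (annotations are illustrative).
example = build_conversation(
    "Aspirin reduced fever in patients enrolled at Johns Hopkins.",
    {"drug": ["Aspirin"], "organization": ["Johns Hopkins"]},
)
print(json.dumps(example, indent=2))

Framing each entity type as its own query, rather than asking for all entities at once, is one natural way to let the same template cover tens of thousands of types at inference time.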


Related research

05/18/2023
Learning In-context Learning for Named Entity Recognition
Named entity recognition in real-world applications suffers from the div...

07/12/2023
Distilling Large Language Models for Biomedical Knowledge Extraction: A Case Study on Adverse Drug Events
Large language models (LLMs), such as GPT-4, have demonstrated remarkabl...

05/24/2023
PromptNER: Prompting For Named Entity Recognition
In a surprising turn, Large Language Models (LLMs) together with a growi...

08/30/2022
MultiCoNER: A Large-scale Multilingual dataset for Complex Named Entity Recognition
We present MultiCoNER, a large multilingual dataset for Named Entity Rec...

08/26/2021
A Realistic Study of Auto-regressive Language Models for Named Entity Typing and Recognition
Despite impressive results of language models for named entity recogniti...

04/10/2020
One Model to Recognize Them All: Marginal Distillation from NER Models with Different Tag Sets
Named entity recognition (NER) is a fundamental component in the modern ...

05/01/2023
Poisoning Language Models During Instruction Tuning
Instruction-tuned LMs such as ChatGPT, FLAN, and InstructGPT are finetun...
