Z-BERT-A: A Zero-Shot Pipeline for Unknown Intent Detection

08/15/2022
by   Daniele Comi, et al.

Intent discovery is a fundamental task in NLP, and it is increasingly relevant for a variety of industrial applications (Quarteroni 2018). The main challenge resides in the need to identify novel, unseen intents from input utterances. Herein, we propose Z-BERT-A, a two-stage method for intent discovery relying on a Transformer architecture (Vaswani et al. 2017; Devlin et al. 2018), fine-tuned with Adapters (Pfeiffer et al. 2020), initially trained for Natural Language Inference (NLI), and later applied to unknown intent classification in a zero-shot setting. In our evaluation, we first analyze the quality of the model after adaptive fine-tuning on known classes. Second, we evaluate its performance when casting intent classification as an NLI task. Lastly, we test the zero-shot performance of the model on unseen classes, showing how Z-BERT-A can effectively perform intent discovery by generating intents that are semantically similar, if not identical, to the ground-truth ones. Our experiments show that Z-BERT-A outperforms a wide variety of baselines in two zero-shot settings: known intent classification and unseen intent discovery. The proposed pipeline holds the potential to be widely applied in a variety of customer-care applications. It enables automated dynamic triage using a lightweight model that, unlike large language models, can be easily deployed and scaled in a wide variety of business scenarios, especially in settings with limited hardware availability where on-premise or low-resource cloud deployments are imperative. By predicting novel intents from a single utterance, Z-BERT-A represents an innovative approach to intent discovery, enabling online generation of novel intents. The pipeline is available as an installable Python package at the following link: https://github.com/GT4SD/zberta.
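The core idea of casting intent classification as NLI can be sketched as follows: each candidate intent is turned into a hypothesis sentence, the utterance serves as the premise, and the best-entailed hypothesis wins. This is a minimal illustrative sketch, not the paper's implementation; the entailment scorer below is a stub (word-overlap proxy) standing in for a real NLI model, and the hypothesis template is an assumed example.

```python
# Sketch of NLI-style zero-shot intent classification.
# Assumption: the entailment scorer is a stub; a real system would use
# an NLI model (e.g. BERT fine-tuned on MNLI) to score entailment.

def nli_entailment_score(premise: str, hypothesis: str) -> float:
    """Stub scorer: fraction of hypothesis words shared with the premise.
    A crude stand-in for an NLI model's entailment probability."""
    p = set(premise.lower().split())
    h = set(hypothesis.lower().split())
    return len(p & h) / max(len(h), 1)

def zero_shot_intent(utterance: str, candidate_intents: list[str]) -> str:
    """Cast intent classification as NLI: the utterance is the premise,
    each candidate intent becomes a hypothesis; return the best-entailed one."""
    scores = {
        intent: nli_entailment_score(utterance, f"This text is about {intent}")
        for intent in candidate_intents
    }
    return max(scores, key=scores.get)

print(zero_shot_intent(
    "I want to check my account balance",
    ["balance", "transfer money", "card activation"],
))  # -> balance
```

Because the candidate labels are just strings, this formulation needs no retraining to handle intents never seen at training time, which is what enables the zero-shot setting described above.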

