Seq2Seq and Multi-Task Learning for joint intent and content extraction for domain-specific interpreters

08/01/2018
by Marc Velay, et al.

This study evaluates the performance of an LSTM network for detecting and extracting the intent and content of commands for a financial chatbot. It presents two techniques, sequence-to-sequence learning and Multi-Task Learning, which may improve on this task.
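The abstract describes a shared encoder serving two tasks at once: classifying the utterance-level intent and extracting the command's content token by token. A minimal sketch of that joint objective is shown below, assuming a toy command vocabulary, hypothetical intent and slot label sets, a simple projection standing in for the paper's LSTM encoder, and a weighted sum of the two cross-entropy losses; none of these details come from the paper itself.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy vocabulary and label sets (hypothetical; a real financial chatbot
# would use its own command vocabulary and tag inventory).
vocab = {"<pad>": 0, "buy": 1, "100": 2, "shares": 3, "of": 4, "acme": 5}
intents = ["buy_order", "sell_order", "get_quote"]
slot_tags = ["O", "B-quantity", "B-ticker"]

EMB, HID = 8, 16

# Shared encoder parameters (a single tanh projection stands in for the
# LSTM encoder described in the abstract).
W_emb = rng.normal(0, 0.1, (len(vocab), EMB))
W_enc = rng.normal(0, 0.1, (EMB, HID))
# Task-specific heads: one for intent, one for per-token content tags.
W_intent = rng.normal(0, 0.1, (HID, len(intents)))
W_slot = rng.normal(0, 0.1, (HID, len(slot_tags)))

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def forward(token_ids):
    """Shared encoding followed by the two task heads."""
    h = np.tanh(W_emb[token_ids] @ W_enc)      # (T, HID) token states
    intent_logits = h.mean(axis=0) @ W_intent  # utterance-level intent
    slot_logits = h @ W_slot                   # per-token content tags
    return softmax(intent_logits), softmax(slot_logits)

def joint_loss(intent_p, slot_p, intent_y, slot_y, alpha=0.5):
    """Multi-task objective: weighted sum of the two cross-entropies."""
    li = -np.log(intent_p[intent_y])
    ls = -np.log(slot_p[np.arange(len(slot_y)), slot_y]).mean()
    return alpha * li + (1 - alpha) * ls

tokens = np.array([1, 2, 3, 4, 5])  # "buy 100 shares of acme"
intent_p, slot_p = forward(tokens)
loss = joint_loss(intent_p, slot_p,
                  intent_y=0,                        # buy_order
                  slot_y=np.array([0, 1, 0, 0, 2]))  # O B-qty O O B-ticker
```

The multi-task benefit comes from the shared encoder: gradients from both heads flow into the same representation, so evidence about intent and content regularize each other. The `alpha` weighting between the two losses is a tuning choice, not something specified in the abstract.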


Related research

03/28/2022
Multi-Task Learning for Visual Scene Understanding
Despite the recent progress in deep learning, most approaches still go f...

06/10/2021
A Semi-supervised Multi-task Learning Approach to Classify Customer Contact Intents
In the area of customer support, understanding customers' intents is a c...

08/05/2020
TPG-DNN: A Method for User Intent Prediction Based on Total Probability Formula and GRU Loss with Multi-task Learning
The E-commerce platform has become the principal battleground where peop...

11/15/2019
Bootstrapping NLU Models with Multi-task Learning
Bootstrapping natural language understanding (NLU) systems with minimal ...

04/30/2021
Hierarchical Modeling for Out-of-Scope Domain and Intent Classification
User queries for a real-world dialog system may sometimes fall outside t...

09/15/2021
Multi-Task Learning with Sequence-Conditioned Transporter Networks
Enabling robots to solve multiple manipulation tasks has a wide range of...

11/24/2019
CopyMTL: Copy Mechanism for Joint Extraction of Entities and Relations with Multi-Task Learning
Joint extraction of entities and relations has received significant atte...
