Sublanguage: A Serious Issue Affects Pretrained Models in Legal Domain

04/15/2021
by Ha-Thanh Nguyen, et al.

Legal English is a sublanguage that matters to everyone but is not understood by everyone. Pretrained models have become standard practice in current deep learning approaches to a wide range of problems. Applying these models in practice without knowledge of the sublanguage of the law would be wasteful or even dangerous. In this paper, we raise this issue and propose a simple solution by introducing BERTLaw, a pretrained model for the legal sublanguage. The paper's experiments demonstrate that the method is more effective than the baseline pretrained model.
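
The core of the sublanguage issue is a vocabulary mismatch: a tokenizer built from general-domain text fragments legal terminology into many subword pieces, so no single embedding carries the legal concept. The sketch below is a minimal illustration of this mismatch, not the paper's code; it assumes the Hugging Face transformers library and the public bert-base-uncased checkpoint, and does not assume that BERTLaw itself is publicly hosted.

```python
# Minimal sketch (not the paper's code) of the vocabulary-mismatch problem:
# a general-domain WordPiece tokenizer fragments legal terminology that a
# legal-sublanguage model such as BERTLaw would ideally represent directly.
from transformers import AutoTokenizer

# Public general-domain checkpoint; BERTLaw is not assumed to be available.
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

legal_terms = ["estoppel", "res judicata", "habeas corpus", "tortfeasor"]
for term in legal_terms:
    # General-domain vocabularies usually split these terms into several
    # subword pieces, diluting the domain-specific meaning.
    print(f"{term!r} -> {tokenizer.tokenize(term)}")
```

A sublanguage pretrained model addresses this by learning its vocabulary and weights from legal text, so such terms map to dedicated tokens with representations learned in context.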

Related research

02/13/2022
Transformer-based Approaches for Legal Text Processing
In this paper, we introduce our approaches using Transformer-based model...

06/25/2021
JNLP Team: Deep Learning Approaches for Legal Processing Tasks in COLIEE 2021
COLIEE is an annual competition in automatic computerized legal text pro...

12/13/2021
Dependency Learning for Legal Judgment Prediction with a Unified Text-to-Text Transformer
Given the fact of a case, Legal Judgment Prediction (LJP) involves a ser...

03/21/2023
Understand Legal Documents with Contextualized Large Language Models
The growth of pending legal cases in populous countries, such as India, ...

01/31/2022
Don't let Ricci v. DeStefano Hold You Back: A Bias-Aware Legal Solution to the Hiring Paradox
Companies that try to address inequality in employment face a hiring par...

10/03/2018
Fast Approach to Build an Automatic Sentiment Annotator for Legal Domain using Transfer Learning
This study proposes a novel way of identifying the sentiment of the phra...

01/01/2022
Interpretable Low-Resource Legal Decision Making
Over the past several years, legal applications of deep learning have be...
