
Classification of US Supreme Court Cases using BERT-Based Techniques

by   Shubham Vatsal, et al.
New York University

Models based on Bidirectional Encoder Representations from Transformers (BERT) produce state-of-the-art (SOTA) results on many natural language processing (NLP) tasks, such as named entity recognition (NER) and part-of-speech (POS) tagging. Long documents such as US Supreme Court opinions, however, are difficult for BERT-based models to handle out of the box. In this paper, we experiment with several BERT-based classification techniques on US Supreme Court decisions from the Supreme Court Database (SCDB) and compare them with previous SOTA results, including SOTA models designed specifically for long documents. We evaluate two classification tasks: (1) a broad task with 15 categories and (2) a fine-grained task with 279 categories. Our best model achieves an accuracy of 80% on the 15 broad categories and 60% on the 279 fine-grained categories, an improvement of 8% and 28%, respectively, over previously reported SOTA results.
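The out-of-the-box difficulty mentioned above stems from BERT's fixed input length (typically 512 subword tokens), which Supreme Court opinions far exceed. One common workaround — not necessarily the technique used in this paper — is to split a long token sequence into overlapping fixed-size chunks, classify each chunk, and aggregate the predictions. A minimal sketch of the chunking step (function name and parameters are illustrative):

```python
def chunk_tokens(tokens, max_len=512, stride=256):
    """Split a long token sequence into overlapping windows of at most
    max_len tokens, advancing by stride tokens each time, so that no
    part of the document is dropped when fed to a BERT-style encoder."""
    chunks = []
    start = 0
    while start < len(tokens):
        chunks.append(tokens[start:start + max_len])
        if start + max_len >= len(tokens):
            break  # this window already reaches the end of the document
        start += stride
    return chunks


# Example: a 1000-token "opinion" becomes three overlapping 512/488-token windows.
windows = chunk_tokens(list(range(1000)))
```

Each window would then be encoded separately, with the per-window logits pooled (for example, by averaging) into a single document-level prediction.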


