CAIL2018: A Large-Scale Legal Dataset for Judgment Prediction

07/04/2018, by Chaojun Xiao, et al.

In this paper, we introduce the Chinese AI and Law challenge dataset (CAIL2018), the first large-scale Chinese legal dataset for judgment prediction. CAIL2018 contains more than 2.6 million criminal cases published by the Supreme People's Court of China, several times more than the datasets used in existing works on judgment prediction. Moreover, its annotations of judgment results are more detailed and rich: they consist of applicable law articles, charges, and prison terms, which are expected to be inferred from the fact descriptions of cases. For comparison, we implement several conventional text classification baselines for judgment prediction. Experimental results show that predicting the judgment results of legal cases, especially prison terms, remains a challenge for current models. To help researchers improve legal judgment prediction, both the dataset and the baselines will be released after the CAIL competition (http://cail.cipsc.org.cn/).


1 Introduction

The task of Legal Judgment Prediction (LJP) aims to empower machines to predict the judgment results of legal cases after reading fact descriptions. It has been studied for decades. Due to the limited availability of published cases, early works Lauderdale and Clark (2012); Segal (1984); Keown (1980); Ulmer (1963); Nagel (1963); Kort (1957) usually conduct statistical analysis of judgment results over a small number of cases rather than predicting them. With the development of machine learning algorithms, some works treat LJP as a text classification task and propose to extract efficient features from fact descriptions Liu and Chen (2017); Sulea et al. (2017); Aletras et al. (2016); Lin et al. (2012); Liu and Hsieh (2006). These works are still restricted to particular case types and suffer from generalization issues when applied to other scenarios.

Inspired by the success of deep learning techniques on natural language processing tasks, researchers have attempted to employ neural models for the judgment prediction task under the text classification framework Luo et al. (2017); Hu et al. (2018). However, there is no publicly accessible high-quality dataset for LJP yet. Therefore, we collect and release the first large-scale dataset for LJP, i.e., CAIL2018, to encourage further exploration of this task and other advanced legal intelligence algorithms.

CAIL2018 consists of more than 2.6 million criminal cases published by the Supreme People's Court of China and collected from http://wenshu.court.gov.cn/. These documents serve as a reference for professionals to improve their working efficiency and are expected to benefit research on legal intelligence systems.

Specifically, each case in CAIL2018 consists of two parts, i.e., the fact description and the corresponding judgment result. The judgment result of each case is refined into three representative components: relevant law articles, charges, and prison terms. Compared with the datasets used in existing LJP works, CAIL2018 is larger in scale and preserves richer annotations of judgment results. In total, CAIL2018 contains more than 2.6 million criminal cases, annotated with applicable criminal law articles and criminal charges. Both the number of cases and the number of labels are several times larger than those of other closed-source LJP datasets.

In the following parts, we give a detailed introduction to the construction of CAIL2018 and the LJP results of baseline methods on this dataset.

Fact | Relevant Law Article | Charge | Prison Term | Defendant
被告人胡某… | 刑法第234条 | 故意伤害 | 12个月 | 胡某
The defendant Hu… | Article 234 of the Criminal Law | Intentional injury | 12 months | Mr./Ms. Hu

Table 1: An example in CAIL2018.

2 Dataset Construction

We construct CAIL2018 from criminal documents collected from China Judgments Online (http://wenshu.court.gov.cn/). These criminal-case documents belong to five types: judgment, verdict, conciliation statement, decision letter, and notice. For LJP, we are only concerned with cases that carry judgment results. Therefore, we keep only the judgment documents for training LJP models.

Each original document is well structured and divided into several parts, e.g., fact description, court view, parties, judgment result, and other information. We therefore take the fact part as input and extract the applicable law articles, charges, and prison terms from the judgment result with regular expressions.
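The paper does not publish the actual expressions used; the following is a minimal Python sketch of this extraction step, under assumed patterns that match law-article references such as 刑法第234条 and prison terms given in months (e.g., 12个月). Real judgment documents use many more surface forms, including Chinese-numeral terms, which a production extractor would also have to cover.

```python
import re

def extract_judgment(text):
    """Illustrative extraction of law articles and a prison term (in
    months) from a judgment string. The patterns are assumptions, not
    the authors' actual regular expressions."""
    # Article references of the form 刑法第<number>条.
    articles = [int(m) for m in re.findall(r"刑法第(\d+)条", text)]
    # Prison term of the form <number>个月 (months).
    term = re.search(r"(\d+)个月", text)
    prison_months = int(term.group(1)) if term else None
    return articles, prison_months

# Example based on the case shown in Table 1.
arts, months = extract_judgment("依照刑法第234条，判处有期徒刑12个月")
```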

Since many criminal cases involve multiple defendants, which would greatly increase the difficulty of LJP, we only retain cases with a single defendant.

In addition, there are many low-frequency charges (e.g., insulting the national flag, jailbreak) and law articles. We filter out cases whose charges or law articles fall below a frequency threshold. Besides, the top law articles of the Chinese Criminal Law concern general provisions rather than specific charges, so we filter out these law articles and charges as well.

After preprocessing, the dataset contains criminal cases annotated with applicable criminal law articles, charges, and prison terms. We show an instance of CAIL2018 in Table 1.

It is worth noting that the distribution of categories in CAIL2018 is quite imbalanced: the most frequent charges cover the great majority of cases, while the least frequent charges cover only a handful. This imbalance makes it challenging to predict low-frequency charges and law articles.

3 Experiments

In this section, we implement and evaluate several typical text classification baselines on the three subtasks of LJP: predicting relevant law articles, charges, and prison terms.

Model | Charges (Acc. / MP / MR, %) | Relevant Articles (Acc. / MP / MR, %) | Terms of Penalty (Acc. / MP / MR, %)
FastText | 94.3 / 50.9 / 39.7 | 93.3 / 45.8 / 38.1 | 74.6 / 48.0 / 24.5
TFIDF+SVM | 94.0 / 73.9 / 56.2 | 92.9 / 71.8 / 52.4 | 75.4 / 75.4 / 46.1
CNN | 97.6 / 37.0 / 21.4 | 97.6 / 37.4 / 21.8 | 78.2 / 45.5 / 36.1

Table 2: LJP results on CAIL2018.

3.1 Baselines

We select the following baselines for comparison:

TFIDF+SVM: Term-frequency inverse document frequency (TFIDF) Salton and Buckley (1988) is an efficient method to extract word features, and Support Vector Machine (SVM) Suykens and Vandewalle (1999) is a representative classification model. We use TFIDF to extract text features and train an SVM classifier with a linear kernel.
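To make the weighting scheme concrete, here is a minimal pure-Python sketch of classic TF-IDF (term frequency times inverse document frequency); the exact TFIDF variant and feature pipeline used in the experiments may differ:

```python
import math
from collections import Counter

def tfidf(docs):
    """Compute raw-TF x log-IDF weights for a list of pre-segmented
    documents (each document is a list of tokens)."""
    n = len(docs)
    # Document frequency: in how many documents each term appears.
    df = Counter(t for doc in docs for t in set(doc))
    weights = []
    for doc in docs:
        tf = Counter(doc)
        weights.append({t: tf[t] * math.log(n / df[t]) for t in tf})
    return weights

docs = [["theft", "of", "property"],
        ["intentional", "injury", "of", "victim"]]
w = tfidf(docs)
# "of" appears in every document, so its IDF (hence its weight) is zero,
# while document-specific terms like "theft" keep a positive weight.
```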

FastText: FastText Joulin et al. (2017) is a simple and efficient approach for text classification based on n-grams and hierarchical softmax Mikolov et al. (2013).

CNN: Convolutional Neural Networks (CNN) have proven effective for text classification Kim (2014). We employ a CNN with multiple filter widths to encode fact descriptions.
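To illustrate what such an encoder computes, here is a toy pure-Python sketch of multi-width convolution with max-over-time pooling in the spirit of Kim (2014). The filter widths, filter counts, and random weights are illustrative assumptions, not the experimental configuration, and a real implementation would use a deep learning framework with learned parameters:

```python
import random

def cnn_encode(tokens, emb, widths=(2, 3, 4), n_filters=2):
    """Slide filters of several widths over the token embeddings,
    max-pool each feature map over time, and concatenate the pooled
    values into a fixed-size representation of the sequence."""
    dim = len(next(iter(emb.values())))
    vecs = [emb[t] for t in tokens]
    rng = random.Random(0)  # fixed seed: illustrative random filters
    pooled = []
    for w in widths:
        for _ in range(n_filters):
            # One filter: a weight vector over a window of w embeddings.
            filt = [rng.uniform(-1, 1) for _ in range(w * dim)]
            scores = []
            for i in range(len(vecs) - w + 1):
                window = [x for v in vecs[i:i + w] for x in v]
                scores.append(sum(a * b for a, b in zip(filt, window)))
            pooled.append(max(scores))  # max-over-time pooling
    return pooled

# Toy embeddings (hypothetical 3-dimensional vectors).
emb = {"the": [0.1, 0.2, 0.0], "defendant": [0.3, 0.1, 0.5],
       "stole": [0.2, 0.4, 0.1], "property": [0.0, 0.3, 0.2]}
feat = cnn_encode(["the", "defendant", "stole", "property"], emb)
# The encoding length is len(widths) * n_filters, independent of the
# input length, which is what makes it usable as a classifier input.
```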

3.2 Implementation Details

For all methods, we randomly split the cases into a training set and a test set. Since all fact descriptions are written in Chinese, we employ THULAC Sun et al. (2016) for word segmentation. For the TFIDF+SVM model, we limit the feature size. For the neural models, we employ the Skip-Gram model Mikolov et al. (2013) to pre-train word embeddings.

For CNN, we cap the length of each fact description and use several filter widths, with the same number of filters for each width for consistency.

For training, we employ Adam Kingma and Ba (2015) as the optimizer, with fixed learning rate, dropout rate, and batch size.

3.3 Results and Analysis

We evaluate the baseline models with several metrics that are widely used in classification tasks: accuracy (Acc.), macro-precision (MP), and macro-recall (MR). Experimental results on the test set are shown in Table 2.
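For reference, the macro-averaged metrics can be computed as below. This is a generic sketch, not the authors' evaluation script; its point is that macro averaging weights every class equally, so rare charges count as much as frequent ones, which is why macro scores can be low even when accuracy is high:

```python
from collections import Counter

def macro_prf(y_true, y_pred):
    """Accuracy, macro-precision, and macro-recall: per-class precision
    and recall averaged with equal weight per class."""
    labels = sorted(set(y_true) | set(y_pred))
    tp = Counter((t, p) for t, p in zip(y_true, y_pred) if t == p)
    pred_n = Counter(y_pred)   # predictions per class
    true_n = Counter(y_true)   # gold instances per class
    prec = sum(tp[(l, l)] / pred_n[l] if pred_n[l] else 0.0
               for l in labels) / len(labels)
    rec = sum(tp[(l, l)] / true_n[l] if true_n[l] else 0.0
              for l in labels) / len(labels)
    acc = sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)
    return acc, prec, rec

# A classifier that always predicts the majority charge scores high on
# accuracy but is penalized by the macro metrics for the missed class.
acc, mp, mr = macro_prf(["theft", "theft", "theft", "injury"],
                        ["theft", "theft", "theft", "theft"])
```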

From this table, we find that current models achieve considerable accuracy on charge prediction and relevant law article prediction. However, the low MP and MR scores show that LJP remains a significant challenge, largely due to the class-imbalance issue and the scarcity of training instances for low-frequency labels.

4 Conclusion

In this work, we release the first large-scale legal judgment prediction dataset, CAIL2018. Compared with existing LJP datasets, CAIL2018 is the largest publicly available LJP dataset so far. Moreover, CAIL2018 preserves more detailed annotations, consistent with real-world scenarios. Experiments demonstrate that LJP is still challenging and leaves plenty of room for improvement.

References

  • Aletras et al. (2016) Nikolaos Aletras, Dimitrios Tsarapatsanis, Daniel Preotiuc-Pietro, and Vasileios Lampos. 2016. Predicting judicial decisions of the european court of human rights: A natural language processing perspective. PeerJ Computer Science 2.
  • Hu et al. (2018) Zikun Hu, Xiang Li, Cunchao Tu, Zhiyuan Liu, and Maosong Sun. 2018. Few-shot charge prediction with discriminative legal attributes. In Proceedings of COLING.
  • Joulin et al. (2017) Armand Joulin, Edouard Grave, Piotr Bojanowski, and Tomas Mikolov. 2017. Bag of tricks for efficient text classification. In Proceedings of EACL.
  • Keown (1980) R Keown. 1980. Mathematical models for legal prediction. Computer/LJ 2:829.
  • Kim (2014) Yoon Kim. 2014. Convolutional neural networks for sentence classification. In Proceedings of EMNLP.
  • Kingma and Ba (2015) Diederik Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In Proceedings of ICLR.
  • Kort (1957) Fred Kort. 1957. Predicting supreme court decisions mathematically: A quantitative analysis of the "right to counsel" cases. American Political Science Review 51(1):1–12.
  • Lauderdale and Clark (2012) Benjamin E Lauderdale and Tom S Clark. 2012. The supreme court’s many median justices. American Political Science Review 106(4):847–866.
  • Lin et al. (2012) Wan-Chen Lin, Tsung-Ting Kuo, Tung-Jia Chang, Chueh-An Yen, Chao-Ju Chen, and Shou-de Lin. 2012. Exploiting machine learning models for chinese legal documents labeling, case classification, and sentencing prediction. In Processdings of ROCLING. page 140.
  • Liu and Hsieh (2006) Chao-Lin Liu and Chwen-Dar Hsieh. 2006. Exploring phrase-based classification of judicial documents for criminal charges in chinese. In Proceedings of ISMIS. pages 681–690.
  • Liu and Chen (2017) Yi Hung Liu and Yen Liang Chen. 2017. A two-phase sentiment analysis approach for judgement prediction. Journal of Information Science.
  • Luo et al. (2017) Bingfeng Luo, Yansong Feng, Jianbo Xu, Xiang Zhang, and Dongyan Zhao. 2017. Learning to predict charges for criminal cases with legal basis. In Proceedings of EMNLP.
  • Mikolov et al. (2013) Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. 2013. Distributed representations of words and phrases and their compositionality. In Proceedings of NIPS. pages 3111–3119.
  • Nagel (1963) Stuart S Nagel. 1963. Applying correlation analysis to case prediction. Tex. L. Rev. 42:1006.
  • Salton and Buckley (1988) Gerard Salton and Christopher Buckley. 1988. Term-weighting approaches in automatic text retrieval. Information processing & management 24(5):513–523.
  • Segal (1984) Jeffrey A Segal. 1984. Predicting supreme court cases probabilistically: The search and seizure cases, 1962-1981. American Political Science Review 78(4):891–900.
  • Sulea et al. (2017) Octavia Maria Sulea, Marcos Zampieri, Mihaela Vela, and Josef Van Genabith. 2017. Exploring the use of text classification in the legal domain. In Proceedings of ASAIL workshop.
  • Sun et al. (2016) Maosong Sun, Xinxiong Chen, Kaixu Zhang, Zhipeng Guo, and Zhiyuan Liu. 2016. THULAC: An efficient lexical analyzer for Chinese.
  • Suykens and Vandewalle (1999) Johan AK Suykens and Joos Vandewalle. 1999. Least squares support vector machine classifiers. Neural processing letters 9(3):293–300.
  • Tang et al. (2015) Duyu Tang, Bing Qin, and Ting Liu. 2015. Document modeling with gated recurrent neural network for sentiment classification. In Proceedings of EMNLP. pages 1422–1432.
  • Ulmer (1963) S Sidney Ulmer. 1963. Quantitative analysis of judicial processes: Some practical and theoretical applications. Law & Contemp. Probs. 28:164.