Automatic judgment prediction aims to train a machine judge to determine whether a certain plea in a given civil case should be supported or rejected. In countries with a civil law system, e.g., mainland China, this decision is made with reference to the relevant law articles and the fact description, as a human judge would do. The intuition comes from the fact that under a civil law system, law articles act as the principles for judicial judgments. Such techniques would have a wide range of promising applications. On the one hand, legal consulting systems could give legal outsiders, who struggle with complicated terminology, low-cost access to high-quality legal resources. On the other hand, machine judge assistants for professionals would help improve the efficiency of the judicial system. Besides, automated judgment systems can help improve judicial equality and transparency. From another perspective, there are currently about seven times as many civil cases as criminal cases in mainland China, with both caseloads increasing annually, making judgment prediction for civil cases a promising application Zhuge (2016).
Previous works Aletras et al. (2016); Katz et al. (2017); Luo et al. (2017); Sulea et al. (2017) formalize judgment prediction as a text classification task, regarding either charge names or binary judgments, i.e., support or reject, as the target classes. These works focus on situations where only one result is expected, e.g., the US Supreme Court's decisions Katz et al. (2017) and charge name prediction for criminal cases Luo et al. (2017). Despite these recent efforts and their progress, automatic judgment prediction in the civil law system still faces two main challenges:
One-to-Many Relation between Case and Plea. A single civil case may contain multiple pleas, and the result of each plea is co-determined by the related law articles and specific aspects of the case. For example, in divorce proceedings, alienation of mutual affection is the key factor for granting a divorce, while custody of children depends on which side can provide a better environment for the children's growth as well as each parent's financial condition. Here, different pleas are independent.
Heterogeneity of Input Triple. Inputs to a judgment prediction system consist of three heterogeneous yet complementary parts, i.e., fact description, plaintiff's plea, and related law articles. Concatenating them together and treating them simply as a sequence of words, as in previous works Katz et al. (2017); Aletras et al. (2016), would cause a great loss of information. This is analogous to question answering, where the dual inputs, i.e., query and passage, should be modeled separately.
Despite the introduction of neural networks that can learn better semantic representations of the input text, it remains unsolved how to incorporate proper mechanisms to integrate the complementary triple of pleas, fact descriptions, and law articles.
Inspired by recent advances in question answering (QA) based reading comprehension (RC) Wang et al. (2017); Cui et al. (2017); Nguyen et al. (2016); Rajpurkar et al. (2016), we propose the Legal Reading Comprehension (LRC) framework for automatic judgment prediction. LRC incorporates a reading mechanism to better model the complementary inputs mentioned above, as human judges do when referring to legal materials in search of supporting law articles. The reading mechanism, which simulates how humans connect and integrate multiple texts, has proven an effective module in RC tasks. We argue that applying the reading mechanism properly among the triple can yield a better understanding and a more informative representation of the original text, and further improve performance. To instantiate the framework, we propose an end-to-end neural network model named AutoJudge.
For experiments, we train and evaluate our models in the civil law system of mainland China. We collect and construct a large-scale real-world data set of case documents that the Supreme People’s Court of People’s Republic of China has made publicly available. Fact description, pleas, and results can be extracted easily from these case documents with regular expressions, since the original documents have special typographical characteristics indicating the discourse structure. We also take into account law articles and their corresponding juridical interpretations. We also implement and evaluate previous methods on our dataset, which prove to be strong baselines.
Our experimental results show significant improvements over previous methods. Further experiments demonstrate that our model also achieves considerable improvement over other off-the-shelf state-of-the-art models under the classification and question-answering frameworks respectively. Ablation tests carried out by removing components of our model further prove its robustness and effectiveness.
To sum up, our contributions are as follows:
(1) We introduce reading mechanism and re-formalize judgment prediction as Legal Reading Comprehension to better model the complementary inputs.
(2) We construct a real-world dataset for experiments, and plan to publish it for further research.
(3) Besides baselines from previous works, we also carry out comprehensive experiments comparing different existing deep neural network methods on our dataset. Supported by these experiments, improvements achieved by LRC prove to be robust.
2 Related Work
2.1 Judgment Prediction
Automatic judgment prediction has been studied for decades. At the very first stage of judgment prediction studies, researchers focused on mathematical and statistical analysis of existing cases, without proposing general methodologies for prediction Lauderdale and Clark (2012); Segal (1984); Keown (1980); Ulmer (1963); Nagel (1963); Kort (1957).
Recent attempts consider judgment prediction under the text classification framework. Most of these works extract effective features from text (e.g., N-grams) Liu and Chen (2017); Sulea et al. (2017); Aletras et al. (2016); Lin et al. (2012); Liu and Hsieh (2006) or from case profiles (e.g., dates, terms, locations, and types) Katz et al. (2017). All these methods require a large amount of human effort to design features or annotate cases. Besides, they suffer from generalization issues when applied to other scenarios.
Motivated by the successful application of deep neural networks, Luo et al. Luo et al. (2017) introduce an attention-based neural model to predict charges of criminal cases, and verify the effectiveness of taking law articles into consideration. Nevertheless, they still fall into the text classification framework and lack the ability to handle multiple inputs with more complicated structures.
2.2 Text Classification
As the basis of previous judgment prediction works, the typical text classification task takes a single text as input and predicts the category it belongs to. Recent works usually employ neural networks to model the internal structure of the single input Kim (2014); Baharudin et al. (2010); Tang et al. (2015); Yang et al. (2016).
2.3 Reading Comprehension
Reading comprehension is a relevant task for modeling heterogeneous and complementary inputs, where an answer is predicted given two channels of input, i.e., a textual passage and a query. Considerable progress has been made Cui et al. (2017); Dhingra et al. (2017); Wang et al. (2017). These models employ various attention mechanisms to model the interaction between passage and query. Inspired by the advantage of reading comprehension models in modeling multiple inputs, we apply this idea to the legal area and propose legal reading comprehension for judgment prediction.
3 Legal Reading Comprehension
3.1 Conventional Reading Comprehension
Conventional reading comprehension He et al. (2017); Joshi et al. (2017); Nguyen et al. (2016); Rajpurkar et al. (2016) usually formalizes the task as predicting an answer given a passage and a query, where the answer could be a single word, a text span of the original passage, an option chosen from answer candidates, or a sequence generated by human annotators.
Generally, an instance in RC is represented as a triple (p, q, a), where p, q, and a correspond to passage, query, and answer respectively. Given a triple, RC takes the pair (p, q) as the input and employs attention-based neural models to construct an efficient representation. Afterwards, the representation is fed into the output layer to select or generate an answer a.
3.2 Legal Reading Comprehension
Existing works usually formalize judgment prediction as a text classification task and focus on extracting well-designed features of specific cases. Such simplification ignores that the judgment of a case is determined by its fact description and multiple pleas. Moreover, the final judgment should conform to the legal provisions, especially in civil law systems. Therefore, how to integrate the information (i.e., fact descriptions, pleas, and law articles) in a reasonable way is critical for judgment prediction.
Inspired by the successful application of RC, we propose a framework of Legal Reading Comprehension (LRC) for judgment prediction in the legal area. As illustrated in Fig. 1, for each plea in a given case, the prediction of the judgment result is made based on the fact description and the potentially relevant law articles.
In a nutshell, LRC can be formalized as the following quadruplet task:

(f, p, l, r),

where f is the fact description, p is the plea, l is the set of law articles, and r is the result. Given (f, p, l), LRC aims to predict the judgment result as

r* = argmax_r P(r | f, p, l).

The probability P(r | f, p, l) is calculated with respect to the interaction among the triple (f, p, l), drawing on the experience of modeling the interaction between (passage, query) pairs in RC.
To summarize, LRC is innovative in the following aspects:
(1) While previous works fit the problem into text classification framework, LRC re-formalizes the way to approach such problems. This new framework provides the ability to deal with the heterogeneity of the complementary inputs.
(2) Rather than employing conventional RC models to handle pair-wise text information in the legal area, LRC takes the critical law articles into consideration and models the facts, pleas, and law articles jointly for judgment prediction, which is more suitable to simulate the human mode of dealing with cases.
4 AutoJudge

We propose a novel judgment prediction model, AutoJudge, to instantiate the LRC framework. As shown in Fig. 2, AutoJudge consists of three flexible modules: a text encoder, a pair-wise attentive reader, and an output module.
In the following parts, we give a detailed introduction to these three modules.
4.1 Text Encoder
As illustrated in Fig. 2, Text Encoder aims to encode the word sequences of inputs into continuous representation sequences.
Formally, consider a fact description f = {f_1, ..., f_{n_f}}, a plea p = {p_1, ..., p_{n_p}}, and the relevant law articles l = {l_1, ..., l_{n_l}}, where x_i denotes the i-th word in a sequence and n_f, n_p, n_l are the lengths of the respective word sequences.

First, we convert the words to their respective word embeddings. Afterwards, we employ bi-directional GRUs Cho et al. (2014); Bahdanau et al. (2015); Chung et al. (2014) to produce the encoded representations of all words.

Note that we adopt different bi-directional GRUs to encode fact descriptions, pleas, and law articles respectively. With these text encoders, f, p, and l are converted into their encoded representation sequences.
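To make the encoder concrete, here is a minimal numpy sketch of a bi-directional GRU encoder. All weights are random and the names (gru_step, bi_gru_encode) and dimensions are ours for illustration, not the paper's implementation:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_step(x, h_prev, W, U, b):
    """One GRU step (Cho et al., 2014). W: (3h, d), U: (3h, h), b: (3h,)."""
    hs = h_prev.shape[0]
    z = sigmoid(W[:hs] @ x + U[:hs] @ h_prev + b[:hs])                  # update gate
    r = sigmoid(W[hs:2*hs] @ x + U[hs:2*hs] @ h_prev + b[hs:2*hs])      # reset gate
    h_tilde = np.tanh(W[2*hs:] @ x + U[2*hs:] @ (r * h_prev) + b[2*hs:])
    return (1 - z) * h_prev + z * h_tilde

def bi_gru_encode(seq, W_f, U_f, b_f, W_b, U_b, b_b, hidden):
    """Run a forward and a backward GRU over a sequence of word embeddings
    and concatenate the two hidden states at each position."""
    h = np.zeros(hidden); fwd = []
    for x in seq:
        h = gru_step(x, h, W_f, U_f, b_f); fwd.append(h)
    h = np.zeros(hidden); bwd = []
    for x in reversed(seq):
        h = gru_step(x, h, W_b, U_b, b_b); bwd.append(h)
    bwd.reverse()
    return [np.concatenate([f, b]) for f, b in zip(fwd, bwd)]

rng = np.random.default_rng(0)
d, hidden, n = 8, 4, 5                          # embedding dim, hidden size, length
params = [rng.normal(scale=0.1, size=s) for s in
          [(3*hidden, d), (3*hidden, hidden), (3*hidden,)] * 2]
fact = [rng.normal(size=d) for _ in range(n)]
encoded = bi_gru_encode(fact, *params, hidden=hidden)
print(len(encoded), encoded[0].shape)           # n vectors of size 2*hidden
```

Separate parameter sets would be instantiated for the fact, plea, and law-article encoders, as the paper prescribes.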
4.2 Pair-Wise Attentive Reader
How to model the interactions among the input texts is the most important problem in reading comprehension. In AutoJudge, we employ a pair-wise attentive reader to process the pairs (fact, plea) and (fact, law articles) respectively. More specifically, we propose a pair-wise mutual attention mechanism to capture the complex semantic interaction between text pairs, as well as to increase the interpretability of AutoJudge.
4.2.1 Pair-Wise Mutual Attention
For each input pair, i.e., (fact, plea) or (fact, law articles), we employ pair-wise mutual attention to select relevant information from the fact description and produce more informative representation sequences.
As a variant of the original attention mechanism Bahdanau et al. (2015), we design the pair-wise mutual attention unit as a GRU with internal memories denoted as mGRU.
Taking the representation sequence pair (fact, plea) for instance, mGRU stores the fact sequence {f_1, ..., f_{n_f}} in its memories. For each timestep t, it selects relevant fact information from the memories as the weighted sum

c_t = sum_i alpha_{t,i} f_i.

Here, the weight alpha_{t,i} is the softmax value of the score e_{t,i}, which represents the relevance between f_i and p_t. It is calculated as

e_{t,i} = v^T tanh(W_f f_i + W_p p_t + W_h h_{t-1}),

where h_{t-1} is the previous hidden state of the GRU, which will be introduced in the following part, v is a weight vector, and W_f, W_p, W_h are the attention matrices of our proposed pair-wise attention mechanism.
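One step of this attention (scores v^T tanh(W_f f_i + W_p p_t + W_h h_{t-1}), softmax weights, and the weighted-sum context) can be sketched in numpy on a toy instance with random parameters; mutual_attention and all dimensions are our illustrative choices, not the paper's code:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def mutual_attention(fact_seq, p_t, h_prev, v, W_f, W_p, W_h):
    """One attention step of the pair-wise reader (sketch).
    Scores e_i = v^T tanh(W_f f_i + W_p p_t + W_h h_prev); weights are their
    softmax; the context c_t is the weighted sum of fact representations."""
    scores = np.array([v @ np.tanh(W_f @ f + W_p @ p_t + W_h @ h_prev)
                       for f in fact_seq])
    alpha = softmax(scores)
    c_t = sum(a * f for a, f in zip(alpha, fact_seq))
    return alpha, c_t

rng = np.random.default_rng(1)
dim, n = 6, 4
fact_seq = [rng.normal(size=dim) for _ in range(n)]
p_t, h_prev = rng.normal(size=dim), rng.normal(size=dim)
v = rng.normal(size=dim)
W_f, W_p, W_h = (rng.normal(scale=0.5, size=(dim, dim)) for _ in range(3))
alpha, c_t = mutual_attention(fact_seq, p_t, h_prev, v, W_f, W_p, W_h)
```

The attention weights alpha are what the visualization in the case-study section averages over timesteps.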
4.2.2 Reading Mechanism
With the relevant fact information c_t and the plea representation p_t, we get the t-th input of mGRU as

x_t = [p_t ; c_t],

where [;] indicates the concatenation operation.

Then, we feed x_t into the GRU to get a more informative representation sequence.

For the input pair (fact, law articles), we obtain the representation sequence in the same way, so we omit the implementation details here.
4.3 Output Layer
Using the text encoder and the pair-wise attentive reader, the initial input triple has been converted into two sequences, a plea-side sequence and a law-side sequence, the latter defined analogously to the former. These sequences preserve the complex semantic information about the pleas and law articles, and filter out irrelevant information in the fact descriptions.
With these two sequences, we concatenate them along the sequence-length dimension to generate a single sequence. Since we have already employed several GRU layers to encode the sequential inputs, another recurrent layer may be redundant. Therefore, we utilize a CNN layer Kim (2014) to capture the local structure and generate the representation vector for the final prediction.
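A minimal numpy sketch of such a convolution-and-max-pooling output layer in the style of Kim (2014); the filter widths, random weights, and the name conv_max_pool are our illustrative assumptions:

```python
import numpy as np

def conv_max_pool(seq, filters):
    """1-D convolution with ReLU over a (length, dim) sequence followed by
    max-over-time pooling, yielding one scalar feature per filter."""
    feats = []
    for W, b in filters:                       # W: (width, dim), b: scalar
        width = W.shape[0]
        acts = [max(0.0, float((seq[i:i+width] * W).sum() + b))
                for i in range(len(seq) - width + 1)]
        feats.append(max(acts))
    return np.array(feats)

rng = np.random.default_rng(2)
length, dim = 10, 6
seq = rng.normal(size=(length, dim))           # concatenated plea/law sequence
filters = [(rng.normal(scale=0.3, size=(w, dim)), 0.0) for w in (1, 3, 5)]
rep = conv_max_pool(seq, filters)              # representation vector for prediction
```

The pooled vector rep would then be fed to a final linear layer with a sigmoid to produce the support probability.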
Assuming y_i is the predicted probability that the plea in the i-th case sample would be supported and y*_i is the gold standard, AutoJudge aims to minimize the binary cross-entropy

L = -(1/N) sum_{i=1}^{N} [ y*_i log y_i + (1 - y*_i) log(1 - y_i) ],

where N is the number of training samples. As all the calculations in our model are differentiable, we employ Adam Kingma and Ba (2015) for optimization.
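The training objective can be written as a plain-Python binary cross-entropy (our illustrative helper, not the paper's code):

```python
import math

def binary_cross_entropy(y_true, y_pred, eps=1e-12):
    """Average cross-entropy over N training pleas, labels 1 (support) / 0 (reject)."""
    total = 0.0
    for y, p in zip(y_true, y_pred):
        p = min(max(p, eps), 1 - eps)          # clip for numerical stability
        total += -(y * math.log(p) + (1 - y) * math.log(1 - p))
    return total / len(y_true)

loss = binary_cross_entropy([1, 0, 1], [0.9, 0.2, 0.6])
# → approximately 0.2798
```

In practice the same quantity would be computed by the framework's loss function and minimized with Adam.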
5 Experiments

To evaluate the proposed LRC framework and the AutoJudge model, we carry out a series of experiments on divorce proceedings, a typical yet complex type of civil case. Divorce proceedings often involve several kinds of pleas, e.g., seeking divorce, custody of children, compensation, and maintenance, which focus on different aspects and thus make judgment prediction challenging.
5.1 Dataset Construction for Evaluation
5.1.1 Data Collection
Since none of the datasets from previous works have been published, we build a new one. We randomly collect cases from China Judgments Online111http://wenshu.court.gov.cn, among which cases are used for training and cases each for validation and testing. Among the original cases, some are granted divorce and the others are not. There are valid pleas in total, with supported and rejected. Note that if the divorce plea in a case is not granted, the other pleas of this case will not be considered by the judge. Case materials are all natural-language sentences, with on average tokens per fact description and per plea. There are relevant law articles in total, each with tokens on average. Note that the case documents include special typographical signals, making it easy to extract labeled data with regular expressions.
5.1.2 Data Pre-Processing
Name Replacement222We use regular expressions to extract names and roles from the formatted case header.: All names in case documents are replaced with marks indicating their roles, instead of simply anonymizing them, e.g., <Plaintiff>, <Defendant>, <Daughter_x> and so on. Since "all are equal before the law"333Constitution of the People's Republic of China., names should make no more difference than the roles they take.
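A minimal Python sketch of the name-replacement step; the names and the role mapping here are hypothetical, whereas the real system derives the mapping from the formatted case header with regular expressions:

```python
import re

# Hypothetical mapping from extracted names to role marks.
role_map = {"张三": "<Plaintiff>", "李四": "<Defendant>", "张小红": "<Daughter_1>"}

def replace_names(text, role_map):
    # Replace longer names first, so that a name that contains another
    # name as a substring is handled before the shorter one.
    for name in sorted(role_map, key=len, reverse=True):
        text = re.sub(re.escape(name), role_map[name], text)
    return text

sentence = "张三与李四婚后育有一女张小红。"
print(replace_names(sentence, role_map))
# → <Plaintiff>与<Defendant>婚后育有一女<Daughter_1>。
```

This keeps role information that is legally relevant while discarding the identity itself.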
Law Article Filtration: Since most accessible divorce proceeding documents do not contain ground-truth fine-grained articles444Fine-grained articles are in the Judicial Interpretations, giving detailed explanations, while the Marriage Law only covers basic principles., we use an unsupervised method instead. First, we extract all the articles from the law text with regular expressions. Afterwards, we select the 10 most relevant articles according to the fact descriptions as follows. We obtain sentence representations with CBOW embeddings Mikolov et al. (2013a, b) weighted by inverse document frequency, and compute the cosine similarity between cases and law articles. Word embeddings are pre-trained on Chinese Wikipedia pages555https://dumps.wikimedia.org/zhwiki/. As the final step, we extract the top relevant articles for each sample from both the main Marriage Law articles and their interpretations, which are equally important. We manually check the extracted articles for a subset of cases to ensure that the extraction quality is acceptable.
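The unsupervised selection step can be sketched as follows. Toy random vectors stand in for the pre-trained CBOW embeddings, and sent_vec / top_k_articles are our illustrative names under those assumptions:

```python
import numpy as np

DIM = 8

def sent_vec(tokens, emb, idf):
    """IDF-weighted average of word embeddings as a sentence representation."""
    vecs = [idf.get(t, 1.0) * emb[t] for t in tokens if t in emb]
    return np.mean(vecs, axis=0) if vecs else np.zeros(DIM)

def top_k_articles(fact_tokens, articles, emb, idf, k=10):
    """Rank law articles by cosine similarity to the fact description."""
    f = sent_vec(fact_tokens, emb, idf)
    def cos(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))
    sims = [(cos(f, sent_vec(toks, emb, idf)), i)
            for i, toks in enumerate(articles)]
    return [i for _, i in sorted(sims, reverse=True)[:k]]

rng = np.random.default_rng(3)
vocab = ["divorce", "custody", "property", "child", "support", "separation"]
emb = {w: rng.normal(size=DIM) for w in vocab}    # toy embeddings, not pre-trained
idf = {w: 1.0 + 0.1 * i for i, w in enumerate(vocab)}
fact = ["divorce", "separation", "child"]
articles = [["divorce", "separation"], ["property"], ["child", "custody"]]
ranked = top_k_articles(fact, articles, emb, idf, k=2)
```

The paper applies this retrieval separately to the main Marriage Law articles and to the interpretations, taking the top articles from each.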
The filtration process is automatic and fully unsupervised since the original documents have no ground-truth labels for fine-grained law articles, and coarse-grained law-articles only provide limited information. We also experiment with the ground-truth articles, but only a small fraction of them has fine-grained ones, and they are usually not available in real-world scenarios.
5.2 Implementation Details
We employ Jieba666https://github.com/fxsjy/jieba for Chinese word segmentation and keep the most frequent words; the remaining low-frequency words are replaced with the mark <UNK>. We use bi-directional GRUs for encoding and mGRU in the pair-wise attentive reader. In the CNN layer, filter windows of several widths are used, each with a number of feature maps. We add a dropout layer Srivastava et al. (2014) after the CNN layer. We use Adam Kingma and Ba (2015) for training. We employ precision, recall, F1, and accuracy as evaluation metrics. We repeat all the experiments multiple times and report the average results.
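The evaluation metrics can be computed as in this small sketch (our helper, assuming binary support/reject labels):

```python
def prf_acc(y_true, y_pred):
    """Precision, recall, F1, and accuracy for binary support (1) / reject (0) labels."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    prec = tp / (tp + fp) if tp + fp else 0.0
    rec = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
    acc = (tp + tn) / len(y_true)
    return prec, rec, f1, acc

p, r, f1, acc = prf_acc([1, 1, 0, 0, 1], [1, 0, 0, 1, 1])
# → p, r, f1 ≈ 0.667 each; acc = 0.6
```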
5.3 Baselines

For comparison, we adopt and re-implement three kinds of baselines as follows:
Lexical Features + SVM
Neural Text Classification Models
We implement and fine-tune a series of neural text classifiers, including the attention-based method of Luo et al. (2017) and other methods we deem important. CNN Kim (2014) and GRU Cho et al. (2014); Yang et al. (2016) take as input the concatenation of the fact description and plea. Similarly, CNN/GRU+law refers to using the concatenation of the fact description, plea, and law articles as input.
Reading Comprehension Models

We implement and train some off-the-shelf RC models, including r-net Wang et al. (2017) and AoA Cui et al. (2017), which are leading models on the SQuAD leaderboard. In our initial experiments, these models take the fact description as the passage and the plea as the query. Further, law articles are added to the fact description as part of the reading materials, which is a simple way to take them into consideration as well.
5.4 Results and Analysis
From Table 1, we have the following observations:
(1) AutoJudge consistently and significantly outperforms all the baselines, including RC models and other neural text classification models, which shows the effectiveness and robustness of our model.
(2) RC models achieve better performance than most text classification models (excluding GRU+Attention), which indicates that the reading mechanism is a better way to integrate information from heterogeneous yet complementary inputs. On the contrary, simply adding law articles as part of the reading materials makes no difference in performance. Note that GRU+Attention employs an attention mechanism similar to that of RC models and takes the additional law articles into consideration, and thus achieves comparable performance with RC models.
(3) Compared with conventional RC models, AutoJudge achieves significant improvement by taking the additional law articles into consideration. This reflects the difference between LRC and conventional RC models: we re-formalize RC in the legal area to incorporate law articles via the reading mechanism, which enhances judgment prediction. Moreover, CNN/GRU+law decreases performance by simply concatenating the original text with law articles, while GRU+Attention and AutoJudge increase performance by integrating law articles with an attention mechanism. This shows the importance and rationality of using attention mechanisms to capture the interaction between multiple inputs.
The experiments support our hypothesis, proposed in the Introduction, that in civil cases it is important to model the interactions among case materials. The reading mechanism performs this matching well.
| Ablation setting | Acc. | F1 |
| w/o reading mechanism | 78.9 () | 78.2 () |
| w/o law articles | 79.6 () | 78.4 () |
| w/o law article selection | 80.6 () | 80.5 () |
| with GT law articles | 85.1 () | 84.1 () |
5.5 Ablation Test
AutoJudge is characterized by the pair-wise attentive reader, the incorporation of law articles, and a CNN output layer, as well as pre-processing based on legal prior knowledge. We design ablation tests to evaluate the effectiveness of each of these modules. With the attention mechanism removed, AutoJudge degrades into a GRU with a CNN stacked on top. With law articles removed, the CNN output layer only takes the plea-side sequence as input. Besides, our model is tested without name replacement and without the unsupervised selection of law articles (i.e., passing the whole law text) respectively. As mentioned above, our system uses law articles extracted with an unsupervised method, so we also experiment with ground-truth law articles.
Results are shown in Table 2. We can infer that:
(1) The performance drops significantly after removing the attention layer or excluding the law articles, which is consistent with the comparison between AutoJudge and baselines. The result verifies that both the reading mechanism and incorporation of law articles are important and effective.
(2) After replacing CNN with an LSTM layer, performance drops as much as in accuracy and in F1 score. The reason may be the redundancy of RNNs. AutoJudge has employed several GRU layers to encode text sequences. Another RNN layer may be useless to capture sequential dependencies, while CNN can catch the local structure in convolution windows.
(3) Motivated by existing rule-based works, we conduct data pre-processing on cases, including name replacement and law article filtration. If we remove these pre-processing operations, the performance drops considerably. This demonstrates that applying prior knowledge from the legal field benefits the understanding of legal cases.
Performance Over Law Articles
It is intuitive that the quality of the retrieved law articles affects the final performance. As shown in Table 2, feeding the whole law text without filtration results in worse performance. However, when we train and evaluate our model with ground-truth articles, the performance is boosted in both F1 and Acc. The improvement is quite limited compared with that in previous work Luo et al. (2017), for the following reasons: (1) As mentioned above, most case documents only contain coarse-grained articles, and only a small number contain fine-grained ones, which carry limited information in themselves. (2) Unlike in criminal cases, where the application of an article indicates the corresponding crime, law articles in civil cases serve as references and can be applied in both supported and rejected cases. As law articles cut both ways for the judgment result, this is one of the characteristics that distinguish civil cases from criminal ones. We also note that this performance is unattainable in real-world settings for automatic prediction, where ground-truth articles are not available.
Reading Weighs More Than Correct Law Articles
In civil cases, understanding the case materials and how they interact is a critical factor; the inclusion of law articles alone is not enough. As shown in Table 2, compared with feeding the model an unselected set of law articles, removing the reading mechanism results in a greater performance drop777In both Acc and F1.. Therefore, the ability to read, understand, and select relevant information from the complex multi-sourced case materials is necessary. It is even more important in the real world, since we do not have access to ground-truth law articles when making predictions.
5.6 Case Study
Visualization of Positive Samples
We visualize the heat maps of attention results888Examples given here are all drawn from the test set whose predictions match the real judgment.. As shown in Fig. 3, deeper background color represents larger attention score.
The attention score is calculated with Eq. (5). We take the average of the resulting attention matrix over the time dimension to obtain attention values for each word.
The visualization demonstrates that the attention mechanism can capture relevant patterns and semantics in accordance with different pleas in different cases.
As for the failed samples, the most common cause is the anonymization issue, also shown in Fig. 3. As mentioned above, we conduct name replacement; however, some critical elements are also anonymized by the government for privacy reasons. These elements are sometimes important for judgment prediction. For example, determining the key factor of long-time separation relies on explicit dates, which are anonymized.
6 Conclusion

In this paper, we explore the task of predicting the judgments of civil cases. Compared with the conventional text classification framework, we propose the Legal Reading Comprehension framework to handle multiple, complex textual inputs. Moreover, we present a novel neural model, AutoJudge, to incorporate law articles into judgment prediction. In experiments, we compare our model on divorce proceedings with state-of-the-art baselines from various frameworks. Experimental results show that our model achieves considerable improvement over all the baselines. Besides, visualization results also demonstrate the effectiveness and interpretability of our proposed model.
In the future, we can explore the following directions: (1) Limited by the datasets, we can only verify our proposed model on divorce proceedings. A more general and larger dataset will benefit the research on judgment prediction. (2) Judicial decisions in some civil cases are not always binary, but more diverse and flexible ones, e.g. compensation amount. Thus, it is critical for judgment prediction to manage various judgment forms.
References

- Aletras et al. (2016) Nikolaos Aletras, Dimitrios Tsarapatsanis, Daniel Preotiuc-Pietro, and Vasileios Lampos. 2016. Predicting judicial decisions of the European Court of Human Rights: A natural language processing perspective. PeerJ Computer Science, 2.
- Baharudin et al. (2010) Baharum Baharudin, Lam Hong Lee, and Khairullah Khan. 2010. A review of machine learning algorithms for text-documents classification. JAIT, 1(1):4–20.
- Bahdanau et al. (2015) Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. In Proceedings of ICLR.
- Bian and Shun-yuan (2005) Guo-wei Bian and Teng Shun-yuan. 2005. Integrating query translation and text classification in a cross-language patent access system. In Proceedings of NTCIR-7 Workshop Meeting, pages 252–261.
- Cho et al. (2014) Kyunghyun Cho, Bart Van Merrienboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014. Learning phrase representations using rnn encoder-decoder for statistical machine translation. Computer Science.
- Chung et al. (2014) Junyoung Chung, Caglar Gulcehre, KyungHyun Cho, and Yoshua Bengio. 2014. Empirical evaluation of gated recurrent neural networks on sequence modeling. In Proceedings of NIPS.
- Cui et al. (2017) Yiming Cui, Zhipeng Chen, Si Wei, Shijin Wang, Ting Liu, and Guoping Hu. 2017. Attention-over-attention neural networks for reading comprehension. In Proceedings of ACL.
- Dhingra et al. (2017) Bhuwan Dhingra, Hanxiao Liu, Zhilin Yang, William W Cohen, and Ruslan Salakhutdinov. 2017. Gated-attention readers for text comprehension. In Proceedings of ACL.
- He et al. (2017) Wei He, Kai Liu, Yajuan Lyu, Shiqi Zhao, Xinyan Xiao, Yuan Liu, Yizhong Wang, Hua Wu, Qiaoqiao She, Xuan Liu, Tian Wu, and Haifeng Wang. 2017. Dureader: a chinese machine reading comprehension dataset from real-world applications. arXiv preprint arXiv:1711.05073.
- Hu et al. (2014) Baotian Hu, Zhengdong Lu, Hang Li, and Qingcai Chen. 2014. Convolutional neural network architectures for matching natural language sentences. In Proceedings of NIPS, pages 2042–2050.
- Joshi et al. (2017) Mandar Joshi, Eunsol Choi, Daniel S Weld, and Luke Zettlemoyer. 2017. Triviaqa: A large scale distantly supervised challenge dataset for reading comprehension. In Proceedings of ACL.
- Katz et al. (2017) Daniel Martin Katz, Michael J Bommarito II, and Josh Blackman. 2017. A general approach for predicting the behavior of the supreme court of the united states. PloS one, 12(4).
- Keown (1980) R Keown. 1980. Mathematical models for legal prediction. Computer/LJ, 2:829.
- Kim (2014) Yoon Kim. 2014. Convolutional neural networks for sentence classification. In Proceedings of EMNLP.
- Kingma and Ba (2015) Diederik Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In Proceedings of ICLR.
- Kort (1957) Fred Kort. 1957. Predicting supreme court decisions mathematically: A quantitative analysis of the ”right to counsel” cases. American Political Science Review, 51(1):1–12.
- Lauderdale and Clark (2012) Benjamin E Lauderdale and Tom S Clark. 2012. The supreme court’s many median justices. American Political Science Review, 106(4):847–866.
- Lin et al. (2012) Wan-Chen Lin, Tsung-Ting Kuo, Tung-Jia Chang, Chueh-An Yen, Chao-Ju Chen, and Shou-de Lin. 2012. Exploiting machine learning models for chinese legal documents labeling, case classification, and sentencing prediction. In Processdings of ROCLING, page 140.
- Liu et al. (2004) Chao-Lin Liu, Cheng-Tsung Chang, and Jim-How Ho. 2004. Case instance generation and refinement for case-based criminal summary judgments in chinese. JISE.
- Liu et al. (2003) Chao Lin Liu, Jim How Ho, and Jim How Ho. 2003. Classification and clustering for case-based criminal summary judgments. In Proceedings of the International Conference on Artificial Intelligence and Law, pages 252–261.
- Liu and Hsieh (2006) Chao-Lin Liu and Chwen-Dar Hsieh. 2006. Exploring phrase-based classification of judicial documents for criminal charges in chinese. In Proceedings of ISMIS, pages 681–690.
- Liu and Chen (2017) Yi Hung Liu and Yen Liang Chen. 2017. A two-phase sentiment analysis approach for judgement prediction. Journal of Information Science.
- Luo et al. (2017) Bingfeng Luo, Yansong Feng, Jianbo Xu, Xiang Zhang, and Dongyan Zhao. 2017. Learning to predict charges for criminal cases with legal basis. In Proceedings of EMNLP.
- Mikolov et al. (2013a) Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013a. Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781.
- Mikolov et al. (2013b) Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. 2013b. Distributed representations of words and phrases and their compositionality. In Proceedings of NIPS, pages 3111–3119.
- Mitra et al. (2017) Bhaskar Mitra, Fernando Diaz, and Nick Craswell. 2017. Learning to match using local and distributed representations of text for web search. In Proceedings of WWW, pages 1291–1299.
- Nagel (1963) Stuart S Nagel. 1963. Applying correlation analysis to case prediction. Texas Law Review, 42:1006.
- Nguyen et al. (2016) Tri Nguyen, Mir Rosenberg, Xia Song, Jianfeng Gao, Saurabh Tiwary, Rangan Majumder, and Li Deng. 2016. Ms marco: A human generated machine reading comprehension dataset. arXiv preprint arXiv:1611.09268.
- Rajpurkar et al. (2016) Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. Squad: 100,000+ questions for machine comprehension of text. In Proceedings of EMNLP.
- Rocktaschel et al. (2016) Tim Rocktaschel, Edward Grefenstette, Karl Moritz Hermann, Tomas Kocisky, and Phil Blunsom. 2016. Reasoning about entailment with neural attention. In Proceedings of ICLR.
- Segal (1984) Jeffrey A Segal. 1984. Predicting supreme court cases probabilistically: The search and seizure cases, 1962-1981. American Political Science Review, 78(4):891–900.
- Srivastava et al. (2014) Nitish Srivastava, Geoffrey E Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. 2014. Dropout: a simple way to prevent neural networks from overfitting. JMLR, 15(1):1929–1958.
- Sulea et al. (2017) Octavia Maria Sulea, Marcos Zampieri, Mihaela Vela, and Josef Van Genabith. 2017. Exploring the use of text classification in the legal domain. In Proceedings of ASAIL workshop.
- Tang et al. (2015) Duyu Tang, Bing Qin, and Ting Liu. 2015. Document modeling with gated recurrent neural network for sentiment classification. In Proceedings of EMNLP, pages 1422–1432.
- Ulmer (1963) S Sidney Ulmer. 1963. Quantitative analysis of judicial processes: Some practical and theoretical applications. Law & Contemp. Probs., 28:164.
- Wang and Jiang (2016) Shuohang Wang and Jing Jiang. 2016. Learning natural language inference with lstm. In Proceedings of NAACL.
- Wang et al. (2017) Wenhui Wang, Nan Yang, Furu Wei, Baobao Chang, and Ming Zhou. 2017. Gated self-matching networks for reading comprehension and question answering. In Proceedings of ACL, volume 1, pages 189–198.
- Yang et al. (2016) Zichao Yang, Diyi Yang, Chris Dyer, Xiaodong He, Alexander J Smola, and Eduard H Hovy. 2016. Hierarchical attention networks for document classification. In Proceedings of NAACL, pages 1480–1489.
- Zhuge (2016) Pingping Zhuge. 2016. Chinese Law Yearbook. The Chinese Law Yearbook Press.