Towards Explainable NLP: A Generative Explanation Framework for Text Classification

Building explainable systems is a critical problem in the field of Natural Language Processing (NLP), since most machine learning models provide no explanations for their predictions. Existing approaches for explainable machine learning systems tend to focus on interpreting the outputs or the connections between inputs and outputs. However, the fine-grained information is often ignored, and such systems do not explicitly generate human-readable explanations. To alleviate this problem, we propose a novel generative explanation framework that learns to make classification decisions and generate fine-grained explanations at the same time. More specifically, we introduce an explainable factor and a minimum risk training approach that learn to generate more reasonable explanations. We construct two new datasets that contain summaries, rating scores, and fine-grained reasons. We conduct experiments on both datasets, comparing with several strong neural network baseline systems. Experimental results show that our method surpasses all baselines on both datasets and is able to generate concise explanations at the same time.


1 Introduction

Deep learning methods have produced state-of-the-art results in many natural language processing (NLP) tasks Vaswani et al. (2017); Yin et al. (2018); Peters et al. (2018); Wang et al. (2018); Hancock et al. (2018); Ma et al. (2018). Although these neural network models achieve impressive performance, a vital issue arises: whether people should trust the predictions of such neural networks, since they are essentially black boxes to human beings Samek et al. (2017). For instance, if an essay scoring system only reports the score of a given essay without providing explicit reasons, users can hardly be convinced of the judgment. Therefore, the ability to explain its rationale is an important aspect of an NLP system, a need which requires traditional NLP models to provide human-readable explanations.

In recent years, many works have addressed text classification problems, but few of them have explored the explainability of their systems. Ribeiro et al. (2016) identify an interpretable model over an interpretable representation that is locally faithful to the classifier. Samek et al. (2017) directly build connections between the inputs and the outputs, using heatmaps to visualize how much each hidden element contributes to the predicted results. Although these systems are promising in some ways, they typically do not consider the fine-grained information (e.g., textual explanations for the labels) that may contain sufficient hints for interpreting the behavior of models.

In contrast, when a human being wants to rate a product, s/he may first write down some reviews to express his/her opinions about the product. After that, s/he may score or summarize some attributes of the product, such as price, packaging, and quality. Finally, the overall rating for the product is given based on this fine-grained information. Unfortunately, existing text classification models typically use the original review texts to predict the overall results directly Tang et al. (2015b); Asghar (2016); Xu et al. (2017), and none of them provides fine-grained explanations about why the decisions are made. From a human perspective, we can hardly understand or trust models that provide almost no reasons for their decisions. Therefore, it is crucial to build an explainable text classification model that is capable of explicitly generating fine-grained information for its predictions.

To achieve these goals, we propose a novel generative explanation framework for text classification, where our model is capable of not only providing classification predictions but also generating fine-grained information as explanations for its decisions. The novel idea behind our hybrid generative-discriminative method is to explicitly capture the fine-grained information inferred from raw texts, and to utilize this information to help interpret the predicted classification results and improve the overall performance. Specifically, we introduce the notion of an explainable factor and a minimum risk training method that learn to generate reasonable explanations for the overall predicted results. Meanwhile, such a strategy builds stronger connections between the explanations and predictions, which in turn leads to better performance. To the best of our knowledge, we are the first to explicitly explain the predicted results by utilizing abstractively generated fine-grained information.

In this work, we regard the summaries (texts) and rating scores (numbers) as the fine-grained information. Two datasets that contain these kinds of fine-grained information are collected to evaluate our method. More specifically, we construct a dataset crawled from a website called PCMag (https://www.pcmag.com/). Each item in this dataset consists of three parts: a long review text for one product, three short text comments (respectively describing the product from positive, negative, and neutral perspectives), and an overall rating score. We regard the three comments as fine-grained information for the long review text. Besides, we also conduct experiments on the Skytrax User Reviews Dataset (https://github.com/quankiquanki/skytrax-reviews-dataset), where each case consists of three parts: a review text for a flight, five sub-field rating scores (seat comfortability, cabin staff, food, in-flight environment, ticket value), and an overall rating score. For this dataset, we regard the five sub-field rating scores as fine-grained information for the flight review text.

Empirically, we evaluate our model-agnostic method with several neural network baseline methods on both datasets, including the CNN-based model Kim (2014), the LSTM-based model Liu et al. (2016), and the CVAE-based sequence-to-sequence model Zhou and Wang (2018). Our experimental results suggest that our approach substantially improves the performance over the baseline systems, illustrating the advantage of using the fine-grained information. Meanwhile, by providing the fine-grained information as explanations for its classification results, our model becomes a more understandable and trustworthy system. Our contributions are three-fold:

  • We are the first to leverage the generated fine-grained information for building a generative explanation framework for text classification, propose an explanation factor, and introduce minimum risk training for this hybrid generative-discriminative framework;

  • We evaluate our model-agnostic explanation framework with different neural network architectures, and show considerable improvements over baseline systems on two datasets;

  • We provide two new publicly available explainable NLP datasets that contain fine-grained information as explanations for text classification.

2 Generative Explanation Framework

In this part, we present our Generative Explanation Framework (GEF), including the task definition, notations, and detailed model descriptions.

2.1 Task Definition and Notations

The research problem investigated in this paper is defined as: How can we generate fine-grained explanations for the decisions our classification model makes? To answer this question, we first investigate what makes a good fine-grained explanation. For example, in sentiment analysis, suppose a product has three attributes: quality, practicality, and price, each of which can be described as “HIGH” or “LOW”, and we want to know whether the product is “GOOD” or “BAD”. If our model categorizes the product as “GOOD” and tells us that its quality is “HIGH”, its practicality is “HIGH”, and its price is “LOW”, we can regard these attribute values as good explanations that illustrate why the model judges the product to be “GOOD”. On the contrary, if our model produces the same attribute values but tells us that the product is “BAD”, we consider the model to have given bad explanations. Therefore, for a given classification prediction made by the model, we would like to explore more of the fine-grained information that can explain why it comes to such a decision for the current example. Meanwhile, we also want to figure out whether the fine-grained information inferred from the input texts can help improve the overall classification performance.

We denote the input text sequence as S, and we want to predict which category y the sequence S belongs to. At the same time, the model should also produce generative fine-grained explanations e_c for y (note that e_c can take various forms, including texts, numerical scores, etc.). Figure 1 illustrates the architecture of our method.

Figure 1: The architecture of the Generative Explanation Framework. The encoder E encodes the input text S into a representation vector v_e. The predictor P gives the probability distribution p_pred over the categories, and we extract the ground-truth probability p̃_pred from p_pred. The generator G takes v_e as input and generates explanations e_c. The pretrained classifier C then respectively takes e_c and the golden explanations e_g as input and outputs the ground-truth probabilities p̃_classified and p̃_gold. The explanation factor EF(S) is calculated from p̃_pred, p̃_classified, and p̃_gold.

2.2 Base Text Encoder and Explanation Generator

A common way to perform text classification tasks is to use an Encoder-Predictor architecture Zhang et al. (2015); Lai et al. (2015). As shown in Figure 1, a text encoder E takes the input text sequence S and encodes it into a representation vector v_e. A category predictor P then takes v_e as input and outputs the predicted category y together with its corresponding probability distribution p_pred.

As mentioned above, a desirable model should not only predict the overall result y, but also provide generative explanations that illustrate why it makes such predictions. A common way to generate explanations is to feed v_e to an explanation generator G that generates fine-grained explanations e_c (here, G can be a text decoder if e_c is in the form of texts, or a multi-layer neural network if e_c is in the form of numerical scores). This procedure is formulated as:

v_e = E(S)    (1)
p_pred = P(v_e)    (2)
y = argmax_i p_pred[i]    (3)
e_c = G(v_e)    (4)

where the encoder E maps the input sequence S into the representation vector v_e; the predictor P takes v_e as input and outputs the probability distribution p_pred over the classification categories by using the softmax function.

During the training process, the overall loss L is composed of two parts, i.e., the classification loss L_p and the explanation generation loss L_e:

L(e_g, S; θ) = L_p + L_e    (5)

where θ represents all the parameters in the network.
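
To make the pipeline above concrete, the following is a minimal PyTorch sketch of the Encoder-Predictor-Generator setup and the joint loss in Eqs. (1)-(5). It is an illustration only: the layer choices, dimensions, and the numerical-explanation head are assumptions rather than the authors' exact implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class BaseGEFModel(nn.Module):
    """Minimal sketch of the Encoder-Predictor-Generator setup (Eqs. 1-5).
    Layer choices and sizes are illustrative assumptions, not the paper's code."""

    def __init__(self, vocab_size, emb_dim=100, hidden_dim=128, num_classes=9):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, emb_dim)
        # Encoder E: maps the token sequence S to a representation vector v_e
        self.encoder = nn.LSTM(emb_dim, hidden_dim, batch_first=True)
        # Predictor P: outputs the probability distribution p_pred over categories
        self.predictor = nn.Linear(hidden_dim, num_classes)
        # Generator G: here a small MLP producing 5 sub-field score distributions
        # (a text decoder would take its place when e_c is textual)
        self.generator = nn.Sequential(
            nn.Linear(hidden_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, 5 * 6),
        )

    def forward(self, token_ids):
        emb = self.embedding(token_ids)                  # (B, T, emb_dim)
        _, (h_n, _) = self.encoder(emb)                  # h_n: (1, B, hidden_dim)
        v_e = h_n.squeeze(0)                             # Eq. (1)
        p_pred = F.softmax(self.predictor(v_e), dim=-1)  # Eq. (2)
        e_c_logits = self.generator(v_e).view(-1, 5, 6)  # Eq. (4), before discretization
        return v_e, p_pred, e_c_logits


def base_loss(p_pred, y_true, e_c_logits, e_gold):
    """Eq. (5): classification loss L_p plus explanation generation loss L_e."""
    l_p = F.nll_loss(torch.log(p_pred + 1e-12), y_true)
    l_e = F.cross_entropy(e_c_logits.reshape(-1, 6), e_gold.reshape(-1))
    return l_p + l_e
```

In this sketch, the overall category y of Eq. (3) is simply p_pred.argmax(dim=-1).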

2.3 Explanation Factor

The simple way of generating explanations demonstrated in the previous subsection is quite straightforward. However, it has a significant shortcoming: it fails to build strong connections between the generated explanations and the predicted overall results. In other words, the generated explanations seem to be independent of the predicted overall results. Therefore, in order to generate more reasonable explanations for the results, we propose an explanation factor that helps build stronger connections between the explanations and the predictions.

As we have demonstrated in the introduction, fine-grained information will reflect the overall results more intuitively than the original input text sequence. For example, given a review sentence, “The product is good to use”, we may not be sure if the product should be rated as 5 stars or 4 stars. However, if we see that the attributes of the given product are all rated as 5 stars, we may be more convinced that the overall rating for the product should be 5 stars.

Therefore, we first pretrain a classifier C, which learns to predict the category y by directly taking the explanations as input. More specifically, the goal of C is to imitate human beings' behavior, which means that C should predict the overall result more accurately than the base model that takes the original text as input. We further verify this assumption in the experiments section.

We then use the pretrained classifier C to provide strong guidance for the text encoder E, making it capable of generating a more informative representation vector v_e. During the training process, we first obtain the generated explanations e_c by utilizing the explanation generator G. We then feed these generated explanations to the classifier C to get the probability distribution p_classified of the predicted results. Meanwhile, we can also get the golden probability distribution p_gold by feeding the golden explanations e_g to C. The process can be formulated as:

p_classified = C(e_c)    (6)
p_gold = C(e_g)    (7)

In order to measure the distance among the predicted results, the generated explanations, and the golden explanations, we extract the ground-truth probabilities p̃_pred, p̃_classified, and p̃_gold (i.e., the probabilities assigned to the ground-truth category) from p_pred, p_classified, and p_gold, respectively. They will be used to measure the discrepancy between the predicted result and the ground-truth result in minimum risk training.

We define our explanation factor EF(S) as:

EF(S) = |p̃_classified − p̃_gold| + |p̃_classified − p̃_pred|    (8)

There are two components in the formula of EF(S):

  • The first part, |p̃_classified − p̃_gold|, represents the distance between the generated explanations e_c and the golden explanations e_g. Since we pretrain C using golden explanations, we hold the view that if similar explanations are fed to the classifier, similar predictions should be produced. For instance, if we feed the golden explanation “Great performance” to the classifier and it tells us that this explanation means “a good product”, then when we feed another explanation “Excellent performance” to C, it should also tell us that the explanation means “a good product”. In this way, we hope that e_c can express the same or a similar meaning as e_g.

  • The second part, |p̃_classified − p̃_pred|, represents the relevance between the generated explanations e_c and the original texts S. The generated explanations should be able to interpret the overall result. For example, if the base model predicts S to be “a good product”, but the classifier C tends to regard e_c as explanations for “a bad product”, then e_c cannot properly explain why the base model gives such a prediction. A small computational sketch of EF(S) is given below.
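
The following is a minimal sketch, assuming the reconstructed notation above, of how EF(S) in Eq. (8) could be computed from batched probability distributions; y_true denotes the ground-truth category indices.

```python
import torch

def explanation_factor(p_pred, p_classified, p_gold, y_true):
    """EF(S) = |p~_classified - p~_gold| + |p~_classified - p~_pred|  (Eq. 8),
    where p~ is the probability assigned to the ground-truth category."""
    idx = y_true.unsqueeze(1)                               # (B, 1)
    pt_pred = p_pred.gather(1, idx).squeeze(1)              # p~_pred
    pt_classified = p_classified.gather(1, idx).squeeze(1)  # p~_classified
    pt_gold = p_gold.gather(1, idx).squeeze(1)              # p~_gold
    return (pt_classified - pt_gold).abs() + (pt_classified - pt_pred).abs()
```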

2.4 Minimum Risk Training

Minimum risk training (MRT) aims to minimize the expected loss, i.e., the risk over the training data Ayana et al. (2016). Given a sequence S and golden explanations e_g, we define Y(S; θ) as the set of predicted overall results with parameters θ. We define Δ(y, ỹ) as the semantic distance between a predicted overall result y and the ground-truth ỹ. Then, the objective function is defined as:

L_MRT(e_g, S; θ) = Σ_{(e_g, S) ∈ D} E_{Y(S; θ)} [ Δ(y, ỹ) ]    (9)

where D represents the whole training dataset.

In our experiment, E_{Y(S; θ)} denotes the expectation over the set Y(S; θ), and the explanation factor EF(S) represents the joint distance among the input texts, the generated explanations, and the golden explanations. Therefore, the objective function of MRT can be further formalized as:

L_MRT(e_g, S; θ) = Σ_{(e_g, S) ∈ D} Σ_{y ∈ Y(S; θ)} p(y | S; θ) · EF(S)    (10)

MRT exploits EF(S) to measure the loss, which learns to optimize GEF with respect to the specific evaluation metrics of the task. In order to avoid the total degradation of the loss, we define our final loss function as:

L_final = L + L_MRT    (11)

The whole training process of GEF is described in Algorithm 1.

0:  Text sequence S, golden explanations e_g, pretrained classifier C, overall ground-truth ỹ
1:  Calculate the representation vector v_e = E(S) of S
2:  Generate explanations e_c = G(v_e) using G
3:  p_pred = P(v_e)
4:  p_classified = C(e_c)
5:  p_gold = C(e_g)
6:  EF(S) = |p̃_classified − p̃_gold| + |p̃_classified − p̃_pred|
7:  Calculate the explanation generation loss L_e
8:  Calculate the prediction loss L_p
9:  L = L_p + L_e
10:  L_MRT = Σ_{y ∈ Y(S; θ)} p(y | S; θ) · EF(S)
11:  L_final = L + L_MRT
12:  Update θ using the gradient of L_final
Algorithm 1 Generative Explanation Network
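
As a rough illustration of Algorithm 1, the sketch below performs one GEF update, reusing the BaseGEFModel, base_loss, and explanation_factor helpers from the sketches above; classifier_C stands for a pretrained classifier that accepts the explanation format produced by the generator. The discretization of e_c via argmax, the frozen classifier, and the simple combination L + L_MRT (cf. Eq. 11) are simplifications of this sketch, not claims about the authors' implementation.

```python
import torch

def gef_training_step(model, classifier_C, optimizer, batch):
    """One illustrative GEF update following the steps of Algorithm 1."""
    S, e_gold, y_true = batch                        # tokens, golden explanations, overall label
    v_e, p_pred, e_c_logits = model(S)               # steps 1-3
    e_c = e_c_logits.argmax(dim=-1)                  # discretize generated explanations
    with torch.no_grad():                            # the pretrained classifier C stays frozen
        p_classified = classifier_C(e_c)             # step 4
        p_gold = classifier_C(e_gold)                # step 5
    ef = explanation_factor(p_pred, p_classified, p_gold, y_true)  # step 6, Eq. (8)
    loss_base = base_loss(p_pred, y_true, e_c_logits, e_gold)      # steps 7-9, Eq. (5)
    # Steps 10-11: weight EF(S) by the probability of the ground-truth category
    # and add the result to the base loss (assumed combination, cf. Eqs. (10)-(11)).
    pt_pred = p_pred.gather(1, y_true.unsqueeze(1)).squeeze(1)
    loss_mrt = (pt_pred * ef).mean()
    loss = loss_base + loss_mrt
    optimizer.zero_grad()
    loss.backward()                                  # step 12: update θ
    optimizer.step()
    return loss.item()
```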

2.5 Example 1: Text Explanations

Generally, the fine-grained explanations are in the form of texts in real-world datasets. In order to test the performance of GEF on generating text explanations, we apply GEF to a Conditional Variational Autoencoder (CVAE) Sohn et al. (2015). CVAE has been found capable of generating emotional texts and capturing greater diversity than traditional SEQ2SEQ models. As mentioned in the introduction, text explanations usually express emotional feelings to some extent. The varied ways of expressing the same meaning of an explanation make CVAE a suitable base model for generating emotional and varied explanations.

We give an example of the structure of CVAE+GEF in Figure 2. For space considerations, we leave out the detailed structure of CVAE. In this architecture, the golden explanations e_g and the generated explanations e_c are both composed of three text comments, i.e., positive, negative, and neutral comments. We regard these three comments as fine-grained explanations for the final overall rating. The classifier C is a skip-connected model of bidirectional GRU-RNN layers Felbo et al. (2017). It takes the positive, negative, and neutral comments as inputs and outputs the probability distribution over the predicted categories.

Figure 2: Structure of CVAE+GEF. There are 4 categories in total for the classification, and the ground-truth category is 2 in this example.
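
A minimal sketch of such a skip-connected bidirectional GRU classifier is shown below, assuming simple mean pooling over time and illustrative layer sizes; the original model of Felbo et al. (2017) uses attention pooling, so this is only an approximation of the classifier C described above.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TextExplanationClassifierC(nn.Module):
    """Skip-connected BiGRU classifier over the concatenated positive/negative/
    neutral comments (an approximation of the classifier C; sizes are assumptions)."""

    def __init__(self, vocab_size, emb_dim=100, hidden_dim=128, num_classes=9):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, emb_dim)
        self.gru1 = nn.GRU(emb_dim, hidden_dim, bidirectional=True, batch_first=True)
        self.gru2 = nn.GRU(2 * hidden_dim, hidden_dim, bidirectional=True, batch_first=True)
        # Skip connections: the output layer sees the embeddings and both GRU layers
        self.out = nn.Linear(emb_dim + 4 * hidden_dim, num_classes)

    def forward(self, comment_ids):
        emb = self.embedding(comment_ids)         # (B, T, emb_dim)
        h1, _ = self.gru1(emb)                    # (B, T, 2 * hidden_dim)
        h2, _ = self.gru2(h1)                     # (B, T, 2 * hidden_dim)
        feats = torch.cat([emb, h1, h2], dim=-1)  # skip-connected features
        pooled = feats.mean(dim=1)                # mean pooling over time (simplification)
        return F.softmax(self.out(pooled), dim=-1)
```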

2.6 Example 2: Numerical Explanations

Another frequently used form of fine-grained explanation for the overall results is numerical scores. For example, when a user wants to rate a product, s/he may first rate some attributes of the product, such as packaging and price. After rating all the attributes, s/he will give an overall rating for the product. So we can say that the ratings for the attributes can somewhat explain why the user gives the overall rating. LSTM and CNN have been shown to achieve great performance in text classification tasks Tang et al. (2015a), so we use LSTM and CNN models as the encoder E, respectively. The generation of fine-grained numerical explanations is also treated as a classification problem in this example. We report the detailed structure of the input data and the results in the following sections.
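
For the numerical case, the pretrained classifier C only needs to map five sub-field ratings to an overall rating distribution. The sketch below is one plausible, hedged realization with a small MLP over one-hot encoded sub-field scores; the field order, the sizes, and the one-hot encoding are assumptions of this sketch.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class NumericalExplanationClassifierC(nn.Module):
    """Maps the five sub-field ratings (seat comfortability, cabin staff, food,
    in-flight environment, ticket value) to a distribution over the 10 overall
    rating categories. Architecture and sizes are illustrative assumptions."""

    def __init__(self, num_subfields=5, num_subfield_classes=6, num_overall=10, hidden=64):
        super().__init__()
        self.num_subfield_classes = num_subfield_classes
        self.net = nn.Sequential(
            nn.Linear(num_subfields * num_subfield_classes, hidden),
            nn.ReLU(),
            nn.Linear(hidden, num_overall),
        )

    def forward(self, subfield_ratings):
        # subfield_ratings: (B, 5) long tensor with integer scores in {0, ..., 5}
        one_hot = F.one_hot(subfield_ratings, self.num_subfield_classes).float()
        return F.softmax(self.net(one_hot.flatten(1)), dim=-1)
```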

3 Dataset

We conduct experiments on two datasets, where we use texts and numerical ratings to represent the fine-grained information, respectively. The first one is crawled from a website called PCMag, and the other one is the Skytrax User Reviews Dataset. Note that all the texts in the two datasets are preprocessed by the Stanford Tokenizer (https://nlp.stanford.edu/software/tokenizer.html) Manning et al. (2014).

 

Overall Score 1.0 1.5 2.0 2.5 3.0 3.5 4.0 4.5 5.0
Number 21 60 283 809 2399 3981 4838 1179 78

 

Table 1: The distribution of examples by each overall rating score in PCMag Review Dataset.

 

Overall Score 1 2 3 4 5 6 7 8 9 10
Number 4073 2190 1724 1186 1821 1302 2387 3874 4008 4530

 

Table 2: The distribution of examples by each overall rating score in Skytrax User Reviews Dataset.

3.1 PCMag Review Dataset

This dataset is crawled from the website PCMag, which provides reviews for electronic products such as laptops, smartphones, and cameras. Each item in the dataset consists of three parts: a long review text, three short comments, and an overall rating score for the product. The three short comments summarize the long review from positive, negative, and neutral perspectives, respectively. The overall rating score is a number ranging from 0 to 5, and the possible values are {1.0, 1.5, 2.0, …, 5.0}.

Since long text generation is not our focus, we filter out the items whose review text contains more than 70 sentences or whose comments contain more than 75 tokens. We randomly split the dataset into 10919/1373/1356 pairs for the train/dev/test sets. The distribution of the overall rating scores within this corpus is shown in Table 1.

3.2 Skytrax User Reviews Dataset

We also incorporate an airline review dataset scraped from Skytrax's web portal. Each item in this dataset consists of three parts: a review text, five sub-field scores, and an overall rating score. The five sub-field scores respectively stand for the user's ratings of seat comfortability, cabin staff, food, in-flight environment, and ticket value, and each score is an integer between 0 and 5. The overall score is an integer between 1 and 10.

Similar to the PCMag Review Dataset, we filter out the items where the review text contains more than 300 tokens. Then we randomly split the dataset into 21676/2710/2709 pairs for train/dev/test set. The distribution of the overall rating scores within this corpus is presented in Table 2.
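
A minimal sketch of this filtering-and-splitting step is given below; the field name review_text and the random seed are hypothetical, and the split sizes follow the numbers reported above.

```python
import random

def filter_and_split(items, max_review_tokens=300, dev_size=2710, test_size=2709, seed=42):
    """Drop overly long reviews, then randomly split into train/dev/test
    (illustrative preprocessing; field names and seed are hypothetical)."""
    kept = [it for it in items if len(it["review_text"].split()) <= max_review_tokens]
    random.Random(seed).shuffle(kept)
    test = kept[:test_size]
    dev = kept[test_size:test_size + dev_size]
    train = kept[test_size + dev_size:]
    return train, dev, test
```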

4 Experiment Results and Analyses

In this section, we first describe the experimental settings. Then, we show the general results of GEF, including comparisons with the base models. Finally, we present several cases to take a closer look at the quality of the explanations generated by GEF.

4.1 Experimental Settings

Since the goal of this study is to propose an explanation framework, we use the same experimental settings for the base model and for the base model+GEF in order to test the effectiveness of the proposed GEF.

We use GloVe Pennington et al. (2014) word embeddings for the PCMag dataset and minimize the objective function using Adam Kingma and Ba (2014). The hyperparameter settings for both datasets are listed in Table 3.

 

Dataset Embedding Hidden size Batch size
PCMag GloVe, 100 128 32
Skytrax random, 100 256 64

 

Table 3: Experimental settings for our experiments. Note that for CNN, we additionally set filter number to be and filter sizes to be .

4.2 Experiment Results

4.2.1 Text Explanations on PCMag Review Dataset

We use BLEU Papineni et al. (2002) scores to evaluate the quality of the generated text explanations. Table 4 shows the comparison between explanations generated by CVAE and by CVAE+GEF.

 

BLEU-1 BLEU-2 BLEU-3 BLEU-4

 

Positive CVAE 36.1 13.5 3.7 2.2
CVAE+GEF 40.1 15.6 4.5 2.6
Negative CVAE 33.3 14.1 3.1 2.2
CVAE+GEF 35.9 16.0 4.0 2.9
Neutral CVAE 30.0 8.8 2.0 1.2
CVAE+GEF 33.2 10.2 2.5 1.5

 

Table 4: BLEU scores for generated explanations. The low BLEU-3 and BLEU-4 scores are because the target explanations contain many domain-specific words with low frequency, which makes it hard for the model to generate accurate explanations.

There are considerable improvements in the BLEU scores of explanations generated by CVAE+GEF over those generated by CVAE, which demonstrates that the explanations generated by CVAE+GEF are of higher quality. CVAE+GEF can generate explanations that are closer to the overall results, and can thus better illustrate why our model makes such a decision.
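
For reference, corpus-level BLEU-n scores of the kind reported in Table 4 can be computed with NLTK roughly as follows; the whitespace tokenization, smoothing choice, and single-reference setup are assumptions of this sketch rather than the paper's exact evaluation script.

```python
from nltk.translate.bleu_score import corpus_bleu, SmoothingFunction

def bleu_n(references, hypotheses, n):
    """Corpus-level BLEU-n (in percent) for generated explanations, assuming one
    whitespace-tokenized reference string per example."""
    refs = [[ref.split()] for ref in references]
    hyps = [hyp.split() for hyp in hypotheses]
    weights = tuple(1.0 / n for _ in range(n))  # uniform weights up to n-grams
    score = corpus_bleu(refs, hyps, weights=weights,
                        smoothing_function=SmoothingFunction().method1)
    return 100 * score
```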

Our goal is for the generated fine-grained explanations to provide extra guidance to the classification task, so we also compare the classification performance of CVAE and CVAE+GEF. We use top-1 accuracy and top-3 accuracy as the evaluation metrics. In Table 5, we compare the results of CVAE+GEF with CVAE on both the dev and test sets. As shown in the table, CVAE+GEF achieves better classification results than CVAE, which indicates that the fine-grained information can indeed help enhance the overall classification results.
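
As a small illustration of these metrics, top-k accuracy can be computed from the predicted probability distributions as follows (top-1 accuracy is the special case k=1); this is a generic sketch, not the paper's evaluation code.

```python
import torch

def topk_accuracy(probs, labels, k=3):
    """Fraction of examples whose ground-truth category is among the k most
    probable predicted categories."""
    topk = probs.topk(k, dim=-1).indices                # (B, k)
    correct = (topk == labels.unsqueeze(1)).any(dim=1)  # (B,)
    return correct.float().mean().item()
```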

 

Acc% (Dev) Acc% (Test)

 

CVAE 42.07 42.58
CVAE+GEF 44.04 43.67
Oracle 46.43 46.73

 

Table 5: Classification accuracy on the PCMag Review Dataset. Oracle denotes the accuracy that the classifier C achieves when it is fed the ground-truth text explanations. The Oracle result confirms our assumption that explanations can do better in classification than the original text.

As mentioned above, we assume that using fine-grained explanations for classification yields better results than using the original input texts. We therefore list the performance of the classifier C in Table 5 for comparison. As elaborated in the previous section, the input of C is the text explanations. Experiments show that C achieves better performance than both CVAE and CVAE+GEF, which supports our assumption.

4.2.2 Numerical Explanations in Example 2

In the Skytrax User Reviews Dataset, the overall ratings are integers between 1 and 10, and the five sub-field ratings are integers between 0 and 5. All of them can be treated as classification problems, so we use accuracy to evaluate the performance.

The accuracy of predicting the sub-field ratings can indicate the quality of the generated numerical explanations. In order to show that GEF helps generate better explanations, we report the accuracy of the sub-field rating classification in Table 6. The 5 ratings evaluate seat comfortability, cabin staff, food, in-flight environment, and ticket value, respectively.

 

s% c% f% i% t%

 

LSTM 46.59 52.27 43.74 41.82 45.04
LSTM+GEF 49.13 53.16 46.29 42.34 48.25
CNN 46.22 51.83 44.59 43.34 46.88
CNN+GEF 49.80 52.49 48.03 44.67 48.76

 

Table 6: Accuracy of sub-field numerical explanations on the Skytrax User Reviews Dataset. s, c, f, i, t stand for seat comfortability, cabin staff, food, in-flight environment, and ticket value, respectively.

As we can see from the results in Table 6, the accuracy of all 5 sub-field ratings is enhanced compared with the baselines. Therefore, we can conclude that GEF improves the quality of the generated numerical explanations.

We then compare the overall classification results in Table 7. As the table shows, neither accuracy nor top-3 accuracy differs much between LSTM and CNN as the base model. When the models are combined with GEF, the accuracy improves. Moreover, the classifier C performs better than LSTM (+GEF) and CNN (+GEF), which further confirms our assumption that the classifier C can imitate the conceptual habits of human beings: leveraging the explanations provides guidance for the model when predicting the final results.

 

Acc% Top-3 Acc%

 

LSTM 38.06 76.89
LSTM+GEF 39.20 77.96
CNN 37.06 76.85
CNN+GEF 39.02 79.07
Oracle 45.00 83.13

 

Table 7: Classification accuracy on the Skytrax User Reviews Dataset. Oracle denotes the accuracy that the classifier C achieves when it is fed the ground-truth numerical explanations.

4.3 Case Study

Lastly, we show several cases to illustrate the explainability of our proposed Generative Explanation Framework.

 

Product and Overall Rating Explanations

 

Television, 4.0
Positive Generated: Good contrast. Good black levels. Affordable.
Positive Golden: Gorgeous 4k picture. Good color accuracy. Solid value for a large uhd screen.
Negative Generated: Mediocre black levels. Poor shadow detail. Poor off-angle viewing.
Negative Golden: Mediocre black levels. Poor input lag. Colors run slightly cool. Disappointing online features. Poor off-angle viewing.
Neutral Generated: A solid, gorgeous 4k screen that offers a sharp 4k picture, but it’s missing some features for the competition.
Neutral Golden: A solid 4k television line, but you can get an excellent 1080p screen with more features and better performance for much less.
Flash Drive, 3.0
Positive Generated: Simple, functional design. Handy features.
Positive Golden: Charming design. Reasonably priced. Capless design.
Negative Generated: All-plastic construction. No usb or color protection.
Negative Golden: All-plastic construction. On the slow side. Crowds neighboring ports. flash drives geared toward younger children don’t have some sort of password protection.
Neutral Generated: The tween-friendly UNK colorbytes are clearly designed and offers a comprehensive usb 3.0, but it’s not as good as the competition.
Neutral Golden: The kid-friendly dane-elec sharebytes value pack drives aren’t the quickest or most rugged flash drives out there, but they manage to strike the balance between toy and technology. Careful parents would be better off giving their children flash drives with some sort of password protection.

 

Table 8: Examples from our generated explanations. UNK stands for “unknown word”.

4.3.1 Text Explanation Cases

First, we show examples of the text explanations generated by CVAE+GEF in Table 8. We can see that our model accurately captures some key points in the golden explanations, and it learns to generate grammatical comments that are logically reasonable.

4.3.2 Numerical Explanation Cases

In order to show the effectiveness of our model, we also randomly sample some cases from the generated explanations on the Skytrax User Reviews Dataset. Since the performance is similar whether LSTM or CNN is used as the base model, we only show the results from CNN for space considerations. As demonstrated in Table 9, when the overall rating is high, GEF tends to predict higher ratings for the 5 sub-field attributes. And when GEF predicts low ratings for the 5 sub-field attributes, it also gives a low overall rating.

 

Overall s c f i t

 

9.0 pred 4.0 5.0 5.0 4.0 5.0
gold 4.0 5.0 5.0 4.0 4.0
6.0 pred 3.0 5.0 3.0 3.0 4.0
gold 4.0 5.0 3.0 3.0 4.0
2.0 pred 2.0 1.0 2.0 2.0 2.0
gold 2.0 2.0 1.0 2.0 2.0

 

Table 9: Examples of results on the Skytrax User Reviews Dataset. s, c, f, i, t stand for seat comfortability, cabin staff, food, in-flight environment, and ticket value, respectively.

4.3.3 Error and Analysis

In this part, we focus on the deficiencies of text explanation generation.

First of all, although our proposed GEF can capture the key points in the explanations, as we can see from the examples in Table 8, the generated text explanations tend to be shorter than the golden explanations. This is because longer explanations tend to incur more loss, so GEF tends to leave out less informative words, such as function words and conjunctions. In order to address this problem, we may consider adding a length reward/penalty via reinforcement learning to control the length of the generated texts.

Second, as we can see from Table 8, there are UNKs in the generated explanations. Since we are generating abstractive comments for product reviews, there may exist some domain-specific words. The frequency of these special words is low, so it is relatively hard for GEF to learn to embed and generate these words. An alternative is to use a character-level embedding structure to describe the information contained in these words in more detail.

5 Related Work

Our work is closely aligned with Explainable Artificial Intelligence (XAI) Gunning (2017), which is claimed to be essential if users are to understand, appropriately trust, and effectively manage this incoming generation of artificially intelligent partners. In artificial intelligence, providing explanations of individual decisions is a topic that has attracted attention in recent years. The traditional way of explaining results is to directly build connections between the input and output, and to figure out how much each dimension or element of the input contributes to the final output. Some previous works explain the result in two ways: evaluating the sensitivity of the output when the input changes, and analyzing the result mathematically by redistributing the prediction function backward using local redistribution rules Samek et al. (2017). There are also works connecting the result with the classification model. Ribeiro et al. (2016) try to explain the result from the result itself and provide a global view of the model. Although such methods are promising and mathematically reasonable, they cannot generate explanations in natural forms; they focus more on how to interpret the result.

Some previous works have similar motivations to ours. Lei et al. (2016) rationalize neural predictions by extracting phrases from the input texts as explanations. They conduct their work in an extractive manner and focus on rationalizing the predictions. In contrast, our work aims not only to predict the results, but also to generate abstractive explanations, and our framework can generate explanations both in the form of texts and in the form of numerical scores. Ouyang et al. (2018) apply explanations to recommendation systems, integrating user information to generate explanation texts and further evaluating these explanations by using them to predict the result. A limitation of their work is that they do not build strong interactions between the explanations and the recommendation results; the strongest connection between the two is that they share the same input. Hancock et al. (2018) propose to use a classifier trained with natural language explanations annotated by human beings to perform classification. Our work differs from theirs in that we use natural attributes as the explanations, which are more common in reality.

6 Conclusion

In this paper, we investigate the possibility of using fine-grained information to help explain the decisions made by our classification model. More specifically, we design a Generative Explanation Framework (GEF) that can be adapted to different models, and apply a minimum risk training method to the proposed framework. Experiments demonstrate that, after being combined with GEF, the performance of the base models is enhanced. Meanwhile, the quality of the explanations generated by our model is also improved, demonstrating that GEF is capable of generating more reasonable explanations for its decisions.

References

  • Asghar (2016) Nabiha Asghar. 2016. Yelp dataset challenge: Review rating prediction. arXiv preprint arXiv:1605.05362.
  • Ayana et al. (2016) Shiqi Shen Ayana, Zhiyuan Liu, and Maosong Sun. 2016. Neural headline generation with minimum risk training. arXiv preprint arXiv:1604.01904.
  • Felbo et al. (2017) Bjarke Felbo, Alan Mislove, Anders Søgaard, Iyad Rahwan, and Sune Lehmann. 2017. Using millions of emoji occurrences to learn any-domain representations for detecting sentiment, emotion and sarcasm. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 1615–1625.
  • Gunning (2017) David Gunning. 2017. Explainable artificial intelligence (XAI). Defense Advanced Research Projects Agency (DARPA), nd Web.
  • Hancock et al. (2018) Braden Hancock, Paroma Varma, Stephanie Wang, Martin Bringmann, Percy Liang, and Christopher Ré. 2018. Training classifiers with natural language explanations. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1884–1895. Association for Computational Linguistics.
  • Kim (2014) Yoon Kim. 2014. Convolutional neural networks for sentence classification. EMNLP.
  • Kingma and Ba (2014) Diederik P Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980.
  • Lai et al. (2015) Siwei Lai, Liheng Xu, Kang Liu, and Jun Zhao. 2015. Recurrent convolutional neural networks for text classification. In AAAI, volume 333, pages 2267–2273.
  • Lei et al. (2016) Tao Lei, Regina Barzilay, and Tommi Jaakkola. 2016. Rationalizing neural predictions. EMNLP.
  • Liu et al. (2016) Pengfei Liu, Xipeng Qiu, and Xuanjing Huang. 2016. Recurrent neural network for text classification with multi-task learning. arXiv preprint arXiv:1605.05101.
  • Ma et al. (2018) Xuezhe Ma, Zecong Hu, Jingzhou Liu, Nanyun Peng, Graham Neubig, and Eduard Hovy. 2018. Stack-pointer networks for dependency parsing. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1403–1414. Association for Computational Linguistics.
  • Manning et al. (2014) Christopher Manning, Mihai Surdeanu, John Bauer, Jenny Finkel, Steven Bethard, and David McClosky. 2014. The Stanford CoreNLP natural language processing toolkit. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics: System Demonstrations, pages 55–60.
  • Ouyang et al. (2018) Sixun Ouyang, Aonghus Lawlor, Felipe Costa, and Peter Dolog. 2018. Improving explainable recommendations with synthetic reviews. arXiv preprint arXiv:1807.06978.
  • Papineni et al. (2002) Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In Proceedings of the 40th annual meeting on association for computational linguistics, pages 311–318. Association for Computational Linguistics.
  • Pennington et al. (2014) Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. Glove: Global vectors for word representation. In Proceedings of the 2014 conference on empirical methods in natural language processing (EMNLP), pages 1532–1543.
  • Peters et al. (2018) Matthew E. Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word representations. In Proc. of NAACL.
  • Ribeiro et al. (2016) Marco Tulio Ribeiro, Sameer Singh, and Carlos Guestrin. 2016. "Why should I trust you?": Explaining the predictions of any classifier. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 1135–1144. ACM.
  • Samek et al. (2017) Wojciech Samek, Thomas Wiegand, and Klaus-Robert Müller. 2017. Explainable artificial intelligence: Understanding, visualizing and interpreting deep learning models. arXiv preprint arXiv:1708.08296.
  • Sohn et al. (2015) Kihyuk Sohn, Honglak Lee, and Xinchen Yan. 2015. Learning structured output representation using deep conditional generative models. In Advances in Neural Information Processing Systems, pages 3483–3491.
  • Tang et al. (2015a) Duyu Tang, Bing Qin, and Ting Liu. 2015a. Document modeling with gated recurrent neural network for sentiment classification. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 1422–1432, Lisbon, Portugal. Association for Computational Linguistics.
  • Tang et al. (2015b) Duyu Tang, Bing Qin, Ting Liu, and Yuekui Yang. 2015b. User modeling with neural network for review rating prediction. In IJCAI, pages 1340–1346.
  • Vaswani et al. (2017) Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems, pages 5998–6008.
  • Wang et al. (2018) Wei Wang, Ming Yan, and Chen Wu. 2018. Multi-granularity hierarchical attention fusion networks for reading comprehension and question answering. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), volume 1, pages 1705–1714.
  • Xu et al. (2017) Weidi Xu, Haoze Sun, Chao Deng, and Ying Tan. 2017. Variational autoencoder for semi-supervised text classification. In Thirty-First AAAI Conference on Artificial Intelligence.
  • Yin et al. (2018) Qingyu Yin, Yu Zhang, Weinan Zhang, Ting Liu, and William Yang Wang. 2018. Deep reinforcement learning for Chinese zero pronoun resolution. ACL.
  • Zhang et al. (2015) Xiang Zhang, Junbo Zhao, and Yann LeCun. 2015. Character-level convolutional networks for text classification. In Advances in neural information processing systems, pages 649–657.
  • Zhou and Wang (2018) Xianda Zhou and William Yang Wang. 2018. Mojitalk: Generating emotional responses at scale. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics, Melbourne, Victoria, Australia. ACL.