Cost-effective Interactive Attention Learning with Neural Attention Processes

06/09/2020
by   Jay Heo, et al.

We propose a novel interactive learning framework, which we refer to as Interactive Attention Learning (IAL), in which human supervisors interactively manipulate the allocated attentions to correct the model's behavior by updating the attention-generating network. However, such a model is prone to overfitting due to the scarcity of human annotations, and requires costly retraining. Moreover, it is almost infeasible for human annotators to examine attentions on a large number of instances and features. We tackle these challenges by proposing a sample-efficient attention mechanism and a cost-effective reranking algorithm for instances and features. First, we propose the Neural Attention Process (NAP), an attention generator that can update its behavior by incorporating new attention-level supervision without any retraining. Second, we propose an algorithm that prioritizes instances and features by their negative impacts, such that the model can yield large improvements with minimal human feedback. We validate IAL on various time-series datasets from multiple domains (healthcare, real estate, and computer vision), on which it significantly outperforms baselines that use conventional attention mechanisms or lack cost-effective reranking, with substantially less retraining and human-model interaction cost.


1 Introduction

Deep neural networks are arguably the most prevalent tools for predictive modeling tasks nowadays, thanks to their ability to learn complex functions with multiple layers of non-linear transformations. However, this complexity at the same time makes it difficult to interpret what the model has learned, which has led to the recent surge of interest in interpretable models capable of providing interpretations of the model and its predictions in human-understandable forms (gilpin2018explaining).

Although recent works propose diverse solutions to interpretability (retain; ahmad2018interpretable; lage2018human), including attention mechanisms, activation visualization, and optimization for human-interpretability under human-in-the-loop settings, we face yet another challenge: not all machine-generated interpretations are correct or human-understandable. This is mainly for two reasons: 1) the correctness and reliability of a learning model heavily depend on the quantity and quality of the training data; 2) neural networks tend to learn non-robust features that help with predictions but are not human-perceptible (ilyas2019). Such unreliability of the interpretations is highly problematic for safety-critical applications such as clinical risk prediction (ahmad2018interpretable; sankar2019sisc) or autonomous driving (chi2017deep).

The main limitation of existing models is that they mostly assign only passive roles to human supervisors, who simply take the provided interpretations as is. Yet a more effective way to use interpretations is as channels for human-model communication, such that the model learns by continuously interacting with the human supervisors, who iteratively correct the model-generated interpretations. From a cognitive science perspective, human learning combines internal reflection (back-propagation) with external explanation (human feedback) during social interactions (neuroscience).

      
        (A) Neural Attention Process                 (B) Cost-Effective Re-ranking                    (C) Human Annotation
Figure 1: Our Interactive Attention Learning (IAL) framework. IAL is an interactive learning framework which iteratively learns by interacting with the human supervisor, via the learned attentions. It allows efficient model update using (A) Neural Attention Process which does not require retraining, and cost-effective interaction via (B) Cost-effective reranking of the instances and features.

Based on this motivation, we propose an interactive learning framework in which the model learns by iteratively interacting with human supervisors, who steer the model by adjusting the provided interpretations, as depicted in Figure 1. The specific interpretation mechanism we consider in this work is the attention mechanism (bahdanau2014neural). While active learning asks for supervision at the instance level, our interactive learning model asks for supervision at the attention level. However, this leads to multiple challenges regarding efficiency, which hinder application to practical scenarios:

  • Model retraining cost and overfitting: To reflect human feedback, the model needs to be retrained, which is costly. Moreover, retraining the model with scarce human feedback may result in the model overfitting.

  • Expensive human supervision cost: Obtaining human feedback on datasets with large numbers of training instances and features is extremely costly. Further, obtaining feedback on already correct interpretations is wasteful.

To tackle these practical challenges, we propose a novel interactive learning framework, which we refer to as Interactive Attention Learning (IAL), that allows both efficient model retraining and sample-efficient learning that minimizes human supervision cost. IAL consists of two main components: 1) Neural Attention Processes (NAP) and 2) Cost-Effective instance and feature Reranking (CER). First, IAL minimizes retraining cost via NAP, which allows the model to correct its attention-generating behaviour in a sample-efficient manner by incorporating newly labeled instances without retraining. NAP also prevents the overfitting that is inevitable with scarce human feedback when using a conventional attention mechanism. Second, to address the expensive human labeling cost, CER reranks the instances, features, and timesteps (for time-series data) by their negative impacts. This minimizes human interaction cost, such that the human supervisors only correct the interpretations that are likely to be incorrect and influential to the prediction. The importance of each sample and feature is measured by uncertainty, by the influence function (cook1980characterizations), or by counterfactual estimation.

We validate our IAL framework on a variety of real-world tasks with time-series data, including cerebral infarction risk prediction from electronic health records (EHR), New York City real-estate price forecasting, and squat-posture prediction. The experimental results show that our model outperforms baseline interactive learning schemes by significant margins, at considerably smaller interaction cost in terms of both model retraining and human annotation. Our contributions are as follows:

  • We propose a novel interactive learning framework which iteratively updates the model by interacting with the human supervisor via the generated attentions.

  • To minimize the retraining cost, we propose a novel probabilistic attention mechanism which sample-efficiently incorporates new attention-level supervision on the fly, without retraining or overfitting.

  • To minimize the human supervision cost, we propose an efficient instance and feature reranking algorithm that prioritizes them by their negative impact on the prediction, measured by uncertainty, the influence function, or counterfactual estimation.

  • We validate our model on five real-world datasets spanning binary classification, multi-label classification, and regression tasks, and show that it obtains significant improvements over baselines with substantially less retraining and human feedback cost.

2 Related work

Interpretable machine learning

The literature on interpretable machine learning is vast, so we discuss only a few works. A popular approach to obtaining an interpretable model is to build a simple proxy model that mimics the (local) behaviours of a complex model, using either simplified linear models (Ribeiro2016) or decision trees (sato2001rule; salzberg1994c4). Another approach, specific to neural networks, is analyzing their learned representations at each unit via visualization (sharif2014cnn; yosinski2014transferable). bau2017network further consider interpretability of representations in light of their correspondence to semantic concepts, and utilize it for controlling the behaviours of generative adversarial networks (bau2019visualizing). In this work, we propose a novel interactive learning framework that leverages the model's interpretation to iteratively correct the model's behaviour, while minimizing the interaction cost.

Attention Mechanism

The attention mechanism (bahdanau2014neural) is an effective approach for adaptively selecting a subset of features in an input-dependent manner, such that the model dynamically focuses on features more relevant to the prediction. It works by input-adaptively generating coefficients for input features, allocating larger weights to more relevant features. Attention mechanisms have achieved success in various applications, including image captioning (show_attend_tell), natural language understanding (Bahdanau15; vaswani2017attention), and visual question answering (das2017human). However, in the interactive learning setting, conventional attention mechanisms are either not trainable or require retraining of the attention generator on the newly delivered attention-level annotations, which may lead to performance degeneration due to catastrophic forgetting. In this work, we incorporate the benefits of the nonparametric and amortized inference of Neural Processes (NPs) (Garnelo2018np) into an attention mechanism, such that it generalizes well from scarce human labels in a semi-supervised manner and can incorporate new labeled instances without retraining, via an approximation of a stochastic process.

Active learning

While there is a vast literature on annotation methodology and active learning (tong2001active; sener2017active), we here discuss a few relevant prior works on learning from rationales, a popular annotation technique in natural language processing (zaidan2008modeling) and vision (donahue2011annotator), where a human highlights the important regions of the input. However, while these works directly zero out or modify input features, the attention generator in IAL provides its interpretation in the form of attention, which the human supervisor corrects. Furthermore, in conventional active learning settings, the annotators' role is relatively passive: they simply provide labels for each given instance and cannot see the effect of their annotations. In contrast, the annotators in IAL actively interpret the generated attentions, directly modify the learning manifold of the model by masking them, and can immediately see the effect of newly added annotations.

3 Interactive Attention Learning

Suppose we have a pre-trained neural network with parameters $\theta$ trained on a dataset $\mathcal{D} = \{(\mathbf{x}^{(i)}, y^{(i)})\}_{i=1}^{N}$, where $\mathbf{x}^{(i)} = [\mathbf{x}^{(i)}_1, \dots, \mathbf{x}^{(i)}_T]$ is a time-series instance with $\mathbf{x}^{(i)}_t \in \mathbb{R}^d$, and $y^{(i)}$ is the corresponding label. We denote each labeled instance as $z^{(i)} = (\mathbf{x}^{(i)}, y^{(i)})$. The network is trained to minimize the empirical risk, the expectation of the individual loss over all training instances; we use the mean-squared error for regression and the categorical cross-entropy for classification problems. We further assume that $\theta$ consists of two sub-parameters $\theta = \{\theta^{(p)}, \theta^{(a)}\}$, where $\theta^{(p)}$ corresponds to the parameters of the main neural network and $\theta^{(a)}$ to the parameters of the attention-generating network $g_{\theta^{(a)}}$. $g_{\theta^{(a)}}$ generates an attention for each $\mathbf{x}^{(i)}$, which is separated into an attention $\boldsymbol{\alpha}$ for the time axis and an attention $\boldsymbol{\beta}$ for the feature axis (see (6) for the detailed definition). The attentions are applied to the features along time-steps and let the model focus on the specific features of the input representations relevant to the prediction. Hence, the attention provides an interpretation of the model's decision.

Our goal in this paper is to correct the behaviour of the attention-generating network $g_{\theta^{(a)}}$ with human supervision. This may be done by incrementally retraining over multiple rounds $r = 1, \dots, R$, where at each round human supervisors inspect the attentions generated by $g_{\theta^{(a)}}$ and update $\theta^{(a)}$. We assume that a human supervisor provides an attention mask for each sample as a ground-truth label, after manually examining the attention produced by $g_{\theta^{(a)}}$. An attention mask for a certain axis is defined as a ternary value $m \in \{\varnothing, 0, 1\}$, where $\varnothing$ indicates "I don't know", $0$ indicates "Do not attend", and $1$ indicates "Attend". Note that naïve retraining of $g_{\theta^{(a)}}$ leads to costly retraining of the full $\theta$ via gradient back-propagation. Instead, we choose to fix $\theta^{(p)}$ and update only $\theta^{(a)}$ to minimize the cost of retraining. We refer to this general framework, which learns by interacting with the human supervisor via learned attentions, as the Interactive Attention Learning (IAL) framework.

Input: dataset $\mathcal{D}$, pre-trained parameters $\theta = \{\theta^{(p)}, \theta^{(a)}\}$, rounds $R$.
Output: updated $\theta^{(a)}$.

1:  Pretrain $\theta$ on $\mathcal{D}$.
2:  for $r = 1, \dots, R$ do
3:     Select instances and features via Cost-Effective Re-ranking (CER).
4:     Get attention masks $\mathcal{M}_r$ for the selected instances.
5:     Learn from the human feedback with a quick forward pass using the Neural Attention Process (NAP).
6:     if $r = 1$ then
7:         Retrain $\theta^{(a)}$ with the adapted network containing NAP.
8:     end if
9:  end for
Algorithm 1 Interactive Attention Learning Framework
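To make the round structure concrete, the following is a minimal Python sketch of this outer loop. It is our own illustration rather than the authors' released implementation: cer_select, collect_masks, and the nap.summarize/nap.meta_train methods are hypothetical stand-ins for the components described in Sections 3.1 and 3.2.

```python
def interactive_attention_learning(model, nap, cer_select, collect_masks,
                                   train_set, val_set, rounds=4):
    """Sketch of Algorithm 1: rerank, annotate, and adapt attentions via NAP."""
    annotations = []
    for r in range(1, rounds + 1):
        # (B) Cost-Effective Re-ranking: pick instances/features worth annotating
        selected = cer_select(model, train_set, val_set)
        # (C) Human annotation: ternary masks (None: "don't know", 0/1: attend or not)
        annotations.extend(collect_masks(selected))
        # (A) NAP: a forward pass over the accumulated masks updates the attentions,
        # so no gradient-based retraining is needed after the first round
        nap.summarize(annotations)
        if r == 1:
            # one-time adaptation so the attention generator accepts the summary z
            nap.meta_train(model, annotations)
    return model, nap
```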
(a) Neural Attention Process (NAP) (b) First Round (r=1) (c) Further Rounds (r=2, 3, ...)
Figure 2: (a) NAP naturally reflects the information from the annotation summarization via amortization. (b) For new observations (annotation masks), NAP accepts them as input and generates the mean and variance parameters of the summary variable. (c) NAP does not require retraining for further new observations, in that it automatically adapts to them at the cost of a forward pass through the network.

Yet, as discussed in the introduction, there are remaining challenges to be tackled. First, retraining $\theta^{(a)}$ still incurs a non-negligible cost and may result in overfitting when human feedback is scarce. To tackle this, we propose a novel attention generator that readily incorporates human annotations without retraining. Another challenge is reducing the human interaction cost. Ideally, a human annotator would examine all the attentions generated by $g_{\theta^{(a)}}$, which involves inspecting all instances and, within each instance, all features over all time-steps. This is infeasible, and also wasteful, since many attention values are already correct. To tackle this problem, we further propose a cost-effective reranking method which prioritizes the instances and features by their impact on the model's prediction, to maximize performance gains with minimal human effort.

Algorithm 1 details our IAL framework, which leverages the proposed attention mechanism and re-ranking method. In the next two subsections, we describe the two components, which minimize the model retraining cost and the human-model interaction cost, respectively.

3.1 Neural Attention Process

In this section, we describe the Neural Attention Process (NAP), a novel attention generator based on NPs (Garnelo2018np). NAP can effectively update the model without retraining, by amortization over sparse human annotations.

Before describing our approach, we briefly explain how attention is applied to time-series prediction, using RETAIN (retain) as our base model. Let $\mathbf{v}_t = \mathbf{W}_{\mathrm{emb}} \mathbf{x}_t$ be a linear embedding of the input. We restrict $\mathbf{v}_t$ to have the same dimensionality as $\mathbf{x}_t$, so that we can directly compute the contribution of a certain feature to a prediction (please refer to the supplementary material for how to compute the contribution of input features to predictions based on the attentions and the embedding; for now, treat each dimension of $\mathbf{v}_t$ as directly linked to the corresponding feature in $\mathbf{x}_t$). The model computes attention coefficients for both time-steps and input features as

$$\mathbf{g}_1, \dots, \mathbf{g}_T = \mathrm{RNN}_{\alpha}(\mathbf{v}_1, \dots, \mathbf{v}_T) \quad (1)$$
$$e_t = \mathbf{w}_{\alpha}^{\top} \mathbf{g}_t + b_{\alpha}, \quad t = 1, \dots, T \quad (2)$$
$$\alpha_1, \dots, \alpha_T = \mathrm{softmax}(e_1, \dots, e_T) \quad (3)$$
$$\mathbf{h}_1, \dots, \mathbf{h}_T = \mathrm{RNN}_{\beta}(\mathbf{v}_1, \dots, \mathbf{v}_T) \quad (4)$$
$$\boldsymbol{\beta}_t = \tanh(\mathbf{W}_{\beta} \mathbf{h}_t + \mathbf{b}_{\beta}), \quad t = 1, \dots, T \quad (5)$$
$$\boldsymbol{\alpha} = (\alpha_1, \dots, \alpha_T), \quad \boldsymbol{\beta} = (\boldsymbol{\beta}_1, \dots, \boldsymbol{\beta}_T) \quad (6)$$

Here, $\alpha_t \in \mathbb{R}$ are the attention weights applied over time-steps and $\boldsymbol{\beta}_t \in \mathbb{R}^d$ are the attention weights over the input features. We may also consider stochastic attention as in (show_attend_tell). Given $(\boldsymbol{\alpha}, \boldsymbol{\beta})$, the model makes predictions as $\hat{y} = f_{\mathrm{out}}\big(\sum_{t=1}^{T} \alpha_t (\boldsymbol{\beta}_t \odot \mathbf{v}_t)\big)$, where $\odot$ is element-wise multiplication and $f_{\mathrm{out}}$ is an output layer.
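For concreteness, below is a minimal PyTorch sketch of this RETAIN-style two-branch attention. It is a sketch under our own naming and hyperparameter choices (RETAINStyleAttention, hidden_dim), not the authors' released code; the GRUs stand in for the RNNs in (1) and (4).

```python
import torch
import torch.nn as nn

class RETAINStyleAttention(nn.Module):
    """Two-branch attention over time-steps (alpha) and features (beta)."""
    def __init__(self, input_dim: int, hidden_dim: int = 64):
        super().__init__()
        self.embed = nn.Linear(input_dim, input_dim, bias=False)  # v_t = W_emb x_t
        self.rnn_alpha = nn.GRU(input_dim, hidden_dim, batch_first=True)
        self.rnn_beta = nn.GRU(input_dim, hidden_dim, batch_first=True)
        self.w_alpha = nn.Linear(hidden_dim, 1)         # e_t = w^T g_t + b
        self.w_beta = nn.Linear(hidden_dim, input_dim)  # beta_t = tanh(W h_t + b)
        self.out = nn.Linear(input_dim, 1)              # output layer f_out

    def forward(self, x):                  # x: (batch, T, input_dim)
        v = self.embed(x)                  # same dimensionality as x
        g, _ = self.rnn_alpha(v)
        h, _ = self.rnn_beta(v)
        alpha = torch.softmax(self.w_alpha(g).squeeze(-1), dim=1)  # (batch, T)
        beta = torch.tanh(self.w_beta(h))                          # (batch, T, d)
        context = (alpha.unsqueeze(-1) * beta * v).sum(dim=1)      # sum_t a_t (b_t * v_t)
        return self.out(context), alpha, beta
```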

Now we describe NAP, and in particular how it amortizes the procedure of updating the model given human annotations. Let $\mathcal{M} = \{\mathbf{m}^{(i)}\}_{i \in \mathcal{S}}$ be a set of attention masks given by human annotators for a subset $\mathcal{S}$ of the training instances, with $|\mathcal{S}| \ll N$. Instead of exhaustively retraining $\theta^{(a)}$, NAP learns to summarize $\mathcal{M}$ into a latent vector, and gives the summarization as an additional input to the attention-generating network. This approach, when trained properly, can automatically adapt to new annotations without having to retrain the parameters. Below, we describe the components of NAP in more detail.

Embedding & summarizing the annotations

We first feed the input embedding $\mathbf{v}_{1:T}$ to an LSTM (Hochreiter1997) to generate a time-series representation $\mathbf{h}_{1:T}$. Given the attention masks $\mathcal{M}$, we build an intermediate representation $\mathbf{c}^{(i)}_{1:T}$ for each annotated instance via another LSTM. Then, for each time step, we build a summarized representation $\mathbf{r}_t$ by a permutation-invariant operation (for instance, averaging):

$$\mathbf{r}_t = \frac{1}{|\mathcal{S}|} \sum_{i \in \mathcal{S}} \mathbf{c}^{(i)}_t. \quad (7)$$

Having $\mathbf{r}_t$, we define the distribution of the summary variable $\mathbf{z}_t$ as a Gaussian:

$$\boldsymbol{\mu}_t = \mathbf{W}_{\mu} \mathbf{r}_t + \mathbf{b}_{\mu}, \quad (8)$$
$$\boldsymbol{\sigma}_t = \mathrm{softplus}(\mathbf{W}_{\sigma} \mathbf{r}_t + \mathbf{b}_{\sigma}), \quad (9)$$
$$q(\mathbf{z}_t \mid \mathcal{M}) = \mathcal{N}\big(\mathbf{z}_t;\, \boldsymbol{\mu}_t,\, \mathrm{diag}(\boldsymbol{\sigma}_t^2)\big). \quad (10)$$
Generating attentions and training NAP

Now we generate the attention by a procedure similar to (6), but instead of feeding only $\mathbf{v}_{1:T}$, we feed both $\mathbf{v}_{1:T}$ and the annotation summarization vector $\mathbf{z}$ by concatenation. This allows the network to naturally reflect the information obtained from $\mathcal{M}$ without having to retrain the whole attention network parameter $\theta^{(a)}$. The original NP is meta-trained using many training examples. Likewise, NAP requires meta-training to adapt the attention-generating network to take $\mathbf{z}$ as an additional input (Figure 2, (b)). We found that this adaptation requires significantly fewer training examples than typical NP training, possibly because the network is pretrained on $\mathcal{D}$ in advance. For this adaptation training, given a set of annotated examples, we randomly subsample annotations at each training step to comprise a random task for meta-training the model. The subsampling prevents NAP from overfitting to the entire annotation set, leading to effective generalization to newly delivered annotations across rounds. We also regularize by positing a standard Gaussian prior distribution on $\mathbf{z}$, as in (Garnelo2018np). We train the parameters of NAP via stochastic gradient variational inference.
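Below is a minimal sketch of the NAP summarization path under our own assumptions about layer sizes; the NAPSummarizer module and its heads are illustrative, not the paper's exact architecture. The sampled z would be concatenated with the embeddings v_t fed to the attention generator, as described above.

```python
import torch
import torch.nn as nn

class NAPSummarizer(nn.Module):
    """Encode human attention masks into a Gaussian summary variable z."""
    def __init__(self, mask_dim: int, hidden_dim: int = 64, z_dim: int = 32):
        super().__init__()
        self.mask_rnn = nn.LSTM(mask_dim, hidden_dim, batch_first=True)
        self.mu_head = nn.Linear(hidden_dim, z_dim)
        self.sigma_head = nn.Linear(hidden_dim, z_dim)

    def forward(self, masks):
        # masks: (num_annotated, T, mask_dim); "I don't know" entries can be 0-encoded
        c, _ = self.mask_rnn(masks)                 # per-instance representations
        r = c.mean(dim=0)                           # permutation-invariant summary, (T, hidden)
        mu = self.mu_head(r)
        sigma = nn.functional.softplus(self.sigma_head(r))
        z = mu + sigma * torch.randn_like(sigma)    # reparameterized sample of z
        return z, mu, sigma                         # z is concatenated to v_t downstream
```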

3.2 Cost-Effective instance and feature Reranking

As discussed earlier, letting human annotators inspect attentions for all instances and features is inefficient even for a small dataset. We may reduce this cost by randomly subsampling from all attention values, but doing so may select instances or features that are already correct or have little impact on the model's prediction. Thus, we want to prioritize the attentions by their negative impact on the model's prediction, such that each piece of feedback from the human supervisor results in a large performance improvement. In this section, we propose a general framework, depicted in Figure 3, to select important instances and features. For instance-level selection, we use the influence score and the uncertainty score. For feature-level selection, we use the influence score, the uncertainty score, and the counterfactual score.

3.2.1 Instance-level reranking

Influence score

We use the influence function (koh2017understanding) to approximate the impact of individual training points on the model's predictions. The idea is simple: given a validation point $z_{\mathrm{val}}$, how would the validation loss change if a certain training instance $z$ were excluded from the training procedure? Formally, let $\hat{\theta}$ be the minimizer of the empirical risk for the original training set, and $\hat{\theta}_{-z}$ the minimizer of the empirical risk computed without $z$. The effect of removing $z$ is then measured as $\ell(z_{\mathrm{val}}, \hat{\theta}_{-z}) - \ell(z_{\mathrm{val}}, \hat{\theta})$. Since computing this exactly involves retraining and is quite expensive, koh2017understanding propose the influence function to approximate it as follows:

$$\mathcal{I}(z, z_{\mathrm{val}}) = -\nabla_{\theta}\, \ell(z_{\mathrm{val}}, \hat{\theta})^{\top} H_{\hat{\theta}}^{-1}\, \nabla_{\theta}\, \ell(z, \hat{\theta}), \quad (11)$$
$$H_{\hat{\theta}} = \frac{1}{N} \sum_{i=1}^{N} \nabla_{\theta}^{2}\, \ell(z^{(i)}, \hat{\theta}), \quad (12)$$

where $H_{\hat{\theta}}$ is the Hessian. To summarize, the influence function approximates the change in the validation loss (up to a constant) without having to retrain the model.

During training, we are given a set of validation instances $\mathcal{V}$. We first select the instances with the highest validation loss to comprise $\mathcal{V}_{\mathrm{top}}$. The intuition is that we want to select the training instances with a large impact on the validation instances that are mis-predicted by the current model. In the supplementary file, we empirically show that this indeed improves performance. Having $\mathcal{V}_{\mathrm{top}}$, the influence score of a training instance $z$ is computed as $\sum_{z_{\mathrm{val}} \in \mathcal{V}_{\mathrm{top}}} \mathcal{I}(z, z_{\mathrm{val}})$.
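A sketch of instance-level influence scoring with the standard Hessian-vector-product trick, so the Hessian in (12) is never materialized. The LiSSA-style recursion, its damping, and the step count are illustrative choices, and model/loss_fn are assumed placeholders; in IAL the returned scores would be summed over the selected validation points.

```python
import torch

def hvp(loss, params, vec):
    """Hessian-vector product via double backprop (Hessian never materialized)."""
    grads = torch.autograd.grad(loss, params, create_graph=True)
    dot = sum((g * v).sum() for g, v in zip(grads, vec))
    return torch.autograd.grad(dot, params)

def influence_score(model, loss_fn, train_pt, val_batch, damping=0.01, steps=20):
    """Approximate -grad_val^T H^{-1} grad_train for one training point, as in (11)."""
    params = [p for p in model.parameters() if p.requires_grad]
    x_val, y_val = val_batch
    v = torch.autograd.grad(loss_fn(model(x_val), y_val), params)
    # LiSSA-style recursion: h <- v + (I - damping - H) h converges to H^{-1} v
    h = [vi.clone() for vi in v]
    x_tr, y_tr = train_pt
    for _ in range(steps):
        loss = loss_fn(model(x_tr), y_tr)
        Hh = hvp(loss, params, h)
        h = [vi + (1 - damping) * hi - Hhi for vi, hi, Hhi in zip(v, h, Hh)]
    g_tr = torch.autograd.grad(loss_fn(model(x_tr), y_tr), params)
    return -sum((hi * gi).sum() for hi, gi in zip(h, g_tr)).item()
```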

Uncertainty score

While the influence score provides a direct measure of the negative impact of an instance, it is expensive because of the Hessian computation. An alternative, less expensive approach is to use uncertainty. We assume that instances with high predictive uncertainty are potential candidates for correction. This is a common approach in the active learning and Bayesian optimization literature, where points with high uncertainty are explored. Instance-level predictive uncertainty can simply be obtained by Monte-Carlo (MC) sampling (dropout_as_bayesian). We denote the instance-level uncertainty score as $s_{\mathrm{UN}}(z)$.
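A sketch of the MC estimate, assuming the model contains dropout layers; keeping dropout active at inference time and taking the variance of sampled predictions is the standard recipe from (dropout_as_bayesian), though the sample count below is an arbitrary choice.

```python
import torch

@torch.no_grad()
def mc_uncertainty(model, x, n_samples: int = 30):
    """Instance-level uncertainty: variance of predictions under MC dropout."""
    model.train()  # keep dropout layers active while sampling
    preds = torch.stack([model(x) for _ in range(n_samples)])  # (S, batch, out)
    model.eval()
    return preds.var(dim=0).flatten(1).mean(dim=1)  # one score per instance
```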

Figure 3: Cost-Effective Re-ranking Procedure (CER).

3.2.2 Feature-level reranking

Influence score

We can also estimate a feature-level influence score by a similar idea: if a certain feature value is modified, how would the validation loss change? Let $z = (\mathbf{x}, y)$ be a training instance, and suppose we want to compute the influence of $x_{t,j}$, the $j$-th input feature at timestep $t$. Define a perturbed data point $\tilde{z} = (\mathbf{x} + \delta\, \mathbf{e}_{t,j}, y)$, where $\mathbf{e}_{t,j}$ is a one-hot vector whose $j$-th feature at the $t$-th time step is one. Let $\hat{\theta}_{\tilde{z}}$ be the empirical risk minimizer with $z$ replaced by $\tilde{z}$. Then, as before, we have

$$\ell(z_{\mathrm{val}}, \hat{\theta}_{\tilde{z}}) - \ell(z_{\mathrm{val}}, \hat{\theta}) \approx -\nabla_{\theta}\, \ell(z_{\mathrm{val}}, \hat{\theta})^{\top} H_{\hat{\theta}}^{-1} \big( \nabla_{\theta}\, \ell(\tilde{z}, \hat{\theta}) - \nabla_{\theta}\, \ell(z, \hat{\theta}) \big). \quad (13)$$

Based on this approximation, we sample perturbations within two standard deviations of the feature mean and compute the average influence score over multiple perturbations to rank the features. As for the instance-level influence score, we add up the influence scores over all selected validation samples. We denote by $s_{\mathrm{IF}}(x_{t,j})$ the influence score obtained by perturbing $x_{t,j}$.

Uncertainty score

NAP induces stochasticity in the attentions applied to individual features, and this naturally leads to feature-level uncertainty scores. As for the instance-level uncertainty score, we compute the variance of the attention applied to each feature by MC sampling. We denote the feature-level uncertainty score of $x_{t,j}$ as $s_{\mathrm{UN}}(x_{t,j})$.
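A sketch of this feature-level score, assuming a stochastic attention generator that returns feature attentions beta on each forward pass (e.g., via a resampled summary z as in NAP); the interface mirrors the RETAINStyleAttention sketch above.

```python
import torch

@torch.no_grad()
def feature_attention_uncertainty(attn_model, x, n_samples: int = 30):
    """Variance of stochastic feature attentions beta_{t,j} across MC samples."""
    betas = []
    for _ in range(n_samples):
        _, _, beta = attn_model(x)   # beta: (batch, T, d), stochastic via sampled z
        betas.append(beta)
    return torch.stack(betas).var(dim=0)  # per-feature score s_UN(x_{t,j})
```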

Input: training set $\mathcal{D}$, validation set $\mathcal{V}$, parameters $\theta$, budgets $k$, $l$, $n$.
Output: top-$l$ instances, top-$n$ features per instance.

1:  Evaluate the loss for each $z_{\mathrm{val}} \in \mathcal{V}$.
2:  Sort $\mathcal{V}$ in descending order of the loss and select the top-$k$ validation points $\mathcal{V}_{\mathrm{top}}$.
3:  // Instance-level re-ranking
4:  for each training instance $z$ do
5:     Compute the influence or uncertainty score of $z$.
6:     Select the top-$l$ training points w.r.t. the score.
7:  end for
8:  // Feature-level re-ranking
9:  for each selected instance do
10:     for each feature $x_{t,j}$ do
11:         Compute the influence, uncertainty, or counterfactual score.
12:         Select the top-$n$ features.
13:     end for
14:  end for
Algorithm 2 Cost-Effective Re-ranking
Figure 4: Attention annotation interface (risk prediction for Cardiovascular Disease (CVD)) with counterfactual estimation tool.
Counterfactual score

The last score, which we call the counterfactual score, is the most direct measure of the negative impact of a feature. It answers the following question: how would the prediction change if we ignored a certain feature by manually turning off the corresponding attention value? This does not require retraining, since we can simply set the attention value to zero, yet it is still effective because our goal is to rank the features w.r.t. their importance for attention feedback. Recall that given attentions $(\boldsymbol{\alpha}, \boldsymbol{\beta})$ generated from $g_{\theta^{(a)}}$, a prediction is given as

$$\hat{y} = f_{\mathrm{out}}\Big( \sum_{t=1}^{T} \alpha_t (\boldsymbol{\beta}_t \odot \mathbf{v}_t) \Big), \quad (14)$$

where $\mathbf{v}_t$ is the linear embedding of $\mathbf{x}_t$. The effect of perturbing $\beta_{t,j}$ can then be computed as

$$s_{\mathrm{CF}}(x_{t,j}) = \big| \hat{y} - \hat{y}_{\tilde{\boldsymbol{\beta}}} \big|, \quad (15)$$

where $\tilde{\boldsymbol{\beta}}$ is the attention with $\beta_{t,j} = 0$. We empirically found the counterfactual score to be the most effective measure for feature-level reranking (see Table 2).
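A sketch of counterfactual feature scoring against the RETAINStyleAttention sketch above: zero out one feature attention, recompute only the output layer, and measure the prediction shift. Reading (15) as an absolute prediction difference is our assumption.

```python
import torch

@torch.no_grad()
def counterfactual_scores(attn_model, x):
    """Score each (t, j) by the prediction change when beta[t, j] is zeroed."""
    y_hat, alpha, beta = attn_model(x)              # x: (1, T, d)
    v = attn_model.embed(x)
    _, T, d = beta.shape
    scores = torch.zeros(T, d)
    for t in range(T):
        for j in range(d):
            beta_cf = beta.clone()
            beta_cf[:, t, j] = 0.0                  # turn off one attention value
            ctx = (alpha.unsqueeze(-1) * beta_cf * v).sum(dim=1)
            scores[t, j] = (y_hat - attn_model.out(ctx)).abs().item()
    return scores  # higher = feature more influential on this prediction
```

Only the cheap output layer is recomputed per perturbation, which is why no retraining (or even re-running the RNNs) is needed.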

3.3 Human Annotation

Finally, given a subset selected using CER, whose instances and features are also sorted by their negative impacts, we visualize and present the attentions to human annotators using an online interactive user interface. We provide an example of this interface in Figure 4 for the clinical risk prediction task. On the interface, the annotators set the attention mask for each feature to one of $\{\varnothing, 0, 1\}$ ("I don't know", "Do not attend", "Attend"). The interface visually emphasizes the features with high attention using either a bar plot (for tabular data) or an attention map (for image data), depending on the given task. The annotators then examine the attention weights to check whether they are incorrectly allocated, and correct them when necessary.

                                      EHR                                             Fitness         Real Estate
                        Heart Failure   Cerebral Infarction   CVD             Squat           Forecasting
One-time Training
  RETAIN                0.6069 ± 0.01   0.6394 ± 0.02         0.6018 ± 0.02   0.8425 ± 0.03   0.2136 ± 0.01
  Random-RETAIN         0.5952 ± 0.02   0.6256 ± 0.02         0.5885 ± 0.01   0.8221 ± 0.05   0.2140 ± 0.01
  IF-RETAIN             0.6134 ± 0.03   0.6422 ± 0.02         0.5882 ± 0.02   0.8363 ± 0.03   0.2049 ± 0.01
Random Re-ranking
  Random-UA             0.6231 ± 0.03   0.6491 ± 0.01         0.6112 ± 0.02   0.8521 ± 0.02   0.2222 ± 0.02
  Random-NAP            0.6414 ± 0.01   0.6674 ± 0.02         0.6284 ± 0.01   0.8525 ± 0.01   0.2061 ± 0.01
IAL (Cost-effective)
  AILA                  0.6363 ± 0.03   0.6602 ± 0.03         0.6193 ± 0.02   0.8425 ± 0.01   0.2119 ± 0.01
  IAL-NAP               0.6612 ± 0.02   0.6892 ± 0.03         0.6371 ± 0.02   0.8689 ± 0.01   0.1835 ± 0.01
Table 1: Classification and forecasting performance on the three electronic health record (EHR) datasets, the fitness dataset, and the real-estate dataset. Reported numbers are mean AUROC for the EHR tasks and mean accuracy for squat; for the real-estate forecasting task, the number is mean percentage error, so lower is better.
IAL-NAP Variants (Instance-level / Feature-level)   Heart Failure   Cerebral Infarction   CVD             Squat           Forecasting
Influence Function / Uncertainty                    0.6563 ± 0.01   0.6821 ± 0.02         0.6308 ± 0.02   0.8712 ± 0.01   0.1921 ± 0.01
Influence Function / Influence Function             0.6514 ± 0.02   0.6825 ± 0.01         0.6329 ± 0.03   0.8632 ± 0.01   0.1865 ± 0.02
Influence Function / Counterfactual                 0.6592 ± 0.02   0.6921 ± 0.03         0.6379 ± 0.02   0.8682 ± 0.01   0.1863 ± 0.02
Uncertainty / Counterfactual                        0.6612 ± 0.01   0.6892 ± 0.03         0.6371 ± 0.02   0.8689 ± 0.02   0.1835 ± 0.02
Table 2: Ablation study of IAL-NAP combinations for instance- and feature-level reranking on all tasks (EHR: Heart Failure, Cerebral Infarction, CVD; Fitness: Squat; Real Estate: Forecasting).
(a) Heart Failure (b) Cerebral Infarction (c) CVD (d) Squat (e) Real Estate
Figure 5: (top) Retraining time on newly annotated examples across all tasks for Random-UA, AILA, Random-NAP, and IAL-NAP. (bottom) Mean response time (mean-RT) of human labeling on the three risk prediction tasks, the squat posture classification task, and the real-estate forecasting task (IAL-NAP with features ranked by uncertainty vs. Random-NAP with features ranked randomly).
Year            Age      Smoking   SysBP      HDL        LDL
2009            31       Yes       139        54         97
2010            32       Yes       134        55         97
Current State   33 yrs   Yes       141 mmHg   55 mg/dL   102 mg/dL
(a) Pretrained   (b) r=1   (c) r=2
Figure 6: Visualization of attention for a selected patient on the Cardiovascular Disease (CVD) prediction task. Contribution indicates the extent to which each individual feature affects the onset of CVD within 1 year. Age - age; Smoking - whether the patient currently smokes cigarettes; SysBP - systolic blood pressure; HDL - high-density lipoprotein cholesterol; LDL - low-density lipoprotein cholesterol. Bars correspond to attentions.
(a) Heart Failure (b) Cerebral Infarction (c) CVD (d) Squat
Figure 7: Change of accuracy with 100 annotations across four rounds (r = 1, ..., 4) for IAL-NAP (blue) vs. Random-NAP (red).

4 Experiments

4.1 Datasets and Baselines

1) Medical Check-ups

These datasets are subsets of the electronic health records (EHR) database of a major hospital, consisting of yearly medical check-ups (4 timesteps) for patients over the age of 15 in out-patient units. We extracted a subset of patient records from the full database; each record contains variables covering general information (e.g., sex and height), vital signs (e.g., hemoglobin level), and risk-inducing behaviors (e.g., alcohol consumption). The task is to predict the onset of each of the following diseases in the next year: 1) Heart Failure, 2) Cerebral Infarction, 3) Cardiovascular Disease (CVD).

2) Fitness - Squat Pose Correction

This dataset contains video frames of human subjects performing squats, where the task is to predict whether the person is performing the squat with the correct posture or with one of ten types of incorrect posture (e.g., 0: correct posture, 1: exaggerated knees-forward movement, 2: sitting on the thighs); thus this is a multi-label classification task. We extract 14 pairs of key points from the joints (e.g., left shoulder or right ankle) over all frames, to clearly visualize which body joints the attention generator attends to for each instance.

3) Real Estate Sales Transactions

This dataset is a subset of the publicly available rolling sales transaction database (zhu2018house) from the New York City Department of Finance, which consists of house records with sales transactions over 10 years, from 2010 to 2019 (10 time-steps). The subset used for the experiments includes housing transactions described by 47 variables covering the property (e.g., number of rooms), the neighborhood (e.g., minimum distance to a supermarket), and macro-economic indicators (e.g., mortgage rate). The task is to make a one-year forecast of the price of a given residential property.

Baselines and our models
1) RETAIN: The attentional recurrent neural network model (RETAIN) proposed in (retain).
2) Random-RETAIN: RETAIN retrained from scratch on the training set with randomly selected samples removed.
3) IF-RETAIN: RETAIN retrained from scratch on the training set with the most negatively influential points removed, as identified by the influence function (koh2017understanding).
4) Random-UA: The Uncertainty-Aware attentional network (UA) (heo2018uncertainty), trained using IAL with random instance and feature selection.
5) Random-NAP: Our IAL framework with the Neural Attention Process (NAP), trained using random instance and feature selection.
6) Cost-effective AILA: A modified version of the interactive attention learning model proposed by (aila), which retrains the attention generator using a binary cross-entropy loss between the attention vector and the attention annotation. We train this model with CER to verify the effectiveness of NAP.
7) IAL-NAP: Our full IAL framework with the Neural Attention Process (NAP) and Cost-Effective Reranking (CER), which uses the uncertainty score for instance-level reranking and the counterfactual score for feature-level reranking.

Experimental setup For all datasets, we generate train/valid/test splits with a ratio of 70:10:20. For the Random-UA and AILA models, we use $\ell_2$-regularization to prevent overfitting. Please see the supplementary file for further details of the datasets, network configurations, and hyperparameters. We will also publicly release the code and all datasets used in the experiments.

4.2 Experimental results

We first examine the prediction performance of the baselines and our models. Table 1 shows the results, where performance is measured by the area under the ROC curve (AUROC) for the risk prediction tasks, accuracy for the multi-label squat posture task, and mean percentage error for real-estate price forecasting. Note that IF-RETAIN, which uses influence functions to remove instances with negative influence scores, performs better than the other RETAIN baselines on most tasks, but fails to improve on the CVD and squat posture tasks. We observe that Random-UA, which is retrained with human attention-level supervision on randomly selected samples, performs worse than Random-NAP on all tasks; this is due to overfitting to the few supervised labels, from which NAP does not suffer. IAL-NAP significantly outperforms Random-NAP on all tasks, which shows that attention annotation has little effect on the model when the instances are selected at random. AILA with cost-effective reranking also performs worse than IAL-NAP, due to severe overfitting despite the regularization meant to prevent it. We further perform an ablation study of cost-effective reranking with different scoring measures in Table 2. The results show that for instance-level scoring, the influence and uncertainty scores work similarly, while the counterfactual score is the most effective for feature-level reranking. Considering computation cost, however, the uncertainty-counterfactual combination is the most cost-effective solution, since it avoids the expensive computation of Hessians.

Effect of Neural Attention Process

The line plots in Figure 5 (top) show the average time to retrain on newly annotated examples over the rounds of interaction for Random-UA, AILA, Random-NAP, and IAL-NAP on the five tasks. IAL-NAP and Random-NAP show shorter retraining times, while Random-UA and AILA, which fine-tune the attention-generating network, take longer to retrain. This demonstrates another benefit of our Neural Attention Process: its ability to perform amortized inference. A more responsive system can also improve the quality of the interaction in the interactive learning setting.

Effect of Cost-Effective Re-ranking We further measure the average response time of the annotators with and without cost-effective reranking. Figure 5 (bottom) shows that, on all tasks, annotators spend less time on annotation when variables are prioritized by their negative impacts measured using uncertainty (blue bars) than when variables are presented in their original order (grey bars). Figure 7 shows the change in model accuracy over training rounds with and without cost-effective reranking, where the negative impacts are measured by the influence score. On the risk prediction and squat posture tasks, the accuracy of IAL-NAP increases over the four rounds of interaction, while Random-NAP achieves only marginal increases. In particular, on the heart failure task (a), IAL-NAP needs fewer annotated examples (100) than Random-NAP (400) to reach comparable accuracy (AUROC: 0.6414), showing that IAL-NAP improves the model with fewer examples.

Qualitative analysis

We further analyze, with the help of physicians, the contribution of each feature for a CVD patient (label = 1) whose records showed significant changes in attention (Figure 6). The table (top of Figure 6) shows the patient's yearly registered medical records at the previous (2009, 2010) and current (2011) time-steps. The three graphs show the values of the allocated attentions across three rounds. Our model, IAL-NAP, failed to predict the label at the pretrained round (a), but makes a correct prediction at r=2 (c). We visualize five variables with clinically meaningful changes. Across the change of attentions from (a) to (c), the physicians consider the attentions on age, HDL, and LDL in (a) to be false positives (red bars) and smoking a false negative (blue bars), with SysBP a true positive (grey bars). Noting that the patient's age (early 30s) is far below the median age (50) of female CVD patients (garcia2016cardiovascular), the initial IAL-NAP (a) allocated too much weight to age, which led to an overconfident attention model and in turn to the incorrect prediction. However, our model gradually allocated less weight to age over the rounds, as it learned what to attend to from interactive attention learning. Note that the attention on smoking increased sharply at r=2 (c), which is also clinically sensible: as a physician noted, CVD risk increases by 25% for women who smoke cigarettes (huxley2011cigarette). The previously incorrect attentions on HDL and LDL (a) decrease over rounds, since the HDL level (55 mg/dL) is in the normal range (40-60) and the LDL level (102 mg/dL) is still below borderline high (130-159).

5 Conclusion

We proposed an interactive learning framework which iteratively learns by interacting with human supervisors via the generated attentions. The framework utilizes a novel stochastic attention mechanism based on neural processes that can correct the model's interpretation from scarce human feedback without retraining or overfitting. Further, it uses cost-effective reranking of the instances and features by their negative impacts to maximize the effect of each human-machine interaction. We validated our model on five real-world tasks from the healthcare, real-estate, and fitness domains, on which it significantly outperforms baselines with smaller retraining and human annotation costs. Qualitative analysis shows that our model generates more human-interpretable attentions, which is crucial for reliability on safety-critical tasks.

Acknowledgements
This work was supported by the Institute for Information & communications Technology Planning & Evaluation (IITP) grant funded by the Korea government (MSIT) (No. 2017-0-01779, A machine learning and statistical inference framework for explainable artificial intelligence).
