Game-Theoretic Interpretability for Temporal Modeling

06/30/2018 · by Guang-He Lee, et al.

Interpretability has arisen as a key desideratum of machine learning models alongside performance. Approaches so far have been primarily concerned with fixed-dimensional inputs emphasizing feature relevance or selection. In contrast, we focus on temporal modeling and the problem of tailoring the predictor, functionally, towards an interpretable family. To this end, we propose a co-operative game between the predictor and an explainer without any a priori restrictions on the functional class of the predictor. The goal of the explainer is to highlight, locally, how well the predictor conforms to the chosen interpretable family of temporal models. Our co-operative game is set up asymmetrically in terms of information sets for efficiency reasons. We develop and illustrate the framework in the context of temporal sequence models with examples.


1 Introduction

State-of-the-art predictive models tend to be complex and involve a very large number of parameters. While the added complexity brings modeling flexibility, it comes at the cost of transparency or interpretability. This is particularly problematic when predictions feed into decision-critical applications such as medicine, where understanding of the underlying phenomenon being modeled may be just as important as raw predictive power.

Previous approaches to interpretability have focused mostly on fixed-size data, such as scalar-feature datasets (Lakkaraju et al., 2016) or image prediction tasks (Selvaraju et al., 2016). Recent methods do address the more challenging setting of sequential data (Lei et al., 2016; Arras et al., 2017) in NLP tasks where the input is discrete. Interpretability for continuous temporal data has remained mostly unexplored (Wu et al., 2018).

In this work, we propose a novel approach to model interpretability that is naturally tailored to (though not limited to) time-series data. Our approach differs from interpretable models such as interpretation generators (Al-Shedivat et al., 2017; Lei et al., 2016), where the architecture or the function class is itself explicitly constrained towards interpretability, e.g., taking it to be the set of linear functions. We also differ from post-hoc explanations of black-box methods through local perturbations (Ribeiro et al., 2016; Alvarez-Melis & Jaakkola, 2017). In contrast, we establish an intermediate regime, game-theoretic interpretability, where the predictor remains functionally complex but is encouraged during training to follow a locally interpretable form.

At the core of our approach is a game-theoretic characterization of interpretability. This is set up as a two-player co-operative game between predictor and explainer. The predictor remains a complex model whereas the explainer is chosen from a simple interpretable family. The players minimize asymmetric objectives that combine the prediction error and the discrepancy between the players. The resulting predictor is biased towards agreeing with a co-operative explainer. The co-operative game equilibrium is stable in contrast to GANs (Goodfellow et al., 2014).

The main contributions of this work are as follows:

  • A novel game-theoretic interpretability framework that accommodates a wide range of prediction models without architectural modification.

  • Accurate yet explainable predictors, where the explainer is trained in coordination with the predictor to actively balance interpretability and accuracy.

  • Interpretable temporal models, validated through quantitative and qualitative experiments, including stock price prediction and physical component modeling.

2 Methodology

In this work, we learn a (complex) predictive target function $f \in \mathcal{F}$ together with a simpler function $g$ defined over an axiomatic class of interpretable models $\mathcal{G}$. We refer to the functions $f$ and $g$ as the predictor and explainer, respectively, throughout the paper. Note that we need not make any assumptions about the function class $\mathcal{F}$, instead allowing a flexible class of predictors. In contrast, the family of explainers $\mathcal{G}$ is explicitly and intentionally constrained, such as to the set of linear functions. As any $g \in \mathcal{G}$ is assumed to be interpretable, the family $\mathcal{G}$ does not typically itself suffice to capture the regularities in the data. We can therefore hope to encourage the predictor to remain close to such interpretable models only locally.

For expository purposes, we will develop the framework in a discrete-time setting where the predictor maps the observed history $x_{1:t}$ to a prediction of $x_{t+1}$ for $t \in \{1, \dots, T-1\}$. The data are denoted as $\{x_t\}_{t=1}^{T}$ with $x_t \in \mathbb{R}^d$. We then instantiate the predictor with deep sequence generative models, and the explainers with linear models.

2.1 Game-Theoretic Interpretability

Figure 1: Examples of fitting a neighborhood (panel a) with a piece-wise linear predictor (panel b). When playing with different families of explainers (panels c and d; dashed lines), the resulting predictor (solid green) behaves differently although both admit the same prediction error (mean squared error = 1.026).

There are many ways to use explainer functions to guide the predictor by means of discrepancy measures. However, since the explainer functions are inherently weak (e.g., linear), we cannot expect a reasonable predictor $f$ to be nearly linear globally. Instead, we can enforce this property only locally. To this end, we define local interpretability by measuring how close $f$ is to the family $\mathcal{G}$ over a local neighborhood $N_t$ around an observed point $x_t$. One straightforward instantiation of such a neighborhood in temporal modeling is simply a local window of points, $N_t = \{x_{t-w}, \dots, x_{t+w}\}$. Our resulting local discrepancy measure is

$$ d_{\mathcal{G}}(f, N_t) \;=\; \min_{g \in \mathcal{G}} \sum_{x_\tau \in N_t} D\big(f(x_\tau),\, g(x_\tau)\big) \qquad (1) $$

where $D(\cdot, \cdot)$ is a deviation measure. The minimizing explainer function, $g^*_t$, is indexed by the point $x_t$ around which it is estimated. Indeed, depending on the function $f$, the minimizing explainer can change from one neighborhood to another. If we view the minimization problem game-theoretically, $g^*_t$ is the best-response strategy of the local explainer around $x_t$.
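To make (1) concrete, here is a minimal sketch, assuming a scalar series, squared-error deviation $D$, and an affine explainer family; the function name and window handling are illustrative, not taken from the authors' code:

```python
import numpy as np

def best_response_explainer(f_vals, xs, t, w):
    """Fit the best-response affine explainer g*_t on the window
    N_t = {x_{t-w}, ..., x_{t+w}} by least squares, and return it
    together with the local discrepancy of Eq. (1)."""
    lo, hi = max(0, t - w), min(len(xs), t + w + 1)      # clip at sequence ends
    X = np.stack([xs[lo:hi], np.ones(hi - lo)], axis=1)  # affine design matrix
    y = f_vals[lo:hi]                                    # predictor values on N_t
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)         # closed-form best response
    discrepancy = float(np.sum((X @ coef - y) ** 2))
    return coef, discrepancy
```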

The local discrepancy measure can be subsequently incorporated into an overall regularization problem for the predictor either symmetrically (shared objective) or asymmetrically (game-theoretic) where the goals differ between the predictor and the explainer.

Symmetric criterion. Assume that we are given a primal loss $L(f(x_t), y_t)$ that is to be minimized for the problem of interest, where $y_t$ denotes the prediction target (e.g., the next observation). The goal of the predictor is then to find $f$ that offers the best balance between the primal loss and local interpretability. Due to the symmetry between the two players, the full game can be collapsed into a single objective

$$ \sum_{t=1}^{T} L\big(f(x_t),\, y_t\big) \;+\; \lambda \sum_{t=1}^{T} \sum_{x_\tau \in N_t} D\big(f(x_\tau),\, g_t(x_\tau)\big) \qquad (2) $$

to be minimized with respect to both $f$ and the local explainers $\{g_t\}_{t=1}^{T}$. Here, $\lambda > 0$ is a hyper-parameter that we have to set.
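As a sketch, the collapsed objective (2) could be evaluated as below, reusing best_response_explainer from the previous snippet (since, for a fixed $f$, each $g_t$ can be replaced by its best response):

```python
def symmetric_objective(f_vals, xs, targets, w, lam):
    """Evaluate Eq. (2) for fixed predictor outputs: squared-error primal
    loss plus lambda times the summed local discrepancies over all t."""
    primal = float(np.sum((f_vals - targets) ** 2))
    reg = sum(best_response_explainer(f_vals, xs, t, w)[1]
              for t in range(len(xs)))
    return primal + lam * reg
```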

To illustrate the above idea, we generate a synthetic dataset to show a neighborhood in Figure 1(a) with a perfect piece-wise linear predictor $f$ in Figure 1(b). Clearly, $f$ does not agree with linear explainers within the neighborhood, despite its piece-wise linear nature. However, when we establish a game between $f$ and a linear explainer in Figure 1(c), the predictor admits lower functional deviation (and thus stronger linear interpretability). We also show in Figure 1(d) that a different explainer family induces a different outcome of the game.

Asymmetric Game. The symmetric criterion makes it simple to understand the overall co-operative objective, but solving it is computationally inefficient. The different possible sizes of the neighborhoods (e.g., end-point boundary cases) make it hard to parallelize the optimization for $f$ (note that this does not hold for the explainers $g_t$), which is problematic when we require parallel training for neural networks. Also, since $f$ is reused many times across neighborhoods in the discrepancy measures, the value of $f$ at each $x_t$ may be subject to different functional regularization across the neighborhoods, which is undesirable.

In principle, we would like to impose a uniform functional regularization for every $x_t$, where the regularizer is established on a local-region basis. This new modeling framework leads to an asymmetric co-operative game, in which the information sets differ between the predictor $f$ and the local explainers $g_t$. Accordingly, each local best-response explainer $g^*_t$ is minimized for local interpretability (1) within $N_t$, thus relying on the values of $f$ within this region. In contrast, the predictor only receives feedback in terms of the resulting deviation at $x_t$ and thus only sees $g^*_t(x_t)$. From the point of view of the predictor, the best-response strategy is obtained by

$$ \min_{f}\; \sum_{t=1}^{T} L\big(f(x_t),\, y_t\big) \;+\; \lambda \sum_{t=1}^{T} D\big(f(x_t),\, g^*_t(x_t)\big) \qquad (3) $$
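A minimal PyTorch-style sketch of one predictor update under (3), again assuming a scalar series, squared-error loss and deviation, and an affine explainer family; fitting the explainer inside torch.no_grad() realizes the asymmetric information set, so the predictor sees only the resulting value $g^*_t(x_t)$:

```python
import torch

def asymmetric_step(f, optimizer, x_window, x_t, y_t, lam):
    """One best-response update for the predictor in the asymmetric game."""
    with torch.no_grad():                      # explainer side: sees f on all of N_t
        f_nbhd = f(x_window)
        X = torch.stack([x_window, torch.ones_like(x_window)], dim=1)
        coef = torch.linalg.lstsq(X, f_nbhd.unsqueeze(1)).solution.squeeze(1)
        g_t = coef[0] * x_t + coef[1]          # fixed best-response value at x_t
    pred = f(x_t)                              # predictor side: deviation only at x_t
    loss = (pred - y_t) ** 2 + lam * (pred - g_t) ** 2
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return float(loss)
```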

Discussion. We can analyze the scenario where the loss and deviation are measured in squared error, the explainer belongs to the constant family, and the predictor is non-parametric. Both games induce a predictor that is equal to a recursive convolutional average of the targets, where the decay rate in each recursion is the same for both games, but the convolutional kernel evolves twice as fast in the symmetric game as in the asymmetric game.

The formulation involves a key trade-off between the size of the region over which the explanation should be simple and the overall accuracy achieved with the predictor. When the neighborhood $N_t$ is too small, local explainers become perfect, inducing no regularization on $f$. Thus the size of the region is a key parameter in our framework. Another subtlety is that solving (1) requires optimization over the explainer family $\mathcal{G}$, where the specific choices of deviation $D$ and family matter for efficiency. For example, squared-error deviation and an affine family lead to linear regression with closed-form local explainers. Finally, a natural extension to solving (1) is to add regularization.
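Concretely, adding a ridge penalty $\alpha \geq 0$ (one natural choice of regularization; the specific form here is our illustration) keeps the explainer in closed form: stacking the neighborhood inputs into a design matrix $X_t$ and the predictor values into a vector $y_t$ gives

$$ g^*_t: \quad \hat{\beta}_t = (X_t^\top X_t + \alpha I)^{-1} X_t^\top y_t , $$

so each local explainer still costs only one small linear solve.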

We remark that the setup of $\mathcal{F}$ and $\mathcal{G}$ leaves considerable flexibility in tailoring the predictive model and the explanations to the application at hand, which is not limited to temporal modeling. Indeed, the derivation is given for temporal models but extends naturally to other settings as well.

3 Examples

Conditional Generative Model. The basic idea of sequence modeling can be generalized to conditional sequence generation. For example, given historical data of a stock's price, how will the price evolve over time? Such a mechanism allows us to inspect the temporal dynamics of the problem of interest to assist long-term decisions, while conditional generation allows us to control the progression of the generation under different settings of interest.

Formally, given an observation sequence $x_{1:t}$, the goal is to estimate the probability $p(x_{t+1:T} \mid x_{1:t})$ of future events $x_{t+1:T}$. For notational simplicity, we use $x_{i:j}$ to denote the sequence of variables $(x_i, \dots, x_j)$. A popular approach to estimating this conditional probability is to train a conditional model by maximum likelihood (ML) (Van Den Oord et al., 2016). If we model the conditional distribution of $x_{\tau+1}$ given $x_{1:\tau}$ as a multivariate Gaussian distribution with mean $\mu_\tau$ and covariance $\Sigma_\tau$, we can define the asymmetric game on $\mu_\tau$ by minimizing

$$ \sum_{\tau=t}^{T-1} \Big[ -\log \mathcal{N}\big(x_{\tau+1};\, \mu_\tau, \Sigma_\tau\big) \;+\; \lambda\, D\big(\mu_\tau,\, g^*_\tau(x_{\tau-K+1:\tau})\big) \Big] \qquad (4) $$

with respect to $\mu$ and $\Sigma$, both parametrized as recurrent neural networks. For the explainer model $g$, we use the neighborhood data to fit a $K$-order Markov autoregressive (AR) model:

$$ g(x_{\tau-K+1:\tau}) \;=\; \sum_{k=1}^{K} A_k\, x_{\tau-k+1} \qquad (5) $$

where $A_k \in \mathbb{R}^{d \times d}$ and $x_\tau \in \mathbb{R}^d$. The AR model is a generalization of the linear model to temporal modeling and thus admits a similar analytical solution. The choice of Markov horizon $K$ makes this model flexible and should be informed by the application at hand.
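A small sketch of fitting such an explainer on neighborhood data with scikit-learn's Ridge (which the experiments below also use); the lag ordering and array shapes are our illustrative choices:

```python
import numpy as np
from sklearn.linear_model import Ridge

def fit_ar_explainer(window, K):
    """Fit a K-order vector AR model (Eq. 5), with no offset, to the
    consecutive points of a neighborhood. window: (n, d) array, n > K."""
    X = np.stack([window[i:i + K].reshape(-1)      # K lagged points, flattened
                  for i in range(len(window) - K)])
    y = window[K:]                                 # each point from its K predecessors
    model = Ridge(alpha=1.0, fit_intercept=False).fit(X, y)
    return model.coef_                             # shape (d, K*d)
```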

Explicit Interpretability Game. In some cases, we wish to articulate interpretable parts explicitly in the predictor $f$. For example, if we view the predictor as approximately locally linear, we could explicitly parameterize $f$ in a manner that highlights these linear coefficients. To this end, in the temporal domain, we can explicate the locally linear assumption already in the parametrization of $\mu_\tau$:

$$ \mu_\tau \;=\; \sum_{k=1}^{K} A^{(k)}_\tau\, x_{\tau-k+1} + b_\tau \qquad (6) $$

which we can write compactly as $\mu_\tau = A_\tau\, x_{\tau-K+1:\tau} + b_\tau$, where $A_\tau$ and $b_\tau$ are learned as recurrent networks. However, this explicit parameterization is relevant only if we further encourage the coefficients to act their part, i.e., so that the locally linear part of $\mu_\tau$ is really expressed by $A_\tau$ and $b_\tau$.
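One way such a parametrization might look in PyTorch (a minimal sketch; the layer sizes and the single-LSTM backbone are our simplifications of the stacked architecture used in the experiments):

```python
import torch
import torch.nn as nn

class ExplicitARPredictor(nn.Module):
    """Locally linear predictor of Eq. (6): an RNN emits A_t and b_t,
    and the prediction combines them linearly with the last K points."""
    def __init__(self, d, K, hidden=64):
        super().__init__()
        self.d, self.K = d, K
        self.rnn = nn.LSTM(d, hidden, batch_first=True)
        self.to_A = nn.Linear(hidden, K * d * d)   # AR coefficients A_t
        self.to_b = nn.Linear(hidden, d)           # offset b_t

    def forward(self, x):                          # x: (batch, t, d) with t >= K
        h, _ = self.rnn(x)
        h_t = h[:, -1]                             # hidden state at the current step
        A = self.to_A(h_t).view(-1, self.d, self.K * self.d)
        b = self.to_b(h_t)
        ctx = x[:, -self.K:].reshape(x.size(0), -1, 1)  # last K points, stacked
        mu = torch.bmm(A, ctx).squeeze(-1) + b     # mu_t = A_t x_{t-K+1:t} + b_t
        return mu, A, b
```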

To this end, we formulate a refined game that defines the discrepancy measure for the explainer in a coefficient-specific manner, separately for $A_\tau$ and $b_\tau$, so as to locally mimic the AR and constant families, respectively. The objective of the local explainer at $x_t$ with respect to $A$ and $b$ then becomes

$$ \min_{\bar{A} \in \mathcal{G}_{AR}} \sum_{x_\tau \in N_t} D\big(A_\tau,\, \bar{A}\big) \;+\; \min_{\bar{b} \in \mathcal{G}_{c}} \sum_{x_\tau \in N_t} D\big(b_\tau,\, \bar{b}\big) \qquad (7) $$

where $\mathcal{G}_{AR}$ is the family of AR models (which does not include any offset/bias, consistent with $A_\tau$) and $\mathcal{G}_{c}$ is simply the set of constant vectors. The objective for the predictor is defined analogously, symmetrically or asymmetrically. For simplicity, our notation does not include end-point boundary cases with respect to the neighborhoods.
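Under squared-error deviation, both inner minimizations in (7) have mean best responses, since the closest fixed AR coefficient matrix and the closest constant vector over the neighborhood are simply averages; a sketch under that assumption:

```python
import numpy as np

def explicit_best_responses(A_seq, b_seq):
    """Best responses for Eq. (7) with squared-error deviation.
    A_seq: (n, d, K*d) coefficients over N_t; b_seq: (n, d) offsets."""
    A_star, b_star = A_seq.mean(axis=0), b_seq.mean(axis=0)
    deviation = float(np.sum((A_seq - A_star) ** 2)
                      + np.sum((b_seq - b_star) ** 2))
    return A_star, b_star, deviation
```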

4 Experiments

Dataset # of seq. input len. output len.
Stock 15,912 30 7
Bearing 200,736 80 20
Table 1: Dataset statistics
Stock Error Deviation TV
AR 1.557 0.000 0.000
game-implicit 1.478 0.427 0.000
deep-implicit 1.472 0.571 0.000
game-explicit 1.479 0.531 73.745
deep-explicit 1.475 0.754 91.664
Bearing Error Deviation TV
AR 9.832 0.000 0.000
game-implicit 8.309 3.431 5.706
deep-implicit 8.136 4.197 7.341
game-explicit 8.307 4.177 27.533
deep-explicit 8.151 6.134 29.756
Table 2: Performance.
λ 0 (deep) · · · (AR)
Error 8.136 8.057 8.309 9.284 9.832
Deviation 4.197 4.178 3.431 1.127 0.000
TV 7.341 7.197 5.706 1.177 0.000
Table 3: Implicit game on the bearing dataset as $\lambda$ varies, from the unregularized deep model (left) to the pure AR model (right).
Figure 2: Visualization of weight vectors for predicting the first channel along each autoregressive timestamp in the bearing dataset.

We validate our approach on two real-world applications: a bearing dataset from NASA (Lee et al., 2016) and a stock market dataset consisting of historical data from the S&P 500 index (www.kaggle.com/camnugent/sandp500/data). The bearing dataset records 4-channel acceleration data from 4 co-located bearings, and the stock dataset records daily prices (4 channels: open, high, low, and close). Due to the diverse range of stock prices, we transform the data to daily percentage differences. We divide each sequence into disjoint subsequences and train the sequence generative model on them. The input and output lengths are chosen based on the nature of each dataset. The bearing dataset has a high-frequency period of 5 points and a low-frequency period of 20 points. On the stock dataset, we use 1 month to predict the next week's prices. The statistics of the processed datasets are shown in Table 1. We randomly split the data into training, validation, and testing sets.
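The price normalization is a one-liner; a sketch of the transform as we understand it (the exact convention, e.g., scaling by 100, is our assumption):

```python
import numpy as np

def to_daily_pct_change(prices):
    """Convert raw daily prices, shape (T, 4), to daily percentage
    differences, normalizing away the diverse price ranges."""
    return 100.0 * (prices[1:] - prices[:-1]) / prices[:-1]
```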

We set the neighborhood size $|N_t|$ and Markov order $K$ so as to impose sequence-level coherence on the stock dataset, and to capture smooth variation on the bearing dataset. We parametrize $\mu$ and $\Sigma$ jointly by stacking CNN, LSTM, and fully connected layers. We use Ridge regression (to alleviate degenerate cases of linear regression) with default parameters in scikit-learn (Pedregosa et al., 2011) to implement the AR model. For efficiency, a subset of the sequences is sampled for regularization in each batch.

We compare our asymmetric game-theoretic approach ('game') against the same model class without an explainer ('deep'). We use the 'implicit' label to distinguish predictors from those 'explicitly' written in an AR-like form. Evaluation involves three different metrics: 'Error' is the root mean squared error (RMSE) between greedy autoregressive generation and the ground truth; 'Deviation' is the RMSE between the model prediction and the explainer $g^*_\tau$, estimated also for 'deep' models that are not guided by an explainer; and 'TV' is the average total variation of the explainer over any two consecutive time points. For testing, the explainer is estimated on the greedy autoregressive generative trajectory. TV for implicit models is based on the parameters of the explainer $g^*_\tau$. For explicit models, the deviation RMSE is the sum of the AR and constant deviations as in Eq. (7), and is thus not directly comparable to the implicit 'Deviation', which is based only on output differences.
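For instance, the 'TV' metric could be computed as below (a sketch; we assume the explainer parameters are flattened into one row per time step):

```python
import numpy as np

def total_variation(coefs):
    """Average total variation of explainer parameters between any two
    consecutive time steps. coefs: (T, p) array, one row per time step."""
    return float(np.abs(np.diff(coefs, axis=0)).sum(axis=1).mean())
```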

The results are shown in Table 2. TV is not meaningful for the implicit formulation on the stock dataset due to the size of the neighborhood. The proposed game reduces the gap between the deep learning model and the AR model in terms of deviation and TV, while retaining promising prediction accuracy.

We also show results of the implicit setting on the bearing dataset for different values of $\lambda$ in Table 3. The trends of increasing error and decreasing deviation and TV are quite monotonic in $\lambda$. For a small $\lambda$, the 'game' model is even more accurate than the 'deep' model, due to the regularization effect of the game.

We visualize the explanations from the implicit setting over autoregressive generative trajectories in Figure 2. The explanation from the 'game' model is more stable. Compared to the ground truth, the explanations capture the different temporal pattern that emerges after the point where generation begins.

5 Conclusion

We provide a novel game-theoretic approach to interpretable temporal modeling. The game articulates how the predictor accuracy can be traded off against locally agreeing with a simple axiomatically interpretable explainer. The work opens up many avenues for future work, from theoretical analysis of the co-operative games to estimation of interpretable unfolded trajectories through GANs.

References