1 Introduction
The consumer credit markets in the Eurozone, the United States, and China have grown dramatically since 2014. According to the European Central Bank, the US Federal Reserve, and the National Bureau of Statistics of China,^1 the outstanding notional at the end of 2018 was 770, 4,018, and 5,608 billion US dollars, respectively, with China being the most notable in terms of both the size of the market and the speed of its growth (see Figure 1). While property financing such as housing and automobiles remains the main driver, a fast-growing portion comes from consumer spending on credit for necessities and consumables. One reason is that technology enables credit to reach a greater share of the population and to penetrate deeper into consumer spending. A case in point is credit issuance through global e-commerce platforms. Tremendous purchasing and borrowing activity has migrated from offline to online. For researchers, this paradigm shift from offline to online opens the door to observing consumer behavior at an unprecedented granularity, presenting new opportunities to decipher retail credit risk and, at the same time, new challenges to credit risk modeling.

^1 The data are from official releases. US: Federal Reserve (www.federalreserve.gov/Releases/G19/current/). Euro Area: European Central Bank (www.euroareastatistics.org/banksbalancesheetloans). China: National Bureau of Statistics of China (www.stats.gov.cn/tjsj/).
Retail credit risk is the risk of capital loss when consumers fail to make payments on credit cards or personal loans. Traditionally, analysis of consumer credit risk focuses on credit scores computed from low-frequency data, where maintaining a good payment record plays a dominant role. In these analyses, characteristics of a customer's purchasing activities are either not available or not included. Whether it is a teenager buying a ten-thousand-dollar watch or a business owner buying a laptop, their credit scores are likely not very different in the eyes of a credit card company as long as they pay on time. However, in the e-commerce context, consumers' shopping footprints and subsequent purchasing activities are naturally connected with their credit-seeking and payment records. Including these behavioral data in the credit analysis allows online risk managers to assess, with better confidence, both a customer's willingness to repay and her ability to repay.
A typical cycle of online shopping on credit consists of three stages of actions. A customer first browses items that she is interested in. She then places orders on items she decides to buy and starts to consider whether, and for how much, she would like to apply for payment credit from the platform. Once the credit is granted, if she did apply for it, she effectively enters into an unsecured loan with the platform as the lending counterparty, and she is expected to make installments according to the payment schedule. These decisions and events, which we classify into three distinct groups (browsing, ordering, and borrowing), collaboratively shape the credit profile of a customer. As these decisions and events intrinsically indicate a consumer's ability and willingness to repay, modeling credit risk based on them seems promising. However, the behavioral nature of these data and the granularity they present pose several daunting challenges.

First, the three groups of actions take place very irregularly in time. This is due to their wide spectrum of event frequencies, which in turn causes distinct degrees of serial dependence. The event frequencies can range from hundreds of times a day, as with browsing activities, to once every few days, as with subsequent purchases, all the way down to quarterly or semiannual frequencies, as when periodic installments on borrowing are either paid or past due. This makes it difficult to learn the temporal dynamics of consumer behavior if one wants to use all three groups of data together. The irregularity can also cause a serious dominant-view problem in model fitting. For example, browsing happens much more frequently than the other two groups of actions. In this situation, browsing data easily dominate the feature space, especially over events that are very informative yet occur much less often, such as defaults.
Second, the three groups of actions interact with each other in a complex manner. The relationship between browsing, ordering, and borrowing activities can be highly nonlinear. For instance, an increase in browsing activity may result in more purchases if the customer's financial well-being is healthy, but it may also result in fewer purchases if the customer realizes that her accumulated past spending is about to reach her financial limit and thus becomes more cautious. The three groups of actions carry heterogeneous information and complement each other in reflecting the behavioral pattern of a customer. It is a challenge to model these complex interactions in time series effectively.
Third, interpreting the predicted result regarding a consumer's credit risk is critical. In other words, from the perspective of credit risk management, finding the determinants is as important as predicting the outcome. While enlarging datasets to include rich behavioral information surely leads to better model estimation and more accurate forecasts, the complexity of interpreting the results also increases. Nevertheless, carefully exploiting the browsing and ordering actions, as well as the outcomes of related borrowing, should shed light on whether a customer is going to default and, if so, why.
This paper develops a deep neural network (DNN) model that estimates and forecasts consumer credit risk and, at the same time, provides a structural attribution of the perceived risk into a consumer's ability-to-repay factor, her willingness-to-repay factor, and her behavioral factor. We call it the NeuCredit model and test it on a unique dataset of real-world proprietary records collected from one of the largest global e-commerce platforms. The dataset includes 38,182 loans with 499,572 related orders and 356,338 related sessions of clicks. The goal is to estimate the real-time default risk when a customer uses her approved credit to finance a purchase.
In particular, the model features a hierarchical architecture in which the three groups of actions are processed separately to avoid the dominant-view problem. The sequence of borrowing actions, which specifies the timestamps of loan issuance, is regarded as the mainstream, i.e., the first layer, while the browsing and ordering actions are each clustered to their nearest future loan to form two subsequences (the second layer) for each loan. Considering the sequential nature of the data, we propose a variant of the Long Short-Term Memory (LSTM) model, named the Time-value-aware LSTM (Tva-LSTM) model, to learn the temporal dynamics of irregular consumer behavior. By assuming that the effect of an action on future predictions continuously grows or decays at trainable rates, the Tva-LSTM model captures the varying time intervals between consecutive actions in a time series. Furthermore, the subsequences are integrated into the mainstream through a novel multi-view fusion mechanism that explicitly models their mutual effects via feature interactions. The fusion is performed in nearly real time, as it launches at each element of the mainstream. We supervise the training of the NeuCredit model using labeled data, i.e., whether a consumer is delinquent or defaults on her payments.

We conducted extensive experiments to validate the effectiveness of the NeuCredit model, followed by regressions to understand the learning results. Compared with conventional and other state-of-the-art models, the NeuCredit model successfully captures the complex behavioral dynamics and improves the performance of consumer credit risk estimation, achieving remarkable out-of-sample forecasting performance. In particular, the model can capture serial dependence in multidimensional time series data even when the event frequencies differ across dimensions. It also captures nonlinear cross-sectional interactions among different time-evolving features. Moreover, the predicted credit risk is designed to be interpretable in that it can be decomposed into three components: the subjective risk, indicating the consumer's willingness to repay; the objective risk, indicating her ability to repay; and the behavioral risk, indicating behavioral differences. The willingness and ability to repay are modeled in the neural network via a specially designed conditional loss function even though their ground truths are unobservable.
1.1 Our Contribution
The contributions, as well as the messages, of this study are threefold.

Tick-level shopping behavioral data enhance online credit risk forecasts.
The underlying relationship between consumers' shopping behavior and their credit risk has not been formally studied before. In this paper, we profile consumer credit at an unprecedentedly granular level by zooming into tick-level shopping behavior and the subsequent financing records. Deciphering them carefully allows real-time assessment of future payment risk, particularly when online purchases are financed without posting collateral. Our extensive experiments demonstrate that online credit risk forecasts improve significantly when browsing and purchasing data are added to the model training, compared to using only the payment data. To the best of our knowledge, this is the first academic study that focuses on consumer credit risk in e-commerce contexts using a large, comprehensive dataset to model consumer delinquencies and defaults.

The deep learning approach significantly outperforms conventional machine learning.
We propose a novel LSTM-based deep learning framework designed to handle complex consumer behavior, especially the irregularity of sequential actions and the interactions across different groups of actions. Specifically, we propose a hierarchical network structure and a Tva-LSTM unit to handle temporal sequences of irregular consumer actions. In addition, we design a multi-view fusion mechanism to model action interactions so that the framework can uncover the mutual effects of different groups of shopping behavior. The model is effective: empirical results demonstrate its superiority over conventional machine learning models, such as logistic regression and random forests, as well as over competing state-of-the-art deep learning models using the LSTM architecture. To the best of our knowledge, this is the first systematic study of consumer credit risk modeling using an LSTM-based deep learning approach. Moreover, the framework is generic in that it can be used in non-financial applications such as recommendation and anomaly detection. The source code of our algorithm is available upon request.

Our model outputs a structural interpretation of the risk determinants.
Deep learning models are often criticized for their black-box nature and lack of interpretability. Our approach to addressing this issue is a specially designed conditional loss objective that incorporates domain knowledge into the system. Specifically, the ability and the willingness to repay are considered two significant determinants of loan defaults (Lee 1991, Chehrazi and Weber 2015) in credit risk management. Understanding their contributions to the predicted credit risk is, therefore, informative: it helps a risk manager identify the sources of credit risk and make informed decisions on debt collection and credit extension. However, as ability and willingness cannot be observed directly in consumer actions, their ground truths are not available for modeling. Here, we infer their values from the repayment outcomes of loans and design a conditional loss function that takes these inferred values as guidance. In this way, the system can generate interpretable outputs. To the best of our knowledge, this is the first deep learning approach that provides interpretable predictions of consumer credit risk.
The rest of the paper is organized as follows. We first review related work in Section 2. We then describe the dataset we use in Section 3. Section 4 introduces our model. Experiments are presented and analyzed in Section 5. We conclude the paper in Section 6, together with a discussion of possible future directions.
2 Literature Review
Our paper is related to the machine learning approach to modeling and understanding consumer credit risk. Academic studies concerning retail credit are few compared to the vast majority of the credit risk literature, which is corporate, sovereign, or mortgage oriented. One reason is that there is little outright trading of individual personal loans, hence no public assessment of retail credit risk. Unlike corporate bonds, secondary trading of securities related to consumer credit occurs only in securitized form.^2 Another reason is the lack of account-level data, unless one has access to proprietary data owned by commercial banks and credit card companies. In terms of risk metrics and models, the historical focus has been on credit scoring and linear regression when it comes to consumer credit risk. However, as e-commerce plays an ever-larger role in retail credit issuance and much richer data become available, sophisticated credit models are needed for the management of retail credit risk.

^2 In the US market, asset-backed securities backed by credit-card proceeds are liquid.
Earlier work using a machine learning approach to analyze consumer credit risk starts with Khandani, Kim, and Lo (2010), where classification and regression trees are used to construct forecasting models. Using a unique dataset consisting of transaction-level, credit bureau, and account-balance data for individual consumers, they were able to forecast credit events related to consumer credit default and delinquency 3 to 12 months in advance with great accuracy. The results in Khandani, Kim, and Lo (2010) show that the machine learning approach is well suited to building forecasting models when the sources of information are vast, the nature of the data is heterogeneous, and the connections between them are unclear.
Sirignano, Sadhwani, and Giesecke (2016) advance the machine learning approach to credit risk modeling from classical machine learning methods to deep neural networks. Compared to classical machine learning models, the recurrent neural networks (RNNs) used in Sirignano, Sadhwani, and Giesecke (2016) are extremely capable of extracting nonlinear relationships between explanatory variables and response variables. These nonlinear relationships are shown to be very important in out-of-sample forecasts when benchmarked against linear models such as logistic regression. Using a dataset of over 120 million mortgages and over 3.5 billion loan-month observations across the US between 1995 and 2014, the authors demonstrate the power of RNNs in estimating the transition probabilities of credit states and in understanding mortgage credit and prepayment risk at an unprecedented level.
Our paper further adds to the literature on using a machine learning approach to study consumer credit. Methodology-wise, the first comparative merit of our model is its interpretability. The neural network architecture we design can output interpretable factors that explain what drives consumer defaults and delinquencies, such as the "willingness to repay" factor and the "ability to repay" factor suggested earlier in the literature (Lee 1991).
The second merit of our model is its ability to allow irregular time intervals in the data when learning complex serial dependence in high-dimensional time series. Our findings coincide with the study of Chehrazi and Weber (2015), where a self- and cross-excited Hawkes process captures dependencies between the arrival times of repayment events. The authors show that it is essential to capture the dependence structure when account-level data are used for either valuation or forecasting. Since our data show a wide spectrum of event frequencies, ranging from hundreds of times a day for browsing activities all the way to monthly or quarterly frequencies for payment installments, we need more flexibility than previous machine learning approaches provide to model potentially distinct degrees of serial dependence and complex nonlinear cross-sectional interactions. Thus, the deep neural network we construct uses a hierarchical architecture rather than an outright RNN or classic machine learning methods. On top of that, the LSTM specification we use addresses the issue that the traditional RNN is not good at learning long-term memories in the data, while keeping the RNN's nonlinear mapping ability between inputs and outputs.
3 Data Description
The dataset is from one of the largest global e-commerce platforms, on which the whole course of customers' online shopping on credit is recorded, i.e., browsing items, placing orders, seeking credit, and repaying loans. The browsing, ordering, and borrowing activities are recorded in the form of sessions of clicks, orders, and loans, respectively.
A session of clicks is defined as beginning with a click that occurs after 15 minutes or more have elapsed since the last click and continuing until 15 minutes or more elapse between clicks. The consumers in our dataset are required to have at least three borrowing instances on the platform during the period from Nov. 1, 2016 to Nov. 1, 2018, i.e., at least three historical loans. To limit the length of the loan sequence, only the most recent 15 loans of each consumer are recorded. In this way, each consumer in the dataset possesses a temporal loan sequence with a minimum length of 3 and a maximum length of 15.
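To make the session definition concrete, the 15-minute gap rule can be sketched as follows (a minimal illustration; the function and variable names are ours, not the platform's):

```python
from datetime import datetime, timedelta

def sessionize(click_times, gap_minutes=15):
    """Group a chronologically sorted list of click timestamps into sessions.

    A new session starts whenever 15 minutes or more have elapsed
    since the previous click, matching the definition in the text.
    """
    if not click_times:
        return []
    gap = timedelta(minutes=gap_minutes)
    sessions = [[click_times[0]]]
    for prev, cur in zip(click_times, click_times[1:]):
        if cur - prev >= gap:
            sessions.append([cur])    # gap of 15 min or more: new session
        else:
            sessions[-1].append(cur)  # otherwise: same session
    return sessions

# Example: four clicks with a 25-minute gap in the middle -> two sessions.
t0 = datetime(2017, 1, 1, 12, 0)
clicks = [t0, t0 + timedelta(minutes=5),
          t0 + timedelta(minutes=30), t0 + timedelta(minutes=31)]
```

Calling `sessionize(clicks)` on the example above splits the four clicks into two sessions of two clicks each, since the third click arrives 25 minutes after the second.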
For each loan in a loan sequence, only the orders within the 6 months before the issuance of that loan and the sessions within the 14 days before the issuance of that loan are recorded. This is because the contribution of ordering and browsing actions to predicting default risk is considered time-sensitive. For example, it is unlikely that a customer would spend more than two weeks making a single decision on whether to buy something. Therefore, browsing behavior from more than 14 days before the current loan might not be helpful.
Also, only the most recent 15 orders and 15 sessions before the issuance of each loan are recorded, to limit the lengths of the order and session sequences. A loan sequence containing a loan with fewer than 3 orders or 3 sessions before its issuance is dropped. In this way, each loan in a loan sequence possesses a temporal order subsequence and a temporal session subsequence, each with a minimum length of 3 and a maximum length of 15. From the consumers who meet the above requirements, 2,500 with no default records in their loan sequences and 2,500 with at least one default record in their loan sequences are randomly selected. A default record is generated when a consumer has been delinquent for more than 90 days on a loan. In total, 5,000 consumers are selected. Finally, the dataset contains 38,182 loans, of which 11,184 are in default, together with 499,572 orders and 356,338 sessions of clicks. On average, each consumer has 7.64 loans, and each loan is related to 13.08 orders and 9.33 sessions, i.e., the average lengths of the loan sequences, order subsequences, and session subsequences are 7.64, 13.08, and 9.33, respectively.
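The truncation and filtering rules above can be summarized in a short sketch (the data layout, with dicts carrying 'orders' and 'sessions' keys, is a hypothetical illustration, not the actual pipeline):

```python
def build_loan_sequences(consumers, max_len=15, min_len=3):
    """Apply the truncation and filtering rules described in the text.

    `consumers` maps a consumer id to a chronologically sorted list of
    loans; each loan is a dict holding its 'orders' and 'sessions'
    lists (already restricted to the observation windows).
    Note: truncation mutates the loan dicts in place.
    """
    kept = {}
    for cid, loans in consumers.items():
        loans = loans[-max_len:]          # keep only the 15 most recent loans
        if len(loans) < min_len:
            continue                      # need at least 3 historical loans
        ok = True
        for loan in loans:
            loan['orders'] = loan['orders'][-max_len:]      # last 15 orders
            loan['sessions'] = loan['sessions'][-max_len:]  # last 15 sessions
            if len(loan['orders']) < min_len or len(loan['sessions']) < min_len:
                ok = False                # drop the whole loan sequence
                break
        if ok:
            kept[cid] = loans
    return kept

# Toy input: consumer 'a' passes all rules, consumer 'b' has a loan
# with only 2 orders and is therefore dropped entirely.
consumers = {
    'a': [{'orders': [0, 1, 2, 3], 'sessions': [0, 1, 2, 3, 4]}
          for _ in range(20)],
    'b': [{'orders': [0, 1], 'sessions': [0, 1, 2]} for _ in range(5)],
}
```

Under these rules, consumer 'a' keeps only the 15 most recent of her 20 loans, while consumer 'b' disappears from the sample.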
Variable  Mean  SD  5th  25th  Median  75th  95th

All Loans
  l.amt  322.09  756.71  18.69  51.49  107.61  227.01  1439.61
  term  1.87  1.97  1.00  1.00  1.00  1.00  6.00
  int.rate  2.18  4.41  0.00  0.00  0.00  0.00  12.00
  l.itv  16.48  30.72  0.00  0.00  4.00  17.00  80.00

Default Loans
  l.amt  312.52  722.52  19.97  50.01  105.55  230.83  1299.01
  term  2.52  2.67  1.00  1.00  1.00  3.00  6.00
  int.rate  3.64  5.26  0.00  0.00  0.00  9.60  12.00
  l.itv  11.40  24.22  0.00  0.00  2.00  10.00  57.00

Non-Default Loans
  l.amt  326.06  770.41  16.97  52.41  107.89  224.49  1497.81
  term  1.60  1.50  1.00  1.00  1.00  1.00  6.00
  int.rate  1.58  3.86  0.00  0.00  0.00  0.00  12.00
  l.itv  18.58  32.82  0.00  0.00  5.00  21.00  87.00
Variable  Mean  SD  5th  25th  Median  75th  95th

All Orders
  oamt  664.35  10746.74  28.53  57.06  171.18  399.42  3024.21
  damt  77.55  288.74  0.00  0.00  0.00  57.06  313.83
  qtty  2.73  14.52  1.00  1.00  1.00  2.00  7.00
  catep  1.82  1.72  1.00  1.00  1.00  2.00  5.00
  oitv  7.78  14.78  0.00  0.00  2.00  9.00  34.00

Orders w.r.t. Default Loans
  oamt  579.33  7847.55  28.53  57.06  142.65  370.89  2995.68
  damt  59.96  216.95  0.00  0.00  0.00  57.06  256.77
  qtty  2.49  15.53  1.00  1.00  1.00  2.00  6.00
  catep  1.69  1.61  1.00  1.00  1.00  2.00  5.00
  oitv  6.34  13.88  0.00  0.00  1.00  6.00  30.00

Orders w.r.t. Non-Default Loans
  oamt  700.68  11769.62  28.53  85.59  171.18  399.42  3195.39
  damt  85.06  314.16  0.00  0.00  0.00  85.59  342.36
  qtty  2.83  14.07  1.00  1.00  1.00  3.00  8.00
  catep  1.87  1.76  1.00  1.00  1.00  2.00  5.00
  oitv  8.40  15.11  0.00  0.00  2.00  10.00  36.00
Variable  Mean  SD  5th  25th  Median  75th  95th

All Sessions
  nclick  10.66  17.51  1.00  2.00  5.00  12.00  40.00
  catev  1.94  1.55  1.00  1.00  1.00  2.00  5.00
  duration  120.09  454.27  0.00  0.94  18.42  105.08  572.39
  sitv  401.58  438.80  0.00  35.78  206.23  697.48  1312.65

Sessions w.r.t. Default Loans
  nclick  11.61  20.11  1.00  2.00  5.00  13.00  44.00
  catev  2.00  1.67  1.00  1.00  1.00  2.00  5.00
  duration  122.47  436.29  0.00  0.94  19.84  109.09  582.31
  sitv  396.00  436.61  0.00  35.92  199.27  682.12  1309.10

Sessions w.r.t. Non-Default Loans
  nclick  10.28  16.33  1.00  2.00  5.00  12.00  38.00
  catev  1.91  1.51  1.00  1.00  1.00  2.00  5.00
  duration  119.13  461.33  0.00  0.71  17.71  103.19  568.00
  sitv  403.82  439.66  0.00  35.67  208.83  703.43  1313.95
The three tables above present descriptive statistics for selected features of loans, orders, and click sessions. There are 38,182 loans for 5,000 consumers in our dataset, of which 11,184 default. Each consumer has 7.64 loans on average. The major features of a loan include the loan amount (l.amt), the loan term (term), and the interest rate (int.rate); the time interval between the current loan and the last loan (l.itv) is also of interest. As the table shows, default loans tend to have smaller loan amounts, longer loan terms, higher interest rates, and shorter borrowing intervals.
There are 499,572 orders for the 38,182 loans in our dataset, of which 149,564 are in the subsequences of default loans. Each loan has an order subsequence with 13.08 orders on average. The major features of an order include the order amount (oamt), the discount amount (damt), the number of items purchased (qtty), and the number of categories purchased (catep); the time interval between the current order and the last order (oitv) is also of interest. As the table shows, default loans are usually related to orders with lower order amounts, lower discount amounts, fewer items and categories of products within an order, and shorter ordering intervals, suggesting the possibility of irrational consumption.
There are 356,338 sessions for the 38,182 loans in our dataset, of which 102,425 are in the subsequences of default loans. Each loan has a session subsequence with 9.33 sessions on average. The major features of a click session include the number of clicks within a session (nclick), the number of categories visited (catev), and the duration of the session (duration); the time interval between the current session and the last session (sitv) is also of interest. As the table shows, default loans are usually related to sessions with more clicks and longer durations, suggesting higher user stickiness.
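For reference, the summary columns reported in the tables (mean, SD, and the five percentiles) can be computed as follows; this is a generic sketch, not the code used to produce the tables:

```python
import numpy as np

def describe(values):
    """Compute the summary columns used in the tables above:
    mean, sample SD, and the 5th/25th/Median/75th/95th percentiles."""
    v = np.asarray(values, dtype=float)
    stats = {'Mean': v.mean(), 'SD': v.std(ddof=1)}
    for p in (5, 25, 50, 75, 95):
        key = 'Median' if p == 50 else f'{p}th'
        stats[key] = np.percentile(v, p)
    return stats

# Example on a toy sample: the integers 1..100.
s = describe(range(1, 101))
```

For the toy sample, the mean and median are both 50.5, and the percentile columns are non-decreasing from left to right, as in the tables.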
4 Methodology
In this section, we introduce the NeuCredit model, which takes the temporal sequences of browsing, ordering, and borrowing as input and outputs the consumer credit risk at the issuance of each loan. The components of the model are illustrated one after another in the following subsections. We use bold lowercase letters to denote vectors and bold uppercase letters to denote matrices. A summary of variable notations, including the shapes of vectors and matrices, is provided in Appendix C.

4.1 Input Definition
For a consumer on an e-commerce platform, her borrowing actions form a loan sequence $\{(t_i, \mathbf{l}_i)\}_{i=1}^{N}$, where $t_i$ is the timestamp of loan issuance and $\mathbf{l}_i$ is the vector containing the variables related to loan $i$; $d_l$ denotes the number of dimensions of $\mathbf{l}_i$. The loan variables comprise two parts: loan features such as the amount, interest rate, and loan term, and a temporal feature specifying the time interval between this loan and the last loan.

For each loan, the ordering actions before it and within a preset observation period are assigned to the loan to form a corresponding order subsequence. There are in total $N$ order subsequences, where $\{\mathbf{o}_{i,1}, \ldots, \mathbf{o}_{i,m_i}\}$ is the order subsequence for loan $i$. Here $\mathbf{o}_{i,j}$ is the vector containing order information such as the order amount, the product quantity, and the time interval between this order and the last order; $d_o$ denotes the number of dimensions of $\mathbf{o}_{i,j}$.

Browsing actions are first grouped into sessions, where a session is defined as beginning with a click that occurs after 15 minutes or more have elapsed since the last click and continuing until 15 minutes or more elapse between clicks. Then, the sessions are assigned to loans in the same manner as orders. This gives $N$ subsequences of browsing sessions, where $\{\mathbf{s}_{i,1}, \ldots, \mathbf{s}_{i,k_i}\}$ is the browsing session subsequence for loan $i$. Here $\mathbf{s}_{i,j}$ is the vector containing the browsing information within session $j$ of loan $i$, such as the duration of the session, the time-on-page, the total number of clicks, and the time interval between this session and the last one; $d_s$ denotes the number of dimensions of $\mathbf{s}_{i,j}$.
An exemplary data structure is illustrated in Figure 2.
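As a rough illustration of this hierarchical input layout, one might represent it with containers like the following (the class and field names are our own; the actual feature vectors follow the definitions above):

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Loan:
    """One element of the mainstream: a loan with its two subsequences."""
    timestamp: float                 # issuance time t_i
    features: List[float]            # amount, interest rate, term, time interval, ...
    orders: List[List[float]] = field(default_factory=list)    # order vectors o_{i,j}
    sessions: List[List[float]] = field(default_factory=list)  # session vectors s_{i,j}

@dataclass
class Consumer:
    """A consumer's chronological loan sequence (the mainstream)."""
    loans: List[Loan] = field(default_factory=list)

# A tiny example: one consumer with a single loan carrying one order
# and one click session (all feature values are placeholders).
c = Consumer(loans=[Loan(timestamp=1.0,
                         features=[100.0, 0.12, 3.0, 5.0],
                         orders=[[50.0, 1.0, 2.0]],
                         sessions=[[4.0, 2.0, 60.0]])])
```

Each `Loan` is one timestep of the first layer, while its `orders` and `sessions` lists are the second-layer subsequences consumed by the two subordinate encoders.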
4.2 Sequence Encoding
The most fundamental component of NeuCredit is the recurrent unit employed to learn behavioral dynamics. The Long Short-Term Memory (LSTM) neural network (Hochreiter and Schmidhuber 1997, Gers et al. 1999) is widely regarded as the most popular and effective recurrent unit across many sequence modeling tasks (Ren et al. 2015, Wang et al. 2016, Yang et al. 2017). However, conventional sequential models, including LSTM, implicitly assume that elements in a sequence are discrete and uniformly distributed along the timeline, i.e., that the time intervals between consecutive elements are equal. This is not the case in most real-life tasks, where events happen stochastically in continuous time. Time intervals between consumer actions can reveal valuable information in many scenarios, including credit risk modeling. For instance, a recent cash purchase of an expensive good indicates a good economic condition, while a purchase made months ago may not play an active role in predicting the default risk of the current loan issued to finance an order.
In our situation, events in a loan sequence, as well as in its related order and session subsequences, take place irregularly in time, so it is imperative to account for these irregularities in the modeling. In the literature, the most straightforward approach is to regard the time interval between two successive elements of a sequence as an extra feature, so that the standard LSTM is applicable as before. As Equation (1) shows, this approach implicitly models the nonlinear effects of the time interval on the other features through the activation functions in the LSTM.
In Equation (1), $\odot$ is the Hadamard product operator that implements element-wise multiplication, $\sigma$ and $\tanh$ are activation functions that introduce nonlinearity into the fitting, $\mathbf{x}_t$ represents the current input vector, $\Delta t$ is the time interval between the current timestamp and the previous timestamp, $\mathbf{h}_{t-1}$ and $\mathbf{h}_t$ are the previous and the current hidden states, $\mathbf{c}_{t-1}$ and $\mathbf{c}_t$ are the previous and the current cell memories, $\{\mathbf{W}_i, \mathbf{U}_i, \mathbf{b}_i\}$, $\{\mathbf{W}_f, \mathbf{U}_f, \mathbf{b}_f\}$, $\{\mathbf{W}_o, \mathbf{U}_o, \mathbf{b}_o\}$, and $\{\mathbf{W}_g, \mathbf{U}_g, \mathbf{b}_g\}$ are the trainable network parameters of the input, forget, output gates and the candidate memory, respectively, and $\mathbf{i}_t$, $\mathbf{f}_t$, $\mathbf{o}_t$, and $\mathbf{g}_t$ are the input, forget, output gates and the candidate memory, respectively.

$$
\begin{aligned}
\tilde{\mathbf{x}}_t &= [\mathbf{x}_t;\, \Delta t], \\
\mathbf{i}_t &= \sigma(\mathbf{W}_i \tilde{\mathbf{x}}_t + \mathbf{U}_i \mathbf{h}_{t-1} + \mathbf{b}_i), \\
\mathbf{f}_t &= \sigma(\mathbf{W}_f \tilde{\mathbf{x}}_t + \mathbf{U}_f \mathbf{h}_{t-1} + \mathbf{b}_f), \\
\mathbf{o}_t &= \sigma(\mathbf{W}_o \tilde{\mathbf{x}}_t + \mathbf{U}_o \mathbf{h}_{t-1} + \mathbf{b}_o), \\
\mathbf{g}_t &= \tanh(\mathbf{W}_g \tilde{\mathbf{x}}_t + \mathbf{U}_g \mathbf{h}_{t-1} + \mathbf{b}_g), \\
\mathbf{c}_t &= \mathbf{f}_t \odot \mathbf{c}_{t-1} + \mathbf{i}_t \odot \mathbf{g}_t, \\
\mathbf{h}_t &= \mathbf{o}_t \odot \tanh(\mathbf{c}_t)
\end{aligned}
\tag{1}
$$

The shapes of these vectors and matrices are given in Appendix C. For the theory and details of the Long Short-Term Memory neural network, please refer to Hochreiter and Schmidhuber (1997) and Gers et al. (1999).
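For concreteness, a single step of this baseline, a standard LSTM with the time interval appended to the input vector, can be sketched in NumPy as follows (the parameter names are ours; this is an illustration of the approach, not the paper's implementation):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step_with_dt(x, dt, h_prev, c_prev, params):
    """One step of a standard LSTM where the time interval dt is simply
    appended to the input vector (the baseline approach in the text).

    params holds, for each gate g in {'i', 'f', 'o', 'c'}: W[g] of shape
    (hidden, input+1), U[g] of shape (hidden, hidden), and b[g] of shape
    (hidden,).
    """
    x_aug = np.concatenate([x, [dt]])  # time interval treated as a feature
    gates = {}
    for g in ('i', 'f', 'o'):          # input, forget, output gates
        gates[g] = sigmoid(params['W'][g] @ x_aug
                           + params['U'][g] @ h_prev + params['b'][g])
    c_tilde = np.tanh(params['W']['c'] @ x_aug
                      + params['U']['c'] @ h_prev + params['b']['c'])
    c = gates['f'] * c_prev + gates['i'] * c_tilde   # Hadamard products
    h = gates['o'] * np.tanh(c)
    return h, c

# Toy example with random parameters, just to exercise the shapes.
rng = np.random.default_rng(0)
d_in, d_h = 3, 4
params = {'W': {g: rng.standard_normal((d_h, d_in + 1)) for g in 'ifoc'},
          'U': {g: rng.standard_normal((d_h, d_h)) for g in 'ifoc'},
          'b': {g: np.zeros(d_h) for g in 'ifoc'}}
h, c = lstm_step_with_dt(rng.standard_normal(d_in), 2.5,
                         np.zeros(d_h), np.zeros(d_h), params)
```

The nonlinear effect of `dt` on the other features arises only implicitly, through the sigmoid and tanh activations acting on the augmented input.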
Alternatively, Baytas et al. (2017) are the first to explicitly model the effect of time intervals, proposing the Time-aware LSTM (T-LSTM). Instead of regarding $\Delta t$ as a common feature, the authors use it to process the cell memory of the standard LSTM. Specifically, the cell memory is first decomposed into short-term and long-term memories. Then, the short-term memory is discounted by a factor $g(\Delta t)$, where $g$ is some preset monotonically non-increasing function. The long-term and the discounted short-term memories are next fused into a new cell memory that serves the role of the original cell memory in the standard LSTM. The mathematical forms of the above operations are as follows,

$$
\begin{aligned}
\mathbf{c}^{S}_{t-1} &= \tanh(\mathbf{W}_d \mathbf{c}_{t-1} + \mathbf{b}_d), \\
\hat{\mathbf{c}}^{S}_{t-1} &= \mathbf{c}^{S}_{t-1} \cdot g(\Delta t), \\
\mathbf{c}^{T}_{t-1} &= \mathbf{c}_{t-1} - \mathbf{c}^{S}_{t-1}, \\
\mathbf{c}^{*}_{t-1} &= \mathbf{c}^{T}_{t-1} + \hat{\mathbf{c}}^{S}_{t-1}
\end{aligned}
\tag{2}
$$

In Equation (2), $\mathbf{c}_{t-1}$ is the cell memory in the standard LSTM, $\mathbf{c}^{S}_{t-1}$ and $\mathbf{c}^{T}_{t-1}$ are the short-term and long-term memories, respectively, $\hat{\mathbf{c}}^{S}_{t-1}$ is the discounted short-term memory, $\{\mathbf{W}_d, \mathbf{b}_d\}$ are trainable network parameters for the decomposition, and $\mathbf{c}^{*}_{t-1}$ is the new cell memory that takes the place of the original $\mathbf{c}_{t-1}$ in Equation (1). According to Baytas et al. (2017), T-LSTM performs much better than the standard LSTM on both synthetic and real-world sequential data.
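A minimal NumPy sketch of this T-LSTM memory adjustment, using one non-increasing choice of the discount function g from Baytas et al. (2017), is:

```python
import numpy as np

def tlstm_adjust_memory(c_prev, dt, W_d, b_d,
                        g=lambda dt: 1.0 / np.log(np.e + dt)):
    """T-LSTM's time-gap adjustment of the cell memory.

    The short-term part is extracted by a learned projection, discounted
    by a preset non-increasing function g (here g(dt) = 1/log(e + dt),
    one choice proposed by Baytas et al. 2017), and recombined with the
    long-term remainder.
    """
    c_short = np.tanh(W_d @ c_prev + b_d)   # short-term memory
    c_short_hat = c_short * g(dt)           # discounted short-term memory
    c_long = c_prev - c_short               # long-term memory
    return c_long + c_short_hat             # adjusted cell memory

# Toy parameters: with dt = 0 the discount is g(0) = 1, so the memory
# passes through unchanged.
rng = np.random.default_rng(1)
d = 4
W_d, b_d = rng.standard_normal((d, d)), np.zeros(d)
c = rng.standard_normal(d)
```

Note that a large gap only shrinks the short-term component toward zero; the long-term remainder always survives untouched, which is exactly the property the next paragraph criticizes.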
However, this method is problematic to some extent. First, it uses a preset function $g$ that only allows monotonically non-increasing discounting of the cell memory and thus prohibits any enhancement of the cell memory over time. This setting is too restrictive in practice, as some events remain effective over a very long horizon and their importance can even grow naturally over time. For instance, the amount of money deposited in a bank can increase persistently at the interest rate. Second, the discounting step in Equation (2) implicitly assumes that the values at different positions of the short-term memory vector share the same discounting rate $g(\Delta t)$, which limits the expressiveness of T-LSTM. Third, the discounting takes place in a low-dimensional space, which makes it hard to discount information in high dimensions; this constraint is caused by the network parameter $\mathbf{W}_d$, which maintains the number of dimensions during mapping. Lastly, discounting with a preset function lacks theoretical insight into how $\Delta t$ comes into effect in the modeling.
Therefore, we propose the Time-value-aware LSTM (Tva-LSTM), which resolves these problems of T-LSTM. Tva-LSTM is flexible in that it allows both decay and growth of the cell memory over time. The decay and growth rates are trainable, so the discounting process is data-driven. The discounting takes place in a high-dimensional space, and each dimension has its own discounting rate.
Besides, the discounting mechanism is theoretically derived from a reasonable assumption, so it sheds light on the functionality of $\Delta t$. Particularly, the cell memory vector $\mathbf{c}_{t-1}$ is first mapped to a high-dimensional space, giving $\mathbf{C}_{t-1}$. At the same time, a discounting matrix $\mathbf{D}_{t-1}$ that has the same shape as $\mathbf{C}_{t-1}$ is formed from a trainable initialization and a trainable rate. Then, $\mathbf{D}_{t-1}$ multiplies $\mathbf{C}_{t-1}$ element-wise, allowing different discounting rates for different dimensions. Lastly, the product is mapped back to the low-dimensional space to serve as the new cell memory $\mathbf{c}^{*}_{t-1}$. Nonlinearity is introduced via the activation functions. The mathematical forms of the above operations are as follows,

$$
\begin{aligned}
\mathbf{C}_{t-1} &= \tanh(\mathbf{W}_1 \mathbf{c}_{t-1} + \mathbf{b}_1), \\
\mathbf{R}_{t-1} &= \mathbf{W}_2 \mathbf{c}_{t-1} + \mathbf{b}_2, \\
\mathbf{D}_{t-1} &= \exp\!\left(\mathbf{r} \odot \mathbf{R}_{t-1}\, \Delta t\right), \\
\hat{\mathbf{C}}_{t-1} &= \mathbf{C}_{t-1} \odot \mathbf{D}_{t-1}, \\
\mathbf{c}^{*}_{t-1} &= \tanh(\mathbf{W}_3 \hat{\mathbf{C}}_{t-1} + \mathbf{b}_3)
\end{aligned}
\tag{3}
$$

In Equation (3), $\mathbf{c}_{t-1}$ is the cell memory in the standard LSTM, $\mathbf{C}_{t-1}$ is the mapped cell memory in the high-dimensional space, $\mathbf{D}_{t-1}$ is the corresponding discounting matrix, $\hat{\mathbf{C}}_{t-1}$ is the discounted mapped cell memory, and $\mathbf{c}^{*}_{t-1}$ is the new cell memory that takes the place of the original $\mathbf{c}_{t-1}$ in Equation (1). $\{\mathbf{W}_1, \mathbf{b}_1\}$ are the trainable parameters responsible for mapping the cell memory to the high-dimensional space. $\{\mathbf{W}_2, \mathbf{b}_2\}$ are the trainable parameters for initializing the discounting matrix. $\mathbf{r}$ is the trainable parameter for discounting the mapped cell memory. $\{\mathbf{W}_3, \mathbf{b}_3\}$ are the trainable parameters for mapping the discounted mapped cell memory back to the low-dimensional space.
Note that the discounting factor takes the form of an exponential. In fact, this specific form can be derived by assuming that the elements of the mapped cell memory change continuously at different rates over time. Since the derivation is straightforward, we defer it to Appendix A (Derivation of the Discounting Factor) for clarity.
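For concreteness, the assumption behind the exponential form can be sketched as a first-order growth/decay law (the rate symbol r here is illustrative):

```latex
% Assume each element of the mapped cell memory changes continuously
% at its own rate r between two consecutive events:
\frac{\mathrm{d}C(t)}{\mathrm{d}t} = r \, C(t)
\;\;\Longrightarrow\;\;
C(t_k) = C(t_{k-1}) \, e^{\, r \, \Delta t},
\qquad \Delta t = t_k - t_{k-1}.
% A negative r yields decay and a positive r yields growth,
% which is why the discounting factor is exponential in \Delta t.
```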
Figure 3 illustrates the proposed TvaLSTM recurrent unit. TvaLSTM takes the hidden state and the cell memory from the previous moment as inputs. Before they are passed to the gates, the cell memory first enters a discounting unit that regularizes the time gap between the previous moment and the current one. In the discounting unit, the cell memory is first mapped into a high-dimensional space, then discounted element-wise by a matrix of discounting factors, and finally mapped back to the original low-dimensional space. As denoted in Equation (3), the complete time-gap regularization is data-driven: both the mapping parameters and the decay/growth-rate parameters are learned jointly with the rest of the network parameters by back-propagation. This makes TvaLSTM highly expressive, as it not only allows the cell memory to decay or grow over time but also assigns a different rate of change to each dimension of the high-dimensional space. After discounting, the hidden state and the regularized cell memory are passed to the usual LSTM gates.
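The discounting unit can be sketched in NumPy. This is a simplified, vector-valued variant (the paper maps into a matrix-valued space), and all parameter names are illustrative:

```python
import numpy as np

def discount_cell_memory(c_prev, dt, Wm, bm, rates, Wo, bo):
    """One pass through the TvaLSTM discounting unit (vector-valued sketch).

    c_prev : (d,) cell memory from the previous event
    dt     : time gap to the current event
    Wm, bm : map the memory into a high-dimensional space
    rates  : per-dimension change rates (negative -> decay, positive -> growth)
    Wo, bo : map the discounted memory back to d dimensions
    """
    C = np.tanh(Wm @ c_prev + bm)      # map up to the high-dimensional space
    C_hat = C * np.exp(rates * dt)     # element-wise exponential discounting
    return np.tanh(Wo @ C_hat + bo)    # map back down to the original space

rng = np.random.default_rng(0)
d, hi = 4, 16
Wm, bm = rng.normal(size=(hi, d)), np.zeros(hi)
rates = rng.normal(scale=0.1, size=hi)
Wo, bo = rng.normal(size=(d, hi)), np.zeros(d)
c_new = discount_cell_memory(rng.normal(size=d), 2.5, Wm, bm, rates, Wo, bo)
```

Because the rates are ordinary trainable parameters, gradient descent can push each dimension toward decay or growth independently.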
4.3 Multi-view Fusion
Another critical component of the NeuCredit model is the fusion strategy used to combine the main loan sequence and its related subsequences. The objective of fusion is to integrate the heterogeneous information maintained in different views of actions and, more importantly, to model the mutual effects of behavioral interactions. In this study, the order and session subsequences are encoded separately via two TvaLSTMs. The fusion is carried out at the issuance of each loan in the loan sequence.
Taking the fusion at loan t as an example, the inputs of the fusion are the loan vector l_t, the final hidden state h^o_t of the TvaLSTM for the t-th order subsequence, and the final hidden state h^s_t of the TvaLSTM for the t-th session subsequence. One straightforward idea is to first concatenate the three vectors and then pass the result through a fully connected neural network layer with a non-linear activation function \sigma, i.e.,
(4)   v_t = \sigma\left(W \, [\, l_t;\, h^o_t;\, h^s_t \,] + b\right)
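The concatenate-then-dense idea can be sketched as follows (tanh is used as the non-linear activation; parameter names and sizes are illustrative):

```python
import numpy as np

def fc_fuse(loan_vec, h_order, h_session, W, b):
    """Fully connected fusion: concatenate the loan vector with the final
    hidden states of the two sub-sequence encoders, then apply one dense
    layer with a non-linear activation."""
    x = np.concatenate([loan_vec, h_order, h_session])
    return np.tanh(W @ x + b)

rng = np.random.default_rng(1)
p, q, k = 15, 5, 5                      # sizes taken from the experiments
v = fc_fuse(rng.normal(size=p), rng.normal(size=q), rng.normal(size=q),
            rng.normal(size=(k, p + 2 * q)), np.zeros(k))
```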
Another approach is to capture the interactions of different groups of actions by exploiting the concept of Multi-view Machines (Cao et al. 2016). Here, we employ a Multi-view Machines layer (Cao et al. 2017) for fusion. The layer explicitly models feature interactions, so it acquires non-linearity more efficiently in training. Moreover, it captures full-order interactions, from order 0 up to the number of input views. For the theory and details of Multi-view Machines, please refer to Cao et al. (2016, 2017). The formula of this layer is
(5)   v_t = \left(U_l^\top [\, l_t; 1 \,]\right) \odot \left(U_o^\top [\, h^o_t; 1 \,]\right) \odot \left(U_s^\top [\, h^s_t; 1 \,]\right)
where U_l, U_o, and U_s are three trainable factor matrices for fusion, with shapes (p+1) \times k, (q_o+1) \times k, and (q_s+1) \times k, respectively. Here p is the dimension of the loan vector, q_o and q_s are the numbers of hidden units in the TvaLSTMs for the order and session subsequences, and k is the number of dimensions of the fused vector v_t.
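A simplified sketch of Multi-view Machines fusion in the spirit of Cao et al. (2016): appending a constant 1 to each view lets the element-wise product of the factor projections carry interaction orders from 0 up to the number of views. Shapes follow the factor-matrix description above; function and variable names are illustrative:

```python
import numpy as np

def mvm_fuse(views, factors):
    """Multi-view Machines style fusion: project each 1-augmented view
    through its factor matrix and take the element-wise product."""
    fused = np.ones(factors[0].shape[1])
    for x, U in zip(views, factors):
        z = np.concatenate([x, [1.0]])   # the appended 1 yields lower-order terms
        fused = fused * (U.T @ z)        # element-wise product across views
    return fused

rng = np.random.default_rng(2)
p, qo, qs, k = 15, 5, 5, 5
views = [rng.normal(size=p), rng.normal(size=qo), rng.normal(size=qs)]
factors = [rng.normal(size=(p + 1, k)),
           rng.normal(size=(qo + 1, k)),
           rng.normal(size=(qs + 1, k))]
v = mvm_fuse(views, factors)
```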
4.4 Hierarchical Network
In this part, the aforementioned components are combined into the hierarchical network proposed for modeling sophisticated consumer behavior. The architecture is illustrated in Figure 4.
In the bottom-level layers, two separate TvaLSTM recurrent units are used to encode the order subsequences and the session subsequences. This avoids the difficulties of aligning different groups of actions that have distinct serial-dependency patterns and frequencies of occurrence. For the order and session subsequences, the encoding is done as follows,
(6)   h^o_{t,i} = \mathrm{TvaLSTM}_o\left(o_{t,i},\, h^o_{t,i-1}\right), \quad h^s_{t,j} = \mathrm{TvaLSTM}_s\left(s_{t,j},\, h^s_{t,j-1}\right)
where h^o_{t,i} and h^s_{t,j} are the hidden states of the TvaLSTM units, and \mathrm{TvaLSTM}_o and \mathrm{TvaLSTM}_s denote the two TvaLSTM units employed for the order and session subsequences. The last hidden states, h^o_t and h^s_t, summarize the information in the subsequences and are thus regarded as their final representations.
In the upper-level layer, the loan vector is first fused with h^o_t and h^s_t as in Equation (5); this procedure is denoted by the MvM Fusion unit in Figure 4. The fused vector is then encoded by an upper-level TvaLSTM:
(7)   h_t = \mathrm{TvaLSTM}_l\left(v_t,\, h_{t-1}\right)
where h_t is the hidden state of the upper-level recurrent unit \mathrm{TvaLSTM}_l, with its own number of hidden units. h_t represents a summary of consumer behavior up to timestamp t in the loan sequence.
4.5 Conditional Loss
In the last section, we obtained the representation h_t of all historical events up to timestamp t. Such a representation is typically used to fulfill classification or regression tasks. For example, in credit management, a critical task for risk assessment is predicting whether a loan will default; a loan is considered in default if its repayment is delayed by more than 90 days. The prediction can be implemented as follows,
(8)   \hat{y}_t = \sigma\left(w^\top h_t + b\right)
In Equation (8), w is a trainable vector that maps h_t to one dimension, b is a real-valued bias, \sigma is the sigmoid activation function, and \hat{y}_t is the predicted default probability. The dissimilarity between \hat{y}_t and the real binary outcome y_t is measured by a loss function \ell(y_t, \hat{y}_t), where y_t = 1 if the loan defaults and y_t = 0 otherwise. The model parameters are learned by minimizing the loss function in training.
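The sigmoid prediction head together with the binary cross-entropy loss can be sketched as follows (w and b are illustrative stand-ins for the trainable parameters):

```python
import numpy as np

def predict_default(h, w, b):
    """Map the sequence representation h to a default probability
    through a sigmoid, as in Equation (8)."""
    return 1.0 / (1.0 + np.exp(-(w @ h + b)))

def bce(y, p, eps=1e-12):
    """Binary cross-entropy between the observed outcome y (0 or 1)
    and the predicted default probability p."""
    p = np.clip(p, eps, 1.0 - eps)
    return -(y * np.log(p) + (1 - y) * np.log(1 - p))

p_hat = predict_default(np.zeros(5), np.zeros(5), 0.0)  # -> 0.5
```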
This approach is standard in classification problems, but it has one serious drawback in credit risk modeling: the predicted default probability is not interpretable. It neither distinguishes the sources of risk nor illuminates the contributions of different sources to default. Here, we propose to construct the default probability from three major determinants of loan defaults (Lee 1991, Chehrazi and Weber 2015): the objective risk (the ability to repay), the subjective risk (the willingness to repay), and the behavioral risk (risk that is neither objective nor subjective). Probabilistically, we formulate the default probability as follows,
(9) 
In Equation (9), the overall default probability is decomposed into the default probability attributable to the ability to repay, the default probability attributable to the willingness to repay, and the default probability conditioned on both of them, i.e., the default risk caused by behavioral patterns other than the ability and the willingness to repay. In this way, the default probability becomes interpretable.
To implement this interpretable construction in a neural network, we first decompose h_t into three vectors:
(10)   h^a_t = \tanh(W_a h_t + b_a), \quad h^w_t = \tanh(W_w h_t + b_w), \quad h^b_t = \tanh(W_b h_t + b_b)
where {W_a, b_a}, {W_w, b_w}, and {W_b, b_b} are trainable parameters for the decomposition, and h^a_t, h^w_t, and h^b_t are hidden vectors containing the information for the ability risk, the willingness risk, and the behavioral risk, respectively. The hidden vectors are then separately mapped to one dimension to predict p^a_t, p^w_t, and p^b_t:
(11)   p^a_t = \sigma(w_a^\top h^a_t + c_a), \quad p^w_t = \sigma(w_w^\top h^w_t + c_w), \quad p^b_t = \sigma(w_b^\top h^b_t + c_b)
where {w_a, c_a}, {w_w, c_w}, and {w_b, c_b} are trainable parameters for the mapping. The predicted default probability is then supervised by the observed outcome y_t as before.
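The decomposition-and-mapping step can be sketched as follows (the tanh decomposition and sigmoid heads mirror the equations above; all parameter names are illustrative):

```python
import numpy as np

def decompose_risks(h, params):
    """Split the representation h into three hidden vectors (ability,
    willingness, behavior) and map each to a scalar probability."""
    probs = {}
    for name, (W, b, w, c) in params.items():
        z = np.tanh(W @ h + b)                            # hidden vector
        probs[name] = 1.0 / (1.0 + np.exp(-(w @ z + c)))  # scalar probability
    return probs

rng = np.random.default_rng(3)
k = 5
params = {name: (rng.normal(size=(k, k)), np.zeros(k), rng.normal(size=k), 0.0)
          for name in ("ability", "willingness", "behavior")}
risks = decompose_risks(rng.normal(size=k), params)
```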
For the three predicted risks to truly represent the meanings we impose on them, each must be supervised independently by its own ground truth in training. However, these ground truths are completely unobservable in practice. Therefore, we put forward a method to infer the values of the ability and willingness risks for a loan by carefully analyzing the repayment behavior on that loan.
Particularly, if a borrower defaults on a loan, although we cannot be sure whether the default is caused by a low ability or a low willingness to repay, we can still infer that one of them must be low enough to have led to the outcome. That is, the probability that the default is caused by neither the ability nor the willingness to repay is very low.
On the contrary, if a borrower repays every installment on time and never defaults on that loan, it is certain that the borrower has both a high ability and a high willingness to repay. An interesting situation in between is a borrower who never defaults but is often delinquent (overdue) on the periodic installments of that loan. In this case, the repaying ability must be high, as the borrower is always able to complete the payment, but the willingness may be low given the frequent delinquencies. Mathematically, the inference above can be summarized as
(12) 
where d_t is the proportion of the installments of loan t on which the borrower has been delinquent. In this way, the values of the ability and willingness risks are inferred under different conditions. The inferred values can then serve as teachers in training via a conditional loss function:
(13) 
Note that Equation (13) is conditioned on a binary variable, so the two expressions can be written as one and combined with the conventional classification loss. In summary, the proposed loss function for the NeuCredit model is

(14)
where the outer sums in Equation (14) run over the batch used in mini-batch optimization and over the loan sequence length. The first part of Equation (14) is the conventional classification loss, for which we use the binary cross-entropy \ell. The second and third parts are the conditional loss, hinging on the value of the default indicator y_t.
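The following sketch shows one possible reading of the conditional supervision described above (a non-defaulting loan implies low ability risk, and its delinquency ratio serves as the willingness target); it is an interpretation under stated assumptions, not necessarily the paper's exact Equation (14):

```python
import numpy as np

def bce(y, p, eps=1e-12):
    p = np.clip(p, eps, 1.0 - eps)
    return -(y * np.log(p) + (1 - y) * np.log(1 - p))

def conditional_loss(y, d, p, p_a, p_w):
    """Per-loan loss: conventional BCE on the overall prediction p, plus
    conditional terms that supervise the ability- and willingness-risk
    heads only when their targets can be inferred from repayment behavior.

    y   : observed default indicator (0/1)
    d   : delinquency ratio of the loan
    p   : predicted overall default probability
    p_a : predicted ability-risk probability
    p_w : predicted willingness-risk probability
    """
    loss = bce(y, p)
    if y == 0:                  # inferred targets exist only without default
        loss += bce(0.0, p_a) + bce(d, p_w)
    return loss
```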
Following the computational graph, the gradients of all network parameters in the NeuCredit model can be computed straightforwardly. The error signals, obtained by weighing the predicted outputs against the observed loan outcomes, are back-propagated through the decomposition and fusion layers all the way to the beginning, updating the parameters in the different TvaLSTM branches. In this sense, the NeuCredit model is an end-to-end deep neural network that learns the dynamics of consumer behavior for interpretable credit risk modeling.
5 Experiment
In this section, we design and conduct experiments using both synthetic datasets and reallife datasets to address the following four groups of questions:

How much better are deep learning models than conventional machine learning models?

How much value is added by incorporating shopping behavior data when forecasting consumer credit risk?

Is it indeed important to model the irregular event time-intervals?

Can we interpret the forecasted default probabilities in terms of consumers' ability to repay, willingness to repay, and behavioral factors?
We use the synthetic dataset to demonstrate the superiority of the TvaLSTM model over competing models in recovering the dynamics of complex patterns. The construction details are in Appendix B (Generation of the Synthetic Dataset). The synthetic dataset contains 10,000 sequences, each of length 50. Every data point in the dataset has 106 features and 1 label. Among the 106 features, only 5 are involved in generating the label; the rest are noise. To produce sequential dependencies, the 5 informative features at the current timestamp of a sequence are generated by transforming the 5 features at the previous timestamp in a highly non-linear manner. The label of each data point is a binary indicator taking the value 1 or 0. Among the 500,000 data points, 323,326 are positive instances, i.e., their labels equal 1.
The real-life dataset contains 5,000 loan sequences with 38,182 loans in total; the average length of the loan sequences is 7.64. Each loan possesses 15 features. Among the 38,182 loans, 11,184 (29.3%) default. For each loan, an order subsequence and a session subsequence are matched; there are 5,000 order subsequences and 5,000 session subsequences in total. The dataset contains 499,572 orders and 356,338 sessions. On average, the length of an order subsequence is 13.08 and that of a session subsequence is 9.33. Each order possesses 45 features and each session possesses 16 features.
In the experiments, sequences and subsequences shorter than 15 are zero-padded to a length of 15. The influence of padding is eliminated through masking in both training and testing. This treatment is common practice in temporal data modeling and allows recurrent models to handle variable-length sequences. Features are standardized before being passed to the models.
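The padding-and-masking treatment can be sketched as follows (the helper name is illustrative):

```python
import numpy as np

def pad_and_mask(seqs, max_len=15):
    """Zero-pad variable-length sequences to a fixed length and build a
    boolean mask over the real (non-padded) timestamps, so that padded
    steps can be excluded from training and evaluation."""
    n_feat = seqs[0].shape[1]
    X = np.zeros((len(seqs), max_len, n_feat))
    mask = np.zeros((len(seqs), max_len), dtype=bool)
    for i, s in enumerate(seqs):
        L = min(len(s), max_len)
        X[i, :L] = s[:L]
        mask[i, :L] = True
    return X, mask

seqs = [np.ones((2, 3)), np.ones((4, 3))]
X, mask = pad_and_mask(seqs, max_len=5)
```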
As different groups of questions require different sets of benchmark models, these models and their implementation details are specified in the corresponding subsections. All methods are evaluated using five-fold cross-validation (Kohavi et al. 1995). The Area Under the ROC Curve (AUC) score is used as the primary performance metric in evaluation (Bradley 1997). Experiments are implemented in Python: the Pandas (pandas.pydata.org) and NumPy (numpy.org) libraries are used to process the datasets, and the Scikit-Learn (scikit-learn.org) and TensorFlow (tensorflow.org) libraries are used to implement the algorithms. The source code of all implementations will be made publicly available after paper acceptance.

5.1 Deep Learning vs. Conventional Machine Learning Models
In this part, we test the performance improvements of our model over conventional and competitive alternatives. Specifically, does our model perform better in credit risk prediction than conventional models? Can a model with a similar structure but conventional units achieve comparable performance? To answer these questions, the following methods are compared in the experiments:

LR (loan): the Logistic Regression model trained on loans with the time interval as an extra feature. This is similar to the traditional consumer credit management scenario where only financing behavior can be observed.

LR (all): the Logistic Regression model trained on all three groups of data (loans, orders, and sessions). The features of subsequences are averaged along the timeline and concatenated with loan features. The time intervals are regarded as extra features.

RF (loan): the Random Forest model trained on loans with the time interval as an extra feature. This is similar to the traditional consumer credit management scenario where only financing behavior can be observed.

RF (all): the Random Forest model trained on all three groups of data (loans, orders, and sessions). The features of subsequences are averaged along the timeline and concatenated with loan features. The time intervals are regarded as extra features.

LSTMwdt (loan): the standard LSTM model trained on loans with the time interval as an extra feature.

MvMTvaLSTM (all): the model that employs the same hierarchical structure and fusion mechanism as Figure 4. The model is trained on all three groups of sequential data (loans, orders, and sessions).
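The flat feature construction used by the LR (all) and RF (all) baselines can be sketched as follows; the array sizes follow the dataset description, and the function name is illustrative:

```python
import numpy as np

def flat_features(loan_feats, orders, sessions, dt):
    """Average each sub-sequence's features along the timeline and
    concatenate them with the loan features and the time interval."""
    return np.concatenate([loan_feats,
                           orders.mean(axis=0),
                           sessions.mean(axis=0),
                           [dt]])

rng = np.random.default_rng(4)
x = flat_features(rng.normal(size=15),        # 15 loan features
                  rng.normal(size=(13, 45)),  # order sub-sequence, 45 features
                  rng.normal(size=(9, 16)),   # session sub-sequence, 16 features
                  3.0)                        # time interval
```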
The models are trained to predict loan defaults using the binary cross-entropy loss. The number of hidden units is set to 5 for the TvaLSTM and LSTM units employed in the aforementioned methods. The number of output units of the fully connected fusion layer in the FCLSTM model is set to 5, as is the number of output units of the factor matrices in the MvMTvaLSTM model. All neural network models are trained with the mini-batch stochastic Adam optimizer (Kingma and Ba 2014), with a batch size of 1,000 and a learning rate of 0.001. The number of epochs in training is determined using an early-stopping criterion (Caruana et al. 2001). The logistic regression and random forest models are trained with the default parameter settings in Scikit-Learn. The AUCs of the different models in five-fold cross-validation are shown in Table 4.

First, the conventional methods indeed cannot reach performance comparable to the deep neural network methods. Second, compared with the FCLSTM model, which employs conventional units but the same hierarchical structure as our model, the MvMTvaLSTM model achieves better performance. An interesting finding is that the average AUC of the FCLSTM model outperforms that of the FCTvaLSTM model in Section 5.2. This is inconsistent with our finding in Section 5.3 that the TvaLSTM model is better at handling time intervals and can outperform the conventional LSTM. The reason is that both the FCLSTM and FCTvaLSTM models are trained in an end-to-end manner that requires learning all parameters from scratch (cold start). While the units in the FCLSTM model are conventional and easy to train, the units in the FCTvaLSTM model are much more complicated in design, leading to insufficient training of the TvaLSTM unit in the FCTvaLSTM model. This problem can be settled by initializing the TvaLSTM in FCTvaLSTM with pre-trained parameters (warm start). In general, these results demonstrate the feasibility and effectiveness of using shopping behavior to model consumer credit risk, and they suggest that the proposed hierarchical architecture captures the underlying behavioral patterns of consumers better than conventional methods.
Method/AUC (%)  AUC1  AUC2  AUC3  AUC4  AUC5  Avg. AUC  S.D. 
LR (loan)  63.59  64.68  66.33  60.47  63.96  63.80  0.0191 
LR (all)  69.13  68.99  71.22  67.02  68.18  68.91  0.0138 
RF (loan)  59.76  61.72  60.17  59.83  60.80  60.46  0.0073 
RF (all)  68.59  67.16  69.28  66.97  68.53  68.11  0.0089 
LSTMwdt (loan)  70.59  70.45  69.87  68.30  71.69  70.18  0.0111 
MvMTvaLSTM (all)  74.25  73.22  75.86  72.37  73.98  73.94  0.0116 
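The AUC scores reported in the tables can be computed from predicted probabilities via the rank-sum (Mann-Whitney) identity, and the evaluation splits built with a simple k-fold helper; a NumPy sketch with illustrative names:

```python
import numpy as np

def auc_score(y, s):
    """AUC as the probability that a random positive instance is scored
    above a random negative one (ties count one half)."""
    y, s = np.asarray(y), np.asarray(s)
    pos, neg = s[y == 1][:, None], s[y == 0][None, :]
    return float((pos > neg).mean() + 0.5 * (pos == neg).mean())

def kfold_indices(n, k=5, seed=0):
    """Shuffled index splits for a k-fold cross-validation protocol."""
    idx = np.random.default_rng(seed).permutation(n)
    return np.array_split(idx, k)
```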
5.2 The Importance of Adding Browsing and Purchasing Data
To better understand the roles played by different views of shopping behavior in default risk modeling, we train a TvaLSTM model on each of the three types of temporal data. Besides, we study the importance of modeling the behavioral interactions and the necessity of multi-view fusion. Specifically, without borrowing data, does consumer shopping behavior alone contain information for predicting borrowing outcomes? If it does, does the multi-view fusion strategy successfully model the behavioral interactions in online shopping and uncover their contributions to credit risk prediction? Does the Multi-view Machines fusion layer perform better than the straightforward fully connected fusion layer? To answer these questions, the following methods are implemented in the experiments:

TvaLSTM (loan): the Time-value-aware LSTM model trained on loan sequences.

TvaLSTM (order): the Time-value-aware LSTM model trained on order subsequences; the hierarchical structure of Figure 4 is employed, without fusion with other sequences/subsequences.

TvaLSTM (session): the Time-value-aware LSTM model trained on session subsequences; the hierarchical structure of Figure 4 is employed, without fusion with other sequences/subsequences.

FCTvaLSTM (all): the model that employs the same hierarchical structure as Figure 4 but uses a fully connected layer instead of a Multi-view Machines layer for fusion. The model is trained on all three groups of sequential data (loans, orders, and sessions).

MvMTvaLSTM (all): the model that employs the same hierarchical structure and fusion mechanism as Figure 4. The model is trained on all three groups of sequential data (loans, orders, and sessions).
Method/AUC (%)  AUC1  AUC2  AUC3  AUC4  AUC5  Avg. AUC  S.D. 
TvaLSTM (loan)  71.13  71.14  71.02  68.95  72.29  70.91  0.0108 
TvaLSTM (order)  72.53  71.28  72.29  69.87  72.42  71.68  0.0101 
TvaLSTM (session)  54.88  53.87  54.61  57.27  54.43  55.01  0.0118 
FCTvaLSTM (all)  73.11  72.04  74.87  71.18  73.92  73.02  0.0131 
MvMTvaLSTM (all)  74.25  73.22  75.86  72.37  73.98  73.94  0.0116 
The models are trained to predict loan defaults using the binary cross-entropy loss. The number of hidden units is set to 5 for all TvaLSTM units employed in the aforementioned methods. The number of output units of the fully connected fusion layer in the FCTvaLSTM model is set to 5, as is the number of output units of the factor matrices in the MvMTvaLSTM model. All models are trained with the mini-batch stochastic Adam optimizer (Kingma and Ba 2014), with a batch size of 1,000 and a learning rate of 0.001. The number of epochs in training is determined using an early-stopping criterion (Caruana et al. 2001). The AUCs of the different models in five-fold cross-validation are shown in Table 5.
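The early-stopping criterion can be sketched as a simple patience rule in the spirit of Caruana et al. (2001); the patience value is an assumption, not taken from the paper:

```python
import numpy as np

def should_stop(val_losses, patience=3):
    """Stop training once the validation loss has not improved for
    `patience` consecutive epochs."""
    best = int(np.argmin(val_losses))          # epoch with the best loss so far
    return len(val_losses) - 1 - best >= patience
```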
First, the average AUCs achieved with orders and with sessions are 71.68% and 55.01%, respectively. Both are greater than 50%, indicating that shopping behavior is indeed useful in predicting one's credit risk. Moreover, ordering actions seem more informative than borrowing actions, as the average AUC achieved with orders is consistently higher than that achieved with loans. This finding provides an empirical foundation supporting the development of online credit shopping on e-commerce platforms. When all types of actions are considered, performance increases further: the AUCs of both the FCTvaLSTM and MvMTvaLSTM models are consistently higher than those of the models with a single type of actions, indicating that the fusion layer indeed exploits the macroscopic interactions across different views of data, and that the mutual effects it exposes are effective in evaluating consumer credit risk. Finally, the MvMTvaLSTM model outperforms the FCTvaLSTM model, which demonstrates that Multi-view Machines fusion is better at capturing these interactions.
5.3 The Importance of Modeling Irregular Time-intervals of Events
In this part, we study the importance of handling irregularity in temporal data modeling. Specifically, is it indeed necessary to take time intervals into consideration? Is the proposed handling of time intervals, via the TvaLSTM model, better at capturing the irregularities in behavior than other methods? To answer these questions, the following models are used for comparison:

LSTM: the standard LSTM model that ignores time intervals.

LSTMwdt: the standard LSTM model that takes time intervals into modeling as in Equation (1).

TLSTM: the Time-aware LSTM model proposed by Baytas et al. (2017) that takes time intervals into modeling via a preset discounting function.

TvaLSTM: the Timevalueaware LSTM model proposed in this study that handles the time intervals in a more expressive way.
The experiments are conducted on both the synthetic sequences and the loan sequences in the real-life data. The models are trained to predict loan defaults using the binary cross-entropy loss. The number of hidden units is set to 2 for all models in the experiments with the synthetic data, and to 5 in the experiments with the real-life data. All models are trained with the mini-batch stochastic RMSprop optimizer (Mukkamala and Hein 2017), with a batch size of 1,000 and a learning rate of 0.001. The number of epochs in training is determined using an early-stopping criterion (Caruana et al. 2001). The AUCs of the different models in five-fold cross-validation are plotted in Figure 5, and the average AUCs are shown in Table 6.

The proposed TvaLSTM model achieves the best performance on both the synthetic and the real-life data. Models that incorporate time intervals achieve better average AUCs. This is most evident in the experiments with real-life data, where LSTMwdt, TLSTM, and TvaLSTM all outperform the conventional LSTM by more than three percentage points. These results demonstrate the necessity of taking time intervals into consideration and the superiority of the proposed discounting mechanism in TvaLSTM.
Data  Synthetic  RealLife  

Method/Metric  Avg. AUC (99%+bps)  S.D.  Avg. AUC (%)  S.D. 
LSTM  69  0.0002  66.40  0.0135 
LSTMwdt  69  0.0003  70.18  0.0111 
TLSTM  74  0.0005  70.03  0.0115 
TvaLSTM  82  0.0004  70.91  0.0108 
5.4 Structural Interpretation of Forecasted Default Probabilities
Up to now, models such as FCLSTM, FCTvaLSTM, and MvMTvaLSTM have all been trained to predict loan defaults using the binary cross-entropy loss. In this part, we turn to the interpretable conditional loss function and evaluate the complete NeuCredit model. Specifically, we want to address the following questions. Given the highly complicated structures and operations inside the NeuCredit model, does it converge properly in training? If it does, what is its performance? More importantly, are the predicted ability and willingness values consistent with our design? How does consumer behavior relate to the ability and the willingness to repay?
The parameter setting of the NeuCredit model is the same as that of the MvMTvaLSTM model. The training-loss and training-AUC curves are plotted in Figure 6; for simplicity, only the curves for one of the five splits in five-fold cross-validation are presented. As they show, the convergence of the NeuCredit model is not harmed even though it employs many complicated units such as TvaLSTM, Multi-view Machines fusion, and the conditional loss. The loss decreases and the AUC increases continuously as the training process proceeds.
The prediction performance is presented in Table 7, with the performance of the MvMTvaLSTM model included for reference. Note that the NeuCredit model is slightly inferior to MvMTvaLSTM. This is a reasonable result: while MvMTvaLSTM is trained as a maximum likelihood estimator, directly optimizing the binary cross-entropy, the NeuCredit model must weigh prediction performance against interpretability. This trade-off leads to a small performance decrease of the NeuCredit model in default risk prediction.
Method/AUC (%)  AUC1  AUC2  AUC3  AUC4  AUC5  Avg. AUC  S.D. 

NeuCredit  74.91  72.18  72.39  74.00  75.06  73.71  0.0122 
MvMTvaLSTM  74.25  73.22  75.86  72.37  73.98  73.94  0.0116 
Next, we check whether the predicted values of the behavioral risk, the ability risk, and the willingness risk are consistent with our design. We interpret the results from two perspectives. First, the three types of predicted risks are scattered against the predicted default probabilities in Figure 7 to visualize the correlation between credit risk and its determinants. Together with Equation (9), it is interesting to observe how the ultimate default probability is attributed to the three claimed risk types.
We then use linear regression to test whether the three types of risk are significantly correlated with the outcomes of loans, i.e., the default indicator and the delinquency ratio. The meaning of all regression variables is detailed in Table 8, and the results are collected in Tables 9, 10, and 11. The regressions in Table 9 reveal what the NeuCredit model has encoded. The regressions in Table 10 answer whether the predicted values are significant in differentiating consumers with different risk levels. Finally, the regressions in Table 11 study whether the obtained factors are significant determinants in predicting consumer defaults.
As the results show, the behavioral risk and the willingness risk are indeed positively correlated with the default indicator and the delinquency ratio, and the predicted values of the willingness risk are consistent with our expectation. One result worthy of future investigation is that the correlation between the ability risk and the default or delinquency ratio is reversed. Moreover, the explanatory power of the predicted ability risk is low compared to the other two types of predicted risk. The reason could be that the model does not distinguish the behavioral risk from the ability risk well, as they essentially share the same guidance in the training process. The predicted behavioral risk is in line with our expectation because it is supervised by the default indicator, which incorporates more information. In general, both the scatter plots and the regressions demonstrate that the predicted values of the three types of determinants are consistent with our design and do reveal the sources of credit risk.
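The regression tests can be reproduced with ordinary least squares; a minimal NumPy sketch (a statistics package such as statsmodels would additionally report significance levels):

```python
import numpy as np

def ols(X, y):
    """OLS coefficients (intercept first) via least squares, e.g. for
    regressing a loan outcome on the predicted risk components."""
    X1 = np.column_stack([np.ones(len(X)), X])   # prepend an intercept column
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    return beta

# Sanity check: recover a known linear relation without noise.
rng = np.random.default_rng(5)
x = rng.normal(size=(200, 1))
y = 2.0 + 3.0 * x[:, 0]
beta = ols(x, y)
```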
Group  Variable  Description 
Real Outcome  the dummy variable that indicates default of a loan 

Prediction of NeuCredit  the predicted default probability of a loan  
the predicted default probability of a loan with ability  
the predicted default probability of a loan with willingness  
the predicted default probability of a loan with behavioral risk  
1 if is larger than the median of ; 0 otherwise  
1 if is larger than the median of ; 0 otherwise  
1 if is larger than the median of ; 0 otherwise  
Loan Variable  the natural logarithm of the principal of a loan (CNY)  
the term of a loan (month)  
the annualized interest rate of a loan  
the time interval between the current and the previous loan issuance  
the natural logarithm of the minimum payment of a loan installment (CNY)  
Order Variable  the natural logarithm of the average order amount (CNY)  
the average discount rate of an order  
the average quantity of items within an order  
the average quantity of different items within an order  
the average time interval between the current and the previous order  
the average proportion of virtual goods within an order  
the average proportion of selfselling goods within an order  
the average proportion of free gifts within an order  
the user level when placing an order  
Session Variable  the average number of clicks within a session  
the average number of category visited  
the average duration of a session (minute)  
the average time interval between the current and the previous session 
Tables 9, 10, and 11: regression results, each reporting the explanatory and response variables, the number of observations, and the number of groups.
6 Conclusion
In this paper, we take a data-driven, bottom-up approach to modeling consumer credit risk with structural interpretability in the e-commerce scenario, where a platform provides unsecured lending to finance consumer purchasing and needs to manage the resulting credit exposure. By zooming into the tick-level shopping behavior and the subsequent financing records of a large population, we open a window to profile consumer credit at an unprecedentedly granular level. Deciphering these records carefully allows real-time assessment of future payment risk, particularly when payments are financed without posting collateral.
The structure of our deep neural network is novel. First, we propose the TvaLSTM recurrent unit to encode temporal shopping behaviors that happen stochastically in time. The TvaLSTM unit effectively regularizes the time intervals in temporal data, and its discounting mechanism is explainable, as it is derived under mild assumptions. Then, the encoded representations are passed to a Multi-view Machines layer for information fusion; this fusion strategy explicitly computes the interactions across different types of shopping behavior via tensor multiplication. Finally, the NeuCredit model organizes the temporal data in a hierarchical structure, which avoids the dominant-view problem and achieves real-time fusion of various types of information. In addition, we propose a novel conditional loss function that exploits repaying behavior to infer the values of the determinants of credit risk. We decompose consumer credit risk into three of its determinants: behavioral risk, ability-to-repay risk, and willingness-to-repay risk. The supervision of these risks is accomplished in training even though their ground truths are not observable. In this way, the NeuCredit model is able to output interpretable credit risk predictions. Extensive experiments are conducted using both a synthetic dataset and a massive real-life dataset collected from one of the largest global e-commerce platforms. The out-of-sample forecasts of consumer default risk demonstrate the effectiveness of the proposed methodology, in terms of both the superiority of our model over conventional machine learning models and other state-of-the-art deep learning models, and the interpretability of the model predictions.
In our opinion, there are three future directions that would be very interesting to study further. First, the prediction performance can be further boosted. In this paper, we adopt an end-to-end learning schema that trains the parameters of the neural network from scratch; a more efficient approach is to warm up the network with pre-trained parameters. The pre-training can be done in many ways, including transfer learning algorithms, so how to transfer richer information into the NeuCredit model to further improve its performance is an interesting direction. Second, we propose a deep learning method to break down credit risk into its determinants. Are there other determinants that can be incorporated into this framework, and if so, how? This is an interesting problem for enriching the interpretability of the NeuCredit model. Third, the NeuCredit model outputs the predicted values of the determinants of credit risk, so these predictions should be used not only for understanding the sources of risk but also for risk management. For example, based on the predicted ability and willingness to repay, building more accurate models for tasks such as debt collection or credit extension is also of great importance.
Appendix A. Derivation of the Discounting Factor
In TvaLSTM, we assume that during the time interval $\Delta t$, each element of the mapped cell memory $\hat{C}_{t-1}$ changes at a distinct rate in every unit of time. The changing rates of all elements are collected in a matrix $R$ of the same size as $\hat{C}_{t-1}$. Therefore, the new cell memory after $\Delta t$ is
$$\hat{C}_{t-1}^{\Delta t} = \hat{C}_{t-1} \odot (1 + R)^{\Delta t}. \tag{15}$$
If this change is continuous during $\Delta t$, that is, $\hat{C}_{t-1}$ decays/grows $n$ times in every unit of time with $n \to \infty$, we have
$$\hat{C}_{t-1}^{\Delta t} = \lim_{n \to \infty} \hat{C}_{t-1} \odot \left(1 + \frac{R}{n}\right)^{n \Delta t}. \tag{16}$$
According to the definition of Euler's number $e$, Equation (16) can be simplified to
$$\hat{C}_{t-1}^{\Delta t} = \hat{C}_{t-1} \odot e^{R \Delta t}, \tag{17}$$
where $e^{R \Delta t}$ (taken element-wise) is regarded as a discounting factor of the mapped cell memory $\hat{C}_{t-1}$. Based on that, we introduce basic changing rates for $\hat{C}_{t-1}$ by setting up a bias matrix $B$, which allows the cell memory to change even when $\Delta t = 0$. An activation function $\tanh(\cdot)$ is also used to add nonlinearity. In summary, the discounting factor becomes the one we employed in the TvaLSTM recurrent unit:
$$D_{\Delta t} = e^{\tanh(R \Delta t + B)}. \tag{18}$$
Appendix B. Generation of the Synthetic Dataset
The synthetic dataset contains 10,000 sequences, each of length 50. Here, we denote a sequence as $\{x_1, x_2, \ldots, x_{50}\}$. Each data point possesses 106 features, i.e., $x_t \in \mathbb{R}^{106}$. One feature, $\Delta t_t$, is the time interval between data points $x_{t-1}$ and $x_t$; its value is 0 when $t = 1$ and is sampled from a pre-specified distribution otherwise. The other features are sampled as