Neural Learning of Online Consumer Credit Risk

06/05/2019 · by Di Wang, et al.

This paper takes a deep learning approach to understanding consumer credit risk when e-commerce platforms issue unsecured credit to finance customers' purchases. The "NeuCredit" model can capture serial dependences in multi-dimensional time series data even when event frequencies differ across dimensions. It also captures nonlinear cross-sectional interactions among different time-evolving features. Moreover, the predicted default probability is designed to be interpretable, so that risk can be decomposed into three components: the subjective risk indicating consumers' willingness to repay, the objective risk indicating their ability to repay, and the behavioral risk indicating behavioral differences among consumers. Using a unique dataset from one of the largest global e-commerce platforms, we show that extracting the information content of shopping behavioral data, in addition to conventional payment records, requires a deep learning approach, which significantly enhances forecasting performance relative to traditional machine learning methods.


1 Introduction

The consumer credit market in the Eurozone, the United States, and China has grown dramatically since 2014. According to the European Central Bank, the US Federal Reserve, and the National Bureau of Statistics of China,¹ the outstanding notional at the end of 2018 was 770, 4,018, and 5,608 billion US dollars, respectively, with China being the most notable in terms of both the size of the market and the speed of its growth (see Figure 1). While property financing such as housing and automobiles remains the main driver, a fast-growing portion comes from people's spending on credit for necessities and consumables. One reason is that technology enables credit to reach a greater share of the population and penetrate deeper into consumer spending. A case in point is credit issuance through global e-commerce platforms. Tremendous purchasing and borrowing activities have migrated from offline to online. For researchers, this paradigm shift opens the door to observing consumer behavior at an unprecedented granularity, presenting new opportunities to decipher retail credit risk and, at the same time, new challenges to credit risk modeling.

¹ The data is from official releases. US: Federal Reserve (www.federalreserve.gov/Releases/G19/current/). Euro Area: European Central Bank (www.euro-area-statistics.org/banks-balance-sheet-loans). China: National Bureau of Statistics of China (www.stats.gov.cn/tjsj/).

Retail credit risk is the risk of capital loss when consumers fail to make payments on credit cards or personal loans. Traditionally, analysis of consumer credit risk focuses on credit scores based on low-frequency data, where maintaining a good payment record plays a dominant role. In these analyses, characteristics regarding customers' purchasing activities are either not available or not included. Whether it is a teenager buying a ten-thousand-dollar watch or a business owner buying a laptop, their credit scores are likely not very different to a credit card company as long as they pay on time. However, in the e-commerce context, consumers' shopping footprints and subsequent purchasing activities are naturally connected with their credit-seeking and payment records. Including these behavioral data in the credit analysis allows online risk managers to assess, with better confidence, both a consumer's willingness to repay and her ability to repay.

Figure 1: Consumer credit outstanding across years and countries/areas. The horizontal axis indicates time; the amounts are the stocks at the end of each period. The vertical axis measures the amount of outstanding consumer credit. All currencies are denominated in US dollars. The colors indicate different entities: the Eurozone, China, and the United States are marked blue, orange, and gray, respectively. Data sources: European Central Bank, National Bureau of Statistics of China, and Federal Reserve.

A typical cycle of online shopping on credit consists of three stages of actions. A customer first browses items that she is interested in. She then places orders on items she decides to buy and starts to consider whether, and how much, payment credit she would like to apply for from the platform. If she applies and the credit is granted, she effectively enters into an unsecured loan with the platform as the lending counterparty, and she is expected to make installments according to the payment schedule. These decisions and events, which we classify into three distinct groups (browsing, ordering, and borrowing), collaboratively shape the credit profile of a customer. As these decisions and events intrinsically indicate a consumer's ability and willingness to repay, modeling credit risk based on them seems promising. However, the behavioral nature of these data and the granularity they present pose several daunting challenges.

First, the three groups of actions take place very irregularly in time, owing to their wide spectrum of event frequencies and hence distinct degrees of serial dependence. Event frequencies range from hundreds of times a day for browsing activities, to once in a few days for subsequent purchases, all the way to quarterly or semi-annual frequencies at which periodic installments on borrowing are either paid or past due. This makes it difficult to learn the temporal dynamics of consumer behavior if one wants to use all three groups of data together. The irregularity can also cause a serious dominant-view problem in model fitting. For example, browsing happens much more frequently than the other two groups of actions, so browsing data easily dominate the feature space, especially over events that are very informative yet occur much less often, such as defaults.

Second, the three groups of actions interact with each other in a complex manner. The relationship between browsing, ordering, and borrowing activities can be highly nonlinear. For instance, an increase in browsing activity may result in more purchases if the customer's financial wellbeing is healthy, but it may also result in fewer purchases if the customer realizes that her accumulated spending is about to reach her financial limit and thus becomes more cautious. The three groups of actions have their own information heterogeneity and complement each other in reflecting the behavioral pattern of a customer. It is a challenge to model these complex interactions in time series effectively.

Third, how to interpret the predicted result regarding a consumer's credit risk is critical. In other words, from the perspective of credit risk management, finding the determinants is as important as predicting the outcome. While enlarging datasets to include rich behavioral information surely leads to better model estimation and more accurate forecasts, the complexity of interpreting results also increases. Nevertheless, carefully exploiting the browsing and ordering actions as well as the outcomes of related borrowing should shed light on whether a customer is going to default and, if so, why.

This paper develops a deep neural network (DNN) model to estimate and forecast consumer credit risk and, at the same time, provide a structural attribution of the perceived risk to a consumer's ability-to-repay factor, her willingness-to-repay factor, and her behavioral factor. We call it the NeuCredit model and test it on a unique dataset of real-world proprietary records collected by one of the largest global e-commerce platforms. The dataset includes 38,182 loans with 499,572 relevant orders and 356,338 relevant sessions of clicks. The goal is to estimate the real-time default risk when a customer uses her approved credit to finance a purchase.

In particular, the model features a hierarchical architecture in which the three groups of actions are processed separately to avoid the dominant-view problem. The sequence of borrowing actions, which specifies the time-stamps of loan issuance, is regarded as the mainstream, i.e., the first layer, while the browsing and ordering actions are respectively clustered to their nearest future loan to form two sub-sequences (the second layer) for each loan. Considering the sequential nature of the data, we propose a variant of the Long Short-Term Memory (LSTM) model, named the Time-value-aware LSTM (Tva-LSTM) model, to learn the temporal dynamics of irregular consumer behavior. By assuming that the effect of an action on future predictions continuously grows or decays at trainable rates, the Tva-LSTM model captures the varying time intervals between every two consecutive actions in a time series. Furthermore, sub-sequences are integrated into the mainstream through a novel multi-view fusion mechanism that explicitly models their mutual effects via feature interactions. The fusion is performed in nearly real time, as it launches at each element of the mainstream. We supervise the training of the NeuCredit model using labeled data, i.e., whether a consumer is delinquent on or defaults on her payments.

We conducted extensive experiments to validate the effectiveness of the NeuCredit model, followed by regressions to understand the learning results. Compared with conventional and other state-of-the-art models, the NeuCredit model successfully captures the complex behavioral dynamics and improves the performance of consumer credit risk estimation, achieving strong out-of-sample forecasting performance. In particular, the model captures serial dependences in multi-dimensional time series data even when event frequencies differ across dimensions, as well as nonlinear cross-sectional interactions among different time-evolving features. Besides, the predicted credit risk is designed to be interpretable, so that risks can be decomposed into three components: the subjective risk indicating a consumer's willingness to repay, the objective risk indicating her ability to repay, and the behavioral risk indicating behavioral differences among consumers. The willingness and ability to repay are modeled in the neural network via a specially designed conditional loss function, even though their ground truths are unobservable.

1.1 Our Contribution

The contributions as well as messages of this study are threefold.

  • Tick-level shopping behavioral data enhances online credit risk forecasts.

    The underlying relationship between consumer shopping behavior and credit risk has not been formally studied before. In this paper, we profile consumer credit at an unprecedentedly granular level by zooming into tick-level shopping behavior and the subsequent financing records. Deciphering them carefully allows real-time assessment of future payment risk, particularly when online purchases are financed without posting collateral. Our extensive experiments demonstrate that online credit risk forecasts improve significantly when browsing and purchasing data are added to model training, compared to using only payment data. To the best of our knowledge, this is the first academic study that focuses on consumer credit risk in e-commerce contexts using a large, comprehensive dataset to model consumer delinquencies and defaults.

  • Deep learning approach outperforms conventional machine learning significantly.

    We propose a novel LSTM-based deep learning framework designed to handle complex consumer behavior, especially the irregularity of sequential actions and the interactions across different groups of actions. Specifically, we propose a hierarchical network structure and a Tva-LSTM unit to handle temporal sequences of irregular consumer actions. Besides, we design a multi-view fusion mechanism to model action interactions so that the model can uncover the mutual effects of different groups of shopping behavior. The model is effective: empirical results demonstrate its performance superiority over conventional machine learning models, such as logistic regression and random forests, as well as competing state-of-the-art deep learning models based on the LSTM architecture. To the best of our knowledge, this is the first systematic study of consumer credit risk modeling using an LSTM-based deep learning approach. Moreover, the framework is generic in that one can use it in non-financial applications such as recommendation, anomaly detection, etc. The source code of our algorithm is available upon request.

  • Our model outputs structural interpretation of the risk determinants.

    Deep learning models are often criticized for their black-box nature and lack of interpretability. Our approach to addressing this issue is a specially designed conditional loss objective that incorporates domain knowledge into the system. Specifically, the ability and the willingness to repay are considered two significant determinants of loan defaults (Lee 1991, Chehrazi and Weber 2015) in credit risk management. Understanding their contributions to the predicted credit risk is, therefore, informative: it helps a risk manager identify the sources of credit risk and make informed decisions on debt collection and credit extension. However, as ability and willingness cannot be observed directly in consumer actions, their ground truths are not available for modeling. Here, we infer their values from the repayment outcomes of loans and design a conditional loss function that takes these inferred values as guidance. In this way, the system can generate interpretable outputs. To our knowledge, this is the first deep learning approach that provides interpretable predictions of consumer credit risk.
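To make the three-way risk decomposition concrete, the sketch below shows one plausible way a model could combine separate heads for subjective, objective, and behavioral risk into a single default probability. The noisy-OR combination rule and all weight names here are our illustrative assumptions, not the paper's actual specification.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Purely illustrative: three risk heads combined by a hypothetical
# noisy-OR rule (default occurs if any risk source "fires").
# w_subj, w_obj, w_behav are stand-in linear weights for each head.
def decomposed_default_prob(features, w_subj, w_obj, w_behav):
    p_subjective = sigmoid(features @ w_subj)   # willingness to repay
    p_objective = sigmoid(features @ w_obj)     # ability to repay
    p_behavioral = sigmoid(features @ w_behav)  # behavioral differences
    p_default = 1.0 - (1 - p_subjective) * (1 - p_objective) * (1 - p_behavioral)
    return p_default, (p_subjective, p_objective, p_behavioral)

x = np.array([0.2, -0.1, 0.4])
p, parts = decomposed_default_prob(x, np.ones(3), np.zeros(3), np.zeros(3))
```

Under this rule the overall default probability is never smaller than any single component, so each head can be read as a lower bound on the risk it contributes.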

The rest of the paper is organized as follows. We first review related work in Section 2 and then describe our dataset in Section 3. Section 4 introduces our model. Experiments are presented and analyzed in Section 5. We conclude in Section 6 with a discussion of possible future directions.

2 Literature Review

Our paper is related to the machine learning approach to modeling and understanding consumer credit risk. Academic studies concerning retail credit are fewer compared to the vast majority of the credit risk literature, which is corporate, sovereign, or mortgage oriented. One reason is that there is little outright trading of individual personal loans, hence no public assessments of retail credit risk. Unlike corporate bonds, secondary trading of securities related to consumer credit occurs only in securitized form.² Another reason is the lack of account-level data unless one has access to proprietary data owned by commercial banks and credit card companies. In terms of risk metrics and models, the historical focus for consumer credit risk has been credit scoring and linear regression. However, as e-commerce plays an ever-larger role in retail credit issuance and much richer data become available, sophisticated credit models are needed for the management of retail credit risk.

² In the US market, asset-backed securities backed by credit-card proceeds are liquid.

Earlier work using a machine learning approach to analyze consumer credit risk starts from Khandani, Kim, and Lo (2010), where classification and regression trees are used to construct forecasting models. Using a unique dataset consisting of transaction-level, credit-bureau, and account-balance data for individual consumers, they were able to forecast credit events related to consumer credit default and delinquency 3-12 months in advance with great accuracy. The results in Khandani, Kim, and Lo (2010) show that the machine learning approach is very suitable for building forecasting models when the sources of information are vast, the nature of the data is distinct, and the connections between them are unclear.

Sirignano, Sadhwani, and Giesecke (2016) advance the machine learning approach to credit risk modeling from classical machine learning methods to deep neural networks. Compared to classical machine learning models, the recurrent neural networks (RNNs) used in Sirignano, Sadhwani, and Giesecke (2016) are extremely capable of extracting nonlinear relationships between explanatory and response variables. These nonlinear relationships are shown to be very important in out-of-sample forecasts when benchmarked against linear models such as logistic regression. Using a dataset of over 120 million mortgages and over 3.5 billion loan-month observations across the US between 1995 and 2014, the authors demonstrate the power of RNNs in estimating transition probabilities of credit states and in understanding mortgage credit and prepayment risk at an unprecedented level.

Our paper further adds to the literature on using a machine learning approach to study consumer credit. Methodology-wise, the first comparative merit of our model is its interpretability. The neural network architecture we design can output interpretable factors that help explain what drives consumer defaults and delinquencies, such as the "willingness to repay" factor and the "ability to repay" factor suggested earlier in the literature (Lee 1991).

The second merit of our model is its ability to accommodate irregular time intervals in the data when learning complex serial dependence in high-dimensional time series. Our findings coincide with the study of Chehrazi and Weber (2015), where a self- and cross-excited Hawkes process captures dependencies between the arrival times of repayment events. The authors show that it is essential to capture the dependence structure when account-level data are used either for valuation or forecasting. Since our data show a wide spectrum of event frequencies, ranging from hundreds of times a day for browsing activities all the way to monthly or quarterly frequencies for payment installments, we need more flexibility than previous machine learning approaches provide to model potentially distinct degrees of serial dependence and complex nonlinear cross-sectional interactions. Thus, the deep neural network we construct uses a hierarchical architecture rather than an outright RNN or classic machine learning methods. On top of that, the LSTM specification we use addresses the issue that traditional RNNs are not very good at learning long-term memories in the data, while keeping the nonlinear mapping ability of RNNs between inputs and outputs.

3 Data Description

The dataset is from one of the largest global e-commerce platforms, in which the whole course of customers' online shopping on credit is recorded, i.e., browsing items, placing orders, seeking credit, and repaying loans. The browsing, ordering, and borrowing activities are recorded in the form of click sessions, orders, and loans, respectively.

A session of clicks is defined as beginning with a click that occurs after 15 minutes or more have elapsed since the last click, and continuing until 15 minutes or more elapse between clicks. The consumers in our dataset are required to have conducted at least three borrowing instances on the platform during the period from Nov. 1st, 2016 to Nov. 1st, 2018, i.e., to have at least three historical loans. To limit the length of the loan sequence, only the most recent 15 loans of each consumer are recorded. In this way, each consumer in the dataset possesses a temporal loan sequence with a minimum length of 3 and a maximum length of 15.
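The 15-minute sessionization rule above can be sketched as a small helper; the function name and data layout are our illustrative assumptions.

```python
from datetime import datetime, timedelta

# Hypothetical helper illustrating the session rule: a new session
# starts whenever 15 minutes or more elapse between consecutive clicks.
# Click timestamps are assumed to be sorted in ascending order.
def sessionize(click_times, gap_minutes=15):
    """Group sorted click timestamps into sessions."""
    sessions = []
    current = []
    for t in click_times:
        if current and (t - current[-1]) >= timedelta(minutes=gap_minutes):
            sessions.append(current)
            current = []
        current.append(t)
    if current:
        sessions.append(current)
    return sessions

clicks = [datetime(2018, 1, 1, 9, 0),
          datetime(2018, 1, 1, 9, 5),   # 5-minute gap -> same session
          datetime(2018, 1, 1, 9, 40),  # 35-minute gap -> new session
          datetime(2018, 1, 1, 9, 50)]
print([len(s) for s in sessionize(clicks)])  # -> [2, 2]
```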

For each loan in a loan sequence, only the orders within the 6 months before that loan's issuance and the sessions within the 14 days before its issuance are recorded. This is because the contribution of ordering and browsing actions to predicting default risk is considered time-sensitive. For example, it is unlikely that a customer would spend more than two weeks making a single decision on whether to buy something; therefore, browsing behavior more than 14 days before the current loan might not be helpful.

Also, only the most recent 15 orders and 15 sessions before the issuance of each loan are recorded, to limit the length of the order and session sequences. A loan sequence containing a loan with fewer than 3 orders or 3 sessions before its issuance is dropped. In this way, each loan in a loan sequence possesses a temporal order sub-sequence and a temporal session sub-sequence, both with a minimum length of 3 and a maximum length of 15. From the consumers that meet the above requirements, 2,500 with no default records in their loan sequences and 2,500 with at least one default record are randomly selected. A default record is generated when a consumer has been delinquent for more than 90 days on a loan. In total, 5,000 consumers are selected. Finally, the dataset contains 38,182 loans, 11,184 of which are defaults, 499,572 orders, and 356,338 sessions of clicks. On average, each consumer has 7.64 loans, and each loan is related to 13.08 orders and 9.33 sessions, i.e., the average lengths of loan sequences, order sub-sequences, and session sub-sequences are 7.64, 13.08, and 9.33, respectively.
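The assignment of orders to loans described above can be sketched as follows. This is a minimal illustration under our own assumptions: the 6-month window is approximated as 183 days, each order goes to its nearest future loan within that window, and only the most recent 15 orders per loan are kept; the function name and data layout are hypothetical.

```python
from datetime import datetime, timedelta

# Illustrative sketch: assign each order to its nearest future loan
# within a fixed look-back window, then truncate to the most recent
# `max_len` orders per loan. Sessions would be handled analogously.
def build_order_subsequences(loan_times, order_times,
                             window_days=183, max_len=15):
    subseqs = {i: [] for i in range(len(loan_times))}
    for o in sorted(order_times):
        # candidate loans issued at or after the order, within the window
        future = [(lt - o, i) for i, lt in enumerate(loan_times)
                  if timedelta(0) <= lt - o <= timedelta(days=window_days)]
        if future:
            _, i = min(future)      # nearest future loan
            subseqs[i].append(o)
    # keep only the most recent `max_len` orders per loan
    return {i: seq[-max_len:] for i, seq in subseqs.items()}

loans = [datetime(2018, 6, 1), datetime(2018, 9, 1)]
orders = [datetime(2018, 5, 20), datetime(2018, 7, 15), datetime(2018, 8, 30)]
subs = build_order_subsequences(loans, orders)
print({i: len(s) for i, s in subs.items()})  # -> {0: 1, 1: 2}
```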

Variable Mean SD 5th 25th Median 75th 95th
All Loans l.amt 322.09 756.71 18.69 51.49 107.61 227.01 1439.61
term 1.87 1.97 1.00 1.00 1.00 1.00 6.00
int.rate 2.18 4.41 0.00 0.00 0.00 0.00 12.00
l.itv 16.48 30.72 0.00 0.00 4.00 17.00 80.00
Default Loans l.amt 312.52 722.52 19.97 50.01 105.55 230.83 1299.01
term 2.52 2.67 1.00 1.00 1.00 3.00 6.00
int.rate 3.64 5.26 0.00 0.00 0.00 9.60 12.00
l.itv 11.40 24.22 0.00 0.00 2.00 10.00 57.00
Non-Default Loans l.amt 326.06 770.41 16.97 52.41 107.89 224.49 1497.81
term 1.60 1.50 1.00 1.00 1.00 1.00 6.00
int.rate 1.58 3.86 0.00 0.00 0.00 0.00 12.00
l.itv 18.58 32.82 0.00 0.00 5.00 21.00 87.00
Table 1: Summary Statistics of Loans. The number of loans is 38,182, where 11,184 loans default. Variable names l.amt, term, int.rate, and l.itv represent loan amount (CNY), loan term (month), annualized interest rate (%), and time interval between consecutive loans (day), respectively.
Variable Mean SD 5th 25th Median 75th 95th
All Orders oamt 664.35 10746.74 28.53 57.06 171.18 399.42 3024.21
damt 77.55 288.74 0.00 0.00 0.00 57.06 313.83
qtty 2.73 14.52 1.00 1.00 1.00 2.00 7.00
catep 1.82 1.72 1.00 1.00 1.00 2.00 5.00
oitv 7.78 14.78 0.00 0.00 2.00 9.00 34.00
Orders w.r.t. Default Loans oamt 579.33 7847.55 28.53 57.06 142.65 370.89 2995.68
damt 59.96 216.95 0.00 0.00 0.00 57.06 256.77
qtty 2.49 15.53 1.00 1.00 1.00 2.00 6.00
catep 1.69 1.61 1.00 1.00 1.00 2.00 5.00
oitv 6.34 13.88 0.00 0.00 1.00 6.00 30.00
Orders w.r.t. Non-Default Loans oamt 700.68 11769.62 28.53 85.59 171.18 399.42 3195.39
damt 85.06 314.16 0.00 0.00 0.00 85.59 342.36
qtty 2.83 14.07 1.00 1.00 1.00 3.00 8.00
catep 1.87 1.76 1.00 1.00 1.00 2.00 5.00
oitv 8.40 15.11 0.00 0.00 2.00 10.00 36.00
Table 2: Summary Statistics of Orders. The number of orders is 499,572, where 149,564 of them are in the sub-sequences for default loans. Variable names oamt, damt, qtty, catep, and oitv represent order amount (CNY), discount amount (CNY), quantity purchased, number of commodity categories purchased, and time interval between consecutive orders (day), respectively.
Variable Mean SD 5th 25th Median 75th 95th
All Sessions nclick 10.66 17.51 1.00 2.00 5.00 12.00 40.00
catev 1.94 1.55 1.00 1.00 1.00 2.00 5.00
duration 120.09 454.27 0.00 0.94 18.42 105.08 572.39
sitv 401.58 438.80 0.00 35.78 206.23 697.48 1312.65
Sessions w.r.t. Default Loans nclick 11.61 20.11 1.00 2.00 5.00 13.00 44.00
catev 2.00 1.67 1.00 1.00 1.00 2.00 5.00
duration 122.47 436.29 0.00 0.94 19.84 109.09 582.31
sitv 396.00 436.61 0.00 35.92 199.27 682.12 1309.10
Sessions w.r.t. Non-Default Loans nclick 10.28 16.33 1.00 2.00 5.00 12.00 38.00
catev 1.91 1.51 1.00 1.00 1.00 2.00 5.00
duration 119.13 461.33 0.00 0.71 17.71 103.19 568.00
sitv 403.82 439.66 0.00 35.67 208.83 703.43 1313.95
Table 3: Summary Statistics of Click Sessions. The number of sessions is 356,338, where 102,425 of them are in the sub-sequences for default loans. Variable names nclick, catev, duration, and sitv represent number of clicks, number of categories visited, duration of the session (minute), and time interval between consecutive sessions (minute), respectively. Note that sitv has values less than 15 minutes. This is because clicks are collected into sessions day by day; a session with an sitv of less than 15 minutes means that the previous session ended just before midnight and the current session began right after midnight.

Tables 1, 2, and 3 present descriptive statistics for selected features of loans, orders, and click sessions. There are 38,182 loans from 5,000 consumers in our dataset, 11,184 of which default. Each consumer has 7.64 loans on average. The major features of a loan include the loan amount, loan term, and interest rate. Besides, the time interval between the current loan and the last loan is also of interest. As Table 1 shows, default loans tend to have smaller loan amounts, longer loan terms, higher interest rates, and shorter borrowing intervals.

There are 499,572 orders for 38,182 loans in our dataset, 149,564 of which are in the sub-sequences for default loans. Each loan has an order sub-sequence with 13.08 orders on average. The major features of an order include the order amount, discount amount, quantity purchased (qtty), and number of categories purchased (catep). Besides, the time interval between the current order and the last order is also of interest. As Table 2 shows, default loans are usually related to orders with lower order amounts, lower discount amounts, fewer items and categories per order, and shorter ordering intervals, suggesting the possibility of irrational consumption.

There are 356,338 sessions for 38,182 loans in our dataset, 102,425 of which are in the sub-sequences for default loans. Each loan has a session sub-sequence with 9.33 sessions on average. The major features of a click session include the number of clicks within a session (nclick), the number of categories visited (catev), and the duration of the session. Besides, the time interval between the current session and the last session is also of interest. As Table 3 shows, default loans are usually related to sessions with more clicks and longer durations, suggesting higher user stickiness.

4 Methodology

In this section, we introduce the NeuCredit model, which takes the temporal sequences of browsing, ordering, and borrowing as input and outputs the consumer credit risk at the issuance of each loan. The components of the model are illustrated one after another in the following subsections. We use bold lowercase letters to denote vectors and bold uppercase letters to denote matrices. A summary of variable notations, including the shapes of vectors and matrices, is provided in Appendix C.

4.1 Input Definition

For a consumer on an e-commerce platform, her borrowing actions form a loan sequence {(t_i, l_i)}, where t_i is the time-stamp of loan issuance and l_i is the vector containing variables related to loan i; d_l denotes the number of dimensions of l_i. The loan variables comprise two parts: loan features such as the amount, interest rate, and loan term, and a temporal feature specifying the time interval between this loan and the last loan.

For each loan, ordering actions before and within a preset observation period are assigned to that loan to form a corresponding order sub-sequence. There is one order sub-sequence per loan, where {o_{i,j}} is the order sub-sequence for loan i. Each o_{i,j} is the vector containing order information such as the order amount, product quantity, and the time interval between this order and the last order; d_o denotes the number of dimensions of o_{i,j}.

Browsing actions are first grouped into sessions, where a session is defined as beginning with a click that occurs after 15 minutes or more have elapsed since the last click, and continuing until 15 minutes or more elapse between clicks. The sessions are then assigned to loans in the same manner as orders. This gives one sub-sequence of browsing sessions per loan, where {s_{i,k}} is the session sub-sequence for loan i. Each s_{i,k} is the vector containing the browsing information within session k for loan i, such as the duration of the session, time-on-page, total number of clicks, and the time interval between this session and the last one; d_s denotes the number of dimensions of s_{i,k}.

An exemplary data structure is illustrated in Figure 2.

Figure 2: An example of structured time-series consumer behavior. A series of loans is observed along the timeline for a consumer. Loans, orders, and sessions are marked red, green, and purple, respectively. The time-stamps marked by red squares are at loan issuance. l_i is the vector containing loan features for loan i. Orders and sessions within preset observation periods before loan issuance are grouped and assigned to loans to form two sub-sequences for each loan. {o_{i,j}} is the set of order vectors that forms the order sub-sequence for loan i, and {s_{i,k}} is the set of session vectors that forms the session sub-sequence for loan i. Sub-sequences will overlap if loans cluster in time (as for loans 2 and 3 in the figure). Time intervals between consecutive elements in each sequence are unequal due to behavioral irregularities in time.

4.2 Sequence Encoding

The most fundamental component of NeuCredit is the recurrent unit employed to learn behavioral dynamics. The Long Short-Term Memory (LSTM) neural network (Hochreiter and Schmidhuber 1997, Gers et al. 1999) is usually regarded as the most popular and effective recurrent unit across many sequence modeling tasks (Ren et al. 2015, Wang et al. 2016, Yang et al. 2017). However, conventional sequential models, including LSTM, implicitly assume that elements in a sequence are discrete and uniformly distributed along the timeline, i.e., that time intervals between consecutive elements are equal. This is not the case in most real-life tasks, where events happen stochastically in continuous time. Time intervals between consumer actions can reveal valuable information in many scenarios, including credit risk modeling. For instance, a recent purchase of an expensive good in cash indicates a good economic condition, while a purchase months ago may not play an active role in predicting the default risk of the current loan issued to finance an order.

In our situation, events in a loan sequence, as well as in its related order and session sub-sequences, take place irregularly in time, so it is imperative to account for these irregularities in modeling. In the literature, the most straightforward approach is to regard the time interval between two successive elements in a sequence as an extra feature, so that the standard LSTM is applicable as before. As Equation (1) shows, this approach implicitly models the nonlinear effects of the time interval on other features through the activation functions in the LSTM.

In Equation (1), is the Hadamard product operator that implements the element-wise multiplication, and are activation functions that introduce non-linearity into fitting, represents the current input vector, is the time interval between the current time-stamp and the previous time-stamp, and are the previous and the current hidden states, and are the previous and the current cell memories, {, , }, {, , }, {, , }, and {, , } are the trainable network parameters of the input, forget, output gates and the candidate memory, respectively, and , , , and are the input, forget, output gates and the candidate memory, respectively.

i_t = σ(W_i [x_t; Δt] + U_i h_{t-1} + b_i)
f_t = σ(W_f [x_t; Δt] + U_f h_{t-1} + b_f)
o_t = σ(W_o [x_t; Δt] + U_o h_{t-1} + b_o)
c̃_t = tanh(W_c [x_t; Δt] + U_c h_{t-1} + b_c)
c_t = f_t ⊙ c_{t-1} + i_t ⊙ c̃_t
h_t = o_t ⊙ tanh(c_t)    (1)

The shapes of these vectors and matrices are given in Appendix C (Notations for Main Variables). For the theory and details of the Long Short-Term Memory network, please refer to Hochreiter and Schmidhuber (1997) and Gers et al. (1999).
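As a hedged illustration, the following minimal sketch (pure Python, toy dimensions, untrained placeholder weights) shows one LSTM step with the time interval Δt simply appended to the input vector — the LSTM-w-dt baseline discussed above:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def matvec(W, v):
    # multiply matrix W (a list of rows) by vector v
    return [sum(w * x for w, x in zip(row, v)) for row in W]

def lstm_step_with_dt(x, dt, h_prev, c_prev, params):
    """One LSTM step where the time interval dt is appended to the input."""
    z = x + [dt] + h_prev  # concatenated input: [x_t; dt; h_{t-1}]
    Wi, Wf, Wo, Wc, bi, bf, bo, bc = params
    i = [sigmoid(a + b) for a, b in zip(matvec(Wi, z), bi)]    # input gate
    f = [sigmoid(a + b) for a, b in zip(matvec(Wf, z), bf)]    # forget gate
    o = [sigmoid(a + b) for a, b in zip(matvec(Wo, z), bo)]    # output gate
    g = [math.tanh(a + b) for a, b in zip(matvec(Wc, z), bc)]  # candidate memory
    c = [ff * cc + ii * gg for ff, cc, ii, gg in zip(f, c_prev, i, g)]
    h = [oo * math.tanh(cc) for oo, cc in zip(o, c)]
    return h, c

# toy sizes: 2 input features + 1 time interval + 2 hidden units = 5 concatenated inputs
H, Z = 2, 5
params = tuple([[0.1] * Z for _ in range(H)] for _ in range(4)) \
       + tuple([0.0] * H for _ in range(4))  # placeholder weights, zero biases
h, c = lstm_step_with_dt([0.5, -0.3], 2.0, [0.0, 0.0], [0.0, 0.0], params)
print(len(h), len(c))  # 2 2
```

A larger gap dt feeds into every gate exactly like any other feature, which is why this baseline can only capture the effect of Δt implicitly.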

Alternatively, Baytas et al. (2017) were the first to explicitly model the effect of time intervals by proposing the Time-aware LSTM (T-LSTM). Instead of regarding Δt as an ordinary feature, the authors use it to process the cell memory of the standard LSTM. Specifically, the cell memory is first decomposed into short-term and long-term memories. Then, the short-term memory is discounted by a factor g(Δt), where g is some preset monotonically non-increasing function. The long-term and the discounted short-term memories are next fused into an adjusted memory that serves the role of the original cell memory in the standard LSTM. The mathematical forms of the above operations are as follows,

c^S_{t-1} = tanh(W_d c_{t-1} + b_d)
c^T_{t-1} = c_{t-1} − c^S_{t-1}
ĉ^S_{t-1} = c^S_{t-1} · g(Δt)
c*_{t-1} = c^T_{t-1} + ĉ^S_{t-1}    (2)

In Equation (2), c_{t-1} is the cell memory in the standard LSTM, c^S_{t-1} and c^T_{t-1} are the short-term and long-term memories, respectively, ĉ^S_{t-1} is the discounted short-term memory, {W_d, b_d} are trainable network parameters for the decomposition, and c*_{t-1} is the new cell memory that takes the place of the original c_{t-1} in Equation (1). According to Baytas et al. (2017), T-LSTM performs much better than the standard LSTM on both synthetic and real-world sequential data.
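The T-LSTM memory adjustment can be sketched as follows; the preset choice g(Δt) = 1/log(e + Δt) is one option suggested in the T-LSTM literature, and the weights here are illustrative identity placeholders rather than learned parameters:

```python
import math

def t_lstm_adjust_memory(c_prev, Wd, bd, dt):
    """T-LSTM cell-memory adjustment (after Baytas et al. 2017): split the
    memory into short- and long-term parts and discount only the short-term
    part with a preset, monotonically non-increasing g(dt)."""
    g = 1.0 / math.log(math.e + dt)  # one common preset: g(0) = 1, decreasing in dt
    short = [math.tanh(sum(w * x for w, x in zip(row, c_prev)) + b)
             for row, b in zip(Wd, bd)]                 # short-term memory c^S
    long_term = [c - s for c, s in zip(c_prev, short)]  # long-term memory c^T
    short_disc = [s * g for s in short]                 # discounted short-term memory
    return [l + s for l, s in zip(long_term, short_disc)]  # adjusted memory c*

c_prev = [0.4, -0.2]
Wd = [[1.0, 0.0], [0.0, 1.0]]  # identity placeholder, zero bias
bd = [0.0, 0.0]
no_gap = t_lstm_adjust_memory(c_prev, Wd, bd, 0.0)     # g(0) = 1: memory unchanged
big_gap = t_lstm_adjust_memory(c_prev, Wd, bd, 100.0)  # long gap: short-term part fades
print(no_gap, big_gap)
```

With a zero gap the adjusted memory equals the original; with a long gap only the long-term component survives, which is exactly the behavior the main text goes on to criticize as too rigid.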

However, this method is problematic in several respects. First, it uses a preset function g(Δt) that only allows a monotonically non-increasing discounting of the cell memory and thus prohibits any enhancement of the cell memory over time. This setting is too rigid in practice, as some events remain effective over a very long horizon and their importance can even grow naturally over time. For instance, the amount of money deposited in a bank can increase persistently at the interest rate. Second, the third formula in Equation (2) implicitly assumes that the values at different positions of the short-term memory vector share the same discounting rate g(Δt), which limits the expressiveness of T-LSTM. Third, the discounting takes place in a low-dimensional space, which makes it hard to discount information in high dimensions; this constraint is caused by the network parameter W_d, which maintains the number of dimensions during the mapping. Lastly, discounting with a preset function offers no theoretical insight into how Δt comes into effect in modeling.

Therefore, we propose the Time-value-aware LSTM (Tva-LSTM), which addresses the problems of T-LSTM. Tva-LSTM is flexible in that it allows both decay and growth of the cell memory over time. The decay or growth rates are trainable, so the discounting process is data-driven. The discounting takes place in a high-dimensional space, and each dimension has its own discounting rate. Besides, the discounting mechanism is theoretically derived from a reasonable assumption, so it sheds light on the functionality of Δt. Specifically, the cell memory vector c_{t-1} is first mapped to a high-dimensional space represented by a matrix C_{t-1}. At the same time, a discounting matrix D_{t-1} that has the same shape as C_{t-1} is initialized. Then, D_{t-1} multiplies C_{t-1} element-wise to allow different discounting rates for different dimensions. Lastly, the product matrix is mapped back to a low-dimensional space to serve as the new cell memory c*_{t-1}. Non-linearity is introduced via activation functions. The mathematical forms of the above operations are as follows,

C_{t-1} = tanh(c_{t-1} w_1^T + B_1)
D_{t-1} = exp(r · (c_{t-1} w_2^T + B_2) · Δt)
Ĉ_{t-1} = C_{t-1} ⊙ D_{t-1}
c*_{t-1} = tanh(Ĉ_{t-1} w_3 + b_3)    (3)

In Equation (3), c_{t-1} is the cell memory in the standard LSTM, C_{t-1} is the mapped cell memory in a high-dimensional space, D_{t-1} is the corresponding discounting matrix, Ĉ_{t-1} is the discounted mapped cell memory, and c*_{t-1} is the new cell memory that takes the place of the original c_{t-1} in Equation (1). {w_1, B_1} are the trainable parameters responsible for mapping the cell memory to the high-dimensional space. {w_2, B_2} are the trainable parameters for initializing the discounting matrix. r is the trainable parameter for discounting the mapped cell memory. {w_3, b_3} are the trainable parameters for mapping the discounted mapped cell memory back to the low-dimensional space.

Note that the discounting factor takes the form of exponentiation. In fact, this specific form can be derived by assuming that the elements in the mapped cell memory change continuously at different rates over time. Since the derivation is straightforward, we put it in Appendix A (Derivation of the Discounting Factor) for clarity.
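The intuition behind the exponential form can be sketched directly: if an element of the mapped memory changes continuously at its own rate r (dC/dt = rC), then a gap of Δt scales that element by exp(r·Δt) — a learned negative rate decays it, a positive rate lets it grow. The rates below are illustrative constants, not learned values:

```python
import math

def discount(memory, rates, dt):
    # per-dimension exponential scaling after a time gap dt:
    # the solution of dC/dt = r * C over an interval of length dt
    return [m * math.exp(r * dt) for m, r in zip(memory, rates)]

mem = [1.0, 1.0]
rates = [-0.5, 0.05]  # one decaying dimension, one growing dimension
print(discount(mem, rates, 0.0))   # no gap: memory unchanged
print(discount(mem, rates, 10.0))  # long gap: first dimension decays, second grows
```

This is the key difference from a preset non-increasing g(Δt): nothing here forces the scale factor below one, so long-lived signals (e.g. a deposit accruing interest) can be amplified rather than forgotten.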

Figure 3: An illustration of the Tva-LSTM recurrent unit. The circled product sign denotes the point-wise multiplication operator and the circled plus sign denotes the point-wise addition operator. The operations inside the discounting module are presented in Equation (3); the other operations are the same as in Equation (1). The sigmoid and tanh blocks represent the activation functions σ(·) and tanh(·), respectively.

Figure 3 gives a brief illustration of the proposed Tva-LSTM recurrent unit. To be specific, Tva-LSTM takes the hidden state h_{t-1} and the cell memory c_{t-1} from the last moment as inputs. Before they are passed to the different gates, the cell memory first enters a discounting unit that regularizes the time gap between the last moment and the current moment. In the discounting unit, the cell memory is first mapped into a high-dimensional space, then discounted element-wise via a discounting factor matrix, and lastly mapped back to the original low-dimensional space.

As denoted in Equation (3), the complete process of time-gap regularization is data-driven: both the mapping parameters and the decay/growth rate parameters are learned simultaneously with the rest of the network parameters by back-propagation. This renders Tva-LSTM very expressive, as it not only allows both decay and growth of the cell memory over time but also assigns different rates of change to different dimensions in the high-dimensional space. Following the discounting, the hidden state and the regularized cell memory are passed to the typical LSTM gates.

4.3 Multi-view Fusion

Another critical component of the NeuCredit model is the fusion strategy used to combine the main loan sequence with its related sub-sequences. The objective of fusion is to integrate the heterogeneous information contained in different views of actions and, more importantly, to model the mutual effects due to behavioral interactions. In this study, order and session sub-sequences are encoded via two separate Tva-LSTM units. The fusion is carried out at the issuance of each loan in the loan sequence.

Taking the fusion at loan i as an example, the inputs of the fusion are the loan vector l_i, the final hidden state h^o_i of the Tva-LSTM for the i-th order sub-sequence, and the final hidden state h^s_i of the Tva-LSTM for the i-th session sub-sequence. One straightforward idea is to first concatenate the three vectors and then pass the result through a fully connected neural network layer with a nonlinear activation function φ, i.e.,

z_i = φ(W [l_i; h^o_i; h^s_i] + b)    (4)
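A minimal sketch of this concatenate-then-project fusion, with toy illustrative weights standing in for learned parameters:

```python
import math

def fc_fusion(loan_vec, h_order, h_session, W, b):
    """Baseline fusion: concatenate the loan vector with the two sub-sequence
    representations, then apply one fully connected layer with tanh."""
    z = loan_vec + h_order + h_session  # [l_i; h^o_i; h^s_i]
    return [math.tanh(sum(w * x for w, x in zip(row, z)) + bb)
            for row, bb in zip(W, b)]

loan_vec, h_order, h_session = [0.2, -0.1], [0.3], [0.5]
W = [[0.1, 0.1, 0.1, 0.1], [0.2, -0.2, 0.2, -0.2]]  # 2 fused dims x 4 concatenated inputs
b = [0.0, 0.0]
fused = fc_fusion(loan_vec, h_order, h_session, W, b)
print(len(fused))  # 2
```

Because the three views only meet inside a single linear map, any interaction between them must be learned implicitly, which motivates the explicit interaction modeling of the Multi-view Machines layer discussed next.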

Another approach is to capture the interactions of different groups of actions by exploiting the concept of Multi-view Machines (Cao et al. 2016). Here, we employ a Multi-view Machines layer (Cao et al. 2017) for the fusion. The layer explicitly models the feature interactions, so it acquires non-linearity more efficiently in training. Besides, it captures full-order interactions, from order 0 up to the number of input views. For the theory and details of Multi-view Machines, please refer to Cao et al. (2016, 2017). The formula of this layer is

(5)

where the three factor matrices in Equation (5), one per view (loans, orders, and sessions), are trainable parameters for the fusion. H^o and H^s are the numbers of hidden units in the Tva-LSTM for the order sub-sequences and the session sub-sequences, respectively, and K is the number of dimensions of the fused vector z_i; together these determine the shapes of the factor matrices.

4.4 Hierarchical Network

In this part, the aforementioned components are combined into the hierarchical network proposed for modeling sophisticated consumer behavior. The architecture is illustrated in Figure 4.

Figure 4: The architecture of the hierarchical network.

In the bottom-level layers, two separate Tva-LSTM recurrent units are used to encode the order sub-sequences and the session sub-sequences. This avoids the difficulty of aligning different groups of actions that have distinct patterns of serial dependency and frequencies of occurrence. For sub-sequences O_i and S_i, the encoding is done as follows,

h^o_{i,k} = Tva-LSTM^o(o_{i,k}, h^o_{i,k-1}, Δt^o_{i,k})
h^s_{i,k} = Tva-LSTM^s(s_{i,k}, h^s_{i,k-1}, Δt^s_{i,k})    (6)

where h^o_{i,k} and h^s_{i,k} are the hidden states of the Tva-LSTM units, and Tva-LSTM^o and Tva-LSTM^s denote the two units employed for the order and session sub-sequences, respectively. The last hidden states h^o_i and h^s_i summarize the information in sub-sequences O_i and S_i and are thus regarded as their final representations.

In the upper-level layer, the loan vector l_i is first fused with h^o_i and h^s_i as in Equation (5). The procedure is denoted by the MvM Fusion unit in Figure 4. Following that, the fused vector z_i is encoded by an upper-level Tva-LSTM:

h_i = Tva-LSTM(z_i, h_{i-1}, Δt_i)    (7)

where h_i is the hidden state of the upper-level recurrent unit, whose number of hidden units is H. The state h_i represents a summary of consumer behavior up to time-stamp i in the loan sequence.
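The data flow of the hierarchy can be sketched with stub functions standing in for the real Tva-LSTM cells and the MvM fusion; all values and stubs below are purely illustrative:

```python
# Minimal sketch of the hierarchy's data flow: per loan, encode the matched
# order and session sub-sequences at the bottom level, fuse the results with
# the loan itself, then step the upper-level recurrence.
def stub_rnn(sequence):
    # stand-in for a bottom-level Tva-LSTM: return the last "hidden state"
    h = 0.0
    for x in sequence:
        h = 0.5 * h + 0.5 * x  # toy recurrence
    return h

def stub_fuse(loan, h_order, h_session):
    # stand-in for the fusion layer
    return (loan + h_order + h_session) / 3.0

loans = [1.0, 0.0, 1.0]                   # one scalar per loan, for illustration
orders = [[0.2, 0.4], [0.1], [0.3, 0.3]]  # order sub-sequence matched to each loan
sessions = [[0.5], [0.6, 0.2], [0.4]]     # session sub-sequence matched to each loan

h = 0.0  # upper-level state: a summary of behaviour up to the current loan
for loan, o_seq, s_seq in zip(loans, orders, sessions):
    z = stub_fuse(loan, stub_rnn(o_seq), stub_rnn(s_seq))  # bottom level + fusion
    h = 0.5 * h + 0.5 * z                                  # upper-level recurrence
print(round(h, 4))
```

The real model replaces every stub with a learned, vector-valued module, but the nesting — sub-sequence encoders feeding a fusion that feeds the loan-level recurrence — is the same.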

4.5 Conditional Loss

In the last section, we obtained the representation h_i of all historical events up to time-stamp i. Such a representation is often used to fulfill classification or regression tasks. For example, in credit management, a critical task for risk assessment is to predict whether a loan will default, where a loan is considered in default if its repayment is delayed by more than 90 days. The prediction can be implemented as follows,

ŷ_i = σ(w^T h_i + b)    (8)

In Equation (8), w is a trainable vector that maps h_i to one dimension, b is a real-valued bias, σ is the sigmoid activation function, and ŷ_i is the predicted default probability. The dissimilarity between ŷ_i and the real binary outcome y_i is measured by a loss function ℓ(y_i, ŷ_i), where y_i = 1 if loan i defaults and y_i = 0 otherwise. The model parameters are learned by minimizing the loss function in training.
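A hedged sketch of this prediction head and the binary cross-entropy loss it is trained with (toy weights, single example):

```python
import math

def predict_default(h, w, b):
    """Linear layer plus sigmoid: map the behaviour summary h to a probability."""
    logit = sum(wi * hi for wi, hi in zip(w, h)) + b
    return 1.0 / (1.0 + math.exp(-logit))

def bce(y, p, eps=1e-12):
    """Binary cross-entropy between outcome y (0/1) and prediction p."""
    return -(y * math.log(p + eps) + (1 - y) * math.log(1 - p + eps))

h = [0.1, -0.4, 0.3]                         # toy behaviour summary
p = predict_default(h, w=[1.0, -1.0, 0.5], b=0.0)
print(round(p, 4), round(bce(1, p), 4))      # probability and loss if the loan defaults
```

Minimizing the cross-entropy over many loans pushes ŷ toward the observed default frequency conditional on behavior, which is exactly the standard setup critiqued next for its lack of interpretability.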

This approach is standard in classification problems, but it has one serious drawback in credit risk modeling: the predicted default probability is not interpretable. It neither distinguishes the sources of risk nor illuminates the contributions of different sources to default. Here, we propose to construct the default probability from three major determinants of loan defaults (Lee 1991, Chehrazi and Weber 2015): the objective risk (the ability to repay), the subjective risk (the willingness to repay), and the behavioral risk (the risk that is neither objective nor subjective). In terms of probabilities, we formulate the default probability as follows,

ŷ = 1 − (1 − p_a)(1 − p_w)(1 − p_b)    (9)

In Equation (9), ŷ is the default probability, p_a is the default probability when the ability is a, p_w is the default probability when the willingness is w, and p_b is the default probability conditioned on a and w, i.e., the default risk caused by behavioral patterns other than the ability and the willingness to repay. In this way, the default probability becomes interpretable.

To realize the construction of an interpretable default probability in neural networks, we first decompose h_i into three vectors:

h^a_i = φ(W_a h_i + b_a),  h^w_i = φ(W_w h_i + b_w),  h^b_i = φ(W_b h_i + b_b)    (10)

where {W_a, b_a}, {W_w, b_w}, and {W_b, b_b} are trainable parameters for the decomposition, and h^a_i, h^w_i, and h^b_i are hidden vectors containing the information for the ability risk, the willingness risk, and the behavioral risk, respectively. Then, the hidden vectors are separately mapped to one dimension to predict p_a, p_w, and p_b:

p_a = σ(w_a^T h^a_i + c_a),  p_w = σ(w_w^T h^w_i + c_w),  p_b = σ(w_b^T h^b_i + c_b)    (11)

where {w_a, c_a}, {w_w, c_w}, and {w_b, c_b} are trainable parameters for the mapping. Following that, the predicted default probability ŷ_i is supervised by y_i as before.
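The decomposition-plus-mapping steps can be sketched as follows; the shared placeholder weights are illustrative only, and the helper names are our own, not from the paper:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def dense(W, b, v, act):
    # one fully connected layer with activation act
    return [act(sum(w * x for w, x in zip(row, v)) + bb) for row, bb in zip(W, b)]

def decompose_risks(h, params):
    """Split the behaviour summary h into three hidden vectors (ability,
    willingness, behaviour), then map each one to its own probability."""
    out = {}
    for name in ("ability", "willingness", "behavior"):
        W1, b1, w2, b2 = params[name]
        hidden = dense(W1, b1, h, math.tanh)              # decomposition step
        logit = sum(w * x for w, x in zip(w2, hidden)) + b2
        out[name] = sigmoid(logit)                        # per-risk probability head
    return out

h = [0.2, -0.1]
# identical placeholder weights for all three branches, just for illustration
params = {name: ([[0.3, 0.1], [0.1, 0.3]], [0.0, 0.0], [1.0, -1.0], 0.0)
          for name in ("ability", "willingness", "behavior")}
risks = decompose_risks(h, params)
print(sorted(risks))  # keys: ability, behavior, willingness
```

In the real model each branch has its own trained weights, so the three probabilities diverge according to the supervision each receives.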

In order to let p_a, p_w, and p_b truly represent the meanings we impose on them, it is imperative to supervise them independently with their own ground truth in training. However, the true ability, willingness, and behavioral risks are completely unobservable in practice. Therefore, we put forward a method to infer the values of p_a and p_w for a loan by carefully analyzing the repayment behavior on that loan.

In particular, if a borrower defaults on a loan, although we cannot be sure whether the default is caused by a low ability or a low willingness to repay, we can still infer that one of them must be low enough to lead to the outcome. In probabilistic terms, the probability that the default is caused by neither the ability nor the willingness to repay is very low.

On the contrary, if a borrower repays every installment on time and never defaults on the loan, it is almost certain that the borrower has both a high ability and a high willingness to repay. An interesting situation in between is a borrower who never defaults but is often delinquent (overdue) on the periodic installments of the loan. In this case, the repaying ability of the borrower must be high, as the borrower is always able to complete the payments, but the willingness may be low because of the frequent delinquencies. Mathematically, the inference above can be summarized as

p_b → 0,  if y_i = 1;    p_a → 0 and p_w → ρ_i,  if y_i = 0    (12)

where ρ_i is the proportion of the installments of loan i on which the borrower has been delinquent. In this way, we infer the values of p_a and p_w under different conditions. These inferred values can be used as supervision targets in training via a conditional loss function:

ℓ_cond,i = 1{y_i = 1} · ℓ(0, p_b) + 1{y_i = 0} · [ℓ(0, p_a) + ℓ(ρ_i, p_w)]    (13)
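One concrete reading of the inferred supervision targets, under the assumption (ours) that components without an inferred target are simply left unsupervised for that loan:

```python
def inferred_targets(defaulted, delinquency_ratio):
    """Supervision targets implied by the inference in the text:
    - default observed: the chance that neither ability nor willingness caused
      it is very low, so the behavioural-risk target is ~0;
    - no default: the ability risk is low (target 0) and the willingness-risk
      target is set to the observed delinquency ratio.
    Returns (ability_target, willingness_target, behavior_target);
    None marks a component left unsupervised for that loan."""
    if defaulted:
        return (None, None, 0.0)
    return (0.0, delinquency_ratio, None)

print(inferred_targets(True, 0.4))   # default: supervise behavioural risk only
print(inferred_targets(False, 0.4))  # repaid, but delinquent on 40% of installments
```

A borrower who repays every installment on time gets a delinquency ratio of zero, so the willingness target collapses to 0 as the text describes.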

Note that the conditional loss hinges on the binary variable y_i, so the two expressions can be written as one and combined with the conventional loss ℓ(y_i, ŷ_i). In summary, the proposed loss function for the NeuCredit model is

L = (1/B) Σ_{n=1}^{B} Σ_{i=1}^{T} [ ℓ(y_{n,i}, ŷ_{n,i}) + y_{n,i} · ℓ(0, p_b) + (1 − y_{n,i}) · (ℓ(0, p_a) + ℓ(ρ_{n,i}, p_w)) ]    (14)

where B is the batch size used in mini-batch optimization and T is the loan sequence length. The first part of Equation (14) is the conventional classification loss; here, we use binary cross-entropy as ℓ. The second and third parts of Equation (14) are the conditional losses hinging on the value of y_{n,i}.
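A sketch of this combined loss under the same reading (binary cross-entropy throughout; the field names and the two-loan batch are invented for illustration):

```python
import math

def bce(y, p, eps=1e-12):
    return -(y * math.log(p + eps) + (1 - y) * math.log(1 - p + eps))

def neucredit_loss(batch):
    """Combined loss: standard BCE on the overall default prediction, plus
    conditional terms that supervise the component risks only when their
    inferred targets exist for that loan."""
    total = 0.0
    for loan in batch:
        y, p = loan["y"], loan["p"]
        total += bce(y, p)                                 # conventional classification loss
        if y == 1:
            total += bce(0.0, loan["p_behavior"])          # behavioural risk should be low
        else:
            total += bce(0.0, loan["p_ability"])           # ability risk should be low
            total += bce(loan["rho"], loan["p_willingness"])  # match delinquency ratio
    return total / len(batch)

batch = [
    {"y": 1, "p": 0.8, "p_behavior": 0.1},
    {"y": 0, "p": 0.2, "p_ability": 0.1, "p_willingness": 0.3, "rho": 0.25},
]
print(round(neucredit_loss(batch), 4))
```

Every term is an ordinary cross-entropy, so the whole objective stays differentiable and the gradients flow back through the decomposition, fusion, and recurrent layers as the next paragraph describes.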

Following the computational graph, one can straightforwardly compute the gradients for all the network parameters in the NeuCredit model. The error signals obtained by weighing the predicted outputs against the observed loan outcomes can be back-propagated through the decomposition layers and the fusion layers all the way to the very beginning to update the parameters in the different branches of Tva-LSTMs. In that sense, the NeuCredit model is an end-to-end deep neural network model that learns the dynamics of consumer behavior for interpretable credit risk modeling.

5 Experiment

In this section, we design and conduct experiments using both synthetic datasets and real-life datasets to address the following four groups of questions:

  • How much better are deep learning models than conventional machine learning models?

  • How much value is added by incorporating shopping behavior data when forecasting consumer credit risk?

  • Is it indeed important to model the irregular time intervals between events?

  • Can the forecasted default probabilities be decomposed into consumers' ability to repay, willingness to repay, and their behavioral factors?

We use the synthetic dataset to demonstrate the superiority of the Tva-LSTM model over competing models in recovering the dynamics of complex patterns. The construction details are in Appendix B (Generation of the Synthetic Dataset). The synthetic dataset contains 10,000 sequences with a length of 50 each. Every data point in the dataset has 106 features and 1 label. Among the 106 features, only 5 are involved in the generation of the label, while the rest are all noise. Besides, to produce sequential dependencies, the 5 features at the current time-stamp in a sequence are generated by transforming the 5 features at the previous time-stamp in a highly non-linear manner. The label of each data point is a binary indicator that takes the value of 1 or 0. Among the 500,000 data points, 323,326 are positive instances, i.e., their labels equal 1.

The real-life dataset contains 5,000 loan sequences with 38,182 loans in total. The average length of the loan sequences is 7.64. Each loan possesses 15 features. Among the 38,182 loans, 11,184 (29.3%) default. For each loan sequence, an order sub-sequence and a session sub-sequence are matched; therefore, there are 5,000 order sub-sequences and 5,000 session sub-sequences. The dataset contains 499,572 orders and 356,338 sessions. On average, the length of an order sub-sequence is 13.08 and that of a session sub-sequence is 9.33. Each order possesses 45 features and each session possesses 16 features.

In the experiments, sequences and sub-sequences shorter than 15 are zero-padded to a length of 15. The influence of the padding is eliminated through masking in both training and testing. This treatment is common practice in temporal data modeling and allows recurrent models to handle variable-length sequences. Features are standardized before being passed to the models.
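The padding-and-masking step can be sketched as:

```python
def pad_and_mask(sequences, max_len=15):
    """Zero-pad each feature sequence to max_len and build a mask that lets a
    recurrent model ignore the padded positions."""
    padded, masks = [], []
    for seq in sequences:
        pad = max_len - len(seq)
        padded.append(seq + [0.0] * pad)      # append zeros up to max_len
        masks.append([1] * len(seq) + [0] * pad)  # 1 = real step, 0 = padding
    return padded, masks

padded, masks = pad_and_mask([[0.3, 0.7], [0.1, 0.2, 0.5, 0.9]])
print(len(padded[0]), sum(masks[0]), sum(masks[1]))  # 15 2 4
```

During training the mask zeroes out the loss (and state updates) at padded positions, so the model's statistics come only from the real events.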

As different groups of questions require different sets of benchmark models, these models and their implementation details are specified in the corresponding subsections. All methods are evaluated using five-fold cross-validation (Kohavi et al. 1995). The Area-Under-ROC-Curve (AUC) score is used as the primary performance metric in the evaluation (Bradley 1997). Experiments are implemented in Python. The Pandas (http://pandas.pydata.org/) and NumPy (http://www.numpy.org/) libraries are used to process the datasets, and the Scikit-Learn (http://scikit-learn.org/) and TensorFlow (https://www.tensorflow.org/) libraries are used to implement the algorithms. The source code of the implementation will be made publicly available after paper acceptance.
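A minimal sketch of the evaluation protocol — a rank-based AUC and an interleaved five-fold split — on synthetic labels and scores (the real protocol presumably shuffles and stratifies; this is only illustrative):

```python
import random

def auc(labels, scores):
    """Rank-based AUC: the probability that a randomly chosen positive is
    scored above a randomly chosen negative (ties count one half)."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def five_fold_auc(labels, scores):
    """Interleaved 5-fold split: fold k holds every 5th example."""
    folds = [range(k, len(labels), 5) for k in range(5)]
    return [auc([labels[i] for i in f], [scores[i] for i in f]) for f in folds]

rng = random.Random(0)
labels = [i % 2 for i in range(200)]                     # toy alternating labels
scores = [0.3 * y + 0.7 * rng.random() for y in labels]  # noisy but informative scores
aucs = five_fold_auc(labels, scores)
print([round(a, 3) for a in aucs])
```

Reporting the per-fold AUCs alongside their mean and standard deviation, as the tables below do, shows both the level and the stability of each model's performance.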

5.1 Deep Learning vs. Conventional Machine Learning Models

In this part, we test the performance improvements of our model over other conventional and competitive models. Specifically, does our model perform better in credit risk prediction than conventional models? Can a model with a similar structure but conventional units achieve comparable performance to our model? To answer these questions, the following methods are compared in experiments:

  • LR (loan): the Logistic Regression model trained on loans with the time interval as an extra feature. This is similar to the traditional consumer credit management scenario where only financing behavior can be observed.

  • LR (all): the Logistic Regression model trained on all three groups of data (loans, orders, and sessions). The features of sub-sequences are averaged along the timeline and concatenated with loan features. The time intervals are regarded as extra features.

  • RF (loan): the Random Forest model trained on loans with the time interval as an extra feature. This is similar to the traditional consumer credit management scenario where only financing behavior can be observed.

  • RF (all): the Random Forest model trained on all three groups of data (loans, orders, and sessions). The features of sub-sequences are averaged along the timeline and concatenated with loan features. The time intervals are regarded as extra features.

  • LSTM-w-dt (loan): the standard LSTM model trained on loans with the time interval as an extra feature.

  • MvM-Tva-LSTM (all): the model that employs the same hierarchical structure and fusion mechanism as Figure 4. The model is trained on all three groups of sequential data (loans, orders, and sessions).

The models are trained to predict loan defaults using the binary cross-entropy loss. The number of hidden units is set to 5 for the Tva-LSTM unit and the LSTM unit employed in the aforementioned methods. The number of output units of the fully connected fusion layer in the FC-LSTM model is set to 5, and the number of output units of the factor matrices in the MvM-Tva-LSTM model is set to 5. All neural network models are trained with a mini-batch Adam optimizer (Kingma and Ba 2014). The batch size is set to 1,000 and the learning rate to 0.001. The number of training epochs is determined using an early stopping criterion (Caruana et al. 2001). The logistic regression models and the random forest models are trained with the default parameter settings in Scikit-Learn. The AUCs of the different models in five-fold cross-validation are shown in Table 4.

First, the conventional methods indeed cannot reach performance comparable to the deep neural network methods. Second, compared with the FC-LSTM model, which employs conventional units but the same hierarchical structure as our model, the MvM-Tva-LSTM model achieves better performance in the experiments. An interesting finding is that the average AUC of the FC-LSTM outperforms that of the FC-Tva-LSTM model in Section 5.2. This is inconsistent with our finding in Section 5.3 that the Tva-LSTM model is better at handling time intervals and can outperform the conventional LSTM model that ignores Δt. The reason is that both the FC-LSTM model and the FC-Tva-LSTM model are trained in an end-to-end manner that requires the model to learn all the parameters from scratch (cold start). While the units in the FC-LSTM model are conventional and easy to train, the units in the FC-Tva-LSTM model are much more complicated in design, which leads to insufficient training of the Tva-LSTM unit in the FC-Tva-LSTM model. This problem can be settled by using pre-trained parameters to initialize the Tva-LSTM in FC-Tva-LSTM (warm start). In general, these results demonstrate the feasibility and effectiveness of using shopping behavior to model consumer credit risk. They also suggest that the proposed hierarchical architecture is better at capturing the underlying behavioral patterns of consumers than conventional methods.

Method/AUC (%) AUC-1 AUC-2 AUC-3 AUC-4 AUC-5 Avg. AUC S.D.
LR (loan) 63.59 64.68 66.33 60.47 63.96 63.80 0.0191
LR (all) 69.13 68.99 71.22 67.02 68.18 68.91 0.0138
RF (loan) 59.76 61.72 60.17 59.83 60.80 60.46 0.0073
RF (all) 68.59 67.16 69.28 66.97 68.53 68.11 0.0089
LSTM-w-dt (loan) 70.59 70.45 69.87 68.30 71.69 70.18 0.0111
MvM-Tva-LSTM (all) 74.25 73.22 75.86 72.37 73.98 73.94 0.0116
Table 4: AUCs Achieved with Different Models in Five-fold Cross-validation

5.2 The Importance of Adding Browsing and Purchasing Data

To better understand the roles played by different views of shopping behavior in default risk modeling, we train a Tva-LSTM model on each of the three types of temporal data. Besides, we study the importance of modeling the behavioral interactions and the necessity of multi-view fusion. Specifically, without borrowing data, does consumer shopping behavior alone contain information relevant to predicting the outcomes of borrowing? If it does, does the multi-view fusion strategy successfully model the behavioral interactions in online shopping and uncover their contributions to credit risk prediction? Does the Multi-view Machines fusion layer perform better than the straightforward fully connected fusion layer? To answer these questions, the following methods are implemented in the experiments:

  • Tva-LSTM (loan): the Time-value-aware LSTM model trained on loan sequences.

  • Tva-LSTM (order): the Time-value-aware LSTM model trained on order sub-sequences; the hierarchical structure in Figure 4 is employed without fusion with other sequences/sub-sequences.

  • Tva-LSTM (session): the Time-value-aware LSTM model trained on session sub-sequences; the hierarchical structure in Figure 4 is employed without fusion with other sequences/sub-sequences.

  • FC-Tva-LSTM (all): the model that employs the same hierarchical structure as Figure 4 but uses a fully-connected layer instead of a Multi-view Machines layer for fusion. The model is trained on all three groups of sequential data (loans, orders, and sessions).

  • MvM-Tva-LSTM (all): the model that employs the same hierarchical structure and fusion mechanism as Figure 4. The model is trained on all three groups of sequential data (loans, orders, and sessions).

Method/AUC (%) AUC-1 AUC-2 AUC-3 AUC-4 AUC-5 Avg. AUC S.D.
Tva-LSTM (loan) 71.13 71.14 71.02 68.95 72.29 70.91 0.0108
Tva-LSTM (order) 72.53 71.28 72.29 69.87 72.42 71.68 0.0101
Tva-LSTM (session) 54.88 53.87 54.61 57.27 54.43 55.01 0.0118
FC-Tva-LSTM (all) 73.11 72.04 74.87 71.18 73.92 73.02 0.0131
MvM-Tva-LSTM (all) 74.25 73.22 75.86 72.37 73.98 73.94 0.0116
Table 5: AUCs Achieved with Different Real-Life Data Streams in Five-fold Cross-validation

The models are trained to predict loan defaults using the binary cross-entropy loss. The number of hidden units is set to 5 for all Tva-LSTM units employed in the aforementioned methods. The number of output units of the fully connected fusion layer in the FC-Tva-LSTM model is set to 5, and the number of output units of the factor matrices in the MvM-Tva-LSTM model is set to 5. All models are trained with a mini-batch Adam optimizer (Kingma and Ba 2014). The batch size is set to 1,000 and the learning rate to 0.001. The number of training epochs is determined using an early stopping criterion (Caruana et al. 2001). The AUCs of the different models in five-fold cross-validation are shown in Table 5.

First, the average AUCs achieved with orders and sessions are 71.68% and 55.01%, respectively. Both are greater than 50%, indicating that shopping behavior is indeed useful in predicting one's credit risk. Moreover, ordering actions appear more informative than borrowing actions, as the average AUC achieved with orders is consistently higher than that achieved with loans. This finding provides an empirical foundation for developing online shopping credit on e-commerce platforms. When all types of actions are considered, the performance increases: the AUCs of both the FC-Tva-LSTM model and the MvM-Tva-LSTM model are consistently higher than those of the models with a single type of actions, indicating that the fusion layer indeed exploits the macroscopic interactions across different views of data, and that the mutual effects it exposes are effective in evaluating consumer credit risk. Besides, the MvM-Tva-LSTM model performs better than the FC-Tva-LSTM model, which demonstrates that the Multi-view Machines fusion is better at capturing the interactions.

5.3 The Importance of Modeling Irregular Time-intervals of Events

In this part, we study the importance of handling irregularity in temporal data modeling. Specifically, is it indeed necessary to take time intervals into consideration? Is the proposed handling of time intervals, via the Tva-LSTM model, better at capturing the irregularities in behavior than other methods? To answer these questions, the following models are used for comparison:

  • LSTM: the standard LSTM model that ignores time intervals.

  • LSTM-w-dt: the standard LSTM model that incorporates time intervals as an extra feature, as in Equation (1).

  • T-LSTM: the Time-aware LSTM model proposed by Baytas et al. (2017) that incorporates time intervals via a preset discounting function g(Δt).

  • Tva-LSTM: the Time-value-aware LSTM model proposed in this study that handles the time intervals in a more expressive way.

The experiments are conducted on both the synthetic sequences and the loan sequences in the real-life data. The models are trained to predict the binary outcomes using the binary cross-entropy loss. The number of hidden units is set to 2 for all models in the experiments with the synthetic data, and to 5 for all models in the experiments with the real-life data. All models are trained with a mini-batch RMSprop optimizer (Mukkamala and Hein 2017). The batch size is set to 1,000 and the learning rate to 0.001. The number of training epochs is determined using an early stopping criterion (Caruana et al. 2001). The AUCs of the different models in five-fold cross-validation are plotted in Figure 5, and the average AUCs are shown in Table 6.

The proposed Tva-LSTM model achieves the best performance on both the synthetic and the real-life data. Models that incorporate time intervals achieve better average AUCs. This effect is more evident in the experiments with real-life data, where LSTM-w-dt, T-LSTM, and Tva-LSTM all outperform the conventional LSTM by more than three percentage points. These results demonstrate the necessity of taking time intervals into consideration and the superiority of the proposed discounting mechanism in Tva-LSTM.

Figure 5: AUCs of Sequential Models in Five-fold Cross-validation
Data Synthetic Real-Life
Method/Metric Avg. AUC (99%+bps) S.D. Avg. AUC (%) S.D.
LSTM 69 0.0002 66.40 0.0135
LSTM-w-dt 69 0.0003 70.18 0.0111
T-LSTM 74 0.0005 70.03 0.0115
Tva-LSTM 82 0.0004 70.91 0.0108
Table 6: Prediction Performance of Sequential Models

5.4 Structural Interpretation of Forecasted Default Probabilities

Up to now, models such as FC-LSTM, FC-Tva-LSTM, and MvM-Tva-LSTM have all been trained to predict loan defaults using the binary cross-entropy loss. In this part, we turn to the interpretable conditional loss function and evaluate the complete NeuCredit model. Specifically, we want to address the following questions. Given the highly complicated structures and operations inside the NeuCredit model, does it converge properly in training? If it does, what is its performance? More importantly, are the predicted ability and willingness risks consistent with our design? How does consumer behavior relate to the ability and the willingness to repay?

The parameter setting of the NeuCredit model is the same as that of the MvM-Tva-LSTM model. The training loss curve and the training AUC curve are plotted in Figure 6; for simplicity, only the curves for one of the five splits in five-fold cross-validation are presented. As they show, the convergence of the NeuCredit model is not affected even though it employs many complicated components, such as the Tva-LSTM units, the Multi-view Machines fusion, and the conditional loss. The loss decreases and the AUC increases continuously as training proceeds.

Figure 6: The Loss Curves and the AUC Curves in Training and Validation

The prediction performance is presented in Table 7, together with that of the MvM-Tva-LSTM model for reference. Note that the performance of the NeuCredit model is slightly inferior to that of MvM-Tva-LSTM. This is a reasonable result: while MvM-Tva-LSTM is trained as a maximum likelihood estimator, i.e., by directly optimizing the binary cross-entropy, the NeuCredit model needs to weigh the prediction performance against interpretability. This trade-off leads to a small performance decrease of the NeuCredit model in default risk prediction.

Method/AUC (%) AUC-1 AUC-2 AUC-3 AUC-4 AUC-5 Avg. AUC S.D.
NeuCredit 74.91 72.18 72.39 74.00 75.06 73.71 0.0122
MvM-Tva-LSTM 74.25 73.22 75.86 72.37 73.98 73.94 0.0116
Table 7: AUCs Achieved with the NeuCredit model in Five-fold Cross-validation

Next, we check whether the predicted values of the behavioral risk, the ability risk, and the willingness risk are consistent with our design. We interpret the results from two perspectives. First, the three types of predicted risks are scattered against the predicted default probabilities in Figure 7 to visualize the correlation between credit risk and its determinants. Together with Equation (9), it is interesting to observe how the ultimate default probability is attributed to the three claimed risk types.

Figure 7: The Scatter for Predicted Default Determinants

We then use linear regressions to test whether the three types of risk are significantly correlated with the outcomes of loans, i.e., the default indicator y_i and the delinquency ratio ρ_i. The meaning of all regression variables is detailed in Table 8, and the results are collected in Tables 9, 10, and 11. The regressions in Table 9 reveal what we have encoded in the NeuCredit model. The regressions in Table 10 answer the question of whether the predicted values are significant in differentiating consumers with different risk levels. Finally, the regressions in Table 11 study whether the obtained factors are significant determinants in predicting consumer defaults.

As the results show, the behavioral risk and the willingness risk are indeed positively correlated with the default indicator and the delinquency ratio. The predicted values of the willingness risk are consistent with our expectation. One thing worthy of future investigation is that the correlation between the ability risk and the default indicator or the delinquency ratio is reversed. Also, the explanatory power of the predicted ability risk is low compared with the other two types of predicted risk. The reason could be that the model does not distinguish well between the behavioral risk and the ability risk, as they essentially share the same guidance in the training process. The predicted behavioral risk, in contrast, is in line with our expectation because it is supervised by a target that incorporates more information. In general, both the scatter plots and the regressions demonstrate that the predicted values of the three types of determinants are consistent with our design and that they do reveal the sources of credit risk.

Group: Real Outcome
  - the dummy variable that indicates default of a loan

Group: Prediction of NeuCredit
  - the predicted default probability of a loan
  - the predicted default probability of a loan attributed to the ability risk
  - the predicted default probability of a loan attributed to the willingness risk
  - the predicted default probability of a loan attributed to the behavioral risk
  - a dummy equal to 1 if the predicted ability risk is larger than its median; 0 otherwise
  - a dummy equal to 1 if the predicted willingness risk is larger than its median; 0 otherwise
  - a dummy equal to 1 if the predicted behavioral risk is larger than its median; 0 otherwise

Group: Loan Variable
  - the natural logarithm of the principal of a loan (CNY)
  - the term of a loan (months)
  - the annualized interest rate of a loan
  - the time interval between the current and the previous loan issuance
  - the natural logarithm of the minimum payment of a loan installment (CNY)

Group: Order Variable
  - the natural logarithm of the average order amount (CNY)
  - the average discount rate of an order
  - the average quantity of items within an order
  - the average quantity of distinct items within an order
  - the average time interval between the current and the previous order
  - the average proportion of virtual goods within an order
  - the average proportion of self-selling goods within an order
  - the average proportion of free gifts within an order
  - the user level when placing an order

Group: Session Variable
  - the average number of clicks within a session
  - the average number of categories visited within a session
  - the average duration of a session (minutes)
  - the average time interval between the current and the previous session

Table 8: Explanation of Regression Variables
Table 9: Regression: What have we encoded via the NeuCredit model?

Table 10: Regression: Are the predicted values significant in differentiating consumers with different risk levels?

Table 11: Regression: Are the obtained factors significant determinants in predicting consumer defaults?

(Each table reports the explanatory and response variables, the number of observations, and the number of groups.)

6 Conclusion

In this paper, we take a data-driven, bottom-up approach to model consumer credit risk with structural interpretability in the e-commerce setting, where a platform provides unsecured lending to finance consumer purchases and must manage the resulting credit exposure. By zooming into tick-level shopping behavior and the subsequent financing records of a large population, we open a window onto consumer credit at an unprecedented granularity. Deciphering these data carefully allows real-time assessment of future payment risk, particularly when payments are financed without posting collateral.

The structure of our deep neural network is novel. First, we propose the Tva-LSTM recurrent unit to encode temporal shopping behaviors that occur stochastically in time. The Tva-LSTM unit effectively regularizes the time intervals in temporal data, and its discounting mechanism is explainable because it is derived under mild assumptions. The encoded representations are then passed to a Multi-view Machines layer for information fusion, a strategy that explicitly computes the interactions across different types of shopping behavior via tensor multiplication. Finally, the NeuCredit model organizes temporal data in a hierarchical structure, which avoids the dominant-view problem and achieves real-time fusion of various types of information. In addition, we propose a novel conditional loss function that exploits repaying behavior to infer the values of the determinants of credit risk. We decompose consumer credit risk into three determinants: behavioral risk, ability-to-repay risk, and willingness-to-repay risk. The supervision of these risks is accomplished during training even though their ground truths are not observable, so the NeuCredit model is able to output interpretable credit risk predictions. Extensive experiments are conducted on both a synthetic dataset and a massive real-life dataset collected from one of the largest global e-commerce platforms. The out-of-sample forecasts of consumer default risk demonstrate the effectiveness of the proposed methodology, both in the superiority of our model over conventional machine learning models and other state-of-the-art deep learning models, and in the interpretability of its predictions.
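As a rough illustration of the fusion idea, the numpy sketch below computes a Multi-view-Machines-style interaction score: each view vector is augmented with a constant, projected onto k latent factors, multiplied element-wise across views (an implicit tensor product over all views), and summed over factors. The shapes, factor dimension, and function names are illustrative assumptions, not the NeuCredit implementation.

```python
import numpy as np

def mvm_fuse(views, factors):
    """Multi-view-Machines-style fusion of m view representations.

    views:   list of m view vectors (e.g., encoded session/order/loan views).
    factors: list of m factor matrices, factors[p] of shape (d_p + 1, k).
    Each view is augmented with a constant 1 (so lower-order interactions
    are included), projected onto k latent factors, multiplied element-wise
    across views, and summed over the k factors into a scalar score.
    """
    k = factors[0].shape[1]
    fused = np.ones(k)
    for h, U in zip(views, factors):
        h_aug = np.append(h, 1.0)    # augment with constant term
        fused = fused * (h_aug @ U)  # element-wise product across views
    return float(fused.sum())        # scalar fusion score

rng = np.random.default_rng(1)
views = [rng.normal(size=d) for d in (4, 3, 5)]
factors = [rng.normal(size=(d + 1, 8)) for d in (4, 3, 5)]
score = mvm_fuse(views, factors)
print(score)
```

The factorization keeps the cost linear in the number of views and factors, rather than materializing the full cross-view interaction tensor.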

In our opinion, three future directions are worth studying further. First, the prediction performance can be boosted further. In this paper, we adopt an end-to-end learning scheme that trains the parameters of the neural network from scratch; a more efficient approach is to 'warm up' the network with pre-trained parameters. Pre-training can be done in many ways, including transfer learning algorithms, so how to transfer richer information into the NeuCredit model to further improve its performance is an interesting direction. Second, we propose a deep learning method to break down credit risk into its determinants. Can other determinants be incorporated into this framework, and if so, how? Answering this would further enrich the interpretability of the NeuCredit model. Third, since the NeuCredit model outputs predicted values of the determinants of credit risk, these predictions should be used not only for understanding the sources of risk but also for risk management: for example, building more accurate models for tasks such as debt collection or credit extension based on the predicted ability and willingness to repay is also of great importance.


Appendix A. Derivation of the Discounting Factor

In the Tva-LSTM, we assume that each element of the mapped cell memory $\tilde{C}_{t-1}$ changes at a distinct rate in every unit of time during the time interval $\Delta t$. The changing rates of all elements are collected in a matrix $R$ of the same size as $\tilde{C}_{t-1}$. The new cell memory after $\Delta t$ is therefore

$$\tilde{C}'_{t-1} = \tilde{C}_{t-1} \odot (1 + R)^{\Delta t}, \qquad (15)$$

where $\odot$ denotes the element-wise product and the power is also taken element-wise. If this change is continuous during $\Delta t$, that is, $\tilde{C}_{t-1}$ decays/grows $n$ times in every unit of time at rate $R/n$ with $n \to \infty$, we have

$$\tilde{C}'_{t-1} = \tilde{C}_{t-1} \odot \lim_{n \to \infty} \Big(1 + \frac{R}{n}\Big)^{n \Delta t}. \qquad (16)$$

According to the definition of Euler's number $e$, equation (16) can be simplified to

$$\tilde{C}'_{t-1} = \tilde{C}_{t-1} \odot e^{R \Delta t}, \qquad (17)$$

where $e^{R \Delta t}$ is regarded as a discounting factor of the mapped cell memory $\tilde{C}_{t-1}$. Based on this, we introduce basic changing rates for $\tilde{C}_{t-1}$ by setting up a bias matrix $b$, which allows the cell memory to change even when $\Delta t = 0$; an activation function $f(\cdot)$ is also used to add non-linearity. In summary, the discounting factor becomes the one employed in the Tva-LSTM recurrent unit:

$$e^{f(R \Delta t + b)}. \qquad (18)$$
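The discounting step can be sketched in a few lines of numpy. This is a minimal illustration, not the paper's code: tanh stands in for the unspecified activation, and the factor is taken as the exponential of the activated rate-times-interval term plus the bias; all function and variable names are assumptions.

```python
import numpy as np

def tva_discount(cell, R, b, dt):
    """Apply a Tva-LSTM-style discounting factor to a mapped cell memory.

    cell: mapped cell memory (any array shape); R: element-wise changing
    rates (same shape); b: bias matrix allowing change even when dt == 0;
    dt: elapsed time since the previous event. tanh stands in for the
    unspecified activation f, giving the factor exp(f(R * dt + b)).
    """
    factor = np.exp(np.tanh(R * dt + b))  # discounting factor per element
    return cell * factor                  # element-wise discount/growth

cell = np.array([1.0, -0.5, 2.0])
R = np.array([-0.3, 0.2, 0.0])
b = np.zeros(3)
print(tva_discount(cell, R, b, dt=0.0))  # with b = 0 and dt = 0, memory is unchanged
```

With a zero bias and zero elapsed time the factor is exactly 1, which matches the intuition that no time has passed since the last event; a negative rate shrinks the memory as dt grows, a positive rate amplifies it.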

Appendix B. Generation of the Synthetic Dataset

The synthetic dataset contains 10,000 sequences, each of length 50. Denote a sequence by $S = \{x_1, \dots, x_{50}\}$. Each data point possesses 106 features, i.e., $x_i \in \mathbb{R}^{106}$, and $\Delta t_i$ is the time interval between data points $x_{i-1}$ and $x_i$. The value of $\Delta t_i$ is 0 when $i = 1$ and is sampled at random otherwise. The other features are sampled as