Fighting Money Laundering with Statistics and Machine Learning: An Introduction and Review

01/11/2022
by Rasmus Jensen, et al.

Money laundering is a profound, global problem. Nonetheless, there is little statistical and machine learning research on the topic. In this paper, we focus on anti-money laundering in banks. To help organize existing research in the field, we propose a unifying terminology and provide a review of the literature. This is structured around two central tasks: (i) client risk profiling and (ii) suspicious behavior flagging. We find that client risk profiling is characterized by diagnostics, i.e., efforts to find and explain risk factors. Suspicious behavior flagging, on the other hand, is characterized by non-disclosed features and hand-crafted risk indices. Finally, we discuss directions for future research. One major challenge is the lack of public data sets. This may, potentially, be addressed by synthetic data generation. Other possible research directions include semi-supervised and deep learning, as well as interpretability and fairness of the results.


1 Introduction

Money laundering amounts to 2.1-4% of the world economy [Pietschmann2001], enabling corruption, drug dealing, and terrorism [McDowell2001]. Inadequate anti-money laundering (AML) systems contribute to the extent of the problem. Banks use these to flag suspicious behavior, traditionally relying on predefined and fixed rules [Verhage2009, Demetis2018]. Although the rules are formulated by experts, they are essentially just ‘if-this-then-that’ statements; easy to interpret but also inefficient. Indeed, over 98% of all AML alarms can be false positives [Richardson2019].
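To make the rule-based paradigm concrete, the following minimal sketch (our own illustration; the threshold and field names are hypothetical and not taken from any particular bank) shows how such an 'if-this-then-that' rule might be expressed in code:

```python
# Minimal sketch of a fixed 'if-this-then-that' AML rule (hypothetical threshold and fields).
from dataclasses import dataclass

@dataclass
class Transaction:
    client_id: str
    amount: float        # transaction amount in some currency
    is_cash: bool        # whether the transaction involves physical currency

def rule_based_alarm(tx: Transaction, threshold: float = 10_000.0) -> bool:
    """Raise an alarm if a cash transaction exceeds the threshold."""
    return tx.is_cash and tx.amount > threshold

# Example: this transaction would be flagged for human inquiry.
print(rule_based_alarm(Transaction("c1", 12_500.0, True)))  # True
```

Rules of this kind are transparent, but they are also easy for launderers to learn and circumvent, which is part of what motivates the statistical approaches reviewed below.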

Statistics and machine learning have long promised efficient and robust techniques for AML. So far, though, these have largely failed to materialize [Grint2001]. One reason is that the academic literature is small and fragmented [Leite2019, Ngai2011]. With this paper, we aim to do three things. First, we propose a unified terminology to homogenize and associate research methodologies. Second, we review selected, exemplary methods. Third, we present recent machine learning concepts that have the potential to improve AML.

The remainder of the paper is organized as follows. Section 2 introduces AML in banks. Section 3 presents our terminology. Sections 4 and 5 review the literature on client risk profiling and suspicious behavior flagging, respectively. Section 6 provides a discussion on future research directions and Section 7 concludes the paper.

2 Anti-Money Laundering in Banks

The international framework for AML is based on recommendations by the Financial Action Task Force [FATF2021]. Banks are, among other things, required to:

  1. know the true identity of, and risk associated with, clients, and

  2. monitor, make limited inquiries into, and report suspicious behavior.

To comply with the first requirement, banks ask their clients about identity records and banking habits. This is known as know-your-customer (KYC) information and is used to construct risk profiles. These are, in turn, often used to determine intervals for ongoing due diligence, i.e., checks on KYC information.

To comply with the second requirement, banks use electronic AML systems. These raise alarms for human inquiry. Bank officers then dismiss or report alarms to national financial intelligence units (i.e., authorities). The entire process is illustrated in Figure 1. Notably, banks are not allowed to disclose information about alarms or reports. In general, they also receive little feedback on reports. Furthermore, money launderers change their behavior in response to AML efforts. It is, for instance, public knowledge that banks in the United States report all currency transactions over $10,000 [Sun2021]. In response, money launderers employ smurfing, i.e., splitting large transactions into multiple smaller ones. Finally, because money laundering has no direct victims, it may go undetected for longer than other types of financial crime (e.g., credit card or bank wire fraud).

Figure 1: Process of an AML alarm. First, an AML system (1) raises the alarm. A bank officer (2) then reviews it. Finally, it is either (3a) dismissed or (3b) reported to authorities.

3 Terminology

In AML, banks typically face two principal data analysis problems: (i) client risk profiling and (ii) suspicious behavior flagging. We use these to structure our terminology and review. A related topic, not discussed here, concerns how authorities treat AML reports.

3.1 Client Risk Profiling

Client risk profiling is used to assign clients general risk scores. Let $x_i$ be a vector of features specific to client $i$ and let $\mathcal{Y}$ be a generic set. A client risk profiling is simply a mapping

$f : x_i \mapsto y_i \in \mathcal{Y},$

where $y_i$ captures the money laundering risk associated with client $i$. In the literature, $\mathcal{Y}$ is often a discrete set. For example, we may have $\mathcal{Y} = \{1, 2, 3\}$, where $1$ symbolizes low risk, $2$ symbolizes medium risk, and $3$ symbolizes high risk.

Client risk profiling is characterized by working on the client, not transaction, level. Still, the concept is very general, and numerous statistical and machine learning methods may be employed. We distinguish between unsupervised and supervised methods. Unsupervised methods utilize data sets of the form $\{x_i\}_{i=1}^{N}$, where $N$ denotes some number of clients. Supervised methods, by contrast, utilize data sets $\{(x_i, y_i)\}_{i=1}^{N}$ where some labels $y_i$ (e.g., risk scores) are given.

3.2 Suspicious Behavior Flagging

Suspicious behavior flagging is used to raise alarms on clients, accounts, or transactions. Consider a setup where client $i$ has accounts $j = 1, \dots, J_i$. Furthermore, let each account have transactions $t = 1, \dots, T_{ij}$ and let $x_{ijt}$ be some features specific to transaction $t$. An AML system is a function

$g : x_{ijt} \mapsto \hat{y}_{ijt} \in \{0, 1\},$

where $\hat{y}_{ijt} = 1$ indicates that an alarm is raised on transaction $t$. Multiple methods can be used to construct an AML system. Again, we distinguish between unsupervised and supervised methods. Regardless of approach, all AML systems are, however, built on one fundamental premise. To cite Bolton and Hand [Bolton2002]: "... given that it is too expensive to undertake a detailed investigation of all records, one concentrates investigation on those thought most likely to be fraudulent." Thus, a good AML system needs to model the probability

$P(y_{ijt} = 1 \mid x_{ijt}),$

where $y_{ijt} = 1$ indicates that transaction $t$ should be reported for money laundering (with $y_{ijt} = 0$ otherwise). We may then use an indicator function $\hat{y}_{ijt} = \mathbb{1}\{P(y_{ijt} = 1 \mid x_{ijt}) > \tau\}$ to raise alarms given some threshold value $\tau$.

It can be difficult to determine if a transaction, in isolation, constitutes money laundering. As a remedy, the level of analysis is often changed. We may, for instance, consider account features $x_{ij}$ that summarize all activity on account $j$. Alternatively, we may consider the set $X_{ij} = \{x_{ij1}, \dots, x_{ijT_{ij}}\}$ of all feature vectors for transactions made on account $j$. Defining $y_{ij}$ in analogy to $y_{ijt}$, we may then model

$P(y_{ij} = 1 \mid x_{ij}) \quad \text{or} \quad P(y_{ij} = 1 \mid X_{ij}),$

i.e., the probability that account $j$ should be reported for money laundering given, respectively, $x_{ij}$ or $X_{ij}$. Similarly, we could raise alarms directly at the client level, modeling

$P(y_i = 1 \mid x_i),$

where $y_i = 1$ indicates (with $y_i = 0$ otherwise) that client $i$ should be reported for money laundering. Note, finally, that suspicious behavior flagging and client risk profiling may overlap at the client level. Indeed, we could use $P(y_i = 1 \mid x_i)$ as a risk profile for client $i$.
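As a minimal illustration of the thresholding step in this terminology (a sketch only; the probabilities below are placeholders rather than the output of any particular model):

```python
import numpy as np

def raise_alarms(probs: np.ndarray, tau: float = 0.5) -> np.ndarray:
    """Indicator function: raise an alarm when the estimated reporting probability exceeds tau."""
    return (probs > tau).astype(int)

# Hypothetical client-level probabilities P(y_i = 1 | x_i) from some fitted model.
probs = np.array([0.02, 0.71, 0.48, 0.93])
print(raise_alarms(probs, tau=0.6))  # [0 1 0 1]
```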

4 Client Risk Profiling

Studies on client risk profiling are characterized by diagnostics, i.e., efforts to find and explain risk factors. Although all authors use private data sets, most share a fair amount of information about features used in their studies. As we shall see later, this contrasts with the literature on suspicious behavior flagging.

4.1 Unsupervised Client Risk Profiling

Claudio and João [Claudio2015] employ $k$-means clustering [Lloyd1982] to construct risk profiles. The algorithm seeks a clustering that assigns every client $i$ to a cluster $c \in \{1, \dots, k\}$. This is achieved by solving for

$\arg\min_{C_1, \dots, C_k} \sum_{c=1}^{k} \sum_{i \in C_c} \lVert x_i - \mu_c \rVert^2,$

where $\mu_c$ denotes the mean of cluster $c$ and $C_c$ denotes the set of clients assigned to cluster $c$. The problem is addressed in a greedy optimization fashion; iteratively assigning every client to the cluster with the nearest mean and setting $\mu_c = \frac{1}{|C_c|}\sum_{i \in C_c} x_i$. To evaluate the approach, the authors employ a data set with approximately 2.4 million clients from an undisclosed financial institution. Disclosed features include the average size and number of transactions. The authors implement $k$ clusters, designating two of them as risky. The first contains clients with a high number of transactions but low transaction values. The second contains clients with older accounts but larger transaction values. Finally, the authors employ decision trees (see section 4.2) to find classification rules that emulate the clusters. The motivation is, presumably, that bank officers find it easier to work with rules than with $k$-means.
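A minimal sketch of this pipeline, using scikit-learn on synthetic client features and an arbitrary number of clusters (the actual features, cluster count, and data are not public), might look as follows:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)
# Hypothetical client features: [average transaction size, number of transactions].
X = rng.lognormal(mean=[7.0, 3.0], sigma=1.0, size=(1000, 2))

# Cluster clients with k-means (the number of clusters is an arbitrary choice here).
kmeans = KMeans(n_clusters=7, n_init=10, random_state=0).fit(X)
labels = kmeans.labels_

# Emulate the clusters with an interpretable decision tree, mirroring the authors' last step.
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, labels)
print(export_text(tree, feature_names=["avg_size", "n_transactions"]))
```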

Cao and Do [Cao2012] present a similar study, applying clustering with slope [Yang2002]. Starting with 8,020 transactions from a Vietnamese bank, the authors first change the level of analysis to individual clients. Features include the sum of in- and outgoing transactions, the number of sending and receiving third parties, and the difference between funds sent and received. The authors then discretize features and build clusters based on cluster histograms' height-to-width ratios. They finally simulate 25 accounts with money laundering behavior. Some of these can easily be identified in the produced clusters. We do, however, note that much may depend on the nature of the simulations.

Paula et al. [Paula2016] use an autoencoder neural network to find outliers among Brazilian export firms. Neural networks can be described as directed, acyclic graphs that connect computational units (i.e., neurons) in layers. The output of a feedforward neural network with $L$ layers is given by

$f(x) = \phi_L\big(W_L \phi_{L-1}(\cdots \phi_1(W_1 x + b_1) \cdots) + b_L\big),$

where $W_1, \dots, W_L$ are weight matrices, $b_1, \dots, b_L$ are biases, and $\phi_1, \dots, \phi_L$ are (non-linear) activation functions. Neural networks are commonly trained with iterative gradient-based optimization, for instance stochastic gradient descent [Robbins1951] and backpropagation [Rumelhart1986] or more recent adaptive iterative optimization schemes like Adam [Kingma2015]. The aim is to minimize a loss function

$\sum_{i=1}^{N} \ell\big(f(x_i), y_i\big)$

over all observations $i = 1, \dots, N$, where $y_i$ is a target value or vector. Autoencoders, as employed by the authors, are a special type of neural network that seeks a latent representation of its inputs. To this end, they employ an encoder-decoder (i.e., "hourglass") architecture and try to replicate their inputs in their outputs, i.e., have $y_i = x_i$. The authors specifically use 5 layers with 18, 6, 3, 6, and 18 neurons. The first two layers (with 18 and 6 neurons) form an encoder. A latent representation is then obtained by the middle layer with 3 neurons. Finally, the last two layers (with 6 and 18 neurons) form a decoder. The approach is tested on a data set of export firms. Features include information about debit and credit transactions, export volumes, taxes paid, and previous customs inspections. As a measure of risk, the authors employ the reconstruction error $\lVert f(x_i) - x_i \rVert$, frequently used for anomaly or novelty detection in this setting (see, for instance, [Chen2017]). This way, they identify 20 high-risk firms, some of which third-party experts confirmed to be fraudulent.
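The sketch below illustrates the general idea with PyTorch on synthetic data; the 18-6-3-6-18 architecture follows the description above, but the activation functions, training details, and data are our own assumptions:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
X = torch.randn(500, 18)  # hypothetical standardized firm features (18 per firm)

# Encoder-decoder ("hourglass") architecture: 18 -> 6 -> 3 -> 6 -> 18.
model = nn.Sequential(
    nn.Linear(18, 6), nn.Tanh(),
    nn.Linear(6, 3), nn.Tanh(),   # 3-dimensional latent representation
    nn.Linear(3, 6), nn.Tanh(),
    nn.Linear(6, 18),
)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for _ in range(200):                      # train the network to replicate its inputs
    opt.zero_grad()
    loss = loss_fn(model(X), X)
    loss.backward()
    opt.step()

# Reconstruction error per firm as an anomaly/risk score; the largest errors are outliers.
with torch.no_grad():
    scores = ((model(X) - X) ** 2).mean(dim=1)
top20 = torch.topk(scores, k=20).indices  # indices of the 20 highest-risk firms
```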

4.2 Supervised Client Risk Profiling

Colladon and Remondi [Colladon2017] combine social network analysis and logistic regression. Using 33,670 transactions from an Italian factoring firm, the authors first construct three graphs; $G_1$, $G_2$, and $G_3$. All share the same nodes, representing clients, while edges represent transactions. In $G_1$, edges are weighted relative to transaction size. In $G_2$, they are weighted relative to connected clients' business sectors. Finally, in $G_3$, they are weighted relative to geographic factors. Next, a set of graph metrics are used to construct features $x_i$ for every client. These include in-, out-, and total-degrees, closeness, betweenness, and constraint. A label $y_i \in \{0, 1\}$ is also collected for clients, denoting (with $y_i = 1$) if the client can be connected to a money laundering trial. The authors then employ a logistic regression model

$P(y_i = 1 \mid x_i) = \big(1 + \exp(-\beta^\top x_i)\big)^{-1},$

where $\beta$ denotes the learnable coefficients. The approach achieves an impressive performance. Results indicate that in-degrees and total-degrees above certain thresholds are associated with higher risk. By contrast, constraint and closeness above certain thresholds are associated with lower risk.
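A hedged sketch of the general recipe, combining networkx graph metrics with scikit-learn's logistic regression on a synthetic graph and placeholder labels (the authors' actual graphs, features, and thresholds are not public):

```python
import networkx as nx
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
# Hypothetical transaction graph: nodes are clients, directed edges are transactions.
G = nx.gnp_random_graph(200, 0.05, seed=0, directed=True)

in_deg = dict(G.in_degree())
out_deg = dict(G.out_degree())
betw = nx.betweenness_centrality(G)
close = nx.closeness_centrality(G)

X = np.array([[in_deg[v], out_deg[v], betw[v], close[v]] for v in G.nodes()])
y = rng.integers(0, 2, size=len(X))  # placeholder labels (1 = linked to a laundering trial)

clf = LogisticRegression(max_iter=1000).fit(X, y)
print(dict(zip(["in_deg", "out_deg", "betweenness", "closeness"], clf.coef_[0].round(3))))
```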

Rambharat and Tschirhart [Rambharat2015] use panel data from a financial institution in the United States. The data tracks risk profiles $y_{it}$, assigned to clients $i = 1, \dots, N$ over periods $t = 1, \dots, T$. Specifically, $y_{it} \in \{1, 2, 3, 4\}$ represents low, medium, and two types of high-risk profiles. Period-specific features $x_{it}$ include information about clients' business departments, four non-specified "law enforcement actions", and dummy (one-hot encoded) variables that capture the time dimension. To model the data, the authors use an ordinal random effects model. If we let $\Phi$ denote the Gaussian cumulative distribution function, the model can be expressed as

$P(y_{it} \leq k \mid x_{it}, u_i) = \Phi\big(\theta_k - \beta^\top x_{it} - u_i\big),$

where $u_i$ denotes a random client effect, $\beta$ denotes coefficients, and $\theta_k$ represents a cut-off value transforming a continuous latent variable $y^*_{it}$ into $y_{it}$. Specifically, we have $y_{it} = k$ if and only if $\theta_{k-1} < y^*_{it} \leq \theta_k$. The level of confidentiality makes it hard to generalize results from the study. The study does, however, illustrate that banks can benefit from a granular risk rating of high-risk clients.

Martínez-Sánchez et al. [Sanchez2020] use decision trees to model clients of a Mexican financial institution. Decision trees [Breiman1984] are flowchart-like models where internal nodes split the feature space into mutually exclusive sub-regions. Final nodes, called leaves, label observations using a voting system. The authors use data on 181 clients, all labeled as either high risk or low risk. Features include information about seniority, residence, and economic activity. Notably, no train-test split is used. This makes the focus on diagnostics apparent. The authors find that clients with more seniority are comparatively riskier.

Badal-Valero et al. [BadalValero2018] combine Benford's Law and four machine learning models. Benford's Law [Benford1938] gives an empirical distribution of leading digits. The authors use it to extract features from financial statements. Specifically, they consider statements from the suppliers to a company on trial for money laundering. Of these, 23 suppliers have been investigated and labeled as colluders. All other (non-investigated) suppliers are treated as benevolent. The motivating idea is that any colluders, hiding in the non-investigated group, should be misclassified by the employed models. These include a logistic regression, a feedforward neural network, a decision tree, and a random forest. Random forests [Breiman2001], in particular, combine multiple decision trees. Every tree uses a random subset of features in every node split. To address class imbalance, i.e., the unequal distribution of labels, the authors investigate weighting and synthetic minority oversampling [Chawla2002]. The former weighs observations during training, giving higher importance to data from the minority class. The latter balances the data before training, generating synthetic observations of the minority class. According to the authors, synthetic minority oversampling works the best. However, the conclusion is apparently based on simulated evaluation data.
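To illustrate the two building blocks, the sketch below extracts leading-digit frequencies, compares them to Benford's distribution, and applies SMOTE from the imbalanced-learn package; the data, feature construction, and parameters are our own assumptions rather than the authors':

```python
import numpy as np
from collections import Counter
from imblearn.over_sampling import SMOTE

def leading_digit_features(amounts: np.ndarray) -> np.ndarray:
    """Frequencies of leading digits 1-9 for one supplier's statement amounts."""
    digits = [int(str(abs(a)).lstrip("0.")[0]) for a in amounts if a != 0]
    counts = Counter(digits)
    total = sum(counts.values())
    return np.array([counts.get(d, 0) / total for d in range(1, 10)])

benford = np.log10(1 + 1 / np.arange(1, 10))   # Benford's Law: P(d) = log10(1 + 1/d)

# Hypothetical usage: deviation from Benford as supplier-level features,
# then SMOTE to balance the few labeled colluders against the rest.
rng = np.random.default_rng(0)
X = np.vstack([leading_digit_features(rng.lognormal(5, 2, 500)) - benford for _ in range(300)])
y = np.array([1] * 23 + [0] * 277)             # 23 labeled colluders, rest treated as benevolent
X_res, y_res = SMOTE(random_state=0, k_neighbors=5).fit_resample(X, y)
```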

González and Velásquez [Gonzalez2013] employ a decision tree, feedforward neural network, and Bayesian network to model Chilean firms using false invoices. Bayesian networks [Pearl1985], in particular, are probabilistic models that represent variable dependencies via directed acyclic graphs. The authors use data on 582,161 firms, 1,692 of which have been labeled as either fraudulent or non-fraudulent. Features include information about previous audits and taxes paid. Because the majority of the firms are unlabeled, the authors first use unsupervised learning to characterize high-risk behavior. To this end, they employ self-organizing maps [Kohonen2004] and neural gas [Martinetz1991]. Both are neural network techniques that build on competitive learning [Rumelhart1985] rather than error-correction (i.e., gradient-based) optimization. While the methods do produce clusters with some behavioral patterns, they do not appear useful for false invoice detection. On the labeled training data, the feedforward neural network achieves the best performance.

5 Suspicious Behavior Flagging

The literature on suspicious behavior flagging is characterized by a large proportion of short and suggestive papers. This includes applications of fuzzy logic [Chen2011], autoregression [Kannan2017], and sequence matching [LiuXuan2008]. Non-disclosed features also make it difficult to compare studies. Finally, many authors employ hand-crafted risk indices.

5.1 Unsupervised Suspicious Behavior Flagging

Larik and Haider [Larik2011] flag transactions with a combination of principal component analysis and $k$-means. Given data on approximately 8.2 million transactions, the authors first seek to cluster clients. To this end, principal component analysis [Jolliffe2002] is applied to client features $x_i$, $i = 1, \dots, N$. The method seeks lower-dimensional, linear transformations $z_i$ of $x_i$ that preserve the greatest amount of variance. Let $\Sigma$ denote the data covariance matrix. The first coordinate of $z_i$, called the first principal component, is then given by $z_{i1} = w_1^\top x_i$, where $w_1$ is determined by

$w_1 = \arg\max_{w} \; w^\top \Sigma w \quad \text{s.t.} \quad w^\top w = 1.$

By analogy, the $m$'th principal component is given by $z_{im} = w_m^\top x_i$, where $w_m$ maximizes $w_m^\top \Sigma w_m$ subject to $w_m^\top w_m = 1$ and orthogonality with the previous principal components $w_1, \dots, w_{m-1}$. Principal components are commonly obtained as the eigenvectors of $\Sigma$ corresponding to the largest eigenvalues. Next, the authors use a modified version of $k$-means to cluster $z_1, \dots, z_N$. The modification introduces a parameter to control the maximum distance between an observation and the mean of its assigned cluster. A hand-crafted risk index is then used to score and flag incoming transactions. The index compares the sizes and frequencies of transactions within assigned client clusters. As no labels are available, evaluation is limited.
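A simplified sketch of this pipeline with scikit-learn, substituting standard k-means and a distance-based filter for the authors' modified algorithm and hand-crafted index (features and parameters are synthetic assumptions):

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 12))          # hypothetical client-level features

# Project standardized features onto the first principal components, then cluster.
Z = PCA(n_components=3).fit_transform(StandardScaler().fit_transform(X))
km = KMeans(n_clusters=5, n_init=10, random_state=0).fit(Z)

# Crude stand-in for the authors' modification: treat observations far from
# their assigned cluster mean as candidates for closer inspection.
dists = np.linalg.norm(Z - km.cluster_centers_[km.labels_], axis=1)
suspects = np.where(dists > np.quantile(dists, 0.99))[0]
```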

Rocha-Salazar et al. [RochaSalazar2021] mix fuzzy logic, clustering, and principal component analysis to raise alarms. With fuzzy logic [Zadeh1983], experts first assign risk scores to feature values. These include information about client age, nationality, and transaction statistics. Next, strict competitive learning, fuzzy $c$-means, self-organizing maps, and neural gas are used to build client clusters. The authors find that fuzzy $c$-means [Dunn1973], in particular, produces the best clusters. This algorithm is similar to $k$-means but uses scores to express degrees of cluster membership rather than hard assignments. The authors further identify one high-risk cluster. Transactions in this cluster are then scored with a hand-crafted risk index. This builds on principal component analysis, weighing features relative to their variances. Data from a Mexican financial institution is used to evaluate the approach. Training is done with 26,751 private and 3,572 business transactions; testing with 1,000 private and 600 business transactions. The approach shows good results on balanced accuracy (i.e., the average of the true positive and true negative rates).

Raza and Haider [RAZA2011987] propose a combination of clustering and dynamic Bayesian networks. First, client features are clustered with fuzzy $c$-means. For each cluster, a multi-step dynamic Bayesian network [DAGUM199241] is then trained on transaction sequences. Individual transaction features include information about transaction amount, period, and type. At test time, incoming transactions (along with a number of preceding transactions) are passed through the network. A hand-crafted risk index, building on outputted posterior probabilities, is then calculated. The approach is implemented on a data set with approximately 8.2 million transactions (presumably the same data used by Larik and Haider [Larik2011]). However, as no labels are available, evaluation is limited.

Camino et al. [Camino2017] flag clients with three outlier detection techniques: an isolation forest, a one-class support vector machine, and a Gaussian mixture model. Isolation forests [Liu2008] build multiple decision trees using random feature splits. Observations isolated by comparatively few feature splits (averaged over all trees) are then considered outliers. One-class support vector machines [Bernhard2001] use a kernel function to map data into a reproducing kernel Hilbert space. The method then seeks a maximum margin hyperplane that separates data points from the origin. A small number of observations are allowed to violate the hyperplane; these are considered outliers. Finally, Gaussian mixture models [Reynolds2009] assume that all observations are generated by a number of Gaussian distributions. Observations in low-density regions are then considered outliers. The authors combine all three techniques into a single ensemble method. The method is tested on a data set from an AML software company. This contains one million transactions with client-level features recording summary statistics. The authors report positive feedback from the data-supplying company, but, otherwise, evaluation is limited.
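The sketch below combines the three techniques with scikit-learn on synthetic data; the rank-averaging ensemble is our own stand-in and not necessarily the authors' combination rule:

```python
import numpy as np
from sklearn.ensemble import IsolationForest
from sklearn.svm import OneClassSVM
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 8))            # hypothetical client-level summary statistics

iso = IsolationForest(random_state=0).fit(X)
ocsvm = OneClassSVM(nu=0.01).fit(X)
gmm = GaussianMixture(n_components=3, random_state=0).fit(X)

# Rank clients by each technique (lower score = more anomalous) and average the ranks.
scores = np.vstack([
    iso.score_samples(X),                 # higher = more normal
    ocsvm.score_samples(X),
    gmm.score_samples(X),                 # log-likelihood under the mixture
])
ranks = scores.argsort(axis=1).argsort(axis=1)     # per-technique ranks
flagged = ranks.mean(axis=0).argsort()[:20]        # 20 most anomalous clients
```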

5.2 Supervised Suspicious Behavior Flagging

Deng et al. [Deng2009] combine logistic regression, stochastic approximation, and sequential $D$-optimal design for active learning. The question is how we should sequentially select new observations for inquiry (revealing their labels $y_i$) and use them in the estimation of $P(y_i = 1 \mid x_i)$. The authors employ a data set with 92 inquired accounts and two highly engineered features. The first feature captures the velocity and size of transactions; the second captures peer comparisons. Assuming that $P(y_i = 1 \mid x_i)$ is an increasing function in both features, the authors further define a synthetic variable $z_i$ for every account $i$. Finally, $y_i$ is subject to a univariate logistic regression on $z_i$. This allows a combination of stochastic approximation [Wu1985] and sequential $D$-optimal design [Neyer1994] for new observation selection. The approach significantly outperforms random selection. Furthermore, simulations show that it is robust to underlying data distributions.

Zhang and Trubey [Zhang2019] employ six machine learning models to predict the outcome of AML alarm inquiries. We note that the setup can be used both to qualify existing alarms and to raise new alarms under appropriate assumptions. Indeed, let $a_i = 1$ indicate (with $a_i = 0$ otherwise) that client $i$ is flagged by a traditional AML system. Assuming that $y_i$ and $a_i$ are conditionally independent given $x_i$, we have that

$P(y_i = 1 \mid x_i, a_i = 1) = P(y_i = 1 \mid x_i).$

If we also assume that $P(a_i = 1 \mid x_i) > 0$ for all $x_i$, we can use a model, only trained on previously flagged clients, to raise new alarms. The authors use a data set with 6,113 alarms from a financial institution in the United States. Of these, 34 alarms were reported to authorities. Ten non-disclosed features are used. In order to address class imbalance, the authors investigate random over- and undersampling. Both techniques, in particular, increase the performance of a support vector machine [Cortes1995]. This model seeks to maximize the margin between feature observations of the two classes and a class-separating hyperplane (possibly in a transformed feature space). However, a feedforward neural network, robust to both sampling techniques, shows the best performance.

Jullum et al. [Jullum2020] use gradient boosted trees to model AML alarms. The approach additively combines regression trees (i.e., decision trees with continuous outputs) and is implemented with XGBoost [Chen2016]. Data comes from a Norwegian bank and contains:

  1. 16,192 non-flagged (i.e., “normal”) transactions,

  2. 14,932 flagged transactions, dismissed after small inquiries,

  3. 1,260 flagged transactions, thoroughly inquired and subsequently dismissed, and

  4. 750 flagged transactions, thoroughly inquired and subsequently reported.

The authors perform binary classification, treating all transactions in (1)-(3) as benevolent while transactions in (4) are treated as malicious. Features include information about client background, behavior, and previous AML alarms. To compare model performance with traditional AML systems, the authors propose an evaluation metric called "proportion of positive predictions" (PPP). This records the proportion of positive predictions when classification thresholds are adjusted to obtain a pre-specified true positive rate. Results, in particular, indicate that the inclusion of non-inquired (i.e., type 1) transactions improves model performance.
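A hedged sketch of gradient boosted trees with the XGBoost scikit-learn wrapper, together with one possible reading of the PPP metric (the data is synthetic and the threshold selection details are our own assumption):

```python
import numpy as np
import xgboost as xgb
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(10_000, 15))                      # hypothetical transaction features
y = (rng.random(10_000) < 0.04).astype(int)            # rare "reported" label

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0, stratify=y)
model = xgb.XGBClassifier(n_estimators=200, max_depth=4, learning_rate=0.1).fit(X_tr, y_tr)
probs = model.predict_proba(X_te)[:, 1]

def ppp(probs, y_true, target_tpr=0.95):
    """Proportion of positive predictions at the threshold achieving a target true positive rate."""
    pos_probs = np.sort(probs[y_true == 1])
    thr = pos_probs[int((1 - target_tpr) * len(pos_probs))]   # ~target_tpr of positives lie above thr
    return (probs >= thr).mean()

print(ppp(probs, y_te))
```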

Tertychnyi et al. [Tertychnyi2020] propose a two-layer approach to flag suspicious clients. In the first layer, a logistic regression is used to filter out clients with transaction patterns that are clearly non-illicit. In the second layer, remaining clients are subject to gradient boosted trees implemented with CatBoost [Prokhorenkova2018]. The approach is tested on a data set from an undisclosed bank. This contains approximately 330,000 clients from three countries. About 0.004% of the clients have been reported for money laundering while the remaining are randomly sampled. Client-level features include demographic data and transaction statistics. The authors find that model performance varies significantly over the three countries. However, performance decreases when each country is modelled separately. The authors also report that initial experiments with undersampling, oversampling, and synthetic minority oversampling were unsuccessful.

Weber et al. [Weber2019] use graph convolutional neural networks to flag suspicious bitcoin transactions. An open data set is provided by Elliptic, a private cryptocurrency analytics company. The data set contains a transaction graph $G$ whose nodes represent bitcoin transactions and whose edges represent directed payment flows. Using a heuristic approach, 21% of the nodes are labeled as licit; 2% as illicit. For all nodes, 166 features are recorded. Of these, 94 record local information while the remaining 72 record one-hop information. Graph convolutional neural networks [Kipf2016] are neural networks designed to work on graph data. Let $\hat{A}$ denote the normalized adjacency matrix of graph $G$. The output of the network's $l$'th layer is obtained by

$H^{(l)} = \sigma\big(\hat{A} H^{(l-1)} W^{(l)}\big),$

where $W^{(l)}$ is a weight matrix, $H^{(l-1)}$ is the output from layer $l-1$ (initiated with feature values), and $\sigma$ is an activation function. While the best performance is achieved by a random forest model, the graph convolutional neural network proved competitive. Utilizing a time dimension in the data, the authors also fit a temporal graph convolutional neural network [Pareja2019]. This outperforms the simple graph convolutional neural network. However, it still falls short of the random forest model.
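A minimal numpy sketch of a single graph convolutional layer in the Kipf and Welling formulation (random graph and features; not the authors' implementation):

```python
import numpy as np

def gcn_layer(A: np.ndarray, H: np.ndarray, W: np.ndarray) -> np.ndarray:
    """One graph convolutional layer: H' = ReLU(A_hat @ H @ W).

    A is a binary adjacency matrix, H the node features from the previous layer,
    and W a learnable weight matrix. A_hat is the symmetrically normalized
    adjacency matrix with added self-loops, as in Kipf and Welling (2016).
    """
    A_tilde = A + np.eye(A.shape[0])                 # add self-loops
    d_inv_sqrt = 1.0 / np.sqrt(A_tilde.sum(axis=1))
    A_hat = A_tilde * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]
    return np.maximum(A_hat @ H @ W, 0.0)            # ReLU activation

rng = np.random.default_rng(0)
A = (rng.random((50, 50)) < 0.1).astype(float)
A = np.triu(A, 1); A = A + A.T                       # undirected toy graph, no self-loops
H0 = rng.normal(size=(50, 166))                      # e.g., 166 node features as in the Elliptic data
H1 = gcn_layer(A, H0, rng.normal(size=(166, 32)) * 0.1)
```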

We finally highlight two recent studies that use the Elliptic data set [Weber2019]. Alarab et al. [Alarab2020] propose a neural network structure where graph convolutional embeddings are concatenated with linear embeddings of the original features. This increases model performance significantly. Lorenz et al. [Lorenz2020] experiment with unsupervised anomaly detection. The authors try seven different techniques. All of them fail. As noted by the authors, this contradicts previous literature on unsupervised behavior flagging (see, for example, [Camino2017]). One possible explanation is that the Elliptic data, constructed over bitcoin transactions, is qualitatively different from bank transaction data. The authors further follow Deng et al. [Deng2009] and experiment with four active learning strategies combined with a random forest, gradient boosted trees, and a logistic regression model. Two of the active learning strategies build on unsupervised techniques: elliptic envelope [Rousseeuw1999] and isolation forest [Liu2008]. The remaining two build on supervised techniques: uncertainty sampling [Lewis1994] and expected model change [Settles2008]. Results show that the supervised techniques perform the best.

6 Future Research Directions

Our review reveals two central challenges in AML research: class imbalance and the lack of public data sets. Both may motivate the use of synthetic data. We also note how banks hold vast amounts of high-dimensional and unlabeled data [Sudjianto2010]. This may motivate the use of dimension reduction and semi-supervised techniques. Other possible research directions include data visualization, deep learning, and interpretable and fair machine learning. In the following, we introduce each of these topics. We also provide brief descriptions of seminal methods and techniques within each topic.

6.1 Class Imbalance, Evaluation Metrics, and Synthetic Data

Due to class imbalance, AML systems tend to label all observations as benevolent. This implies that accuracy is a poor evaluation metric. Instead, we emphasize receiver operating characteristic (ROC) and precision-recall (PR) curves. Both are applicable for binary models and are based on the relative ranking of predictions. ROC curves [FAWCETT2006861] plot true positive versus false positive rates for varying classification thresholds. The area under a ROC curve, called ROCAUC (or sometimes just AUC), is a measure of separability. It is equal to 1 for perfect classifiers and 0.5 for naive classifiers. PR curves [Saito2015] plot precision versus true positive rates (i.e., recall). This is particularly relevant when class imbalance is severe and true positive rates are of high importance. For multi-class models, Cohen's kappa [McHugh2012] is appealing. This evaluates agreement between two labelings, accounting for agreement by chance. Note, finally, that none of the presented metrics consider calibration, i.e., whether model outputs reflect true likelihoods.
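For concreteness, the metrics above can be computed with scikit-learn as follows (labels and scores are synthetic placeholders):

```python
import numpy as np
from sklearn.metrics import roc_auc_score, precision_recall_curve, auc, cohen_kappa_score

rng = np.random.default_rng(0)
y_true = (rng.random(1000) < 0.03).astype(int)       # heavily imbalanced binary labels
scores = rng.random(1000) + 0.3 * y_true             # hypothetical model scores

print("ROC AUC:", roc_auc_score(y_true, scores))
precision, recall, _ = precision_recall_curve(y_true, scores)
print("PR AUC:", auc(recall, precision))

# Cohen's kappa for a multi-class risk rating (agreement corrected for chance).
rated_a = rng.integers(0, 3, 200)                    # e.g., low/medium/high risk
rated_b = np.where(rng.random(200) < 0.8, rated_a, rng.integers(0, 3, 200))
print("Cohen's kappa:", cohen_kappa_score(rated_a, rated_b))
```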

To combat class imbalance, data augmentation can be used. Simple approaches include under- and oversampling (see, for instance, [Lemaitre2017]). Synthetic minority oversampling (SMOTE) by Chawla et al. [Chawla2002] is another option for vector data. The technique generates convex combinations of minority class observations. Extensions include borderline-SMOTE [Han2005] and borderline-SMOTE-SVM [Nguyen2011]. These generate observations along estimated decision boundaries. Another SMOTE variant, ADASYN [He2008], generates observations according to data densities. For time series data (e.g., transaction sequences), there is relatively little literature on data augmentation [Wen2021]. Some basic transformations, sketched in code after the list below, are:

  1. window cropping, where random time series slices are extracted,

  2. window warping, compressing (i.e., down-sampling) or extending (i.e., up-sampling) time series slices,

  3. flipping, where the signs of time series are flipped (i.e., multiplied by -1), and

  4. noise injection, where (typically Gaussian) noise is added to time series.
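A minimal numpy sketch of these transformations (the parameters and example series are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)

def window_crop(x: np.ndarray, size: int) -> np.ndarray:
    """Extract a random slice of the series."""
    start = rng.integers(0, len(x) - size + 1)
    return x[start:start + size]

def window_warp(x: np.ndarray, new_len: int) -> np.ndarray:
    """Compress (down-sample) or extend (up-sample) the series by interpolation."""
    return np.interp(np.linspace(0, 1, new_len), np.linspace(0, 1, len(x)), x)

def flip(x: np.ndarray) -> np.ndarray:
    """Multiply the series by -1."""
    return -x

def noise_inject(x: np.ndarray, sigma: float = 0.05) -> np.ndarray:
    """Add Gaussian noise."""
    return x + rng.normal(scale=sigma, size=x.shape)

x = np.cumsum(rng.normal(size=100))                  # hypothetical transaction-amount series
augmented = [window_crop(x, 80), window_warp(x, 150), flip(x), noise_inject(x)]
```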

A few advanced methods also bear mentioning. Teng et al. [Teng2020] propose a wavelet transformation to preserve low-frequency time series patterns while noise is added to high-frequency patterns. Iwana and Uchida [Iwana2021] utilize the element alignment properties of dynamic time warping to mix patterns; features of sample patterns are warped to match the time steps of reference patterns. Finally, some approaches combine multiple transformations. Cubuk et al. [Cubuk2020] propose to combine transformations at random. Fons et al. [Fons2021a] propose two adaptive schemes; the first weighs transformed observations relative to a model's loss, the second selects a subset of transformations based on rankings of prediction losses.

Simulating known or hypothesized money laundering patterns from scratch may be the only option for researchers with no available data. Used together with private data sets, the approach may also ensure some reproducibility and generalizability. We refer to the work by Lopez-Rojas and Axelsson [LopezRojas2012] for an in-depth discussion of simulated data for AML research. The authors specifically develop a simulator for mobile phone transfers. Weber et al. [Weber2018] and Suzumura and Kanezashi [AMLSim] augment this, tailoring it to a more classic bank setting.

We have found only one public data set within the AML literature: the Elliptic data set [Weber2019]. This contains a graph over bitcoin transactions. We do, however, note that graph-based approaches may be difficult to implement in a bank setting. Indeed, any bank only knows about transactions going to or from its own clients. Instead, graph approaches may be more relevant for authorities’ treatment of AML reports; see work by Savage et al. [Savage2016], Drezewski et al. [Drezewski2015], or Li et al. [Li2017].

6.2 Visualization, Dimension Reduction, and Semi-supervised Techniques

Visualization techniques may help identify money laundering [Singh2019]. One option is t-distributed stochastic neighbor embedding [Maaten2008] and its parametric counterpart [Maaten2009]. The approach is often used for 2- or 3-dimensional embeddings, aiming to keep similar observations close and dissimilar observations distant. First, a probability distribution over pairs of observations is created in the original feature space. Here, similar observations are given higher probability; dissimilar observations are given lower probability. Next, we seek projections that minimize the Kullback-Leibler divergence [Kullback1951] to a corresponding distribution in a lower-dimensional space. Another option is ISOMAP [Tenenbaum2000]. This extends multidimensional scaling [Mead1992], using the shortest path between observations to capture intrinsic similarity.
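A short scikit-learn sketch of a 2-dimensional t-SNE embedding on synthetic client features (the perplexity and other parameters are arbitrary choices):

```python
import numpy as np
from sklearn.manifold import TSNE

rng = np.random.default_rng(0)
X = rng.normal(size=(1500, 20))                      # hypothetical client features

# 2-dimensional embedding that keeps similar clients close and dissimilar ones distant.
emb = TSNE(n_components=2, perplexity=30, init="pca", random_state=0).fit_transform(X)
print(emb.shape)  # (1500, 2); can be plotted and colored by, e.g., risk labels
```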

Autoencoders, as discussed in section 4.1, can be used for dimension reduction, synthetic data generation, and semi-supervised learning. The latter is relevant when we have data sets with many unlabeled (but also some labeled) observations. Indeed, we may train an autoencoder with all the observations. Lower layers can then be reused in a network trained to classify labeled observations. A seminal type of autoencoder was proposed by Kingma and Welling [Kingma2014]: the variational autoencoder. This is a probabilistic, generative model that seeks to minimize a loss function with two parts. The first part employs the normal reconstruction error. The second part employs the Kullback-Leibler divergence to push latent feature representations towards a Gaussian distribution. An extension, the conditional variational autoencoder [Sohn2015], takes class labels into account, modeling a conditional latent variable distribution. This allows us to generate class-specific observations. Generative adversarial networks [Goodfellow2014] are another option. Here, two neural networks compete against each other; a generative network produces synthetic observations while a discriminative network tries to separate these from real observations. In analogy with conditional variational autoencoders, conditional generative adversarial nets [Mirza2014] take class labels into account. Specifically, class labels are fed as inputs to both the discriminator and the generator. This may, again, be used to generate class-specific observations. While most generative adversarial network methods have been designed to work with visual data, methods that can be applied to time-series data have recently been proposed [Brophy2021, Yoon2019].

6.3 Neural Networks, Deep Learning, and Transfer Learning

The neural networks used in current AML research are generally small and shallow. Deep neural networks, by contrast, employ multiple layers. The motivating idea is to derive higher-level features directly from data. This has, in particular, proved successful for computer vision [Zeng2018, Hu2018], natural language processing [Radford2018, Devlin2019], and high-frequency financial time-series analysis [Tsantekidis2017, Zhang2019DeepLOB, Ntakaris2019].

State-of-the-art deep neural networks use multiple methods to combat unstable gradients. This includes rectified [Nair2010, Xu2015] and exponential [Clevert2016, Klambauer2017] linear units. Weight initialization is done with Xavier [Glorot2010], He [He2016], or LeCun [LeCun2012] initialization. Batch normalization [Ioffe2015] is used to standardize, re-scale, and shift inputs. For recurrent neural networks (see below), gradient clipping [Pascanu2013] and layer normalization [Ba2016] are often used. Finally, residual or skip connections [He2015Res, Huang2017] feed intermediate outputs multiple levels up a network hierarchy.

State-of-the-art networks also use regularization techniques to combat overfitting. Dropout [Hinton2012, Srivastava2014] temporarily removes neurons during training, forcing non-dropped neurons to capture more robust relationships. L1 and L2 regularization [Goodfellow2016] limit network weights by adding penalty terms to a model’s loss function. Finally, max-norm regularization [Srebro2005] restricts network weights directly during training.
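The PyTorch sketch below combines several of these components (ELU activations, He initialization, batch normalization, dropout, and L2 regularization via weight decay); the architecture and hyperparameters are illustrative assumptions only:

```python
import torch
import torch.nn as nn

class DeepBlock(nn.Module):
    """One hidden block: linear -> batch norm -> ELU -> dropout."""
    def __init__(self, d_in: int, d_out: int, p_drop: float = 0.2):
        super().__init__()
        self.lin = nn.Linear(d_in, d_out)
        nn.init.kaiming_normal_(self.lin.weight, nonlinearity="relu")  # He initialization
        self.bn = nn.BatchNorm1d(d_out)
        self.act = nn.ELU()
        self.drop = nn.Dropout(p_drop)

    def forward(self, x):
        return self.drop(self.act(self.bn(self.lin(x))))

model = nn.Sequential(DeepBlock(30, 64), DeepBlock(64, 64), DeepBlock(64, 64), nn.Linear(64, 1))
# L2 regularization applied via weight decay in the optimizer.
opt = torch.optim.Adam(model.parameters(), lr=1e-3, weight_decay=1e-4)
out = model(torch.randn(16, 30))   # forward pass on a dummy batch
```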

Multiple deep learning methods have been proposed for transfer learning. We refer to the work by Weiss et al. [Weiss2016] for an extensive review. The general idea is to utilize knowledge across different domains or tasks. One common approach starts by training a neural network on some source problem. Weights (usually from lower layers) are subsequently transferred to a new neural network that is fine-tuned (i.e., re-trained) on another target problem. This may work well when the first neural network learns to extract features that are relevant to both the source and target problem [Yosinski2014]. A sub-category of transfer learning, domain adaptation explicitly tries to alleviate distributional differences across domains. To this end, both unsupervised and supervised methods may be employed (depending on whether or not labeled target data is available). For example, Ganin and Lempitsky [Yaroslav2015] propose an unsupervised technique that employs a gradient reversal layer and backpropagation to learn shift-invariant features. Tzeng et al. [Tzeng2015] consider a semi-supervised setup where little labeled target data is available. With unlabeled target data, the authors first optimize feature representations to minimize the distance between a source and target distribution. Next, a few labeled target observations are used as reference points to adjust similarity structures among label categories. Finally, we refer to the work by Hedegaard et al. [Hedegaard2021] for a discussion and critique of the generic test setup used in the supervised domain adaptation literature and a proposal of a fair evaluation protocol.

Deep neural networks can, like their shallow counterparts, model sequential data. Here, we provide brief descriptions of simple instantiations of such networks. We use the notation introduced in section 3 and only describe single layers. To form deep learning models, one stacks multiple layers; each layer receives as input the output of its predecessor. Parameters (across all layers) are then jointly optimized by an iterative optimization scheme, as described in section 3. Recurrent neural networks are one approach to model sequential data. Let $x_t$ denote some layer input at time $t$. We can describe the time $t$ output of a basic recurrent neural network layer with $n$ neurons by

$h_t = \phi\big(W_x x_t + W_h h_{t-1} + b\big),$

where $W_x$ is an input weight matrix, $W_h$ is an output weight matrix, $b$ is a bias vector, and $\phi$ is an activation function. Advanced architectures use gates to regulate the flow of information. Long short-term memory (LSTM) cells [Hochreiter1997, sak2014long, zaremba2015recurrent] are one option. Let $\odot$ denote element-wise multiplication and $\sigma$ the standard sigmoid function. At time $t$, an LSTM layer with $n$ neurons is described by

  1. an input gate $i_t = \sigma(W_{xi} x_t + W_{hi} h_{t-1} + b_i)$,

  2. a forget gate $f_t = \sigma(W_{xf} x_t + W_{hf} h_{t-1} + b_f)$,

  3. an output gate $o_t = \sigma(W_{xo} x_t + W_{ho} h_{t-1} + b_o)$,

  4. a main transformation $g_t = \tanh(W_{xg} x_t + W_{hg} h_{t-1} + b_g)$,

  5. a long-term state $c_t = f_t \odot c_{t-1} + i_t \odot g_t$, and

  6. an output $h_t = o_t \odot \tanh(c_t)$,

where $W_{xi}$, $W_{xf}$, $W_{xo}$, and $W_{xg}$ denote input weight matrices, $W_{hi}$, $W_{hf}$, $W_{ho}$, and $W_{hg}$ denote output weight matrices, and $b_i$, $b_f$, $b_o$, and $b_g$ denote biases. Cho et al. [cho2014learning] propose a simpler architecture based on gated recurrent units (called GRUs).

An alternative to recurrent neural networks, the temporal neural bag-of-features architecture has proved successful for financial time series classification [Passalis2020]. Here, a radial basis function layer with $n$ neurons is used,

$\phi_j(x_t) = \exp\big(-\lVert (x_t - v_j) \odot w_j \rVert\big), \quad j = 1, \dots, n,$

where $v_j$ and $w_j$ in $\phi_j$ are weights that describe the $j$'th neuron's center and width, respectively. Next, an accumulation layer is used to find a constant length representation in $\mathbb{R}^n$,

$s = \frac{1}{T} \sum_{t=1}^{T} \big(\phi_1(x_t), \dots, \phi_n(x_t)\big)^\top.$

Another type of neural network that can model time domain information is the bilinear neural network. Let $X = [x_1, \dots, x_T]$ be a matrix with columns $x_t$ for $t = 1, \dots, T$. A temporal bilinear layer with $n$ neurons can then be described as

$Y = \phi\big(W_1 X W_2 + B\big),$

where $W_1$ and $W_2$ are weight matrices and $B$ is a bias matrix. Notably, $W_1$ models feature interactions at fixed time points while $W_2$ models feature changes over time.

Attention mechanisms have recently become state-of-the-art. These allow neural networks to dynamically focus on relevant sequence elements. Bahdanau et al. [bahdanau2016neural] consider a bidirectional recurrent neural network [Schuster1997] and propose a mechanism known as additive or concatenative attention. The mechanism assumes an encoder-decoder architecture. During decoding, it computes a context vector by weighing an encoder's hidden states. Weights are obtained by a secondary feedforward neural network (called an alignment model) and normalized by a softmax layer (to obtain attention scores). Notably, the secondary network is trained jointly with the primary network. Luong attention [luong2015effective] is another popular mechanism, using the dot product between an encoder's and a decoder's hidden states as a similarity measure (the mechanism is also called dot-product attention). Vaswani et al. [Vaswani2017] propose the seminal transformer architecture. Here, an encoder first applies self-attention (i.e., scaled Luong attention). As before, let $X$ denote our matrix of sequence elements. We can describe a self-attention layer as

$Z = \mathrm{softmax}\Big(\frac{Q K^\top}{\sqrt{d_k}}\Big) V,$

where $d_k$ denotes the dimension of the key vectors and

  1. $Q$, called the query matrix, is given by $X^\top W_Q$ with a weight matrix $W_Q$,

  2. $K$, called the key matrix, is given by $X^\top W_K$ with a weight matrix $W_K$, and

  3. $V$, called the value matrix, is given by $X^\top W_V$ with a weight matrix $W_V$.

Note that the softmax function is applied row-wise. It outputs a matrix; every row captures how much attention we pay to one sequence element in relation to all others. During decoding, the transformer also applies self-attention. Here, the key and value matrices are taken from the encoder. In addition, the decoder is only allowed to attend to earlier output sequence elements (future elements are masked, i.e., set to $-\infty$ before softmax is applied). Notably, the authors apply multiple parallel instances of self-attention. The approach, known as multi-head attention, allows attention over many abstract dimensions. Finally, positional encoding, residual connections, layer normalization, and supplementary feedforward layers are used. As a last attention mechanism, we highlight temporal attention augmented bilinear layers [Tran2019]. With the notation used to introduce temporal bilinear layers above, we may express such a layer as

  1. $E = X W$,

  2. $A$ with elements $a_{jt} = \exp(e_{jt}) / \sum_{s=1}^{T} \exp(e_{js})$,

  3. $\tilde{X} = \lambda (X \odot A) + (1 - \lambda) X$,

where $e_{jt}$ and $a_{jt}$ denote the $(j,t)$'th element of $E$ and $A$, respectively, $W$ is a weight matrix with fixed diagonal elements equal to $1/T$, and $\lambda$ is a scalar allowing soft attention. In particular, $E$ is used to express the relative importance of temporal feature instances (learned through $W$), while $A$ contains our attention scores.
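To make the scaled dot-product self-attention step concrete, the following numpy sketch computes one self-attention layer; here the rows of X are sequence elements (i.e., the transpose of the column convention above) and all dimensions are arbitrary:

```python
import numpy as np

def softmax(z: np.ndarray) -> np.ndarray:
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def self_attention(X: np.ndarray, Wq: np.ndarray, Wk: np.ndarray, Wv: np.ndarray) -> np.ndarray:
    """Scaled dot-product self-attention; rows of X are sequence elements."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = softmax(Q @ K.T / np.sqrt(K.shape[1]))   # row t: attention paid by element t
    return scores @ V

rng = np.random.default_rng(0)
T, d, d_k = 10, 16, 8                                 # sequence length, feature and key dimensions
X = rng.normal(size=(T, d))
Z = self_attention(X, rng.normal(size=(d, d_k)), rng.normal(size=(d, d_k)), rng.normal(size=(d, d_k)))
```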

6.4 Interpretable and Fair Machine Learning

Advanced machine learning models often outperform their simple statistical counterparts. Their behavior can, however, be much harder to understand and explain. This is, in itself, a potential problem but can also make it difficult to ensure that machine learning models are fair.

Notably, "fairness" is an ambiguous concept in machine learning with many different and overlapping definitions [Mehrabi2022]. The equalized odds definition states that different protected groups (e.g., genders or races) should have equal true and false positive rates. The conditional statistical parity definition takes a set of legitimate discriminative features into account, stating that the likelihood of a positive prediction should be the same across protected groups given the set of legitimate discriminative features. Finally, the counterfactual fairness definition is based on the notion that a prediction is fair if it remains unchanged in a counterfactual world where some features of interest are changed. Approaches for fair machine learning also vary greatly. In an exemplary paper, Louizos et al. [louizos2017variational] consider the use of variational autoencoders to ensure fairness. The authors treat sensitive features as nuisance or noise variables, encouraging separation between these and (informative) latent features by using factorized priors and a maximum mean discrepancy penalty term [Gretton2007]. In another exemplary paper, Zhang et al. [Zhang2018] propose the use of adversarial learning. Here, a primary model tries to predict an outcome variable while minimizing an adversarial model's ability to predict protected feature values. Notably, the adversarial model takes as inputs both the primary model's predictions and other relevant features, depending on the fairness definition of interest.

Regarding interpretability, we follow Du et al. [Du2019] and distinguish between intrinsic and post-hoc interpretability. Intrinsically interpretable models are, by design, easy to understand. This includes simple decision trees and linear regression. Notably, attention mechanisms also exhibit some intrinsic interpretability; we may investigate attention scores to see what part of a particular input sequence a neural network focuses on. Other models and architectures work as "black boxes" and require post-hoc interpretability methods. Here, it is useful to distinguish between global and local interpretability. The former is concerned with overarching model behavior; the latter with individual predictions. LIME [Ribeiro2016] is a technique for local interpretability. Consider a situation where a black box model and a single observation are given. LIME first generates a data set of perturbed observations (relative to the original observation) with accompanying black box model outputs. An intrinsically interpretable model is then trained on the synthetic data. Finally, this model is used to explain the original observation's black box prediction. Gradient-based methods (e.g., [simonyan2014deep]) use the gradients associated with a particular observation and black box model to capture importance. The fundamental idea is that larger gradients (either positive or negative) imply larger feature importance. Individual conditional expectation plots [Goldstein2015] are another option. These illustrate what happens to a particular black box prediction if we vary one feature value of the underlying observation. Similarly, partial dependence plots [Friedman2001] may be used for global interpretability. Here, we average results from feature variations over all observations in a data set. This may, however, be misleading if input features are highly correlated. In this case, accumulated local effects plots [apley2019visualizing] present an attractive alternative. These rely on conditional feature distributions and employ prediction differences. For counterfactual observation generation, numerous methods have been proposed [Akula_Wang_Zhu_2020, Cheng2021, Gomez2020]. While these generally need to query an underlying model multiple times, efficient methods utilizing invertible neural networks have also been proposed [hvilshoej2021ecinn]. A related problem concerns the quantitative evaluation of counterfactual examples; see the work by Hvilshøj et al. [hvilshoej2021quantitative] for an in-depth discussion. Finally, we highlight Shapley additive explanations (SHAP) by Lundberg and Lee [Lundberg2017]. The approach is based on Shapley values [Shapley1953] with a solid game-theoretical foundation. For a given observation, SHAP values record average marginal feature contributions (to a black box model's output) over all possible feature coalitions. The approach allows both local and global interpretability. Indeed, every observation is given a set of SHAP values (one for each input feature). Summed over the entire data set, the (numerical) SHAP values show accumulated feature importance. Although SHAP values are computationally expensive, we note that Lundberg and Lee [Lundberg2017] propose a fast approximation scheme for tree-based methods.
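As an illustration, and assuming the shap package is available, SHAP values for a tree ensemble can be computed as follows (the model and data are synthetic placeholders):

```python
import numpy as np
import shap
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 6))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=500) > 0).astype(int)

model = GradientBoostingClassifier(random_state=0).fit(X, y)
explainer = shap.TreeExplainer(model)        # fast scheme for tree-based models
shap_values = explainer.shap_values(X)       # one contribution per observation and feature

local = shap_values[0]                                  # local: explains the first observation
global_importance = np.abs(shap_values).mean(axis=0)    # global: mean absolute contribution per feature
```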

7 Conclusion

Based on a description of AML in banks, we proposed a unified terminology structured around two central tasks: (i) client risk profiling and (ii) suspicious behavior flagging. A review reveals that the literature on client risk profiling is characterized by diagnostics, i.e., efforts to find and explain risk factors. The literature on suspicious behavior flagging is characterized by non-disclosed features and hand-crafted risk indices. Two fundamental challenges plague the literature: class imbalance and a lack of public data sets. Both may, as discussed, potentially be addressed by synthetic data generation. Other potential research directions include semi-supervised techniques, data visualization, deep learning, and interpretable and fair machine learning.