Towards Novel Insights in Lattice Field Theory with Explainable Machine Learning

Stefan Bluecher et al., 03/03/2020

Machine learning has the potential to aid our understanding of phase structures in lattice quantum field theories through the statistical analysis of Monte Carlo samples. Available algorithms, in particular those based on deep learning, often demonstrate remarkable performance in the search for previously unidentified features, but tend to lack transparency if applied naively. To address these shortcomings, we propose representation learning in combination with interpretability methods as a framework for the identification of observables. More specifically, we investigate action parameter regression as a pretext task while using layer-wise relevance propagation (LRP) to identify the most important observables depending on the location in the phase diagram. The approach is put to work in the context of a scalar Yukawa model in (2+1)d. First, we investigate a multilayer perceptron to determine an importance hierarchy of several predefined, standard observables. The method is then applied directly to the raw field configurations using a convolutional network, demonstrating the ability to reconstruct all order parameters from the learned filter weights. Based on our results, we argue that, due to their broad applicability, attribution methods such as LRP could prove a useful and versatile tool in our search for new physical insights. In the case of the Yukawa model, this approach facilitates the construction of an observable that characterises the symmetric phase.


I Introduction

Lattice simulations of quantum field theories have proven essential for the theoretical understanding of fundamental interactions from first principles, perhaps most prominently so in quantum chromodynamics. However, an in-depth understanding of the emergent dynamics is often difficult. In cases where such an understanding remains elusive, it may be instructive to search for so far unidentified structures in the data to better characterise the dynamics.

In this quest towards new physical insight, we turn to machine learning (ML) approaches, in particular from the subfield of deep learning LeCun2015. These methods have proven capable of efficiently identifying high-level features in a broad range of data types—in many cases, such as speech or image recognition, with spectacular success graves2013speech; NIPS2012_4824; NIPS2015_5638; he2016deep. Accordingly, there is growing interest in the lattice community to harness the capabilities of these algorithms, both for high energy physics and condensed matter systems. Applications include predictive objectives, such as detecting phase transitions from lattice configurations, as well as generative modeling Wang2016; Broecker2016; Wetzel2017; Wetzel2017a; Cristoforetti:2017naf; Chng2017; Hu2017; Liu2017; Carleo2017; Nieuwenburg2017; Ponte2017; Carrasquilla2017; Morningstar2017; Tanaka2017; Huang2017; Liu2017a; Wu_2019; Zhou_2019; Shanahan2018; Suchsland2018; Urban2018a; Noe2019; Nicoli2019comment; Albergo2019; Kades2019; Kashiwa2019; Liu2019; Casert2019; Rzadkowski2019; Nicoli2019asymptotically. We recommend Mehta2019 as an introduction to ML for physicists and Carleo2019 as a general review of ML applications in physics.

One ansatz for the identification of relevant observables from lattice data is through representation learning, i.e. by training on a pretext task. The rationale behind this approach is that the ML algorithm learns to recognise patterns which can be leveraged to construct observables from low-level features that characterise different phases. However, solving a given task does by itself not lead to physical insights, since the inner structure of the algorithm typically remains opaque. This issue can at least partially be resolved by the use of “explainable AI” techniques, which have recently attracted considerable interest in the ML community and beyond. In this work, we focus on layer-wise relevance propagation (LRP) BachPLOS15 . It is one of several popular post-hoc attribution methods that propagate the prediction back to the input domain, thereby highlighting features that influence the algorithm towards/against a particular classification decision.

We test this approach in the context of Yukawa theory in (2+1) dimensions, using inference of an action parameter as a pretext task in order to identify relevant observables. In a first step, we demonstrate that this is at least partially possible by training a multilayer perceptron (MLP) on a set of standard observables. Here, we show that the relevance of features in different phases, as determined by LRP, agrees with physical expectations. We benchmark our results with a similar method based on random forests. Subsequently, we demonstrate that the action parameter can be inferred directly from field configurations using a convolutional neural network (CNN). We use LRP to identify relevant filters and discuss how these align with physical knowledge. This also allows us to construct an observable that appears to be a distinctive feature of the paramagnetic phase.

The paper is organised as follows. In Section II we briefly review scalar Yukawa theory on the lattice and define important quantities. Section III serves as an introduction to the topic of explainable AI and discusses LRP in order to convey the rationale behind our approach. Numerical results for the MLP and a random forest benchmark are presented in Section IV.1. In Section IV.2 we conduct an analysis of the CNN and subsequently demonstrate how all order parameters, as well as the aforementioned observable relevant for the paramagnetic phase, can be extracted from the filters. We discuss our findings and possible future work in Section V.

II Yukawa Theory

We consider a scalar Yukawa model defined on a (2+1)d cubic lattice with periodic boundary conditions. The theory comprises a real-valued scalar field with quartic self-interaction, coupled to Dirac fermions. The action for the scalar field can be cast into the following dimensionless form,

S[φ] = Σ_{x∈Λ} [ −2κ Σ_{μ=1}^{3} φ(x) φ(x+μ̂) + (1 − 2λ) φ(x)² + λ φ(x)⁴ ],    (1)

where Λ denotes the set of all lattice sites. Here, κ is called the hopping parameter and λ takes the role of the quartic coupling constant.

In order to ensure positivity of the partition function, one needs a minimum of two degenerate fermion flavors. Due to their bilinear contributions to the action, the fermionic degrees of freedom can be integrated out, yielding the determinant of the discretised Dirac operator,

(2)

as a multiplicative contribution to the statistical weight. The Euclidean Dirac γ-matrices are absorbed by the staggered transformation, yielding the scalar phases η_μ(x) that mix the spatial and spinor degrees of freedom. They are given by η_μ(x) = (−1)^{x_1 + ⋯ + x_{μ−1}}, with η_1(x) = 1. The Dirac operator further depends on the fermion mass m and the Yukawa coupling g to the bosonic field. The expectation value of an observable O can then be expressed as the path integral

⟨O⟩ = (1/Z) ∫ Dφ  O[φ]  det D[φ]  e^{−S[φ]},    (3)

where Z denotes the partition function.

Figure 1: Slice of the phase diagram for fixed Yukawa coupling, using normalised values of the magnetisation and the staggered magnetisation. Phase transitions are highlighted by the shaded bars. We distinguish an antiferromagnetic (AFM), a paramagnetic (PM) and a ferromagnetic (FM) phase.

Important observables characterising phases and critical phenomena in scalar φ⁴-theory include the magnetisation,

M = (1/|Λ|) Σ_{x∈Λ} φ(x),    (4)

as well as the staggered magnetisation,

M_s = (1/|Λ|) Σ_{x∈Λ} (−1)^{x_1 + x_2 + x_3} φ(x),    (5)

which is relevant for negative κ. The scalar part of Equation 1 features the additional staggered symmetry

φ(x) → (−1)^{x_1 + x_2 + x_3} φ(x),    κ → −κ,    (6)

which connects both magnetisations. The fermionic part explicitly breaks this symmetry.

A slice of the phase diagram at fixed Yukawa coupling is shown in Figure 1. The theory exhibits an interesting structure, with two broken phases of ferromagnetic (FM) and antiferromagnetic (AFM) nature, where M and M_s respectively acquire non-zero expectation values. They are separated by a symmetric, paramagnetic (PM) phase, where both quantities vanish.

We also consider the connected two-point correlation function,

G_c(x, y) = ⟨φ(x) φ(y)⟩ − ⟨φ(x)⟩⟨φ(y)⟩.    (7)

While the expectation value of the magnetisation can be estimated from a single field configuration at reasonable lattice sizes, signals of n-point correlators are naturally much more suppressed due to statistical noise and cannot be reasonably approximated from one sample. Therefore, we introduce the time-sliced correlator G_c(t), defined by

G_c(t) = Σ_{x_s, y_s} G_c( (t, x_s), (0, y_s) ),    (8)

where the sums run over the spacelike components x_s and y_s. It measures correlations only in the temporal direction, which leads to a better signal-to-noise ratio due to the averaging procedure.
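For concreteness, the following is a minimal NumPy sketch of how these quantities can be estimated from a single configuration; the array shape, axis ordering and normalisation conventions are our own assumptions and are not taken from the simulation code used in the paper.

```python
import numpy as np

def magnetisation(phi):
    """Volume average of the field, cf. Eq. (4)."""
    return phi.mean()

def staggered_magnetisation(phi):
    """Volume average weighted with (-1)^(x1+x2+x3), cf. Eq. (5)."""
    t, x, y = np.indices(phi.shape)
    return ((-1.0) ** (t + x + y) * phi).mean()

def time_sliced_correlator(phi):
    """Correlator between time slices, cf. Eq. (8).

    Averages each time slice over its spatial sites before correlating,
    which improves the signal-to-noise ratio; the disconnected part is
    subtracted with the single-configuration estimate of the mean.
    """
    slices = phi.mean(axis=(1, 2))                       # spatial average per slice
    nt = slices.shape[0]
    corr = np.array([np.mean(slices * np.roll(slices, -t)) for t in range(nt)])
    return corr - phi.mean() ** 2

# usage on a random (2+1)d configuration with N = 16 (cf. Table 1)
phi = np.random.randn(16, 16, 16)
print(magnetisation(phi), staggered_magnetisation(phi))
print(time_sliced_correlator(phi)[:4])
```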

Some aspects of the derivation and simulation are given in Appendix A. For a comprehensive treatment of Yukawa theory on the lattice, we recommend Montvay1994 .

III Insights from Explainable AI

Simple methods from statistics and ML often lack the capability to model complex data, whereas sophisticated algorithms typically tend to be less transparent. A commonly used example is principal component analysis (PCA). It has been successfully applied to the extraction of (albeit already known) order parameters for various systems Wang2016; Wetzel2017a; Hu2017. However, its linear structure prohibits the identification of complex non-linear features, e.g. Wilson loops in gauge theories. Hence, we require tools capable of modeling non-linearities, such as deep neural networks Carrasquilla2017. They allow for a more comprehensive treatment of complex systems, which has been demonstrated e.g. for fermionic theories in Broecker2016; Chng2017. The approach also enables novel procedures, such as learning by confusion and similar techniques, to locate phase transitions in a semi-supervised manner Nieuwenburg2017; Rzadkowski2019. For lattice QCD, action parameters can be extracted from field configurations Shanahan2018. Overall, deep learning tools seem particularly well-suited to grasp relevant information about quantum field dynamics in a completely data-driven approach, by learning abstract internal representations of relevant features.

However, the lack of transparency is frequently a major drawback of using such methods, since it prohibits access to and comprehension of these internal representations. A unified understanding of how and what these architectures learn, and why they seem to work so well in a wide range of applications, is still pending. To better understand the processes behind neural network-driven phase detection in lattice models, multiple proposals have been made, such as pruning Wetzel2017; Kashiwa2019; Suchsland2018, utilising (kernel) support vector machines Ponte2017; Liu2019, and saliency maps Casert2019.

Also, in the broader scope of ML research, there has been growing interest in interpretability approaches, most of them focusing on post-hoc explanations for trained models. So-called attribution methods typically assign a relevance score to each of the input features that quantifies—depending on the attribution algorithm—which features the classifier was particularly sensitive to, or which influenced the algorithm towards/against an individual classification decision. In the domain of image recognition, such attribution maps are typically visualised as heatmaps overlaying the input image. The development of attribution algorithms is a very active field of research in the ML community, and we refer to dedicated research articles for more in-depth treatments DBLP:journals/corr/abs-1711-06104; MonDSP18. Very broadly, the most important types of such local interpretability methods can be categorised as: 1. gradient-based, such as saliency maps Simonyan2013, obtained by computing the derivative of a particular class score with respect to the input features, or integrated gradients Sundararajan2017AxiomaticAF; 2. decomposition-based, such as layer-wise relevance propagation (LRP) BachPLOS15 or DeepLIFT pmlr-v70-shrikumar17a; 3. perturbation-based, as in DBLP:conf/eccv/ZeilerF14, investigating the change in class scores when occluding parts of the input features.

Figure 2: Sketch of LRP through the last two layers of a classification network that predicts one-hot vectors. Relevance is indicated by arrow width. The conservation law requires the sum of widths to remain constant during backpropagation. Diagram adapted from Heatmapping.

Figure 3: Sketch of our strategy to learn meaningful structures from the simulation data by analysing the networks trained for action parameter inference. Field configurations used for training are either preprocessed into observables for the MLP (Approach A) or directly operated upon with a CNN (Approach B). Obtaining accurate predictions for the parameters indicates approximate cycle consistency in the above diagram, which supports the notion that the networks have successfully identified characteristic features. These can then be extracted in a subsequent interpretation step using LRP.

In this work, we focus on LRP, as mentioned a particular variant of decomposition-based attribution methods. However, we stress that our qualitative findings do not rely on the specific choice of method. The general idea of LRP is to start from a relevance assignment in the output layer and subsequently propagate this relevance back to the input in a layer-wise fashion using certain propagation rules; see the sketch in Figure 2 and Appendix C for details. In this way, the method assigns a relevance score to each neuron, where positive (negative) entries strongly influence the classifier towards (against) a particular classification decision.

IV Results

In this section, numerical results are presented which corroborate our rationale. We train a multilayer perceptron (MLP) and a convolutional neural network (CNN) to infer the associated hopping parameter from a set of known observables (Approach A), as well as solely from the raw field configurations (Approach B), akin to Shanahan2018. In the first case, without providing any prior knowledge of the phase boundaries, LRP manages to reveal the underlying phase structure and returns a phase-dependent importance hierarchy of the observables that is in accordance with physical expert knowledge. In the second case, by calculating the relevances of the learned filters, we can associate each of them with one of the physical phases and thereby extract the known order parameters. Moreover, it facilitates the construction of an observable that characterises the symmetric phase. Both variants of our strategy are sketched in Figure 3.

Predicting action parameters from samples is an ill-conditioned inverse problem, since the same field configuration can in principle be sampled from different combinations of couplings. Hence, the optimisation objective is formulated in terms of maximum likelihood estimation. Assuming a Gaussian distribution with fixed variance, this objective reduces to minimising the mean squared error (MSE), which we use as loss function in the following. In addition, we apply weight regularisation, see Appendix E for details.

Figure 4: Results for the MLP. Top: predictions, bottom: normalised LRP relevances of individual features. Error bars here and throughout this work are obtained with the statistical jackknife method.
Figure 5: Benchmark results for the random forest. Top: predictions, bottom: nominal contributions of individual features.

IV.1 Importance Hierarchies of Known Observables

In Section II we introduced a set of standard observables, consisting of the normal and staggered magnetisations as well as the time-sliced two-point correlation function (we use a slightly modified definition of the time-sliced correlator in order to remove lattice artifacts from the data, see Appendix B). It seems reasonable to assume that much of the relevant information characterising the phase structure and dynamics of the theory is encoded in these quantities. To check this, we create an ordered dataset of measurements of these quantities at various, evenly spaced values of κ (see Appendix B for details on the dataset) and use it to perform a regression analysis. We employ an MLP, also called a fully-connected neural network (see Appendix E for details on the specific architecture). The method is compared against a random forest regressor as a baseline, which is a standard method based on the optimisation of decision trees Breiman2001 (see Appendix D for details). The results for both approaches, shown in Figures 4 and 5, will be discussed in the following.

We observe qualitatively similar accuracy on the training and test data in the broken FM and AFM phases. This is expected, since we know from Figure 1 that one of the two magnetisations is always strictly monotonic in the respective phase and can therefore determine κ uniquely. However, both approaches yield at best mediocre performance in the symmetric PM phase. Here, both magnetisations tend to zero and therefore do not contain much relevant information. Moreover, the two-point correlator exhibits approximately symmetric properties around the centre of this phase and therefore also does not provide a unique mapping. This issue is reflected in the predictions of both methods. The random forest yields a symmetric discrepancy around the centre of the PM phase. In comparison, the MLP shows an improved performance on one side of this phase, albeit at the price of a larger variance on the other. At this point, we can already see that the chosen set of observables suffices to characterise the theory only in the broken phases, whereas in the symmetric phase, additional information appears to be necessary.

Before we embark on the search for the missing piece, let us first examine the results further to verify that the learned decision rules conform to the physical interpretation given above. We begin with the relevances as determined by LRP, shown in Figure 4 (bottom), and compare to the random forest benchmark below. As expected, M and M_s are relevant in the FM and AFM phases, respectively. There, considerable relevance is also assigned to the time-sliced correlator. However, its contribution appears to diminish when going deeper into the broken phases. Its comparably large relevance in the symmetric PM phase shows that it contains most of the information used for the noisy prediction there. As described above, the mediocre performance in this phase indicates that although the network seems to find weak signals, the chosen set of observables cannot be optimal.

The interpretation sketched above is further supported by the results obtained through random forest regression Breiman2001. Analogously to the relevances provided by LRP, we can determine nominal contributions of input features to the prediction and hence a measure of local feature importance (see Appendix D for details), which is shown in Figure 5 (bottom). In the broken FM and AFM phases, the respective contributions of M and M_s demonstrate a linear dependence on κ. Again, this clearly indicates that these quantities characterise the associated phases. For the symmetric PM phase, the situation appears more challenging, since no such clear dependence is observed for any of the observables. The non-zero contributions of features in the PM phase imply that they add some valuable information to the decision here. However, this has to be weighed against the observation that the accuracy in this region is poor. This further confirms our previous conclusion that relevant information to characterise this phase is largely lost in the preprocessing step, assuming that it was initially present in the raw field configurations. It is worth stressing that this analysis represents an independent confirmation of the results obtained above: the two algorithms (MLP vs. random forest) rely on fundamentally different principles, and we use a model-intrinsic interpretability measure for the random forest, whereas for the MLP we rely on LRP, i.e. a post-hoc attribution method.

IV.2 Extracting Observables from Convolutional Filters

In the previous section, we used a dataset of known observables to reconstruct κ. Calculating such quantities corresponds to heavy preprocessing of the high-dimensional field configuration data. The resulting low-dimensional features are far less noisy, implying a distillation of relevant information. This is a common procedure in the field of data science, and may become unavoidable for large lattices and/or theories with more degrees of freedom. For example, in state-of-the-art simulations of lattice QCD, the required memory to store a single field configuration can easily reach billions of floating point numbers. Nevertheless, using preprocessed data in the form of standard observables introduces strong biases towards known structures. If our perception of the problem or generally our physical intuition is flawed, machine learning cannot help us—the relevant information may very well be lost in the preprocessing step. In the present case specifically, it appears that important features in the PM phase are neglected by this procedure, assuming that structures characterising this phase do in fact exist. Therefore, it is instructive to search for signals of such structures by training neural networks directly on field configurations.

Figure 6: Results for the CNN. Top: prediction, bottom: normalised relevances of individual filters. The dashed curve corresponds to the cumulative relevance of filters 3 and 4.

As a starting point for this search, we first perform a PCA on the field configuration dataset. As previously mentioned, this has been done before with promising results Hu2017 ; Wang2016 ; Wetzel2017a , albeit not in exactly the same physical setting. PCA immediately identifies the normal and staggered magnetisations as dominant features, essentially reproducing the work of Wetzel2017a . All higher order principal components show a vanishing explained variance ratio, implying that no other relevant, purely linear features are present in the data. This observation indicates that, if a quantity exists which parametrises the symmetric PM phase, it cannot simply be a linear combination of the field variables.
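For reference, such an analysis can be performed with a few lines of scikit-learn; the array shapes and the number of components below are illustrative placeholders rather than the exact setup used in the paper.

```python
import numpy as np
from sklearn.decomposition import PCA

# placeholder dataset: 200 flattened configurations on a 16^3 lattice
configs = np.random.randn(200, 16, 16, 16)
X = configs.reshape(len(configs), -1)

pca = PCA(n_components=10)
pca.fit(X)

# for real data, the leading components encode the (staggered) magnetisation,
# while the explained variance of higher components quickly drops towards zero
print(pca.explained_variance_ratio_)
```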

Our improved approach is based on a convolutional neural network (CNN). The training procedure is largely equivalent to that for the MLP in the previous section, with the observable dataset replaced by the full field configurations. We train a CNN using five convolutional filters; the kernel shape and stride are listed in Table 3. In order to support explainability, we encourage weight sparsity by adding the L1 norm of the filter weights to the loss—also known as LASSO regularisation—as suggested in Casert2019 (see Appendix E for details). Due to the nature of the convolution operation, the learned filters have a direct interpretation in terms of first-order linear approximations of relevant observables. Hence, we expect the CNN to reproduce the PCA results at the very least, and aim for the identification of other, non-linear quantities, which the network can encode in subsequent layers. It is important to understand this difference between the approaches, even though both extract only linear signals in a first approximation.

The model predictions are shown in Figure 6 (top). We can immediately observe a superior performance in the PM phase compared to our previous results. The CNN consistently infers κ from the field configuration data with high accuracy. This indicates that it indeed manages to construct internal representations suitable not only for discerning the different phases, which would be sufficient for classification purposes, but also for an ordering of data points within each phase.

Figure 7: Learned weights of convolutional filters. Left to right: (no.1, PM); (no.2, AFM); (no.3, FM); (no.4, FM). The colour map is symmetric around zero. Red (blue) corresponds to positive (negative) weights.
Figure 8: Comparison of PM filters for three independent training runs of the CNN.

In order to interpret the predictions and extract knowledge about the learned representations, we have to customise LRP to our needs. In image recognition, as previously mentioned, one mostly aims at highlighting important regions in the input domain, leading to superimposed heatmaps. This is based on the inherent heterogeneity common to image data, where relevant features are usually localised. For field configurations on the lattice, due to the translational symmetry of the action and the resulting homogeneity, no particularly distinguished, localised region should be apparent in any given sample. However, each convolutional filter encodes an activation map that is sensitive to a specific feature present in a lattice configuration. In contrast to the usual ansatz, the spatial homogeneity therefore promotes global pooling over the relevances associated with each filter. Hence, instead of assigning relevances to input pixels, we are interested in the cumulative filter relevance, which indicates the importance of each individual filter for a particular prediction. Analogously to the rationale of the previous section, we can use this approach to build importance hierarchies of filters, thereby facilitating their physical interpretation as signals of relevant observables.
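Schematically, this global pooling reduces the spatially resolved relevance map of the convolutional layer to one number per filter. A minimal sketch, with the array layout assumed purely for illustration:

```python
import numpy as np

def cumulative_filter_relevance(relevance_map):
    """Global pooling of LRP relevances over the lattice.

    relevance_map: relevances attributed to the activations of the
    convolutional layer, with shape (n_filters, Lt, Lx, Ly).
    Returns one scalar per filter, normalised by the total absolute relevance.
    """
    per_filter = relevance_map.reshape(relevance_map.shape[0], -1).sum(axis=1)
    return per_filter / np.abs(per_filter).sum()

# toy usage: 5 filters acting on a 15^3 activation map
R = np.random.randn(5, 15, 15, 15)
print(cumulative_filter_relevance(R))
```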

Figure 6 (bottom) shows each filter relevance as a function of κ. We can recognise some similarities to the relevances in Figure 4, highlighting the underlying phase structure of the Yukawa theory. It appears that the model can parametrise each phase individually using one filter or a small subset of filters, while the others show small or insignificant relevances in the respective region. The learned weight maps are shown in Figure 7, where we also assign names to the filters depending on the associated phase, with the exception of filter no.0, which exhibits completely vanishing weights and relevance. It seems to have been dropped entirely by the network, indicating that four filters are sufficient to characterise all phases seen in the data. This reduction is an effect of the regularisation and constitutes a recurring pattern also when more filters are initially used, providing a first hint towards the number of independent quantities utilised by the network.

Figure 9: Normalised observables reconstructed from the learned filters. The quantities associated with the FM and AFM phases are compared to M and M_s. The pair O and O_s is related by Equation 6 and exhibits an approximate mirror symmetry around the centre of the phase diagram.

Let us begin by examining the results that directly correspond to known quantities. We observe that the FM1 and FM2 filters have entries of roughly uniform magnitude with a globally flipped sign. Accordingly, we can identify them as signals of the negative and positive branches of the magnetisation M, respectively. This is corroborated by their dominating relevances in the FM phase. The AFM filter exhibits alternating entries of uniform magnitude and therefore corresponds to the staggered magnetisation M_s, which accordingly dominates the AFM phase. Hence, both order parameters can be explicitly reconstructed from the CNN. The appearance of two filters for the magnetisation is easily understood by inspection of the network architecture in Table 3, the crucial point being the application of a ReLU activation after the convolution operation. Consider the action of a positively-valued filter on negatively magnetised field configurations, or vice versa. The resulting negative activation map is subsequently set to zero by the ReLU. Hence, in order to take both branches of M into account, two equivalent filters with opposing signs are required. The comparably large error bars in this region stem from the presence of positively and negatively magnetised samples in the dataset, which leads to a higher per-filter variance. Therefore, we additionally plot the cumulative relevance of both filters.

We now discuss the main object of interest, namely the PM filter. It supplies the dominant signal for the characterisation of this phase. A linear application of this filter to the configurations, as done for the FM and AFM filters, does not produce a monotonic quantity, which would be required for a unique ordering. This further supports the aforementioned evidence gathered by PCA for the absence of an additional, purely linear observable. Hence, the simple reconstruction scheme outlined in the previous paragraphs cannot be applied in this case. Instead, we undertake a heuristic attempt to reconstruct the relevant quantity. To this end, we note that the ReLU activation applied to the convolutional layer's output can effectively correspond to the absolute value function, albeit with reduced statistics, if the entries of the activation map are distributed accordingly. Inspired by this observation, we define the following observable O,

(9)

As with M and M_s, we obtain the corresponding staggered form O_s by applying the transformation given in Equation 6. The resulting pair of quantities is visualised by the following idealised filters.

Figure 10: Convolutional filters corresponding to the observable defined in Equation 9 (left) and its staggered counterpart (right).

The observable O defined in Equation 9 is the sum over all lattice sites of the modulus of a lattice derivative, taken in the diagonal direction, of blocked field values. This already explains the modulus: without it, O would be the sum over all sites of a total derivative, which vanishes identically. We also remark that O can be made isotropic by summing over all directions.

We now discuss the properties of the theory that are measured by O. In the continuum limit, O naively tends towards the volume integral over the modulus of the field derivative. Due to this modulus, ⟨O⟩ carries the same information as the expectation value of the kinetic term.

The blocking leads to a sensitivity of O to sign flips between nearest neighbours. While no continuum observable is sensitive to these sign flips, the continuum limit of ⟨O⟩ maintains this information. Accordingly, O exhibits a distinct behaviour in the presence of localised, (anti-)magnetised regions, even if the expectation values vanish globally. Possible local field alignments resulting in different values of O, but not of the standard derivative, are visualised in Figure 11.

Figure 11: Qualitative visualisation of local structures in field configurations operated upon by the PM filter. The sign is encoded by arrow orientation/colour. Diagonal neighbours tend to share the same sign everywhere in the phase diagram. On the contrary, nearest neighbours show a preference towards either same (left) or opposite (right) orientations. O is particularly sensitive to the local presence or absence of such sign flips in the PM phase, without the need for a globally non-zero expectation value of the magnetisations.

The construction of O and its sensitivities discussed above demonstrate again the usefulness of LRP: we can identify the learned representation as a feature of the dataset arising from the lattice discretisation. O and O_s as functions of κ are shown in Figure 9, together with the other reconstructed observables and their respective analytical counterparts. A monotonic, roughly linear dependence is observed in the PM phase, indicating that the quantity indeed provides a unique mapping which aids the inference. In fact, if O is included in the set of predefined observables for the inference approach detailed in the previous section, the prediction accuracy of the MLP in this phase becomes comparable to that of the CNN.

In conclusion, we find that the CNN characterises the PM phase by additionally measuring kinetic contributions in the described manner, rather than only expectation values of the condensate as in the broken phases. Still, the magnetisations are utilised as well, judging from the comparably large relevances of the FM filters in this region. Due to the opacity of the fully-connected layers following the convolution, some ambiguity remains regarding the precise decision rules that the network implements based on these quantities. This residual lack of clarity can likely be resolved by manually enforcing locality in the internal operations, e.g. by introducing artificial bottlenecks into the network. Of course, the form of O is also not exactly equivalent to the operations of the CNN, even though they share many important features. In particular, there is a mismatch between the averaging procedure and the MaxPool layer. Effects associated with the choice of different activation functions and pooling layers, which may be tailored more specifically towards certain types of observables, should be investigated in the future. However, our analysis shows that the overlap with the learned internal representation is significant.

V Conclusions and Outlook

We have investigated the application of interpretability methods to deep neural network classifiers as a general-purpose framework for the identification of physical features from lattice data. The approach facilitates an interpretation of a network’s predictions, permitting a quantitative understanding of the internal representations that the network learns in order to solve a pretext task—in this case, inference of action parameters. This culminates in the extraction of relevant observables from the data, leading to insights about the phase structure.

First, both types of magnetisations and the time-sliced, connected two-point correlator were used as training data for an MLP (see Figure 4). Inference of the hopping parameter was shown to work in each of the two broken phases. However, in the symmetric phase, the network was observed to suffer from poor accuracy. This indicates that the amount of relevant information present in the dataset is insufficient for the network to fully capture the dynamics of the theory. Using layer-wise relevance propagation, we determined a κ-dependent importance hierarchy of the observables. Using this approach, we were able to confirm our physics expectations about order parameters being relevant within their associated phases. Moreover, while the two-point correlation function is sensitive to the PM phase, this signal is insufficient for attaining high accuracy with the MLP. Our numerical results and interpretation thereof were further verified by a random forest regression benchmark performed on the same dataset, which demonstrated qualitatively comparable accuracy (see Figure 5).

Next, we trained a CNN directly on the field configurations. In contrast to the aforementioned results, the CNN was shown to yield superior accuracy for the same inference task (see Figure 6). Therefore, the set of observables chosen previously must have neglected important information, which the network managed to distill from the raw data. Employing LRP, a cumulative relevance was assigned to the individual convolutional filters, revealing a distinctive pattern that explains the decision process. In particular, we observed that the network specifically assigned filters to each of the phases of the theory, with small to vanishing relevances in the remaining phases. This also indicates where phase transitions are located. We confirmed that the learned filters correspond to representations of the known order parameters by examining the weight maps (see Figures 7 and 8), essentially reproducing previous results.

Guided by the filter analysis, we constructed an observable that characterises the symmetric phase. In a heuristic attempt to find the exact form of this quantity, we defined O in Equation 9 and showed that it exhibits several interesting properties (see Figure 9). We interpreted this quantity as a particular measure of local fluctuations that is also sensitive to nearest-neighbour sign flips. This further validates our physical intuition, since in the PM phase, we expect that relevant information for its characterisation is encoded by kinetic contributions. As discussed in detail below Equation 9, the naive continuum limit of O is simply the volume integral of the modulus of the field derivative and hence has lost the information about nearest-neighbour sign flips, while the continuum limit of its expectation value, ⟨O⟩, keeps its sensitivity towards this property. Accordingly, the construction of this observable guided by the filter analysis is non-trivial evidence for the potential power of the present approach: the results demonstrate that we can identify relevant structures which may otherwise stay hidden. At this point, LRP has indeed facilitated a deeper understanding of the CNN, by explaining the origin of its comparably high accuracy with respect to the MLP. With these results, we have conclusively established the value of interpretability methods in deep learning analyses of lattice data.

In the present work, the emphasis was put on the methodological aspects of the analysis in order to form a comprehensive basis for future efforts. Many interesting aspects, such as an investigation of the fermionic sector, were barely discussed. Instead, we have focused on the inference of the hopping parameter. Including other action parameters into the labels, such as the Yukawa coupling or chemical potential, is a promising endeavour for the future, as it will likely lead to an improvement in comparison to the current results. This is necessary in order to pave the way towards an application to more interesting scenarios, such as QCD at finite density or competing order regimes in the Hubbard model. Moreover, the introduced ML pipeline has the potential to provide insight also in various other areas of computational physics.

Acknowledgements

We thank M. Scherzer, I.-O. Stamatescu, S. J. Wetzel and F. P.G. Ziegler for discussions. This work is supported by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under Germany’s Excellence Strategy EXC 2181/1 - 390900948 (the Heidelberg STRUCTURES Excellence Cluster) and under the Collaborative Research Centre SFB 1225 (ISOQUANT), EMMI, the BMBF grants 05P18VHFCA, 01IS14013A (Berlin Big Data Center), and 01IS18037I (Berlin Center for Machine Learning).

Appendix A Theory and Simulation Details

A.1 Dimensionless Form of the Klein-Gordon Action

The lattice action for real, scalar φ⁴-theory in d dimensions is defined as

S[φ₀] = Σ_x a^d [ ½ Σ_{μ=1}^{d} ( (φ₀(x + aμ̂) − φ₀(x)) / a )² + ½ m₀² φ₀(x)² + (g₀/4!) φ₀(x)⁴ ],    (10)

where a is the lattice spacing, φ₀, m₀ and g₀ correspond to the bare field, mass and coupling constant, and μ̂ is the unit vector in μ-direction. The action can be cast into a dimensionless form through the following transformation:

a^{(d−2)/2} φ₀ = √(2κ) φ,    (a m₀)² = (1 − 2λ)/κ − 2d,    a^{4−d} g₀ = 6λ/κ².    (11)

Here, κ is commonly called the hopping parameter and λ now takes the role of the coupling constant. Applying this transformation results in

S[φ] = Σ_{x∈Λ} [ −2κ Σ_{μ=1}^{d} φ(x) φ(x+μ̂) + (1 − 2λ) φ(x)² + λ φ(x)⁴ ].    (12)

A.2 Simulating Fermions

Calculating the determinant of the discretised Dirac operator (Equation 2) exactly and repeatedly, which is in principle necessary for importance sampling, is computationally intractable even for moderate lattice sizes. The usual approach is to approximate its value stochastically, e.g. by introducing auxiliary bosonic field variables (commonly called pseudo-fermions), which guarantees an asymptotically exact distribution. Simulations based on the numerical solution of differential equations, such as the Hybrid Monte Carlo (HMC) algorithm or Langevin dynamics, can exploit the comparably low cost of computing only the matrix inverse with the conjugate gradient method. In this work, we exclusively employ the HMC algorithm to generate data.

Appendix B Lattice Datasets

Shared parameters: 16, 1.1, 20, 0.25, 0.005
Training set: 200 samples per κ
Test set: 100 samples per κ
Table 1: Action/simulation parameters used for the training and test datasets.

All field configurations composing the datasets used in this work are generated with the parameters listed in Table 1. A single, labeled sample is given by the mapping

{φ(x)}_{x∈Λ}  ↦  κ.    (13)

In order to explicitly enforce the global sign-flip symmetry onto the neural networks, we include each configuration twice in the dataset, once with a globally flipped sign. This raw data is directly used to train the CNN. For the MLP, the samples are preprocessed by computing the chosen set of observables for each configuration,

(14)

In this case, we can simply take the modulus of the magnetisations without losing information, since only two branches with exactly opposite signs are present in the phase diagram. Due to the finite expectation value of the staggered magnetisation, the AFM phase contains unphysical negative correlations. In order to remove these lattice artifacts, we adapt the usual time-sliced two-point correlator to

(15)

Generally, LRP is designed for classification problems. Therefore, we discretise κ to facilitate the formulation of the inference objective as a classification task. All values of κ are assigned to individual bins and the networks are tasked to predict the correct bin. In order to retain a notion of locality, the true bins are additionally smeared out with a Gaussian distribution, resulting in the target labels

y_b ∝ exp( −(b − b_κ)² / (2σ²) ).    (16)

Here, b denotes the bin number, b_κ the bin containing the true value of κ, and the variance σ² is a fixed hyperparameter. In combination with an MSE loss, we obtain qualitatively similar prediction results compared to a standard regression approach.
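A minimal sketch of this label construction is given below; the number of bins matches the 140 output nodes listed in Tables 2 and 3, while the κ range, the variance and the normalisation are illustrative placeholders, since the actual values are not reproduced here.

```python
import numpy as np

def smeared_labels(kappa, kappa_min=-0.3, kappa_max=0.3, n_bins=140, sigma2=1.0):
    """Discretise kappa into bins and smear the one-hot target with a Gaussian.

    kappa_min, kappa_max and sigma2 are placeholders; n_bins matches the
    140 output nodes of the networks in Tables 2 and 3.
    """
    bins = np.arange(n_bins)
    true_bin = (kappa - kappa_min) / (kappa_max - kappa_min) * (n_bins - 1)
    labels = np.exp(-(bins - true_bin) ** 2 / (2.0 * sigma2))
    return labels / labels.sum()   # normalisation is a choice, not prescribed above

print(smeared_labels(0.05)[:5])
```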

Appendix C Propagation Rules

This section contains a summary of the mathematical background of LRP, in particular regarding the propagation rules. Generally, the relevance assigned to a layer depends on the activations of the preceding layer. Given some input to the network, its predicted class is identified by the output neuron with the largest response. This neuron's activation, together with zero entries for all other classes, defines the output relevance vector. This output-layer relevance can then be backpropagated through the whole network, which results in the aforementioned heatmap on the input. Importantly, the propagation rules are designed such that the total relevance is conserved,

Σ_i R_i^{(l)} = Σ_j R_j^{(l+1)},    (17)

where the layer index l can indicate any layer. This conservation law ensures that explanations from all layers are closely related and prohibits additional sources of relevance during the backpropagation. A Taylor expansion of the relevance around a reference activation ã yields

R_j^{(l+1)}(a) = R_j^{(l+1)}(ã) + Σ_i ∂R_j^{(l+1)}/∂a_i |_{a=ã} (a_i − ã_i) + ⋯ .    (18)

Here, we choose ã to be a so-called root point, which corresponds to an activation with vanishing consecutive-layer relevance, R_j^{(l+1)}(ã) = 0. By definition, it is localised on the layer's decision boundary, which constitutes a hypersurface in the activation space. Hence, the root point is not uniquely defined and we need to impose an additional criterion. However, given such a point, we can identify the first-order terms as the relevance propagation rule. The remaining root-point dependence gives rise to a variety of possible propagation rules. For instance, the w²-rule minimises the Euclidean distance between the neuron activation and the decision boundary in order to single out a root point. Visualisations of root points, as well as essential derivations and analytical expressions for propagation rules, can be found in MonDSP18.
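As a concrete illustration of such a rule, the following sketch implements the commonly used LRP-ε variant for a single linear layer. This is not necessarily the specific rule combination employed in the paper, but it conserves the total relevance of Equation 17 up to the stabiliser term and the share absorbed by the bias.

```python
import numpy as np

def lrp_epsilon(a, w, b, relevance_out, eps=1e-6):
    """LRP-epsilon rule for one linear layer with pre-activations z = a @ w + b.

    a: lower-layer activations, shape (n_in,)
    w: weights, shape (n_in, n_out); b: biases, shape (n_out,)
    relevance_out: upper-layer relevances, shape (n_out,)
    Returns the lower-layer relevances, shape (n_in,).
    """
    z = a @ w + b
    z = z + eps * np.where(z >= 0, 1.0, -1.0)   # stabiliser avoids division by zero
    s = relevance_out / z
    return a * (w @ s)

# toy usage: propagate the relevance of one output neuron through a 4 -> 3 layer
rng = np.random.default_rng(0)
a, w, b = rng.normal(size=4), rng.normal(size=(4, 3)), rng.normal(size=3)
R_out = np.array([1.0, 0.0, 0.0])
R_in = lrp_epsilon(a, w, b, R_out)
print(R_in, R_in.sum())
```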

Appendix D Random Forest Details

Random forests Breiman2001 denote a predictive ML approach based on ensembles of decision trees. They utilise the majority vote of multiple randomised trees in order to arrive at a prediction, which greatly improves the generalisation performance compared to using a single tree. The elementary building block is a node performing a binary decision based on a single feature criterion. New nodes are connected sequentially with so-called branches. A single decision tree is grown iteratively from a root node to multiple leaf nodes. A concrete prediction corresponds to a unique path from the root to a single leaf. Each node on the path is associated with a specific feature. Hence, we can sum up the contributions to the decision separately for each feature by moving along the path,

prediction(x) = bias + Σ_k contribution_k(x),    (19)

where the sum runs over the features k. Here, the bias corresponds to the average prediction at the root node.

We employ the scikit-learn implementation scikit-learn in combination with a TreeInterpreter extension Saabas2015 . The latter reference also provides an excellent introduction to the concept of feature contributions. The random forest is initialised with 10 trees and a maximum tree depth of 10. This parameter is essential for regularisation, since an unconstrained depth causes overfitting and thus results in poor generalisation performance.
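A minimal sketch of this setup is shown below; the dataset is a random placeholder, while the ensemble size and the maximum depth follow the values quoted above.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from treeinterpreter import treeinterpreter as ti

# placeholder dataset: 18 precomputed observables per sample, kappa as target
X = np.random.randn(500, 18)
y = np.random.uniform(-0.3, 0.3, size=500)

forest = RandomForestRegressor(n_estimators=10, max_depth=10, random_state=0)
forest.fit(X, y)

# decompose each prediction into a bias term plus per-feature contributions, cf. Eq. (19)
prediction, bias, contributions = ti.predict(forest, X[:5])
print(prediction.ravel())
print(bias[:5])
print(contributions.shape)   # (5, 18): one contribution per sample and feature
```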

Appendix E Network Architectures and Implementation Details

We use the PyTorch framework PyTorch. The machinery of LRP is included by defining a custom torch.nn.Module and equipping all layers with a relevance propagation rule. Furthermore, all biases are restricted to negative values in order to ensure the existence of a root point. For training, we employ the Adam optimiser Adam with default hyperparameters and a batch size of 16.

For both networks, the first layer undergoes least absolute shrinkage and selection operator (LASSO) regularisation during training, which encourages sparsity and thereby enhances interpretability. This corresponds to simply adding the L1 norm of the respective weights to the MSE loss, which accordingly takes the form

L = (1/N) Σ_n || ŷ_n − y_n ||² + λ₁ Σ_{i,j} | w_{ij} |.    (20)

Here, ŷ and y denote the prediction and ground-truth labels, the indices i and j label the input and output nodes of the first layer with weights w_{ij}, and n runs over the N samples. The quantity λ₁ parametrises the strength of the regularisation.
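In PyTorch, this simply amounts to adding the L1 norm of the first layer's weight tensor to the MSE loss; the sketch below uses a placeholder value for the regularisation strength λ₁.

```python
import torch
import torch.nn as nn

def lasso_mse_loss(first_layer, prediction, target, lambda1=1e-3):
    """MSE loss plus the L1 norm of the first layer's weights, cf. Eq. (20).

    lambda1 is a placeholder; the value used in the paper is not reproduced here.
    """
    mse = nn.functional.mse_loss(prediction, target)
    return mse + lambda1 * first_layer.weight.abs().sum()

# toy usage with a single linear layer standing in for the full network
layer = nn.Linear(18, 140)
x, y = torch.randn(16, 18), torch.randn(16, 140)
loss = lasso_mse_loss(layer, layer(x), y)
loss.backward()
print(loss.item())
```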

The network architectures used in this work are given in the following tables.

Layer Specification Propagation rule
Linear in=18, out=256 – rule
ReLU
Linear in=256, out=128 – rule
ReLU
Linear in=128, out=140 – rule
LeakyReLU negative slope=0.01
Table 2: Network architecture of the MLP. The first layer undergoes LASSO regularisation during training.
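A minimal PyTorch sketch of the architecture in Table 2 is given below; the LRP propagation rules and the restriction to negative biases mentioned above are omitted.

```python
import torch
import torch.nn as nn

# MLP from Table 2: 18 input observables, 140 output bins
mlp = nn.Sequential(
    nn.Linear(18, 256),
    nn.ReLU(),
    nn.Linear(256, 128),
    nn.ReLU(),
    nn.Linear(128, 140),
    nn.LeakyReLU(negative_slope=0.01),
)

x = torch.randn(16, 18)    # one batch of 16 samples of the 18 observables
print(mlp(x).shape)        # torch.Size([16, 140])
```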
Layer Specification Propagation rule
Conv3d kernel=B, strides=A – rule
ReLU
MaxPool3d kernel=B, strides=B – rule
Linear in=1715, out=256 – rule
ReLU
Linear in=256, out=140 – rule
ReLU
Table 3: Network architecture of the CNN; A and B denote the stride and kernel shape of the convolution, respectively. The first layer undergoes LASSO regularisation during training.
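A minimal PyTorch sketch of the architecture in Table 3 is given below. The kernel shape B = 2×2×2 with stride A = 1 is an assumption on our part—it is consistent with the flattened size of 1715 = 5·7³ after max pooling on a 16³ lattice, but is not stated explicitly above. The LRP rules are again omitted.

```python
import torch
import torch.nn as nn

# CNN from Table 3, assuming B = 2x2x2 kernels with stride A = 1 on a 16^3 lattice;
# this reproduces the flattened size 1715 = 5 * 7^3 after max pooling
cnn = nn.Sequential(
    nn.Conv3d(in_channels=1, out_channels=5, kernel_size=2, stride=1),
    nn.ReLU(),
    nn.MaxPool3d(kernel_size=2, stride=2),
    nn.Flatten(),
    nn.Linear(1715, 256),
    nn.ReLU(),
    nn.Linear(256, 140),
    nn.ReLU(),
)

phi = torch.randn(4, 1, 16, 16, 16)   # batch of four field configurations
print(cnn(phi).shape)                 # torch.Size([4, 140])
```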

References