I. LHCb framework and data description
LHCb is an experiment located at the LHC accelerator at CERN, Geneva, mainly dedicated to the study of the physics of b and c quarks produced in proton-proton collisions. The LHCb detector includes a high-precision tracking system, which provides the measurement of the momentum of charged particles, and a particle identification system that distinguishes different types of charged hadrons, photons, electrons and muons Aaij et al. (2015a). The energy of charged and neutral particles is measured by electromagnetic and hadronic calorimeters.
LHCb is fully instrumented in the phase-space region of proton-proton collisions defined by the pseudorapidity (η) range [2, 5], with η defined as
η = −ln [tan(θ/2)],
where θ is the angle between the particle momentum and the beam axis. The direction of a particle's momentum is fully identified by η and by the azimuthal angle φ, defined as the angle in the plane transverse to the beam axis. The projection of the momentum onto this plane is called transverse momentum (pT). In the following we work with natural units.
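As an illustration of the kinematic variables above, the following Python sketch (not part of the LHCb software; the function name is our own) computes η, φ and pT from the Cartesian momentum components of a particle:

```python
import numpy as np

def kinematics(px, py, pz):
    """Pseudorapidity, azimuthal angle and transverse momentum
    from Cartesian momentum components (natural units)."""
    p = np.sqrt(px**2 + py**2 + pz**2)
    theta = np.arccos(pz / p)             # polar angle w.r.t. the beam (z) axis
    eta = -np.log(np.tan(theta / 2.0))    # pseudorapidity
    phi = np.arctan2(py, px)              # azimuthal angle in the transverse plane
    pt = np.sqrt(px**2 + py**2)           # transverse momentum
    return eta, phi, pt
```

Note that η diverges for particles exactly along the beam axis (pz → ±p), reflecting the geometry of the definition.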
At LHCb, jets are reconstructed using a Particle Flow algorithm A (1994) for the selection of charged and neutral particles and the anti-kT algorithm Cacciari et al. (2008) for clusterisation. The jet momentum is defined as the sum of the momenta of the particles that form the jet, while the jet axis is defined as the direction of the jet momentum. The particles that form the jet are contained in a cone of radius ΔR = √((Δη)² + (Δφ)²), where Δη and Δφ are respectively the pseudorapidity difference and the azimuthal-angle difference between the particle momenta and the jet axis. For each particle inside the jet cone, the momentum relative to the jet axis (pTrel) is defined as the projection of the particle momentum onto the plane transverse to the jet axis.
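A minimal sketch of the two jet-substructure quantities just defined, assuming the particle and jet axis are given in (η, φ) and in Cartesian momentum components respectively (helper names are our own):

```python
import numpy as np

def delta_r(eta_p, phi_p, eta_jet, phi_jet):
    """Angular distance between a particle and the jet axis in (eta, phi)."""
    deta = eta_p - eta_jet
    dphi = (phi_p - phi_jet + np.pi) % (2 * np.pi) - np.pi  # wrap to [-pi, pi)
    return np.sqrt(deta**2 + dphi**2)

def pt_rel(p, jet_axis):
    """Momentum component of a particle transverse to the jet axis."""
    p = np.asarray(p, dtype=float)
    n = np.asarray(jet_axis, dtype=float)
    n = n / np.linalg.norm(n)
    p_par = np.dot(p, n) * n            # component along the jet axis
    return np.linalg.norm(p - p_par)    # component transverse to the jet axis
```

The azimuthal difference is wrapped into [−π, π) so that particles on opposite sides of the φ = ±π boundary are not assigned spuriously large distances.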
A topic of great interest for the experiment is the identification of the charge of the quark that generated a jet, i.e. b or b̄. Such identification can be used in many physics measurements, and it is at the core of the determination of the charge asymmetry in bb̄ pair production, which is sensitive to physics beyond the Standard Model Murphy (2015).
The separation between b and b̄ jets is a highly difficult task, because the quark fragmentation produces dozens of particles via non-perturbative Quantum Chromodynamics processes, resulting in non-trivial correlations between them and the original particle. The algorithms used to identify the charge of the quarks are called tagging methods. Two categories of tagging algorithms exist: those based on a single particle inside the jet, e.g. the muon, and those inclusively exploiting the jet substructure, i.e. information on all the jet constituents, as shown in Fig. 2.
The tagging algorithm performance is typically quantified with the tagging power εtag, which represents the effective fraction of jets that contribute to the statistical uncertainty in an asymmetry measurement D0 collaboration et al. (2006); CDF collaboration et al. (2006). The tagging power takes into account the efficiency ε, i.e. the fraction of jets where the classifier takes a decision, and the prediction accuracy a, i.e. the fraction of classified jets where the right decision is taken:
εtag = ε (2a − 1)².
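In code, the tagging power reduces to a one-liner; the sketch below assumes efficiency and accuracy are given as fractions in [0, 1]:

```python
def tagging_power(efficiency, accuracy):
    """Effective tagging power eps_tag = eps * (2a - 1)^2.
    The dilution factor (2a - 1) enters squared, so accuracy
    is weighted more heavily than efficiency."""
    dilution = 2.0 * accuracy - 1.0
    return efficiency * dilution**2
```

A random classifier (a = 0.5) has zero tagging power regardless of its efficiency, which is why optimising the decision threshold (see below) trades efficiency against accuracy.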
LHCb measured the forward-central asymmetry using the dataset collected in the LHC Run I Aaij et al. (2014) using the muon-tagging approach: in this method, the muon with the highest momentum in the jet cone is selected, and its electric charge is used to decide on the quark charge. Indeed, if this muon is produced in the original semileptonic decay of the b hadron, its charge is fully correlated with the quark charge. To date, the muon-tagging method gives the best performance on b vs b̄ jet discrimination. Although this method can distinguish between b and b̄ quarks with good accuracy, its efficiency is low, as it is only applicable to jets where a muon is found, and it is intrinsically limited by the b-hadron branching ratio into semileptonic decays. Additionally, muon tagging may fail in some scenarios, where the selected muon is produced not by the decay of the b hadron but in other decay processes. In these cases, the muon charge may not be fully correlated with the quark charge.
The LHCb simulation datasets used for our analysis are produced with a Monte Carlo technique using the framework GAUSS Clemencic et al. (2011), which makes use of PYTHIA 8 Sjöstrand et al. (2008) to generate proton-proton interactions and jet fragmentation, and uses EvtGen Lange (2001) to simulate hadron decays. The GEANT4 software Agostinelli et al. (2003); Allison et al. (2006) is used to simulate the detector response, and the signals are digitised and reconstructed using the LHCb analysis framework.
II. Machine Learning Methods
The identification of the quark charge described in Sec. I can be formulated as a supervised learning problem. As described in detail in App. A, we implemented a TTN as a classifier and applied it to the LHCb problem, analysing its performance; the same is done for a DNN, and both algorithms are compared with the muon-tagging approach. Both the TTN and the DNN use features of the jet substructure as input for the supervised learning. The features are determined as follows: the muon with the highest pT among all detected muons in the jet cone is selected, and the same is done for the highest-pT kaon, pion, electron, and proton. In this way, 5 particles of different types are selected. For each particle, three observables are considered: (i) the momentum relative to the jet axis (pTrel), (ii) the particle charge (q), and (iii) the distance ΔR in the (η, φ) space between the particle and the jet axis, resulting in 15 observables. If a particle type is not found in a jet, the related features are set to zero. The 16th feature is the total jet charge Q, defined as the weighted average of the particle charges inside the jet, using the particle pT as weights:
Q = (Σᵢ qᵢ pT,ᵢ) / (Σᵢ pT,ᵢ).
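The jet-charge feature can be sketched as follows (an illustration with our own helper name; the particle pT values act as the weights):

```python
import numpy as np

def jet_charge(charges, pts):
    """Total jet charge: pT-weighted average of the particle charges."""
    charges = np.asarray(charges, dtype=float)
    pts = np.asarray(pts, dtype=float)
    return float(np.sum(charges * pts) / np.sum(pts))
```

High-pT particles dominate the average, so the leading fragmentation products drive the sign of Q.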
The used dataset contains b and b̄ jets produced in proton-proton collisions Aaij et al. (2020a, b). First, pairs of b and b̄ jets are selected by requiring a jet pT greater than 20 GeV and an η in the range [2.2, 4.2] for both jets. Then, the dataset is split into two parts: one is used in the training process, while the remainder is used as a test set to evaluate and compare the different methods.
We train the TTN as described in App. A and analyse the data with different bond dimensions χ. The auxiliary dimension χ controls the number of free parameters within the variational TTN ansatz. While the TTN is able to capture more information from the training data with increasing bond dimension χ, choosing χ too large may lead to overfitting and can thus worsen the results on the test set. For the DNN we use an optimised network with three hidden layers of 96 nodes (see App. E for details). Hereafter we aim to compare the best possible performance of both approaches; therefore, we optimised the hyperparameters of both methods in order to obtain the best possible results from each of them, TTN and DNN.
For each event prediction, both methods give as output a probability P ∈ [0, 1]
to classify a jet as generated by a b or a b̄ quark. This probability (i.e. the confidence of the classifier) is normalised in the following way: for values of the probability P < 0.5 (P > 0.5) a jet is classified as generated by a b quark (b̄ quark), with increasing confidence going towards 0 (1). Therefore a completely confident classifier returns a probability distribution peaked at 0 and 1 for jets classified as generated by b and b̄ quarks respectively.

III. Jet classification performance
In the following, we present the jet classification performance for the TTN and the DNN applied to the LHCb dataset, comparing both ML techniques also with the muon tagging approach.
We introduce a threshold symmetrically around the prediction confidence of 0.5, within which we classify the event as unknown. We optimise this cut on the predictions of the classifiers (i.e. on their confidences) to maximise the tagging power of each method, based on the training samples. Thereby, for both the TTN and the DNN we predict a b̄ quark for confidences above the upper threshold, a b quark for confidences below the lower threshold, and make no prediction for the range in between.
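The cut logic can be sketched as follows, assuming the convention that confidences above (below) 0.5 correspond to one (the other) class; the helper name and toy threshold are our own:

```python
import numpy as np

def tag_with_threshold(probs, labels, delta):
    """Classify only when |P - 0.5| > delta; return the efficiency
    (fraction of decided jets) and the accuracy on the decided subset."""
    probs = np.asarray(probs, dtype=float)
    labels = np.asarray(labels, dtype=int)
    decided = np.abs(probs - 0.5) > delta   # jets on which a decision is taken
    preds = (probs > 0.5).astype(int)       # class 1 above 0.5, class 0 below
    efficiency = float(decided.mean())
    accuracy = float((preds[decided] == labels[decided]).mean())
    return efficiency, accuracy
```

Scanning delta on the training set and keeping the value that maximises ε(2a − 1)² reproduces the optimisation described above.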
Applying both ML approaches to the test data after the training procedure, we obtain similar performances in terms of prediction accuracy. Taking the threshold for classifying data as unknown into account, the TTN and the DNN take a decision in a similar fraction of the cases with comparable overall accuracy (see App. H for further details). We further checked both approaches for biases in physical quantities to ensure that both methods properly capture the physical process behind the problem and can thus be used as valid tagging methods for LHCb events (see App. F).
In Fig. 3 we present the tagging power of the different approaches with respect to the jet transverse momentum pT. Evidently, both Machine Learning methods perform significantly better than the muon-tagging approach over the complete range of jet transverse momentum, while the TTN and DNN show comparable performances within the statistical uncertainties.
In Fig. 3 we also present the histograms of the confidences for predicting the jet flavour for all samples in the test dataset, for the DNN and the TTN respectively. Interestingly, even though both approaches give similar performances in terms of overall precision and tagging power, the prediction confidences are fundamentally different. For the DNN, we see a Gaussian-like distribution with, in general, not very high confidences for each prediction. Thus, we obtain fewer correct predictions with high confidence, but at the same time fewer wrong predictions with high confidence compared to the TTN. The TTN, on the other hand, shows a flatter distribution including more predictions, correct and incorrect, with higher confidence. Remarkably though, we can see peaks for extremely confident predictions (around 0 and around 1) for the TTN. These peaks can be traced back to the presence of the muon, whose charge is a well-defined predictor for a jet generated by a b quark. The DNN lacks these confident predictions exploiting the muon charge.
Finally, in Fig. 3 the scatter plot of the TTN and DNN outputs is presented: for each jet classified as coming from a b quark (green dots) or b̄ quark (red dots) we relate the outputs of the two classifiers to spot any correlation between them. The graph shows that, despite the insights from the different confidence distributions, the outputs of the two classifiers are linearly correlated, with a high Pearson correlation factor.
In conclusion, the two different approaches result in similar outcomes in terms of prediction performances. However, the underlying information used by the two discriminators is inherently different. For instance, the DNN predicts more conservatively, in the sense that the confidences for each prediction tend to be lower compared with the TTN. Additionally, the DNN does not exploit the presence of the muon as strongly as the TTN, even though the muon is a good predictor for the classification.
IV. Exploiting insights into the data with TTN
The TTN analysis allows us to efficiently measure the captured correlations and the entanglement within the classifier. These measurements give insight into the learned data and can be exploited to identify the most important features used for the classification.
Therefore, we interpret the TTN classifier as a set of quantum many-body wavefunctions ψℓ, one for each class label ℓ (see App. A for further details). To perform the classification, each feature x is encoded by the local feature map
(1) φ(x) = (cos(πx/2), sin(πx/2)),
thus each feature is represented by a quantum spin. Accordingly, each sample is mapped into a product state Φ(x). Alongside, when we classify a sample x, we compute the overlap ⟨ψℓ|Φ(x)⟩ for all labels ℓ with the product state, resulting in the weighted probabilities
pℓ(x) = |⟨ψℓ|Φ(x)⟩|² / Σℓ′ |⟨ψℓ′|Φ(x)⟩|²
for each class. We stress that we can encode the input data with different non-linear feature maps as well (see App. A.4).
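The encoding of Eq. (1) is straightforward to sketch (assuming features already rescaled to [0, 1]; helper names are our own):

```python
import numpy as np

def local_map(x):
    """Spin-like local feature map of Eq. (1): x in [0,1] -> unit 2-vector."""
    return np.array([np.cos(np.pi * x / 2.0), np.sin(np.pi * x / 2.0)])

def product_state(sample):
    """List of local spins whose tensor product forms Phi(x)."""
    return [local_map(x) for x in sample]
```

Each local vector has unit norm for any x, so the full product state is automatically normalised.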
We can now calculate the connected correlation functions
Cℓᵢⱼ = ⟨ψℓ| σᶻᵢ σᶻⱼ |ψℓ⟩ − ⟨ψℓ| σᶻᵢ |ψℓ⟩ ⟨ψℓ| σᶻⱼ |ψℓ⟩
for each pair of features (located at sites i and j), to gain an insight into the different information the features provide. In the case of maximum correlation or anti-correlation among them for all classes ℓ, the information of one of the features can be obtained from the other one, and thus one of them can be neglected. In the case of no correlation, the two features may provide fundamentally different information for the classification. For both labels ℓ the results are very similar, thus in Fig. 4 we present the correlations for one label only (see App. B.1 for further discussion of the correlation measurements).
The correlation analysis presented above allows pinpointing whether two features give independent information. However, the correlation itself does not tell whether this information is important for the classification. We thus computed the entanglement entropy S of each feature, as reported in Fig. 4. The entanglement entropy reflects the shared information between two TN bipartitions. The entanglement is measured via the Schmidt decomposition, that is, decomposing |ψℓ⟩ into two bipartitions A and B Nielsen and Chuang (2000) such that
|ψℓ⟩ = Σα λα |φαᴬ⟩ ⊗ |φαᴮ⟩,
where the λα
are the Schmidt coefficients (non-zero, normalised singular values of the decomposition). The entanglement entropy is then defined as
S = −Σα λα² ln λα².
Consequently, the minimal entropy S = 0 is obtained only if there is one single non-zero singular value λ₁ = 1. In this case, we can completely separate the two bipartitions as they share no information. On the contrary, a higher S means that information is shared among the bipartitions. In the Machine Learning context, the entropy can be interpreted as follows: if the features in one bipartition provide no valuable information for the classification task, the entropy will be zero. On the other hand, S increases the more information between the two bipartitions is exploited. This analysis can be used to optimise the learning procedure: whenever S ≈ 0 for the bipartition separating a feature from the rest, that feature can be discarded with negligible loss of information for the classification. Thereby, a new model with fewer features and fewer tensors can be introduced; the new, more efficient model results in the same predictions in less time. On the contrary, a significantly large bipartition entropy highlights which features, or combinations of features, are important for the correct predictions; in this case the features can be reordered for a better representation of the classifying wavefunction. Finally, if two features are completely (anti-)correlated, we can neglect at least one of them.
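Given the Schmidt coefficients of a bipartition, the entropy above follows directly from the squared, normalised singular values (a small helper of our own):

```python
import numpy as np

def entanglement_entropy(singular_values):
    """Von Neumann entropy S = -sum(lam^2 * ln(lam^2)) of a bipartition."""
    lam = np.asarray(singular_values, dtype=float)
    lam = lam[lam > 0]                 # drop exact zeros
    lam = lam / np.linalg.norm(lam)    # enforce normalisation sum(lam^2) = 1
    p = lam**2
    return float(-np.sum(p * np.log(p)))
```

A single non-zero coefficient gives S = 0 (the feature carries no information for the classifier and can be discarded), while equal coefficients give the maximal entropy ln χ.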
Driven by the previous analysis, we introduce the Quantum Information Post-learning feature Selection (QuIPS) algorithm, which combines the insights of both of these measurements, correlations and entropy, to rank the input features according to their importance for the classification (see App. B). Employing QuIPS, we discarded half of the features by selecting the 8 most important ones: i.-iii. charge, momentum, and distance of the muon; iv.-vi. charge, momentum, and distance of the kaon; vii. charge of the pion; viii. total detected charge. To test the QuIPS performance, we compared it with an independent but more time-expensive analysis of the importance of the different particle types (see App. G): the two approaches perfectly matched. Finally, we studied two new models, one composed of the 8 most important features proposed by the QuIPS and, for comparison, another with the 8 discarded features. In Fig. 4 we show the tagging power for the different analyses with the complete 16 features, the best 8, the worst 8, and the muon tagging. Remarkably, we see that the complete model and the best-8 model give comparable results, while the worst-8 model performs even worse than the classical approach. These performances are confirmed by the prediction accuracies of the different models: while we lose only a small amount of accuracy going from the complete model to the best-8 model, the accuracy of the worst-8 model drastically drops to almost random predictions. Finally, in this particular run, the best-8 model has been trained several times faster than the complete model and predicts faster as well (the actual speed-up highly depends on the bond dimension and other hyperparameters, see App. D for details).

A critical point of interest in high-energy physics applications is the prediction time. Indeed, short prediction times are necessary to perform real-time event selection. In the LHCb Run 2 data taking, the high-level software trigger takes decisions at a very high rate Aaij et al. (2015a), and even higher rates are expected in future Runs. Consequently, we aim to exploit the QuIPS to efficiently reduce the prediction time while maintaining a comparably high prediction power. Another step we can undertake to reduce the prediction time is to reduce the bond dimension after the training procedure. Here, we introduce the Quantum Information Adaptive Network Optimisation (QIANO), which performs this truncation in a way that ensures introducing the least infidelity possible (see App. C). In other words, QIANO can adjust the bond dimension to achieve a targeted prediction time while keeping the prediction accuracy reasonably high. We stress that this can be done without re-learning a new model, as would be the case with a NN.
Finally, we apply QuIPS and QIANO to reduce the information in the TTN in an optimal way for a targeted balance between prediction time and accuracy. In Fig. 4 we show the tagging power obtained by taking the original TTN and truncating it to different bond dimensions χ. We can see that, even though we compress quite heavily, the overall tagging power does not change significantly: we only lose a small fraction of the overall prediction accuracy, while at the same time substantially improving the average prediction time. Applying the same idea to the best-8 model, we can reduce the average prediction time even further (see App. D for more details), compatible with the current real-time classification rate.
V. Conclusions
We analysed an LHCb dataset for the classification of b and b̄ jets with two different ML approaches, a DNN and a TTN. We showed that with both techniques we obtained a tagging power about one order of magnitude higher than the classical muon-tagging approach, which to date is the best published result for this classification problem. We pointed out that, even though both approaches result in similar tagging power, they treat the data very differently. In particular, the TTN efficiently recognises the importance of the presence of the muon as a strong predictor for the jet classification.
We further explained the crucial benefits of the TTN approach over DNNs, namely (i) the ability to efficiently measure correlations and the entanglement entropy, and (ii) the power of compressing the network while keeping a high amount of information (to some extent even lossless compression). We showed how the former quantum-inspired measurements help to set up a more efficient ML model: in particular, by introducing an information-based heuristic technique, we can establish the importance of single features based only on the information captured within the trained TTN classifier. Using this insight, we introduced the QuIPS, which can significantly reduce the model complexity by discarding the least important features while maintaining high prediction accuracy. This selection of features based on their informational importance for the trained classifier is one major advantage of TNs, targeting an efficient decrease of training and prediction time. Regarding the latter benefit of the TTN, we introduced the QIANO, with which, once we have learned a TTN, we can decrease its prediction time by optimally reducing its representative power based on information from the quantum entropy, ensuring that each truncation introduces the least infidelity possible. In contrast to DNNs, with the QIANO we do not need to set up a new model and train it from scratch; we can optimise the network post-learning, adaptively to the used CPU and the required prediction time of the final application.
Finally, given the importance of the prediction time in the LHCb experiment, we showed that using QuIPS and QIANO we can efficiently compress the trained TTN to target a given prediction time, substantially decreasing our prediction times. Moreover, while we used only one CPU for the predictions, by parallelising the tensor contractions on GPUs one can obtain a further significant speed-up Milsted et al. (2019). Thus, we are confident that it is possible to reach the MHz prediction rate while still obtaining results significantly better than the classical muon-tagging approach.
Further applications of our approach in the LHCb experiment include the discrimination between b jets, c jets, and light-flavour jets, which was already tackled by a Machine Learning approach using Boosted Decision Tree classifiers Aaij et al. (2015b). A fast and efficient real-time identification of b and c jets can be the key point for several studies in high-energy physics, ranging from the search for rare Higgs boson decays into two quarks, up to the search for new particles decaying into a pair of heavy-flavour quarks (b or c). Given the optimal performance of the presented method, we envisage a multitude of possible future applications in high-energy experiments at CERN and in other fields of science.

VI. Data availability
VII. Acknowledgments
We are very grateful to Konstantin Schmitz for valuable comments and discussions on the Machine Learning comparison. We thank Miles Stoudenmire for fruitful discussions on the implementation of the Tensor Networks Machine Learning code.
This work is partially supported by the Italian PRIN 2017 and Fondazione CARIPARO, the Horizon 2020 research and innovation programme under grant agreement No 817482 (Quantum Flagship  PASQuanS), the QuantERA projects QTFLAG and QuantHEP, and the DFG project TWITTER. We acknowledge computational resources by CINECA, the Cloud Veneto and by the BwUniCluster.
We acknowledge the LHCb Collaboration for the valuable help and the Istituto Nazionale di Fisica Nucleare and the Department of Physics and Astronomy of the University of Padova for the support.
Appendix A Tree Tensor Networks
Tensor Networks (TNs) have been developed for decades to investigate quantum many-body systems on classical computers. They provide an efficient representation of a quantum wavefunction in a compact form and have thereby proven to be an essential tool for a broad range of applications Schollwöck (2011); Silvi et al. (2019); McCulloch (2007); Singh and Vidal (2013); Felser et al. (2019); Dalmonte and Montangero (2016); Gerster et al. (2017). In a mathematical context, a TN approximates a high-order tensor by a set of low-order tensors that are contracted in a particular underlying geometry; it has common roots with other decompositions, such as the Singular Value Decomposition (SVD) or the Tucker decomposition Tucker (1966). The accuracy of the TN approximation can be controlled with the so-called bond dimension χ, an auxiliary dimension for the indices of the connected local tensors. Among others, some of the most successful TN representations are the Matrix Product State (MPS), or Tensor Train, Östlund and Rommer (1995); Schollwöck (2011); Oseledets (2011); Stoudenmire and Schwab (2016), the Tree Tensor Network (TTN), or Hierarchical Tucker decomposition, Gerster et al. (2014); Hackbusch and Kühn (2009); Liu et al. (2019), and the Projected Entangled Pair States (PEPS) Verstraete and Cirac (2004); Orús (2014).

In the following, we briefly describe the main principles of Tensor Networks and the concepts we refer to within the paper or later in the appendix. For a more detailed insight into Tensor Networks, we refer to more comprehensive reviews and textbooks Orús (2014); Schollwöck (2011); Silvi et al. (2019); Novikov et al. (2016); Montangero (2018).
A.1 Graphical representation of Tensor Networks
Within the original Tensor Network development and its applications in physics, a graphical representation of the underlying mathematical tensor notation has been established for the sake of compactness. In a nutshell, we represent TNs with circles for the tensors and lines connecting the different tensors. Each line, also referred to as a link, indicates a contraction of the two connected tensors over a coinciding index. Fig. 5 shows the graphical representation of tensors of different ranks: a vector, a matrix, and a rank-3 tensor from left to right at the top, and a general rank-N tensor at the bottom.
Within TN algorithms, the tensors are constantly manipulated. The most important operations are the contraction of two tensors, the reshaping of a tensor, and factorisations. The contraction of two tensors generalises the well-known matrix-matrix multiplication of linear algebra to tensors of arbitrary rank. As matrices are a special form of tensors, the matrix multiplication can be identified in terms of TN as a contraction of two rank-two tensors over one coinciding link (see Fig. 6). Generalising this statement to arbitrary tensors A and B, the tensor contraction can be performed over several coinciding links k. The resulting tensor C is given by the summation over all indices of the coinciding links:
C_{a,b} = Σ_k A_{a,k} B_{k,b},
where a and b denote the remaining open indices after the contraction.
For a general TN, a link connecting two tensors always indicates a contraction of both tensors. The algorithmic complexity of such a tensor contraction scales with the product of the dimensions of all involved links (although the scaling can be reduced when the contraction is carried out as an optimised matrix-matrix multiplication). Due to this complexity, the contractions play a crucial role in the efficiency of Tensor Network algorithms.
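The contraction above maps directly onto `numpy.einsum`; the sketch below contracts two rank-3 tensors over one shared link and checks the result against the equivalent reshaped matrix-matrix multiplication:

```python
import numpy as np

# C[a, b, c, d] = sum_k A[a, b, k] * B[k, c, d]: contraction over the link k
rng = np.random.default_rng(0)
A = rng.random((2, 3, 4))
B = rng.random((4, 5, 6))
C = np.einsum('abk,kcd->abcd', A, B)

# The same contraction carried out as an optimised matrix-matrix multiplication
# by merging the open links of each tensor into a single row/column index:
C_mat = (A.reshape(2 * 3, 4) @ B.reshape(4, 5 * 6)).reshape(2, 3, 5, 6)
# The cost scales with the product of all involved link dimensions,
# here 2*3*4*5*6 scalar multiplications.
```

This reshape-and-multiply pattern is the standard way TN libraries realise contractions efficiently.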
A.2 Tree Tensor Network representation
In its original form, the TTN represents an arbitrary pure quantum state |ψ⟩ as a decomposition of the complete, exponentially large coefficient tensor c. The corresponding separable Hilbert space of the system consists of N local subspaces, where each local state space is d-dimensional. The most general pure state in such a system can be written as
(2) |ψ⟩ = Σ_{i₁,…,i_N} c_{i₁…i_N} |i₁⟩ ⊗ |i₂⟩ ⊗ … ⊗ |i_N⟩,
where |iₖ⟩ describes the local state space of site k. This complete representation requires dᴺ coefficients, one for each possible combination of the local states. These coefficients can be recast into a tensor of rank N, where each leg of this tensor corresponds to a local Hilbert space. The normalisation of the state thereby coincides with the Frobenius norm of the rank-N tensor. The TTN then decomposes this rank-N tensor into a set of hierarchically connected rank-3 tensors.
A.3 Machine Learning with Tree Tensor Networks
Even though DNNs have been highly developed over recent decades by industry and research, the first approaches to Machine Learning with TNs already yield comparable results when applied to standardised datasets Stoudenmire and Schwab (2016); Stoudenmire (2018); Glasser et al. (2018a). In particular, we implemented a TTN as a classifier for supervised learning problems. In this section, we give insights into the TTN Machine Learning algorithm.
As for a general supervised learning problem, the data samples are given as input vectors x. Each sample is encoded into a higher-dimensional feature space by a feature map Φ(x), and subsequently classified with the decision function
(3) fℓ(x) = Wℓ · Φ(x).
In general, the complete weight tensor Wℓ can be used as a classifier; however, this tensor becomes exponentially large with an increasing number of features in the dataset. Therefore, we represent Wℓ as a quantum-inspired Tensor Network, in particular a Tree Tensor Network, building on the idea proposed for an MPS in Stoudenmire and Schwab (2016).
A TN with N sites addresses a global space spanned by a tensor product of N local subspaces. Each subspace can in general have a different dimension d. For the application in Machine Learning, a natural feature map suited for a TN is a product of local feature maps φ(xₖ), where each local map acts on the k-th feature. All local feature maps together determine the global feature map
(4) Φ(x) = φ(x₁) ⊗ φ(x₂) ⊗ … ⊗ φ(x_N).
To point out the connection with quantum mechanics in this Machine Learning ansatz, we can describe the TTN classifier as a set of quantum many-body wavefunctions ψℓ, one for each class label ℓ. Consequently, when we predict a sample we calculate its overlap ⟨ψℓ|Φ(x)⟩ for all labels ℓ with the product state given by the global feature map. The final prediction output for each class is then given by the weighted probabilities
(5) pℓ(x) = |⟨ψℓ|Φ(x)⟩|² / Σℓ′ |⟨ψℓ′|Φ(x)⟩|².
For the identification of the jets, the features are the detected physical observables from the LHCb simulation described in Section I. We rescale each of the features to [0, 1] with respect to the corresponding maximum value over all samples within the complete training set. We encode the rescaled features, following the inspiration of quantum spins, by choosing the local feature map
(6) φ(x) = (cos(πx/2), sin(πx/2)).
In this way, we can think of each single feature as being represented by a quantum spin (where the two extreme values x = 0 and x = 1 are mapped to the two basis states of the spin). Accordingly, each sample is mapped to the product state Φ(x). After the transformation, the k-th feature is addressed by the k-th site of the TTN. In general, we can exploit different, more expressive feature maps than the one chosen in Eq. (6) (see App. A.4).
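For a toy system of just two features, the decision function (3) and the weighted probabilities of Eq. (5) can be evaluated with the full weight tensor directly; this sketch (our own, not the TTN implementation) shows the mechanics, while for the real 16-feature problem this tensor would be exponentially large, which is exactly what the TTN decomposition avoids:

```python
import numpy as np

def local_map(x):
    """Local feature map of Eq. (6)."""
    return np.array([np.cos(np.pi * x / 2.0), np.sin(np.pi * x / 2.0)])

def classify(W, sample):
    """f_l(x) = <W_l, Phi(x)> for two features; W has shape (labels, 2, 2).
    Returns the weighted probabilities of Eq. (5)."""
    phi0, phi1 = local_map(sample[0]), local_map(sample[1])
    f = np.einsum('lij,i,j->l', W, phi0, phi1)   # overlap per label
    w = f**2
    return w / w.sum()
```

With a weight tensor whose label-0 component only overlaps the state Φ((0, 0)) and whose label-1 component only overlaps Φ((1, 1)), the classifier assigns those samples to the corresponding classes with full confidence.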
For the learning procedure of the TN, we aim to minimise the quadratic cost function
C = ½ Σₙ Σℓ ( fℓ(xₙ) − δℓ,ℓₙ )²,
where the index n runs over all training samples and δℓ,ℓₙ is a Kronecker delta, with ℓₙ being the correct label for the n-th sample. Thus δℓ,ℓₙ = 1 if the label ℓ equals the known label for the supervised learning. We optimise the complete network by subsequently performing a gradient descent on the local tensors until the cost function converges. We sweep through the network from the bottom to the top, so that after one sweep every tensor has been optimised once, concluding one learning iteration. In contrast to Ref. (Stoudenmire and Schwab, 2016), we keep the label fixed at the top tensor and optimise each tensor separately rather than optimising in the space of two tensors at once.
Furthermore, we initialise the TTN by performing the unsupervised learning proposed in Ref. Stoudenmire (2018) up to the topmost layer of the tree and adding a random tensor on top, connecting the remaining two bipartitions with the label for the classification. We start by optimising the random top tensor via conjugate gradient descent and afterwards start iteratively sweeping through the network from bottom to top.

A.4 Higher-Dimensional Local Feature Maps
In Eq. (6), we presented the local feature map as a 2-dimensional vector inspired by the quantum spin representation. In general, we are not restricted to this feature map, as the samples can be mapped using more expressive feature maps, e.g., taking polynomial orders or higher-order spherical feature maps of dimension d, defined component-wise as
φₛ(x) = √( C(d−1, s−1) ) [cos(πx/2)]^(d−s) [sin(πx/2)]^(s−1),  s = 1, …, d,
with C(n, k) the binomial coefficient. We analysed the data with different orders of the spherical feature map and presented in Sec. III the results obtained with the 5th-order map, as this order led to the best prediction accuracy. Anyhow, the different feature maps all result in similar prediction accuracies in the end, and the fundamental insights we obtained did not change: for instance, the 2-dimensional feature map yields an accuracy close to that of the 5th-order map (both after applying the cuts).
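A sketch of the order-d spherical map (our own helper; the binomial weights ensure a normalised vector for any x, and d = 2 recovers the spin map of Eq. (6)):

```python
import numpy as np
from math import comb

def spherical_map(x, d):
    """Order-d spherical local feature map; returns a d-component unit vector.
    The binomial expansion of (cos^2 + sin^2)^(d-1) = 1 guarantees unit norm."""
    c, s = np.cos(np.pi * x / 2.0), np.sin(np.pi * x / 2.0)
    return np.array([np.sqrt(comb(d - 1, k)) * c**(d - 1 - k) * s**k
                     for k in range(d)])
```

Higher d enlarges the local Hilbert space and thus the expressivity of the classifier, at the cost of larger tensors.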
In Sec. IV we used the 2-dimensional feature map for the insights into the TTN obtained by measuring the correlations and the entropy. We stress that the operators used to measure correlations have to be adapted to the local Hilbert space as well. For spherical feature maps, we can exploit the Pauli matrices for d = 2, the Gell-Mann matrices for d = 3, or higher representations of the corresponding Lie group in order to investigate correlations in the classification.
A.5 Isometrisation
Here, we restrict ourselves to rank-3 tensors for the sake of compactness and since the Tree TN is composed of rank-3 tensors only. The generalisation to rank-N tensors is straightforward.
A tensor $T_{ijk}$ of the TTN is isometrised with respect to the link $k$ if it is a unitary matrix when combining the two indices $i$ and $j$, that is, it obeys the isometry condition

$$\sum_{i,j} T^{*}_{ijk}\, T_{ijk'} = \delta_{kk'}. \qquad (7)$$

Hence, one isometrised tensor performs a unitary transformation on the two subspaces attached to the links $i$ and $j$.
We can isometrise an arbitrary TTN by iteratively performing QR decompositions Silvi et al. (2019) on each tensor of the tree from the bottom to the top. In particular, when going from the bottom to the top, we keep the unitary part as the new tensor and contract the remaining triangular part upwards into the connected tensor over the link. This procedure results in all tensors being isometrised except for the uppermost one; the TTN is then isometrised towards the uppermost tensor. In the same manner, we can isometrise the TTN towards different tensors within the network.

After we train a TTN in the Machine Learning application, we isometrise the complete TTN towards the uppermost tensor for the predictions. Consequently, the prediction is a real-space renormalisation of two neighbouring sites for each layer within the tree Montangero (2018); Tagliacozzo et al. (2009). Each tensor simply performs a unitary transformation together with a truncation on the two sites originating from the input sample. Consequently, when accessing the tensor entries, we know exactly how the data is processed for the prediction.
A.6 Schmidt Decomposition
In a loop-free TN state, every link between two tensors in the network bipartitions the underlying system into two subsystems $A$ and $B$. This allows rewriting the TN at every link in terms of the Schmidt decomposition

$$|\psi\rangle = \sum_{\alpha=1}^{\chi} \lambda_\alpha\, |\phi^A_\alpha\rangle \otimes |\phi^B_\alpha\rangle, \qquad (8)$$

with $\chi$ being the bond dimension of the link and $\lambda_\alpha$ the Schmidt coefficients (the non-zero, normalised singular values of the decomposition). Thus, at each bipartition, the bond dimension of the link provides an upper bound for the Schmidt rank and consequently for the bipartite entanglement the TN is able to capture (compare App. B.2). Each of the two sets of Schmidt vectors forms an orthonormal basis for the associated subspace Nielsen and Chuang (2000). In practice, we exploit the isometry condition for the tensors within the TTN by isometrising towards one of the tensors attached to the link of the desired bipartition. The Schmidt values then correspond to the non-zero singular values of an SVD performed, with respect to that link, on the tensor the TTN is isometrised towards.
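A minimal numerical illustration of Eq. (8) via the SVD (the bipartition dimensions are illustrative):

```python
import numpy as np

# Schmidt decomposition across a bipartition A|B of a normalised state,
# reshaped into a matrix of shape (dim_A, dim_B).
rng = np.random.default_rng(1)
psi = rng.standard_normal((4, 8))
psi /= np.linalg.norm(psi)                 # normalise the state

U, s, Vh = np.linalg.svd(psi, full_matrices=False)
lam = s[s > 1e-12]                         # non-zero Schmidt coefficients

assert abs(np.sum(lam ** 2) - 1) < 1e-12   # normalisation of the state
assert len(lam) <= min(psi.shape)          # Schmidt rank bounded by dimensions
# columns of U (rows of Vh) are the orthonormal Schmidt vectors of A (B)
assert np.allclose(U.T @ U, np.eye(len(s)))
```

The bound `len(lam) <= min(psi.shape)` is exactly the statement that the bond dimension of the link caps the Schmidt rank and hence the bipartite entanglement.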
As we will see in the next section, this Schmidt decomposition allows us to calculate the information encoded in the TN and, based on this information, to efficiently reduce the complexity of the Machine Learning model.
Appendix B Quantum Information Post-learning Feature Selection (QuIPS)
In this section, we introduce the Quantum Information Post-learning Feature Selection (QuIPS), a protocol that exploits the information encoded in the TTN to reduce the input features of the model to the most valuable ones for the classification process. In particular, the quantum-inspired measurements of correlations and entropy are used to determine the importance of the different input features after the learning procedure, based on the information they provide for the classification. Finally, the QuIPS allows us to rank all input features according to their importance and to use this ranking to efficiently reduce the model by discarding the least important features.
In the following, we first describe the two exploited quantum-inspired measurements, the correlations and the von Neumann entropy, and finally give an algorithmic protocol for the QuIPS.
B.1 Correlation Measurements
We can measure the correlations captured within the TTN classifier by exploiting quantum correlation measurements. As we chose the local map to represent the input features as quantum spins, we measure the correlations in the corresponding spin basis for each pair of features (located at sites $i$ and $j$), defined as follows:
The correlation results in $+1$ if the TTN recognises that the two local features $i$ and $j$ are completely correlated, such that the rescaled input of feature $i$ always equals the rescaled input of feature $j$. We obtain $-1$ for the two local features being completely anticorrelated, such that one rescaled input is always the mirror image of the other. Finally, we obtain $0$ in case the two local features are completely uncorrelated. Thus, in case of no correlation, we know that the two features may provide fundamentally different information for the classification. Either way, we cannot tell whether the information given by the two features is actually important for the classification itself. On the contrary, in the case of complete (anti)correlation, the two features provide the same information and we can drop one of them in further analysis. As an example, consider learning from pictures where the first two pixels are always white in the complete data. We will measure that both pixels are totally correlated, and we know we can discard at least one of them, as we can always reconstruct its information from the other one. However, in this case, both pixels give us no valuable information for the actual classification problem, and we could even go further and discard both. Measuring the correlation only, we have no insight into this. But when we take the entropy measurement of the subsequent section into account, we can measure the information provided for the classification and efficiently discard both pixels in this scenario.
This idea for the correlation measurement can be extended to different local feature maps (see App. A.4) by using different operators as correlators. We note as well that this correlation measurement is based purely on the information captured within the TTN after the learning procedure.
In principle, we can further measure correlations or local expectation values in a complementary basis. This can help to find further correlations within the data and can give insights for choosing the local feature map. If we find higher correlations in such a basis for certain features, it might be worthwhile to actually change the input basis of the local feature map accordingly.
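A purely data-side illustration of the limiting cases $\pm 1$ described above (in the paper the correlation is measured on the trained TTN, not on the raw data; this sketch only checks the intuition on rescaled inputs in $[0, 1]$):

```python
import numpy as np

def correlation(xi, xj):
    """Normalised correlation of two rescaled features in [0, 1]:
    +1 for identical inputs, -1 for mirrored inputs (x_j = 1 - x_i),
    near 0 for independent inputs."""
    a = 2 * np.asarray(xi) - 1        # map [0, 1] -> [-1, 1]
    b = 2 * np.asarray(xj) - 1
    return float(np.mean(a * b) / np.sqrt(np.mean(a**2) * np.mean(b**2)))

x = np.linspace(0, 1, 101)
assert abs(correlation(x, x) - 1) < 1e-12       # completely correlated
assert abs(correlation(x, 1 - x) + 1) < 1e-12   # completely anticorrelated
```

In the completely (anti)correlated cases, one of the two features can be dropped without information loss, exactly as argued in the pixel example above.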
In the paper, we presented the correlations for one of the two quark flavours. In Fig. 7 (a) we show the correlations for the other flavour for the sake of completeness. As mentioned before, both cases are very similar and only differ slightly in the magnitude of the individual correlations.
We can further generalise the correlation measurement and compute the cross-correlations between the two feature spaces of two different classes
B.2 Entropy Measurements
Within the TTN, we also have access to the entanglement entropy, which expresses the correlations between two bipartitions of the whole system. To compute it, we bring the state represented by the TTN into an orthogonal form via the Schmidt decomposition (see App. A.6). The von Neumann entropy is then defined by the Schmidt coefficients as $S = -\sum_\alpha \lambda_\alpha^2 \ln \lambda_\alpha^2$. Consequently, the minimal entropy $S = 0$ is obtained only if we have one single non-zero singular value $\lambda = 1$. In this case, we can completely separate the two bipartitions, as they share no information. This idea can be interpreted in the Machine Learning context: if the features in one bipartition provide no valuable information for distinguishing between the different classes, the entropy is zero. On the other hand, the entropy increases the more information between the two bipartitions is used for the classification. This criterion can be used to optimise the learning procedure: in the first scenario, we can efficiently discard the features carrying no (or negligibly little) information and introduce a new model with fewer features and correspondingly fewer tensors. With this more compact model, we are able to obtain the same predictions while requiring fewer contractions and thereby less time. The second scenario, where the entropy is high, helps us to understand which features (and, more generally, which combinations of features) are critically important for correct predictions.
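The entropy computation from the Schmidt coefficients can be sketched as:

```python
import numpy as np

def von_neumann_entropy(schmidt):
    """S = -sum_k lambda_k^2 ln(lambda_k^2), from the normalised
    Schmidt coefficients of a bipartition."""
    p = np.asarray(schmidt, dtype=float) ** 2
    p = p[p > 1e-15]                 # drop numerically zero weights
    return float(-np.sum(p * np.log(p)))

# one non-zero Schmidt value -> product state, zero shared information:
assert von_neumann_entropy([1.0]) == 0.0
# two equal coefficients -> maximal entropy ln(2) for bond dimension 2:
assert abs(von_neumann_entropy([2**-0.5, 2**-0.5]) - np.log(2)) < 1e-12
```

The first case corresponds to a bipartition whose features carry no information for the classification and can be discarded; the second to a maximally informative bipartition at bond dimension two.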
In fact, we can further exploit the mutual information

$$I_{ij} = S_i + S_j - S_{ij}$$

of two different features $i$ and $j$ attached to the same tensor by measuring the entanglement entropy on a higher layer within the tree, as illustrated in Fig. 8 for an example of the classification. Here, $S_i$ ($S_j$) is the corresponding entropy for the single features and $S_{ij}$ the entropy of the combined features. If we see that, e.g., two features provide the same entropy $S_i = S_j$, and additionally the coarse-grained space consisting of both features provides the exact same entropy $S_{ij} = S_i$, we know that the information we obtain from the two features is equivalent. Thus, in this case, we can neglect one of the two (in agreement with the correlation measurement). The same idea can be extended to the mutual information of different clusters of input features and opens a very promising direction for a deeper understanding of the information captured within the TTN.
B.3 QuIPS Protocol
Both above-mentioned measurements, the correlations together with the entanglement entropy, leave us with the following insights and recipe for increasing the model efficiency using QuIPS:

- If two learned features are completely (anti)correlated, we can neglect at least one of them without loss of classification accuracy.
- If the entropy for a bipartition (set of features) is low, we can discard all of its features.
- If the entropy is significantly large, we may reorder the features for a better representation of the classifying wave function.
Therefore, the QuIPS can be described by the following algorithmic idea for setting up the new model with a targeted number of total features:

1. Add the feature with the highest entropy to the new model.
2. While the number of features in the new model is below the target number:
   a. Take the next feature j with the highest entropy.
   b. If feature j is not completely correlated with a feature already in the new model, add feature j.
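A runnable sketch of this greedy selection loop (the correlation threshold and all names are illustrative assumptions, not values from the paper):

```python
import numpy as np

def quips_select(entropies, corr, target, corr_threshold=0.99):
    """Greedy sketch of the QuIPS selection loop: rank features by their
    single-feature entropy and skip any feature that is (anti)correlated
    above `corr_threshold` with an already-selected one."""
    order = np.argsort(entropies)[::-1]          # highest entropy first
    selected = []
    for j in order:
        if len(selected) >= target:
            break
        if all(abs(corr[j, k]) < corr_threshold for k in selected):
            selected.append(int(j))
    return selected

# toy example: feature 2 duplicates feature 0, so it is skipped
entropies = np.array([0.9, 0.5, 0.89, 0.1])
corr = np.eye(4)
corr[0, 2] = corr[2, 0] = 1.0                    # features 0 and 2 identical
print(quips_select(entropies, corr, target=2))   # → [0, 1]
```

Feature 2 carries almost the same entropy as feature 0 but is completely correlated with it, so the loop discards it in favour of the next-ranked uncorrelated feature, mirroring the recipe above.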
Summarising, the QuIPS offers valuable insights into the learned data; in particular, it provides an information-based heuristic for the importance of the single features based on the information within the trained TTN. Thereby, we can significantly reduce the model complexity by discarding the least important features while still maintaining high prediction accuracy. As an outlook for further investigation in this direction, we can include the mutual information into the heuristic and also place more weight on the ordering of the features. As the TTN classifier breaks the positional symmetry of the input features, this ordering of features can lead to critical gains in performance as well.
Furthermore, an interesting approach is to exploit different metrics for measuring and describing the captured information. For instance, we might use the Kullback-Leibler divergence Kullback and Leibler (1951), which is a more prominent measure in Machine Learning.

Appendix C Quantum Information Adaptive Network Optimization (QIANO)
We introduce the Quantum Information Adaptive Network Optimization (QIANO), which performs a reduction of the free parameters in the network in a way that ensures keeping the highest amount of information possible. In other words, with QIANO we can adjust the bond dimension of the TTN classifier targeting a certain prediction time while keeping the prediction accuracy reasonably high. Moreover, this adjustment can be done without the need to relearn a new model, as would be the case with neural networks. The underlying procedure for this truncation of a TTN is the Singular Value Decomposition (SVD), which is applied to the tensors within the network.
C.1 Singular Value Decomposition
Every complex matrix $A$ can be decomposed as $A = USV^\dagger$ into a matrix $U$ featuring orthonormal columns, a diagonal matrix $S$, and a matrix $V^\dagger$ with orthonormal rows.

The orthonormal columns of $U$, also referred to as left singular vectors of $A$, obey $U^\dagger U = \mathbb{1}$. Analogously, the orthonormal rows of $V^\dagger$, or right singular vectors of $A$, entail $V V^\dagger = \mathbb{1}$. The diagonal matrix $S$ contains the singular values of $A$ (real, non-negative). The number of non-zero singular values equals the Schmidt rank of the matrix $A$. With a descending order of the singular values, the matrix $S$ is unique for a given $A$, which is in general not the case for $U$ and $V$.
The SVD provides the best possible approximation of a matrix $A$ by a matrix of lower rank with respect to the Frobenius norm $\|A\|_F$. Indeed, inserting the SVD leads to

$$\|A\|_F^2 = \sum_{k=1}^{r} s_k^2,$$

with $r$ denoting the Schmidt rank of the matrix $A$. Thus the squared Frobenius norm equals the sum of the squared non-zero singular values $s_k$. Consequently, the error made by a lower-rank approximation is minimal when taking the highest singular values into account, together with their corresponding singular vectors in $U$ and $V^\dagger$. If the singular values are arranged in descending order, the error in the Frobenius norm can be calculated from the discarded values Schollwöck (2011).
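A quick numerical check of this truncation-error statement (the matrix and rank are illustrative):

```python
import numpy as np

# The Frobenius error of the best rank-r approximation equals the sum
# of the squared discarded singular values.
rng = np.random.default_rng(2)
A = rng.standard_normal((6, 8))

U, s, Vh = np.linalg.svd(A, full_matrices=False)
r = 3
A_r = U[:, :r] @ np.diag(s[:r]) @ Vh[:r, :]   # keep the r largest values

err2 = np.linalg.norm(A - A_r, 'fro') ** 2
assert abs(err2 - np.sum(s[r:] ** 2)) < 1e-10
```

This is exactly the property QIANO relies on below: the infidelity introduced by each link truncation is controlled by the discarded singular values.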
The SVD generalises straightforwardly to general tensors by splitting a tensor with respect to two different sets of its links.
C.2 TTN Truncation
When we truncate one link in the TTN to a lower bond dimension, we isometrise the network so as to form the Schmidt decomposition via an SVD on the tensor attached to that link. We then truncate by discarding the smallest singular values. The error made in this truncation can be estimated accurately, since the induced loss in state fidelity explicitly depends on the sum of the squared discarded singular values. We can proceed by truncating all the links in the network in the same manner, reducing the sizes of the tensors and the total parameter space of the TTN representation.
In this process of truncation, we stress again that the local truncation of each link ensures the mathematically best possible approximation within the lower subspace with respect to the total fidelity of the TTN state. However, for the global truncation we point out that, as we iterate through the complete network with local truncations, the ordering of the local truncations may play a role in the final approximation. Even though this approach has proven to be extremely efficient (see the application to the LHCb data in App. D), the global truncation might be performed in an even further optimised way.
C.3 QIANO Protocol
The QIANO implements a truncation of the TTN, as described above, after the training procedure, thereby reducing the free parameters in the network while introducing the least possible infidelity in each truncation step. In other words, one adjusts the bond dimension targeting a certain prediction time while keeping the critical amount of information captured by the TTN (and thereby a high prediction accuracy) within the smaller subspace in which we represent our quantum wave-function classifier. A prediction with the truncated TTN amounts to performing a set of contractions, i.e. vector-matrix multiplications, on lower-dimensional tensors and can thus be executed in a lower computational time. Performing a QIANO, we represent the TTN classifier more compactly based on the quantum information captured within the TN ansatz, resulting in a classifier that is more efficient with respect to the trade-off between prediction time and prediction accuracy. We can perform this reduction of free parameters without the need to relearn a new model, as would be the case with neural networks when targeting a certain prediction time: we train the TTN once with a maximum bond dimension and then truncate it, depending on the CPU architecture, to the bond dimension required to obtain the targeted prediction time.
This way of reducing the information in the TTN, maintaining an optimal balance between prediction time and accuracy, is of great value in a broad range of Machine Learning applications: the TTN offers the flexibility to adjust the prediction time depending on the requirements and the architecture of the computational system. For the analysis of the CERN data, we provide in App. D a deeper insight into the actual speed-up, including the measured prediction times for different bond dimensions (see Tab. 1).
C.4 Equivalence with DNNs
It has been shown that artificial NNs can be mapped onto TNs Glasser et al. (2018a); Robeva and Seigal (2017); Chen et al. (2018); Glasser et al. (2018b). Following the idea of this graph-based mapping, the post-learning reduction of free parameters by truncating the TTN is equivalent to reducing the number of free parameters in the DNN in a way that optimally preserves the amount of information represented within the TTN. Thus, this would not only include the dropout of different weights and neurons in the DNN, but furthermore, to some extent, a restructuring of the neuronal connections, resulting in a different model. With the TTN, this post-learning reduction of free parameters can be done without the need to relearn a new model, as would in general be necessary for DNNs.
Appendix D Efficient Prediction-Time Speedup with QuIPS and QIANO
In the following, we present the application of the QuIPS and QIANO protocols to the LHCb problem in more detail.
D.1 Speedup for CERN Data
Here, we consider both approaches to analyse the speed-up: discarding features with QuIPS and optimising the representation power of the TTN with QIANO. In Tab. 1 we show the different results for the complete set of 16 features and for the best 8 features, in both cases varying the bond dimension. Interestingly, when we truncate the TTN classifier starting from a high bond dimension, we lose no prediction accuracy; in fact, we actually increase the accuracy on the test set, which might be due to the tendency to overfit with increasing bond dimension. We notice a bigger drop in the prediction accuracy only when we truncate down to very small bond dimensions, thus gaining almost an order of magnitude in speed-up without decreasing the precision. Interestingly, when we train from the beginning with a small bond dimension, we are not able to achieve the same accuracy as we obtain after truncating a model learned with a larger bond dimension.
The presented calculations were done at the CINECA Marconi Knights Landing cluster on one single processor core (Intel Xeon 8160, SkyLake at 2.10 GHz).
Bond dim. | 16 feat. time (s) | Accuracy (%) | 8 feat. time (s) | Accuracy (%)
Furthermore, the performance of tensor contractions can be improved by a factor that grows with the bond dimension when executed in parallel on GPUs rather than CPUs Milsted et al. (2019). Thus, combining the TTN truncation and the post-analysis feature selection with a CPU, GPU, or FPGA architecture optimally designed for TNs, we are confident that we can reach the order of prediction times required for real-time applications in the LHCb experiment.
D.2 Minimum CPU Time
During the prediction, we can perform the contractions in each layer in parallel. Hence, in the following calculation of a lower bound for the CPU time, we assume perfect parallelisation and only take the contractions of one tensor within each layer into account. This contraction consists of two matrix-vector multiplications: the first with the fused tensor matrix and the input vector of one child link, the second with the resulting matrix and the input vector of the other child link. The relevant dimensions are those of the downward-directed links in each layer (the input layer carries the local feature dimension; the link upwards of the last layer has the dimension of the classification problem).
Therefore, the number of floating-point operations which cannot be parallelised at a higher level, but only within the execution of the single matrix-vector multiplications, is
Thus, taking, for instance, a TTN with 8 input features, we can calculate the number of floating-point operations required for predicting a sample. In this case, the 4 tensors in the lowest layer each address two local input feature spaces and merge them into a coarse-grained space. In the next layer, the coarse-grained spaces are merged further, and the last contraction projects the samples onto the output space of the class labels. The total number of floating-point operations is small enough that an ordinary CPU core can execute it well within a microsecond.
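The per-prediction FLOP estimate for a binary TTN can be sketched as follows (counting one multiply-add as two floating-point operations; the dimensions in the example are illustrative, so the numbers need not match the table values):

```python
import math

def prediction_flops(n_features, local_dim, chi, n_labels):
    """Lower-bound FLOP count under perfect layer parallelism: per layer,
    only one tensor contraction counts, performed as two matrix-vector
    multiplications (an m x n multiply costs ~2*m*n floating-point ops)."""
    n_layers = int(math.log2(n_features))   # binary tree over the features
    flops = 0
    d_in = local_dim
    for layer in range(1, n_layers + 1):
        # upward link: capped by chi, except the top link (label space)
        d_out = n_labels if layer == n_layers else min(chi, d_in * d_in)
        # contract T[d_out, d_in, d_in] with two input vectors of size d_in:
        flops += 2 * d_out * d_in * d_in    # first matrix-vector multiply
        flops += 2 * d_out * d_in           # second matrix-vector multiply
        d_in = d_out
    return flops

# e.g. 8 features, local dimension 2, bond dimension 8, binary labels:
print(prediction_flops(8, 2, 8, 2))   # → 656
```

Dividing such a count by a CPU core's sustained GFLOPS rate gives the kind of nanosecond-scale lower bound discussed in the text.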
This lower bound for the prediction time is very encouraging for real-time event selection in the LHCb experiment. In particular, the high-level software trigger in the Run 2 data-taking has to take a decision approximately every 1 μs Aaij et al. (2015a), and higher rates, on nanosecond timescales, are expected in future Runs. Thus, with the bond dimension of this calculation example, we can reach these time scales, and we showed in the paper that using QuIPS and QIANO we still obtain results several times more efficient than the muon tagging for such a TTN.
Finally, let us mention that we did not consider the parallelisation of the actual vector-matrix multiplications in this lower bound for the CPU time. Furthermore, this is a purely theoretical bound, neglecting, for instance, the time needed to copy data into the cache and any overhead from the implementation.
Appendix E Detailed description of the DNN
The DNN networks studied here were implemented using the Keras Chollet et al. (2015) framework with the TensorFlow Abadi et al. (2015) backend. The network was built by alternating a Dropout layer after each Dense layer, starting with a Batch Normalization layer. As mentioned before, we use 60% of the total data (totalling about 700k samples) for the learning procedure and 40% of it for the final testing and estimation of the performance. We further divide the learning data into a training (60%) and a validation (40%) set. The hyperparameters of the network (depth, number of nodes per layer, dropout rate, normalisation momentum, and kernel initialisation) were tuned using the hyperopt Bergstra et al. (2013) package, exploring different parameter spaces in order to maximise the accuracy on the validation dataset. The ReLU activation was used for the hidden layers, while a sigmoid is used for the output node. The network was trained with both the ADAM and SGD optimisers, optimising their learning rates.
The final chosen network architecture (see Fig. 10) consists of three pairs of dense plus dropout layers with 96 nodes per layer and a 0.1 dropout rate. The optimisation was done with ADAM and a learning rate of 0.0001. The model was trained for a maximum of 250 epochs; early stopping on the validation-set loss with a patience parameter of 25 epochs was used. The model used for evaluating the performance on the test set is the one with the best performance on the validation set.
Further, we investigated the use of different cost functions for the network optimisation. We performed the DNN optimisation with the cross-entropy loss function and with the Mean Squared Error (MSE), with which the TTN is trained as well. In the end, both cost functions lead to similar results in the prediction accuracy, the tagging power, and the probability distribution (see Fig. 11). Introducing the cuts in the evaluation of the tagging power, we obtain comparable tagging powers for the cross-entropy loss and for the MSE.

Appendix F Check for biases in physical quantities
Once the performances of the TTN and DNN algorithms have been established, it is necessary to check for biases in the description of the main physical quantities related to jet physics: in this way, we can probe the feasibility of these new tagging methods for physics analyses. Typical quantities describing jets are the transverse momentum (the momentum perpendicular to the beam-axis direction) and the pseudorapidity (defined from the polar angle as in Sec. I); cuts are applied to these quantities to ensure that both jets are contained in the LHCb acceptance. Every simulated event contains two jets generated by a quark-antiquark pair, labelled accordingly. To perform this check it is sufficient to require the TTN and the DNN to classify the flavour of just one jet: once one jet's flavour has been established, we know the flavour of the other jet.
In Fig. 12 the jet transverse-momentum and pseudorapidity distributions are shown for jets generated by either quark flavour, all normalised to one. Results are shown for the TTN and DNN methods, compared to the so-called Monte Carlo truth (MC), which is the set of true jet features resulting from the simulation process. From the plots it is clear that no evident biases are present. Therefore we can conclude that not only do the TTN and the DNN methods perform better than the usual muon-tagging method (as seen by plotting the tagging power), but they also properly describe the physics behind the studied processes. This check supports using a TTN to perform physics analyses, such as measuring the charge asymmetry in pair production.
Appendix G Analysis with single particles
In the following, we present our study of the importance of the different particle types (which we recall to be electrons, pions, protons, kaons, and muons, together with their respective antiparticles) for the final classification. In particular, we look at the contribution of every single particle, including all three of its corresponding features (relative momentum, charge, and relative distance), to the tagging power. This study has been performed for our TTN algorithm and provides (i) a deeper insight into the interesting features from the physical point of view, and (ii) further information to validate our feature reduction using the QuIPS protocol, which, combining correlation and entropy arguments, reduces the number of features considered without sensibly decreasing the final accuracy and tagging power.
In the first part of this study, we discard one particle type at a time from the analysis. Thus, we learn using only the remaining 4 particle types and the total charge, resulting in 5 different simulations, one for each discarded particle type. In order to deselect a particle, we set all of its input features explicitly to zero, both in the training and in the test dataset.
In Fig. 13 the tagging power of the TTN analysis including all particles, and when discarding the features of one particle type, is shown. It is evident that by removing the kaon from the classification we have a clear loss in the tagging power over the complete range of the transverse momentum of the jet. This loss is even more significant at low jet transverse momentum. For the other particles, a clear loss exceeding the statistical uncertainty can only be found in certain ranges of the jet transverse momentum, such as for the pion at high values, or for the muon in the middle range. In order to properly understand the importance of the different particles, we can further compute the tagging power by considering only the features of one particle at a time.
In Fig. 13 we show the tagging power when using just one of the particle types for the classification: the contribution of the kaon at low jet transverse momentum is evident, meaning that in order to get high values of the tagging power we need to exploit the features of the kaon. Here we can see again, even more clearly, the contribution of the pion at high transverse momentum of the jet, and of the muon in the middle range, while the features of the proton do not play an important role in the tagging power. Interestingly, the TTN analysis using only the muon (lime curve) is still more efficient than the usual muon tagging: this is due to the fact that, while the muon-tagging algorithm considers only the charge of the muon, the TTN also considers its other features, such as the relative momentum and the relative distance.
Concluding this study, we found that the kaon is the most crucial particle for the classification, followed by the muon and, at very high transverse momentum of the detected jet, by the pion. These insights align perfectly with the quantum-information-based insights provided by our QuIPS protocol: next to the total charge of the jet, the QuIPS suggests for this problem to use all available features of the kaon and muon, together with one feature (the charge) of the pion, when reducing the total number of features from 16 to 8.
Appendix H Other comparisons between TTN and DNN
When investigating the performances of the TTN and DNN methods, the two algorithms gave the same performance (their accuracy and tagging power are the same within the statistical error), but we showed by the different shapes of the TTN and DNN confidence distributions that they exploit different kinds of information. Further, we found some correlation between the outputs of the two algorithms. Here, we investigate this correlation in the predictions of the different methods further by computing the so-called confusion matrix, a graphic comparison between the outputs of the two classifiers.
In Fig. 14 the confusion matrix for the TTN and the DNN methods is shown: the output of every jet analysed by the DNN (which can be the quark hypothesis, the antiquark hypothesis, or NC for a non-classified jet) is compared to the output of the same jet classified by the TTN. Jets are classified as NC as a consequence of applying cuts to the confidence distributions to maximise the tagging power. Despite the outputs (i.e. the confidence distributions) of the TTN and the DNN being different, the two algorithms tend to classify jets in the same way: whenever the DNN classifies a jet as generated by one flavour, it is very unlikely that the TTN classifies it as the opposite flavour, and vice versa; moreover, only in a minority of the cases is a jet classified by just one classifier while the other one leaves it unclassified. As a last remark, when one classifier does not classify a jet (for the TTN this corresponds to the central row), the other one does not classify the same jet in the majority of the cases, meaning that the TTN and the DNN classify (and do not classify) jets in the same way for the majority of the data. This aspect is also confirmed by Fig. 14, where no cuts on the confidence distributions are applied: when the TTN classifies a jet as one flavour (the other), so does the DNN 95% (94%) of the times.
We further checked the results of the two classifiers against the true labels coming from the MC simulation (i.e. the accuracy of the algorithms). In Fig. 14 the confusion matrices between the true labels and the DNN and TTN outputs, respectively (without cuts on the confidence distributions), are shown: in 64% of the cases the jet flavour is correctly classified, both for the TTN and the DNN. The same comparison is shown for the muon-tagging approach: the jet flavour is correctly classified 75% of the times, but it is evident that the number of classified jets is strongly reduced.
In Fig. 15 the confusion matrices between the two classifiers and the true labels are shown, with the cuts maximising the tagging power applied to the confidence distributions: when applying the cuts, the fraction of correctly classified jets increases further, both for the TTN and the DNN. Therefore, we can conclude that both the TTN and the DNN have almost the same accuracy as the muon-tagging algorithm, but they are able to process a much larger number of jets, resulting in greater efficiency and therefore greater values of the tagging power.
As a last comparison between the two classifiers, we consider the Receiver Operating Characteristic (ROC) curve, in order to check the ability of the TTN and the DNN to classify the two jet flavours as the discrimination threshold is varied. We therefore plot the rate of correctly tagged jets (the True Positive Rate, TPR) against the rate of wrongly tagged jets (the False Positive Rate, FPR). In Fig. 16 the two ROC curves are plotted and compared with the so-called line of no discrimination, which represents a randomly guessing classifier: the two ROC curves for the TTN and the DNN are practically coincident, and the Area Under the Curve (AUC) is almost the same for the two classifiers.

This last check further confirms the similarity between the TTN and the DNN in tagging the two jet flavours, despite their relying on quite different confidence distributions.
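A minimal sketch of how such ROC curves and the AUC can be computed (an illustrative implementation, not the analysis code used for Fig. 16):

```python
import numpy as np

def roc_curve(scores, labels):
    """TPR and FPR at every score threshold; labels are 1 for the
    positive class (e.g. one jet flavour), 0 otherwise."""
    order = np.argsort(scores)[::-1]            # highest confidence first
    labels = np.asarray(labels)[order]
    tpr = np.cumsum(labels) / labels.sum()      # true-positive rate
    fpr = np.cumsum(1 - labels) / (1 - labels).sum()
    return np.concatenate([[0.0], fpr]), np.concatenate([[0.0], tpr])

def auc(fpr, tpr):
    """Area under the ROC curve by the trapezoidal rule."""
    return float(np.sum((fpr[1:] - fpr[:-1]) * (tpr[1:] + tpr[:-1]) / 2))

# a perfectly separating classifier has AUC 1; random guessing gives ~0.5
fpr, tpr = roc_curve([0.9, 0.8, 0.2, 0.1], [1, 1, 0, 0])
assert abs(auc(fpr, tpr) - 1.0) < 1e-12
```

The line of no discrimination corresponds to `tpr == fpr`, i.e. an AUC of 0.5.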
References

Bishop (1996)
C. M. Bishop,
Neural Networks for Pattern Recognition
(Oxford University Press, 1996).  Haykin et al. (2009) S. S. Haykin et al., Neural networks and learning machines, Vol. 3 (Pearson, 2009).

Nielsen (2015) M. A. Nielsen, Neural networks and deep learning (Determination press, 2015).
 Goodfellow et al. (2016) I. Goodfellow, Y. Bengio, and A. Courville, Deep Learning (MIT press, 2016).
 LeCun et al. (2015) Y. LeCun, Y. Bengio, and G. Hinton, Nature 521, 436 (2015).
 Silver et al. (2016) D. Silver, A. Huang, C. J. Maddison, A. Guez, L. Sifre, G. Van Den Driessche, J. Schrittwieser, I. Antonoglou, V. Panneershelvam, M. Lanctot, et al., Nature 529, 484 (2016).
 Carleo et al. (2019) G. Carleo, I. Cirac, K. Cranmer, L. Daudet, M. Schuld, N. Tishby, L. Vogt-Maranto, and L. Zdeborová, Reviews of Modern Physics 91 (2019), 10.1103/revmodphys.91.045002.
 Deng et al. (2017) D.L. Deng, X. Li, and S. Das Sarma, Physical Review B 96 (2017), 10.1103/physrevb.96.195145.
 Nomura et al. (2017) Y. Nomura, A. S. Darmawan, Y. Yamaji, and M. Imada, Physical Review B 96 (2017), 10.1103/physrevb.96.205152.
 Carleo and Troyer (2017) G. Carleo and M. Troyer, Science 355, 602–606 (2017).
 Schuld and Petruccione (2018) M. Schuld and F. Petruccione, Supervised Learning with Quantum Computers (Springer, Cham, 2018).
 Das Sarma et al. (2019) S. Das Sarma, D.L. Deng, and L.M. Duan, Physics Today 72, 48–54 (2019).
 Stoudenmire (2018) E. M. Stoudenmire, Quantum Science and Technology 3, 034003 (2018).
 Collura et al. (2019) M. Collura, L. Dell’Anna, T. Felser, and S. Montangero, “On the descriptive power of neural networks as constrained tensor networks with exponentially large bond dimension,” (2019), arXiv:1905.11351 [quant-ph].
 Chen et al. (2018) J. Chen, S. Cheng, H. Xie, L. Wang, and T. Xiang, Physical Review B 97 (2018), 10.1103/physrevb.97.085104.
 Levine et al. (2017) Y. Levine, D. Yakira, N. Cohen, and A. Shashua, “Deep learning and quantum entanglement: Fundamental connections with implications to network design,” (2017), arXiv:1704.01552 [cs.LG] .
 McCulloch (2007) I. P. McCulloch, Journal of Statistical Mechanics: Theory and Experiment 2007, P10014 (2007).
 Schollwöck (2011) U. Schollwöck, Annals of Physics 326, 96–192 (2011).
 Singh and Vidal (2013) S. Singh and G. Vidal, Phys. Rev. B 88, 115147 (2013).
 Dalmonte and Montangero (2016) M. Dalmonte and S. Montangero, Contemporary Physics 57, 388 (2016).
 Gerster et al. (2017) M. Gerster, M. Rizzi, P. Silvi, M. Dalmonte, and S. Montangero, Phys. Rev. B 96, 195123 (2017).
 Bañuls et al. (2019) M. C. Bañuls, R. Blatt, J. Catani, A. Celi, J. I. Cirac, M. Dalmonte, L. Fallani, K. Jansen, M. Lewenstein, S. Montangero, C. A. Muschik, B. Reznik, E. Rico, L. Tagliacozzo, K. Van Acoleyen, F. Verstraete, U. J. Wiese, M. Wingate, J. Zakrzewski, and P. Zoller, “Simulating lattice gauge theories within quantum technologies,” (2019), arXiv:1911.00003 [quant-ph].
 Silvi et al. (2019) P. Silvi, F. Tschirsich, M. Gerster, J. Jünemann, D. Jaschke, M. Rizzi, and S. Montangero, SciPost Phys. Lect. Notes , 8 (2019).
 Felser et al. (2019) T. Felser, P. Silvi, M. Collura, and S. Montangero, “Two-dimensional quantum-link lattice quantum electrodynamics at finite density,” (2019), arXiv:1911.09693 [quant-ph].
 Montangero (2018) S. Montangero, Introduction to Tensor Network Methods (Springer International Publishing, 2018).
 Bañuls and Cichy (2020) M. C. Bañuls and K. Cichy, Reports on Progress in Physics 83, 024401 (2020).
 Stoudenmire and Schwab (2016) E. Stoudenmire and D. J. Schwab, in Advances in Neural Information Processing Systems 29, edited by D. D. Lee, M. Sugiyama, U. V. Luxburg, I. Guyon, and R. Garnett (Curran Associates, Inc., 2016) pp. 4799–4807.
 Novikov et al. (2016) A. Novikov, M. Trofimov, and I. Oseledets, “Exponential machines,” (2016), arXiv:1605.03795 [stat.ML] .
 Khrulkov et al. (2017) V. Khrulkov, A. Novikov, and I. Oseledets, “Expressive power of recurrent neural networks,” (2017), arXiv:1711.00811 [cs.LG] .
 Liu et al. (2019) D. Liu, S.J. Ran, P. Wittek, C. Peng, R. B. García, G. Su, and M. Lewenstein, New Journal of Physics 21, 073059 (2019).
 Alves Jr. et al. (2008) A. A. Alves Jr. et al. (LHCb collaboration), JINST 3, S08005 (2008).
 Aaij et al. (2020a) R. Aaij et al. (LHCb collaboration), “LHCb open data website,” http://opendata.cern.ch/docs/aboutlhcb (2020a).
 Aaij et al. (2020b) R. Aaij et al. (LHCb collaboration), “Jet tagging dataset samples for LHCb quark flavour identification studies,” 10.7483/OPENDATA.LHCB.N75T.TJPE (2020b).
 Aaij et al. (2015a) R. Aaij et al. (LHCb collaboration), Int. J. Mod. Phys. A30, 1530022 (2015a), arXiv:1412.6352 [hep-ex].
 A (1994) ALEPH collaboration, Nucl. Instrum. Meth. A 360, 481 (1994).
 Cacciari et al. (2008) M. Cacciari, G. P. Salam, and G. Soyez, JHEP 04, 063 (2008), arXiv:0802.1189 [hep-ph].
 Murphy (2015) C. W. Murphy, Phys. Rev. D92, 054003 (2015), arXiv:1504.02493 [hep-ph].
 D0 collaboration et al. (2006) V. M. Abazov et al. (D0 collaboration), Phys. Rev. D74, 112002 (2006), arXiv:hep-ex/0609034.
 CDF collaboration et al. (2006) CDF collaboration, CDF Note 8235 (2006).
 Aaij et al. (2014) R. Aaij et al. (LHCb collaboration), Phys. Rev. Lett. 113, 082003 (2014), arXiv:1406.4789 [hep-ex].
 Clemencic et al. (2011) M. Clemencic et al., J. Phys. Conf. Ser. 331, 032023 (2011).
 Sjöstrand et al. (2008) T. Sjöstrand, S. Mrenna, and P. Skands, Comput. Phys. Commun. 178, 852 (2008), arXiv:0710.3820 [hep-ph].
 Lange (2001) D. J. Lange, Nucl. Instrum. Meth. A462, 152 (2001).
 Agostinelli et al. (2003) S. Agostinelli et al. (Geant4 collaboration), Nucl. Instrum. Meth. A506, 250 (2003).
 Allison et al. (2006) J. Allison, K. Amako, J. Apostolakis, H. Araujo, P. Dubois, et al. (Geant4 collaboration), IEEE Trans.Nucl.Sci. 53, 270 (2006).
 Nielsen and Chuang (2000) M. Nielsen and I. Chuang, eds., Quantum Computation and Quantum Information (Cambridge University Press, 2000).
 Milsted et al. (2019) A. Milsted, M. Ganahl, S. Leichenauer, J. Hidary, and G. Vidal, “TensorNetwork on TensorFlow: A spin chain application using tree tensor networks,” (2019), arXiv:1905.01331 [cond-mat.str-el].
 Aaij et al. (2015b) R. Aaij et al. (LHCb collaboration), JINST 10, P06013 (2015b), arXiv:1504.07670 [hep-ex].
 Tucker (1966) L. R. Tucker, Psychometrika 31, 279 (1966).
 Östlund and Rommer (1995) S. Östlund and S. Rommer, Physical Review Letters 75, 3537–3540 (1995).
 Oseledets (2011) I. V. Oseledets, SIAM Journal on Scientific Computing 33, 2295–2317 (2011).
 Gerster et al. (2014) M. Gerster, P. Silvi, M. Rizzi, R. Fazio, T. Calarco, and S. Montangero, Phys. Rev. B 90, 125154 (2014).
 Hackbusch and Kühn (2009) W. Hackbusch and S. Kühn, Journal of Fourier Analysis and Applications 15, 706 (2009).
 Verstraete and Cirac (2004) F. Verstraete and J. I. Cirac, “Renormalization algorithms for quantum many-body systems in two and higher dimensions,” (2004), arXiv:cond-mat/0407066 [cond-mat.str-el].
 Orús (2014) R. Orús, Annals of Physics 349, 117–158 (2014).
 Glasser et al. (2018a) I. Glasser, N. Pancotti, and J. I. Cirac, “From probabilistic graphical models to generalized tensor networks for supervised learning,” (2018a), arXiv:1806.05964 [quant-ph].
 Tagliacozzo et al. (2009) L. Tagliacozzo, G. Evenbly, and G. Vidal, Physical Review B 80 (2009), 10.1103/physrevb.80.235127.
 Kullback and Leibler (1951) S. Kullback and R. A. Leibler, Ann. Math. Statist. 22, 79 (1951).
 Robeva and Seigal (2017) E. Robeva and A. Seigal, “Duality of graphical models and tensor networks,” (2017), arXiv:1710.01437 [math.ST] .
 Glasser et al. (2018b) I. Glasser, N. Pancotti, M. August, I. D. Rodriguez, and J. I. Cirac, Physical Review X 8 (2018b), 10.1103/physrevx.8.011006.
 Chollet et al. (2015) F. Chollet et al., “Keras,” https://keras.io (2015).
 Abadi et al. (2015) M. Abadi, A. Agarwal, P. Barham, E. Brevdo, Z. Chen, C. Citro, G. S. Corrado, A. Davis, J. Dean, M. Devin, S. Ghemawat, I. Goodfellow, A. Harp, G. Irving, M. Isard, Y. Jia, R. Jozefowicz, L. Kaiser, M. Kudlur, J. Levenberg, D. Mané, R. Monga, S. Moore, D. Murray, C. Olah, M. Schuster, J. Shlens, B. Steiner, I. Sutskever, K. Talwar, P. Tucker, V. Vanhoucke, V. Vasudevan, F. Viégas, O. Vinyals, P. Warden, M. Wattenberg, M. Wicke, Y. Yu, and X. Zheng, “TensorFlow: Large-scale machine learning on heterogeneous systems,” (2015), software available from tensorflow.org.
 Bergstra et al. (2013) J. Bergstra, D. Yamins, and D. D. Cox, in Proceedings of the 30th International Conference on International Conference on Machine Learning  Volume 28, ICML’13 (JMLR.org, 2013) p. I–115–I–123.