Multimodal data visualization, denoising and clustering with integrated diffusion

02/12/2021 · Manik Kuchroo et al. · Yale University

We propose a method called integrated diffusion for combining multimodal datasets, or data gathered via several different measurements on the same system, to create a joint data diffusion operator. As real-world data suffers from both local and global noise, we introduce mechanisms to optimally calculate a diffusion operator that reflects the combined information from both modalities. We show the utility of this joint operator in data denoising, visualization and clustering, where it performs better than other methods for integrating and analyzing multimodal data. We apply our method to multi-omic data generated from blood cells, measuring both gene expression and chromatin accessibility. Our approach better visualizes the geometry of the joint data, captures known cross-modality associations and identifies known cellular populations. More generally, integrated diffusion is broadly applicable to multimodal datasets generated in many medical and biological systems.


1 Introduction

Recently there has been a profusion of multimodal data measured in parallel on the same system. Examples include multiple modalities of data collected on biological specimens, such as single-cell RNA-sequencing or single-cell ATAC-sequencing, and multiple measurements collected on hospitalized patients, such as lab tests and continuous monitoring systems. There is a dire need to integrate such data in order to perform a wide variety of downstream tasks such as clustering, differential or comparative analysis, denoising and cross-modality correlation of features. We believe that the key to integrating data is to discover which entities are similar to each other across modalities by creating a data affinity graph on the basis of information from all available modalities. However, it is not immediately clear how to determine distances or similarities between entities on the basis of multiple modalities of data, which could be measured on entirely different scales and suffer from different amounts of noise and sparsity. This is particularly problematic in the biomedical domain, where issues of 'drop out' or undersampling make correlation analysis in single-cell technologies extremely difficult. In order to address this, we turn to the framework of data diffusion developed by Coifman and Lafon (2006).

According to the data diffusion framework, we can implicitly learn the intrinsic space of the data by raising a Markov transition matrix $\mathbf{P}$ to a power $t$, which implicitly calculates a $t$-step random walk on the data graph. This process accumulates probabilities in paths that traverse relatively dense regions of the data and diminishes them in sparse outlier regions. In Coifman and Lafon (2006), the powered diffusion operator is eigendecomposed to uncover intrinsic data dimensions. Since that seminal work, the diffusion operator, a Markov matrix describing affinities between datapoints as transition probabilities on a data graph, has been shown to be useful in a myriad of data processing tasks Moon et al. (2018), including clustering Burkhardt et al. (2020), denoising Van Dijk et al. (2018) and dimensionality reduction Moon et al. (2019).

Here, we define an integrated diffusion operator for multiple data modalities. We show that naive fusions of modalities via affinity addition, distance addition or feature concatenation yield poor results. We instead define an integrated diffusion operator that accounts for the local noise and intrinsic dimensionality of each modality. Conceptually, diffusion probabilities in our integrated operator are computed by taking several steps on the data graph from one modality, and several steps on the data graph defined by the other modality. The number of steps is carefully chosen based on the eigenvalue entropy, or spectral entropy, of each operator. Furthermore, we emphasize dominant directions in the diffusion operator by locally denoising using PCA-based low-rank approximations.

Empirically, we show that our method yields more accurate visualizations, more faithful denoising and more accurate clustering, both on datasets where ground truth is known and on exploratory biomedical datasets, as compared to a variety of alternative methods for combining multimodal data. These comparisons include diffusion-based methods, such as combining features, diffusion operators, distance matrices and affinity graphs, as well as non-diffusion-based methods for multimodal learning, such as cycleGAN Zhu et al. (2020), autoencoders, feature-concatenated PCA, and canonical correlation analysis (CCA) Hao et al. (2020a).

Figure 1: Overall workflow of integrated diffusion. First, local low-rank approximation is performed on each dataset independently, removing local noise. Next, the intrinsic dimensionality of each modality is calculated via the spectral entropy of its diffusion operator to determine the ideal number of diffusion steps $t$ to take in each modality. Each operator is then powered according to the reduced ratio of the calculated timescales, and the powered operators are multiplied together to simulate steps of a random walk. The resulting diffusion operator can be used to denoise, visualize and cluster.

2 Related Work

Recently, there has been greater focus on integrating different modalities of data collected on the same or similar systems. When analyzing and integrating information from different measurement modalities, two primary lines of work have been established: domain transfer learning and data fusion. Domain transfer learning predicts or imputes a modality of data that is entirely missing, either based on supervised training examples or based on unsupervised data alignment techniques. These include neural network-based techniques such as cycleGAN, MAGAN Amodio and Krishnaswamy (2018), and harmonic alignment Stanley et al. (2020). We instead focus on the problem of accurately analyzing datasets when both modalities are generally available, yet suffer from real-world problems with data collection. We believe this problem has received little attention because it is often assumed that naive modality concatenations offer a viable solution.

The second line of research, data fusion, seeks to learn a common latent space from both modalities of data. This field includes techniques such as CCA, autoencoders, and alternating diffusion Katz et al. (2019); Yang et al. (2021). Here, we improve upon this line of research by representing each modality in a way that corrects modality-specific local and global noise. Furthermore, the particular latent space that we choose, the data diffusion operator, is widely usable for many supervised and unsupervised learning applications.

3 Preliminaries

Manifold learning

High dimensional data can often be modeled as a sampling of a $d$-dimensional manifold $\mathcal{M}^d$ that is mapped to $n$-dimensional observations (with $d \ll n$) via a nonlinear function $f$. Intuitively, although measurement strategies, modeled here via $f$, create high dimensional observations, the intrinsic dimensionality, or degrees of freedom within the data, is relatively low. This manifold assumption is at the core of the vast field of manifold learning (e.g., Moon et al., 2018; Coifman and Lafon, 2006; Van Der Maaten et al., 2009; Izenman, 2012, and references therein), which leverages the intrinsic geometry of data, as modeled by a manifold, for exploring and understanding patterns, trends, and structure that displays significant non-linearity.

Diffusion geometry

In Coifman and Lafon (2006), diffusion maps were proposed as a robust way to capture intrinsic manifold geometry in data by eigendecomposing a powered diffusion operator. Using $t$-step random walks that aggregate local affinity, Coifman and Lafon (2006) were able to reveal nonlinear relations in data and allow their embedding in low dimensional coordinates. These local affinities are commonly constructed using a Gaussian kernel:

$$\mathbf{K}(x_i, x_j) = \exp\left(-\frac{\|x_i - x_j\|^2}{\epsilon}\right), \qquad (1)$$

where $\mathbf{K}$ is an $N \times N$ Gram matrix whose $(i,j)$ entry is denoted $\mathbf{K}(x_i, x_j)$ to emphasize the dependency on the data, and $\epsilon$ is a bandwidth parameter which controls local neighborhood sizes. A diffusion operator is defined as the row-stochastic matrix $\mathbf{P} = \mathbf{D}^{-1}\mathbf{K}$, where $\mathbf{D}$ is a diagonal matrix with $\mathbf{D}_{ii} = \sum_j \mathbf{K}(x_i, x_j)$. The matrix $\mathbf{P}$, or diffusion operator, defines single-step transition probabilities for a time-homogeneous diffusion process, or a Markovian random walk, over the data. Furthermore, as shown in Coifman and Lafon (2006), powers of this matrix, $\mathbf{P}^t$ for $t > 0$, can be used to simulate multi-step random walks over the data, helping understand the multiscale organization of the data, which can be interpreted geometrically when the manifold assumption is satisfied.
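To make the construction concrete, here is a minimal NumPy sketch of this operator (the bandwidth default and the function name are ours, not the paper's):

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform

def diffusion_operator(X, eps=1.0):
    """Row-stochastic diffusion operator P = D^{-1} K from a Gaussian kernel (Eq. 1)."""
    dists = squareform(pdist(X))              # N x N pairwise Euclidean distances
    K = np.exp(-dists ** 2 / eps)             # Gaussian affinities with bandwidth eps
    return K / K.sum(axis=1, keepdims=True)   # divide each row by its degree D_ii

# A t-step random walk is simulated by powering the operator:
# P_t = np.linalg.matrix_power(diffusion_operator(X), t)
```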

Alternating diffusion

Prior work involving data diffusion for multimodal data has centered around the notion of alternating diffusion Katz et al. (2019). Intuitively, this generalizes the random walk to "hop" between different metric spaces; mathematically, this is expressed by taking a matrix product of the Markov transition matrices:

$$\mathbf{P}_{\text{alt}} = \mathbf{P}_2 \, \mathbf{P}_1. \qquad (2)$$

Finally, $\mathbf{P}_{\text{alt}}$ is powered to simulate "hopping" across modalities. As explained by Katz et al. (2019), the diffusion distances in this joint space constitute the joint diffusion map embedding, which captures information shared between modalities but removes modality-specific information.
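Given two modality-specific operators built as above, the alternating operator of Eq. (2) reduces to a single matrix product; a minimal sketch (naming is ours):

```python
import numpy as np

def alternating_diffusion(P1, P2, t=1):
    """Joint operator of Eq. (2): one step on each modality's graph, powered t times."""
    return np.linalg.matrix_power(P2 @ P1, t)
```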

4 Method

Figure 2: Generation of multimodal data from ground truth MNIST and artificial tree data. A) Gaussian noise is added to the entire MNIST dataset to simulate global noise. Visualizations of modalities and joint diffusion operators are produced by PHATE Moon et al. (2019). B) Gaussian noise is added to branches of the artificial tree dataset to simulate local noise. Visualizations of modalities and joint diffusion operators are produced by PHATE. C) Classification accuracy for predicting MNIST digits from integrated embeddings created by a variety of diffusion-based multimodal techniques with increasing differences in global noise. D) DeMAP correlations between ground truth tree distances and integrated embedding distances created by a variety of diffusion-based multimodal techniques with increasing differences in local noise.

4.1 Problem Formulation

Let $X$ and $Y$ be two sets of data, perhaps with different dimensionalities, capturing two modalities gathered (e.g., via different measurement techniques) from the same underlying system. We consider a setting where the underlying system can be modeled via a $d$-dimensional manifold (with $d \ll n$) that is embedded in the high dimensional ambient space given by both modalities, but is only partially captured by each individual dataset. Here, we describe an unsupervised approach to integrate information from such multimodal settings based on the principles of data diffusion in order to recover the underlying joint manifold. By utilizing methods that capture both local and global manifold geometric information, our method is robust to vastly differing quantities of noise. Our method allows for amenable visualization, data denoising, and clustering of this jointly recovered manifold.

4.2 Approach

Algorithm 1: Local Denoising Procedure
Input: Data set $X$ consisting of points obtained via a single measurement modality
Output: Locally denoised $\hat{X}$
  1: Compute the affinity matrix on $X$ as per Eq. (1)
  2: Partition the dataset into neighborhoods via spectral clustering
  3: For every partition $X_i$, compute PCA via singular value decomposition (SVD) of the neighborhood
  4: Estimate the local intrinsic dimensionality using the eigengap heuristic on the singular values
  5: Compute a low-rank approximation from the truncated SVD to reconstruct a denoised set of data

Neighborhood low rank approximation for local noise correction

We begin by estimating a measure of local signal in various neighborhoods of the dataset. To do this, we first run spectral clustering on each modality to obtain partitions $X_1, \dots, X_m$, written as submatrices of $X$ (and accordingly for $Y$). Next, we compute the SVD of the centered points in each partition, $X_i - \bar{X}_i = U \Sigma V^T$, where $\bar{X}_i$ is a matrix with all rows containing the partition center, $U$ consists of left singular vectors, $V$ of right singular vectors, and $\Sigma$ contains the singular values. In order to denoise a neighborhood, we estimate its intrinsic dimensionality using an eigengap heuristic over the first $k+1$ singular values. Finally, we obtain a low-rank approximation of the data in each local partition by using a truncated SVD, i.e., $\hat{X}_i = U_k \Sigma_k V_k^T + \bar{X}_i$, where $\Sigma_k$ only takes the first $k$ (most significant) singular values, and $U_k$, $V_k$ consist of the first $k$ columns of $U$, $V$ (correspondingly). It is important that this method be highly local so as not to destroy the manifold structure via elimination of linear dimensions in the data.
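A minimal sketch of this procedure (Algorithm 1), using scikit-learn's SpectralClustering; the partition count and the exact eigengap rule are simplifying assumptions of ours, not the authors' settings:

```python
import numpy as np
from sklearn.cluster import SpectralClustering

def local_denoise(X, n_partitions=10):
    """Neighborhood-wise truncated-SVD denoising, following Algorithm 1."""
    labels = SpectralClustering(n_clusters=n_partitions,
                                affinity="nearest_neighbors").fit_predict(X)
    X_hat = X.astype(float).copy()
    for c in range(n_partitions):
        idx = np.where(labels == c)[0]
        if len(idx) < 2:                                  # nothing to denoise
            continue
        center = X_hat[idx].mean(axis=0)
        U, S, Vt = np.linalg.svd(X_hat[idx] - center, full_matrices=False)
        k = int(np.argmax(-np.diff(S))) + 1               # eigengap: keep dims before largest gap
        X_hat[idx] = U[:, :k] * S[:k] @ Vt[:k] + center   # local low-rank reconstruction
    return X_hat
```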

Modality specific diffusion time scale calculation via spectral entropy

In addition to correcting varying local noise within a single modality, it is crucial to estimate the intrinsic dimensionality of each modality to understand how much information it contains. Previous implementations of data diffusion methods, such as alternating diffusion and diffusion maps, provide no means of calculating the correct timescale. Here, we apply spectral entropy, computed on the diffusion operator, to estimate the ideal number of diffusion steps $t$ to take in each modality. This draws on the theory of graph signal processing Shuman et al. (2013), where the eigenvectors of the diffusion operator (equivalent to the eigenvectors of the graph Laplacian in reverse order) form frequency harmonics on a data graph. The spectral entropy of the operator is then the amount of variability explained by each frequency in the graph spectrum, i.e., each diffusion dimension.

To quantify the significance of each diffusion dimension in describing the data geometry, we can observe the corresponding eigenvalues $\lambda_i$. Quantitatively, this is given by the spectral entropy defined on the probability distribution of eigenvalues normalized by their sum, $\eta_i(t) = |\lambda_i|^t / \sum_j |\lambda_j|^t$. This is parameterized by the diffusion timescale $t$, as the spectrum changes with the powering of the diffusion operator $\mathbf{P}^t$:

$$H(t) = -\sum_i \eta_i(t) \log \eta_i(t). \qquad (3)$$

When the diffusion operator is powered to a value $t$, a low-pass filter is applied to the eigenspectrum of the operator such that the eigenvalues corresponding to higher frequencies are diminished (see Supplement). Thus the spectral entropy decreases with subsequent powering of the operator, but not steadily. For low values of $t$ the spectral entropy rapidly decreases and then stabilizes to create an elbow. We believe this elbow refers to the elimination of noise, with further powering removing signal. We find the elbow of this curve for the modality-specific operators. In this manner, the higher frequency components of the data graph, corresponding to noise dimensions, are eliminated in a frequency-specific manner globally on the graph, as opposed to locally in a vertex-specific manner using local PCA. We note that a similar heuristic is used in Moon et al. (2019), where any value beyond an elbow is chosen for visualization using PHATE. In cases where the intrinsic dimensionality is known, it may be possible to set $t$ to a value that makes the spectral entropy equivalent to the intrinsic dimensionality.
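A sketch of the spectral entropy of Eq. (3) together with a simple elbow rule; the second-difference criterion is our own stand-in for whatever elbow detection the authors use:

```python
import numpy as np

def spectral_entropy(eigvals, t):
    """H(t) of Eq. (3): Shannon entropy of the powered, normalized spectrum."""
    lam = np.abs(eigvals) ** t
    eta = lam / lam.sum()                              # normalize to a distribution
    eta = eta[eta > 0]
    return -(eta * np.log(eta)).sum()

def elbow_timescale(P, t_max=32):
    """Choose t at the elbow of the entropy-vs-timescale curve (max second difference)."""
    eigvals = np.linalg.eigvals(P)                     # spectrum computed once
    H = np.array([spectral_entropy(eigvals, t) for t in range(1, t_max + 1)])
    return int(np.argmax(np.diff(H, 2))) + 2           # second difference is offset by two
```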

Fusion of operators

We compute $t$ using the spectral entropy heuristic for each modality taken independently, giving us an estimate of the relative quantities of information present between modalities. While the absolute degree of information within each view is informative, the ratio of information is perhaps more meaningful. We raise each modality's diffusion operator to the lowest possible multiple of the ideal view-specific $t$ computed via spectral entropy. For example, if we obtain time values of 2 and 8 for two individual modes, then we assume a 1:4 ratio of information. Intuitively, this ratio indicates that for every diffusion "step" taken in modality 1, four diffusion steps will be taken in modality 2. More generally, we can write our joint diffusion operator, which reflects the differing levels of global information between views, as follows:

$$\mathbf{P}_{\text{int}} = \mathbf{P}_1^{t_1} \, \mathbf{P}_2^{t_2}, \qquad (4)$$

where $t_1$ and $t_2$ are the integer values obtained from the reduced ratio as described above, and $\mathbf{P}_1$ and $\mathbf{P}_2$ are the modality-specific diffusion operators. We use the reduced ratio instead of directly applying the values of $t$ obtained from the spectral entropy heuristic, as this joint operator is then powered once more to correct for spurious noise generated when integrating the datasets (i.e., noise present in one measurement modality affecting signal present in another measurement modality). Powering directly by $t_1$ and $t_2$ would lead to an oversmoothing effect in the final computed manifold, which would collapse independent clusters together. We determine the adequate timescale for powering this joint operator via the same spectral entropy approach and calculate an embedding using the method of Coifman and Lafon (2006) or Moon et al. (2019), whose utility we explore on a variety of classification and visualization tasks.

Algorithm 2: Integrated Diffusion Operator Computation
Input: Modality-specific diffusion operators $\mathbf{P}_1, \mathbf{P}_2$
Output: Integrated diffusion operator $\mathbf{P}_{\text{int}}$
  1: For each modality-specific diffusion operator, identify the optimal diffusion timescale using the spectral entropy heuristic
  2: Compute the joint operator by powering each modality's operator according to the reduced timescale ratio and taking the matrix product
  3: Compute the optimal diffusion timescale for this joint operator via the spectral entropy heuristic and power it accordingly
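Combining the previous sketches, Algorithm 2 might look as follows (reusing elbow_timescale from the sketch above; the gcd-based ratio reduction is our reading of the text, not the authors' code):

```python
import math
import numpy as np

def integrated_diffusion(P1, P2):
    """Integrated operator of Eq. (4), following Algorithm 2."""
    t1, t2 = elbow_timescale(P1), elbow_timescale(P2)   # modality-specific timescales
    g = math.gcd(t1, t2)
    t1, t2 = t1 // g, t2 // g                           # reduced ratio, e.g. 2:8 -> 1:4
    P_joint = (np.linalg.matrix_power(P1, t1)
               @ np.linalg.matrix_power(P2, t2))
    return np.linalg.matrix_power(P_joint, elbow_timescale(P_joint))
```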

5 Experimental Results

In the following experiments we evaluate integrated diffusion’s ability to visualize, denoise and cluster high dimensional multimodal data. To create multimodal synthetic data for our tasks, we generated two versions of the MNIST handwritten digits by adding different amounts of Gaussian noise to the MNIST pixels. We also created two versions of synthetically generated tree datasets by adding differing amounts of Gaussian noise to specific branches to simulate local noise. These datasets mimic the structure of the information contained in biological measurements due to their high dimensionality, similar global structures, and vastly different amounts of noise. In the Supplement we show an example of integrating three different modalities to show the generality of the method.

First, we generate multiple modalities of MNIST handwritten digits by adding Gaussian noise to the images, where each pixel value is perturbed as $x \leftarrow x + \mathcal{N}(0, \sigma)$, with $\sigma$ changing based on the level of noise. To showcase the ability of our method to handle modalities with significant differences in global noise, we add a fixed amount of Gaussian noise to simulate one data modality and an increasing amount of Gaussian noise to simulate the second data modality (Figure 2A). Next, we generate multiple modalities of high dimensional artificial trees with varying amounts of global noise and local noise specific to branches. Similar to our multimodal MNIST datasets, we add a small amount of fixed noise to each tree before adding increasing amounts of noise to differing branches (Figure 2B). Finally, we apply our integrated diffusion approach to real-world single-cell biological data measuring RNA-sequencing, or gene expression, and ATAC-sequencing, or chromatin accessibility. With these datasets, we compare integrated diffusion to diffusion-based and non-diffusion-based multimodal learning approaches on visualization, denoising and clustering tasks.
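As a rough illustration of this setup (scikit-learn's 8x8 digits stand in for MNIST here, and the noise scales are arbitrary choices, not the experimental values):

```python
import numpy as np
from sklearn.datasets import load_digits

X = load_digits().data / 16.0                    # digit images scaled to [0, 1]
rng = np.random.default_rng(0)
modality_1 = X + rng.normal(0.0, 0.1, X.shape)   # fixed global noise
modality_2 = X + rng.normal(0.0, 0.5, X.shape)   # heavier noise; sigma grows across runs
```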

5.1 Visualization

To quantify the differences in visualizations produced by differing multimodal integration strategies, we performed two separate comparisons. We compared diffusion operator constructions based on multimodal feature concatenation, distance addition, affinity addition, affinity multiplication and alternating diffusion. We then performed an ablation study, comparing these techniques to variants of our diffusion operator: alternating diffusion with local low-rank approximation, alternating diffusion with modality-specific powering of diffusion operators via the spectral entropy ratio, and finally our full integrated diffusion approach.

For our MNIST comparisons, we integrated both modalities of data and calculated the first 20 diffusion map components by eigendecomposition of the powered diffusion operator. From this embedding we train a kNN-classifier to predict the MNIST digit of origin. For our tree comparisons, since we are distorting the global and, more importantly, local geometries and trying to reconstruct the original noiseless tree, we evaluate how successful the reconstruction is using DeMAP (Denoised Manifold-Affinity Preservation), proposed in Moon et al. (2019). DeMAP takes in a noiseless dataset, in this case the noiseless ground truth tree, as well as an embedding, in our case the diffusion map of one of our comparison methods or the embedding layer of a neural network. DeMAP then correlates geodesic distances on the noiseless dataset with Euclidean distances in diffusion map space, determining whether the embedding accurately maintains known point-to-point distances in the compressed space.
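A sketch of this scoring procedure under our own assumptions (geodesics approximated on a k-NN graph, Spearman correlation as the correlation measure; the published metric may differ in these choices):

```python
import numpy as np
from scipy.sparse.csgraph import shortest_path
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr
from sklearn.neighbors import kneighbors_graph

def demap_score(X_clean, embedding, n_neighbors=10):
    """Correlate geodesics on the noiseless data with Euclidean distances
    in the embedding (higher is better)."""
    knn = kneighbors_graph(X_clean, n_neighbors, mode="distance")
    geo = shortest_path(knn, directed=False)   # approximate geodesics on the kNN graph
    iu = np.triu_indices_from(geo, k=1)        # row-major order matches pdist's ordering
    return spearmanr(geo[iu], pdist(embedding))[0]
```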

Noise   PCA     CCA     Autoencoder   D.T.    cycle GAN   Integrated Diffusion (our method)
1       .9428   .8872   .0926         .1886   .7896       .9242
2       .8080   .6481   .0926         .1027   .3906       .8114
3       .5539   .4141   .0926         .1111   .2593       .7946
4       .3383   .2559   .0926         .0993   .1886       .8030
5       .2172   .2121   .0926         .1010   .1549       .8064
6       .1818   .1953   .0926         .0909   .1532       .7912
7       .1380   .1902   .0926         .0892   .1077       .8114
8       .1212   .1514   .0926         .0824   .0976       .7828
9       .1145   .1263   .0926         .0993   .1077       .7828
10      .1313   .1515   .0926         .1061   .1044       .7879

Table 1: Classification accuracy of MNIST digits from embedding generated by a variety of multimodal integration strategies.

All strategies performed comparably when both modalities had a similar degree of local and global noise. As the difference in global noise increased in our MNIST embedding classification task, however, strategies that accounted for global information with modality-specific diffusion operator powering outperformed strategies that did not (Figure 2C). When embedding trees with varying degrees of local branch-specific noise, methods that perform local correction significantly outperformed methods that did not (Figure 2D). Finally, we compared our integrated diffusion approach to other non-diffusion-based multimodal integration strategies, including cycle GANs, domain transfer autoencoders (D.T.), CCA, autoencoders trained on concatenated features, and PCA on concatenated features. Across both the local noise comparisons in Table 2 and the global noise comparisons in Table 1, integrated diffusion significantly outperformed the other multimodal integration strategies.

Noise   PCA     CCA     Autoencoder   D.T.     cycle GAN   Integrated Diffusion (our method)
1       .8571   .5107   .0137         -.009    .3798       .7049
2       .8545   .3978   -.0065        -.023    .2914       .5236
3       .8340   .2507   .0037         .0097    .2128       .4724
4       .8480   .1618   -.014         -.0122   .2475       .5524
5       .8095   .1295   .0237         .0254    .2082       .5403
6       .7740   .1244   .0069         .0000    .2176       .6120
7       .7295   .1312   .0198         .0119    .2580       .5022
8       .6802   .1431   -.0096        -.0012   .1766       .7166
9       .6298   .1568   -.0106        .0149    .1680       .7919
10      .5814   .1697   .0026         .0146    .1926       .6823

Table 2: DeMAP correlations comparing geodesic distances on ground truth noiseless high dimensional artificial tree data with multimodal embeddings created by a variety of techniques.

5.2 Denoising

Figure 3: Denoising of MNIST data with alternating (middle row) and integrated (bottom row) diffusion operators.

Previous work on diffusion filters has shown that low-pass filtering can correct many types of noise present in real-world biological datasets, allowing for downstream analysis Van Dijk et al. (2018). Here, we compare several methods of diffusion operator construction with our integrated diffusion approach. As before, we created multimodal MNIST data by adding differing amounts of global noise. After computing the joint diffusion operator with each of the comparison methods, we filter the noisier MNIST modality as done previously Van Dijk et al. (2018), as can be seen in Figure 3. To obtain quantitative results, we train a kNN-classifier on the denoised pixel values to determine how well each operator recovers the ground truth. As shown in Figure 4, across all denoising comparisons, classification accuracy on increasingly noisy MNIST digits was best recovered by integrated diffusion followed by powered alternating diffusion, both of which account for global information within each noisy modality.
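The filtering step can be sketched as follows, in the spirit of the diffusion filtering of Van Dijk et al. (2018); the timescale default is illustrative:

```python
import numpy as np

def diffusion_denoise(P_int, X_noisy, t=3):
    """Low-pass filter the noisy modality's features through the integrated operator."""
    return np.linalg.matrix_power(P_int, t) @ X_noisy
```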

Noise   PCA     CCA     Autoencoder   D.T.    cycle GAN   Integrated Diffusion (our method)
1       .9494   .9022   .0925         .1515   .4612       .9057
2       .8148   .6414   .0925         .0925   .2744       .8923
3       .5656   .4427   .0925         .1026   .1616       .8805
4       .3585   .3215   .0925         .0959   .1498       .8737
5       .2341   .2441   .0925         .0925   .1397       .8737
6       .1380   .2256   .0925         .0892   .1111       .8855
7       .1296   .1717   .0925         .0976   .1010       .8855
8       .1010   .1734   .0925         .0909   .1229       .8502
9       .1026   .1532   .0925         .0925   .1044       .8519
10      .0976   .1481   .0925         .0909   .1178       .8754

Table 3: Comparing MNIST denoising pixel classification accuracy with differing multimodal denoising strategies.
Figure 4: MNIST denoised pixel classification accuracy with differing diffusion-based multimodal denoising strategies. As above, data was generated by adding differing amounts of random Gaussian noise to MNIST pixels.

Other multimodal integration strategies also allow for denoising of noisy modalities. We similarly trained a kNN-classifier to predict the original digit from the denoised pixel values produced by each technique (Table 3). Across all comparisons, integrated diffusion's denoising significantly improved prediction of denoised handwritten digits when compared to CCA, feature-concatenated PCA, domain transfer and standard autoencoders, as well as a cycle GAN.

5.3 Clustering

Figure 5: Spectral clustering performed with diffusion operators from each of the multimodal integration strategies. Data was generated by adding differing amounts of random Gaussian noise to scikit-learn's noisy circles.

Global Noise Ratio                        1       2       3       4        5       6       7       8       9        10
Feature Concatenation                     .0002   .0026   .0234   .0000    .0005   .0000   .0000   .0000   .0000    .0000
Joint PCA                                 .5746   .4161   .1495   .0607    .0144   .0066   .0038   .0052   .0058    .0024
Distance Addition                         .0011   .0002   .0059   .0000    .0000   .0000   .0000   .0002   .0000    .0021
Canonical Correlation Analysis            .5393   .1964   .08852  .03286   .0167   .0164   .0082   .0104   .0036    .0035
Autoencoder                               .0000   .0000   .0000   .0000    .0000   .0000   .0000   .0000   .0002    .0000
Domain Transfer Autoencoder               .0204   .0012   .0023   .00064   .0016   .0000   .0001   .0000   -.0001   -.0003
cycle GAN                                 .3304   .3151   .3097   .2656    .2851   .3774   .3732   .3122   .2005    .2682
Affinity Addition                         .0041   .0062   .0034   .0107    .0010   .0049   .0000   .0000   .0004    .0047
Affinity Product                          .0472   .1732   .0908   .0528    .0706   .1096   .0000   .0115   .0024    .0039
Alternating Diffusion                     .1308   .1999   .1883   .1201    .1100   .0159   .0001   .0036   .0045    .0007
Powered Alternating Diffusion             .3607   .5187   .5534   .3057    .2000   .4060   .1136   .1527   .0932    .0754
Locally Corrected Alternating Diffusion   .1305   .4429   .3301   .2979    .2094   .1226   .1259   .1449   .0754    .0384
Integrated Diffusion (our method)         .3167   .5337   .6672   .6298    .6043   .5071   .4649   .4549   .3849    .4416

Table 4: Adjusted Rand index between known MNIST labels and cluster labels; the ratio of global noise added is gradually increased to demonstrate the invariance of integrated spectral clustering to differing amounts of noise.

In this next experiment, we showcase the utility of our integrated diffusion operator in partitioning high dimensional data by incorporating information from multiple modalities. A variation of spectral clustering using diffusion operators has previously been performed in Moon et al. (2019). Here, we computed spectral clusters on each of our multimodal datasets by performing k-means clustering on each diffusion operator as well as on the embedding space of non-diffusion-based multimodal learning approaches. Across comparisons on toy data, we see that increasing amounts of global noise disrupt integrated diffusion-based spectral clustering less than other methods of jointly clustering multimodal data (Figure 5).

Next, we compared clustering accuracy across methods by running k-means on both the joint diffusion operators and the compressed feature spaces of the non-diffusion-based methods. We compared the computed cluster labels against ground truth MNIST labels using the adjusted Rand index (ARI). Using the noisy multimodal MNIST data modalities previously generated, we computed 10 clusters with each method and measured clustering accuracy. While clustering accuracy remained poor throughout all comparisons, it remained highest when clustering the powered alternating diffusion and integrated diffusion operators, both of which account for differences in global levels of information in each modality. Furthermore, as noise increased, integrated diffusion, which also corrects local noise, outperformed powered alternating diffusion (Table 4).
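A sketch of this clustering comparison (k-means on the leading eigenvectors of the integrated operator, scored with ARI; the component count mirrors the 20 diffusion components used above, the rest is our assumption):

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import adjusted_rand_score

def diffusion_clusters(P_int, n_clusters=10, n_components=20):
    """k-means on the leading non-trivial eigenvectors of the integrated operator."""
    vals, vecs = np.linalg.eig(P_int)                    # P is generally non-symmetric
    order = np.argsort(-vals.real)[1:n_components + 1]   # skip the trivial constant eigenvector
    return KMeans(n_clusters=n_clusters, n_init=10).fit_predict(vecs[:, order].real)

# ari = adjusted_rand_score(true_labels, diffusion_clusters(P_int))
```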

Figure 6: Application of integrated diffusion to multimodal single-cell data. A) Visualization of gene expression and chromatin accessibility manifolds as well as alternating and integrated diffusion operators via PHATE. Points colored by annotated cell type. B) Visualization of the spectral entropy of each modality. C) Adjusted Rand index between spectrally computed clusters on each diffusion operator and known annotated cell types. D) Mutual information between the expression of a gene and its accessibility across differing denoising strategies: modality-specific denoising, or denoising both modalities with the alternating or integrated diffusion operator. E) Average mutual information for differing denoising strategies across all gene expression-gene accessibility pairs.

6 Biological Applications

New methods allow for the measurement of tens to hundreds of thousands of features in single cells, giving unprecedented insight into biological and cell-type-specific processes. Until recently, only a single modality could be measured in each cell, be it the expression of genes through RNA sequencing or the accessibility of chromatin regions through ATAC sequencing. Novel techniques now allow for the measurement of several modalities at single-cell resolution. Increasingly commonly, individual cells are measured with a combination of chromatin accessibility, RNA expression, protein expression and spatial location Ma et al. (2020); Cao et al. (2018); Liu et al. (2020). This new type of data is powerful, as it not only allows for the study of each modality independently, but also allows for the discovery of regulatory mechanisms between modalities. Currently, no computational techniques are capable of modeling and predicting these dynamics, as there are no strategies that integrate different modalities of data to jointly visualize, cluster and denoise multimodal single-cell data.

We apply integrated diffusion to multimodal single-cell data of 11,909 blood cells, visualizing the joint manifold, identifying known cell types and uncovering key cross-modality interactions. Visualizing each modality, gene expression and chromatin accessibility, independently reveals similar overall structure but at different resolutions. Chromatin accessibility data, when compared to gene expression data, is incredibly sparse and generally considered far less informative. When computing the spectral entropy of each modality, we can clearly see that the chromatin accessibility diffusion operator has far fewer informative dimensions than the gene expression operator. The alternating diffusion approach, which does not take into account the information present within each modality, creates an embedding that blends the distinct structure of gene expression data with the less informative structure of chromatin accessibility data. Integrated diffusion, however, better resolves differences in information across datasets, producing a visualization that contains sharper borders between populations and displays clear structure when visualized with PHATE (Figure 6A).

These more clearly resolved populations also correspond to more biologically relevant clusters. Using cellular annotations of this dataset, which predict cell type based on the expression of known marker genes and accessibility regions Hao et al. (2020b), we computed clusters from the diffusion operator of each modality as well as from the alternating and integrated operators. Clusters from the integrated operator best overlapped with annotated cell types (Figure 6C).

A major issue in single-cell data is sparsity due to undersampling, which makes it very difficult to measure and model cross-modality interactions. Theoretically, if a gene is expressed, then the chromatin encoding that gene must be accessible. With this understanding of the data, we try to recover these known associations between gene expression and chromatin accessibility (Figure 6D). Due to sparsity, there is no association, as computed by mutual information, between these variables without denoising. There are several strategies for recovering these cross-modality interactions: denoising with modality-specific diffusion operators, denoising with a single alternating diffusion operator, or denoising with a single integrated diffusion operator. The integrated diffusion operator appears to best recover known gene expression and chromatin accessibility associations, as shown for the genes CD19, CD14 and CD4 (Figure 6D). We then computed these associations across all genes with each of our denoising strategies. Across 18,659 genes, integrated diffusion recovered significantly more information between a gene's accessibility and its expression than alternating diffusion and modality-specific diffusion (Figure 6E).

7 Conclusion

We introduce the integrated diffusion operator, a method for learning the joint data geometry described by multiple measurement modalities applied to a single system. We show its improvement over more naive diffusion-based methods (e.g., feature concatenation, alternating diffusion, affinity addition) and several non-diffusion-based methods such as PCA, CCA, autoencoders and cycleGANs on synthetic and biological datasets. We apply our method in the biomedical setting to a multi-omics dataset, where we generate rich joint manifolds, compute cell populations with increased accuracy and recover cross-modality gene-chromatin associations. Our flexible framework extends to multiple modalities and will allow for the successful integration and analysis of massive multi-omic datasets from a wide variety of fields. Future work will involve multiscale diffusion operators designed to integrate data at many levels of granularity.

References

  • M. Amodio and S. Krishnaswamy (2018) MAGAN: aligning biological manifolds. External Links: 1803.00385 Cited by: §2.
  • D. B. Burkhardt, J. S. Stanley, A. Tong, A. L. Perdigoto, S. A. Gigante, K. C. Herold, G. Wolf, A. J. Giraldez, D. van Dijk, and S. Krishnaswamy (2020) Quantifying the effect of experimental perturbations in single-cell rna-sequencing data using graph signal processing. bioRxiv. External Links: Document, Link, https://www.biorxiv.org/content/early/2020/08/01/532846.full.pdf Cited by: §B.1, §1.
  • J. Cao, D. A. Cusanovich, V. Ramani, D. Aghamirzaie, H. A. Pliner, A. J. Hill, R. M. Daza, J. L. McFaline-Figueroa, J. S. Packer, L. Christiansen, F. J. Steemers, A. C. Adey, C. Trapnell, and J. Shendure (2018) Joint profiling of chromatin accessibility and gene expression in thousands of single cells. Science 361 (6409), pp. 1380–1385. External Links: Document, Link Cited by: §6.
  • R. R. Coifman and S. Lafon (2006) Diffusion maps. Applied and Computational Harmonic Analysis 21 (1), pp. 5–30. External Links: Document Cited by: §1, §1, §3, §3, §3, §4.2.
  • Y. Hao, S. Hao, E. Andersen-Nissen, W. M. M. III, S. Zheng, A. Butler, M. J. Lee, A. J. Wilk, C. Darby, M. Zagar, P. Hoffman, M. Stoeckius, E. Papalexi, E. P. Mimitou, J. Jain, A. Srivastava, T. Stuart, L. B. Fleming, B. Yeung, A. J. Rogers, J. M. McElrath, C. A. Blish, R. Gottardo, P. Smibert, and R. Satija (2020a) Integrated analysis of multimodal single-cell data. bioRxiv. External Links: Document, Link Cited by: §1.
  • Y. Hao, S. Hao, E. Andersen-Nissen, W. M. Mauck, S. Zheng, A. Butler, M. J. Lee, A. J. Wilk, C. Darby, M. Zagar, P. Hoffman, M. Stoeckius, E. Papalexi, E. P. Mimitou, J. Jain, A. Srivastava, T. Stuart, L. B. Fleming, B. Yeung, A. J. Rogers, J. M. McElrath, C. A. Blish, R. Gottardo, P. Smibert, and R. Satija (2020b) Integrated analysis of multimodal single-cell data. bioRxiv. External Links: Document, Link, https://www.biorxiv.org/content/early/2020/10/12/2020.10.12.335331.full.pdf Cited by: §6.
  • A. J. Izenman (2012) Introduction to manifold learning. Wiley Interdisciplinary Reviews: Computational Statistics 4 (5), pp. 439–446. Cited by: §3.
  • O. Katz, R. Talmon, Y. Lo, and H. Wu (2019) Alternating diffusion maps for multimodal data fusion. Information Fusion 45, pp. 346–360. External Links: Document, Link Cited by: §2, §3, §3.
  • Y. Liu, M. Yang, Y. Deng, G. Su, A. Enninful, C. C. Guo, T. Tebaldi, D. Zhang, D. Kim, Z. Bai, E. Norris, A. Pan, J. Li, Y. Xiao, S. Halene, and R. Fan (2020) High-spatial-resolution multi-omics sequencing via deterministic barcoding in tissue. Cell 183 (6), pp. 1665–1681.e18. External Links: Document, Link Cited by: §6.
  • S. Ma, B. Zhang, L. M. LaFave, A. S. Earl, Z. Chiang, Y. Hu, J. Ding, A. Brack, V. K. Kartha, T. Tay, T. Law, C. Lareau, Y. Hsu, A. Regev, and J. D. Buenrostro (2020) Chromatin potential identified by shared single-cell profiling of RNA and chromatin. Cell 183 (4), pp. 1103–1116.e20. External Links: Document, Link Cited by: §6.
  • K. R. Moon, J. Stanley, D. Burkhardt, D. van Dijk, G. Wolf, and S. Krishnaswamy (2018) Manifold learning-based methods for analyzing single-cell rna-sequencing data. Current Opinion in Systems Biology 7, pp. 36–46. Cited by: §1, §3.
  • K. R. Moon, D. van Dijk, Z. Wang, S. Gigante, D. B. Burkhardt, W. S. Chen, K. Yim, A. v. d. Elzen, M. J. Hirn, R. R. Coifman, N. B. Ivanova, G. Wolf, and S. Krishnaswamy (2019) Visualizing structure and transitions in high-dimensional biological data. Nature Biotechnology 37 (12), pp. 1482–1492. Cited by: §1, Figure 2, §4.2, §4.2, §5.1, §5.3.
  • D. I. Shuman, S. K. Narang, P. Frossard, A. Ortega, and P. Vandergheynst (2013) The emerging field of signal processing on graphs: extending high-dimensional data analysis to networks and other irregular domains. IEEE Signal Processing Magazine 30 (3), pp. 83–98. Cited by: §B.1, §4.2.
  • J. S. Stanley, S. Gigante, G. Wolf, and S. Krishnaswamy (2020) Harmonic alignment. External Links: 1810.00386 Cited by: §B.1, §2.
  • L. Van Der Maaten, E. Postma, and J. Van den Herik (2009) Dimensionality reduction: a comparative. J Mach Learn Res 10, pp. 66–71. Cited by: §3.
  • D. Van Dijk, R. Sharma, J. Nainys, K. Yim, P. Kathail, A. Carr, C. Burdziak, K. R. Moon, C. L. Chaffer, D. Pattabiraman, et al. (2018) Recovering gene interactions from single-cell data using data diffusion. Cell 174 (3), pp. 716–729. Cited by: §A.2, §1, §5.2.
  • K. D. Yang, A. Belyaeva, S. Venkatachalapathy, K. Damodaran, A. Katcoff, A. Radhakrishnan, G. V. Shivashankar, and C. Uhler (2021) Multi-domain translation between single-cell imaging and sequencing data using autoencoders. Nature Communications 12 (1). External Links: Document, Link Cited by: §2.
  • J. Zhu, T. Park, P. Isola, and A. A. Efros (2020) Unpaired image-to-image translation using cycle-consistent adversarial networks. External Links: 1703.10593 Cited by: §1.

Appendix A Trimodal Data Integration

The integrated diffusion framework is generalizable to many modalities. Thus far, we have only applied our approach to toy and biological data generated via two measurement modalities, but theoretically we can integrate data generated from more modalities. In this section, we display results from each of our comparison strategies on trimodal MNIST and high dimensional artificial tree data.

A.1 Visualization

Global Noise Ratio                        1        2        3        4        5        6        7        8        9        10
Feature Concatenation                     0.8670   0.8215   0.4882   0.3098   0.2180   0.1902   0.1313   0.1279   0.1077   0.1044
Joint PCA                                 0.9478   0.8519   0.6919   0.4461   0.3249   0.2037   0.1549   0.1364   0.1246   0.1279
Distance Addition                         0.9512   0.8535   0.7458   0.5825   0.4074   0.2879   0.2710   0.1919   0.1852   0.1616
Canonical Correlation Analysis            0.5774   0.5044   0.2399   0.1009   0.0566   0.0197   0.0094   0.0072   0.0029   0.0008
Autoencoder                               0.0926   0.0926   0.0926   0.0926   0.0926   0.0926   0.0926   0.0926   0.0926   0.0926
Affinity Addition                         0.9562   0.8535   0.7744   0.7525   0.7306   0.7155   0.7441   0.7121   0.7138   0.6633
Affinity Product                          0.9411   0.8030   0.7407   0.6296   0.6296   0.6145   0.5909   0.5791   0.5758   0.5640
Alternating Diffusion                     0.9310   0.8165   0.7323   0.6465   0.6094   0.6263   0.5370   0.5488   0.5000   0.5320
Powered Alternating Diffusion             0.9242   0.8249   0.7879   0.7542   0.7407   0.7643   0.7542   0.7525   0.7172   0.7559
Locally Corrected Alternating Diffusion   0.9343   0.7508   0.6515   0.5421   0.4848   0.4057   0.2778   0.4057   0.4175   0.3687
Integrated Diffusion (our method)         0.9209   0.7710   0.7239   0.6801   0.7071   0.7003   0.6768   0.6869   0.6667   0.6128

Table 5: Classification accuracy of MNIST digits from embedding generated by a variety of multimodal integration strategies on three noisy data modalities.

To quantify the differences in visualizations produced by differing multimodal integration strategies, we leveraged the same experimental setup as before on tri-modal MNIST and artificial tree data. In these experiments, instead of generating two data modalities by adding varying amounts of local and global noise, we generated three modalities. To simulate real data, we added increasing amounts of noise to two of the modalities, while keeping the third fixed, thus establishing a ratio between noise levels.

We compared diffusion operator constructions based on multimodal feature concatenation, distance addition, affinity addition, affinity multiplication and alternating diffusion. We also compared to non-diffusion-based approaches: feature-concatenated PCA, CCA and a feature-concatenated autoencoder, as before. We additionally performed ablation studies, comparing these techniques to variants of our diffusion operator: alternating diffusion with local low-rank approximation, alternating diffusion with modality-specific powering of diffusion operators via the spectral entropy ratio, and finally our full integrated diffusion approach.

As before, for our tri-modal MNIST comparisons, we integrated all modalities of data and calculated the first 20 diffusion map components by eigendecomposition of the powered diffusion operator for our diffusion-based comparisons. From this embedding we train a kNN-classifier to predict the MNIST digit of origin. For our non-diffusion-based comparisons, an embedding is created and used for classification. For our tree comparisons, we reconstruct the original noiseless tree from the tri-modal locally noisy trees using our integrated approaches, evaluating performance using DeMAP.

All trimodal integration strategies performed comparably when all modalities had a similar degree of local and global noise. As the difference in global noise increased in our MNIST embedding classification task, however, strategies that accounted for global information with modality-specific diffusion operator powering outperformed strategies that did not, as well as non-diffusion-based strategies (Table 5). When embedding trees with varying degrees of local branch-specific noise, methods that perform local correction significantly outperformed methods that did not, as well as non-diffusion-based strategies (Table 6). These findings are in line with our previous results on integrated bimodal data visualization.

Local Noise Ratio                         1        2         3         4         5         6         7         8         9         10
Feature Concatenation                     0.4830   0.0714    0.1968    0.1583    0.2275    0.1840    0.2009    0.2037    0.1972    0.1990
Joint PCA                                 0.8594   0.8509    0.8376    0.8164    0.7852    0.7432    0.6910    0.6313    0.5683    0.5063
Distance Addition                         0.4729   0.3035    0.1989    0.1457    0.1607    0.1252    0.0587    0.0137    0.0260    0.0404
Canonical Correlation Analysis            0.2253   0.7765    0.3618    0.0294    -0.0077   -0.0096   -0.0079   -0.0059   -0.0042   -0.0029
Autoencoder                               0.0099   -0.0021   -0.0132   0.02809   -0.0051   -0.0089   0.0007    -0.0060   0.0139    0.0221
Affinity Addition                         0.5405   0.1590    0.4040    0.4360    0.4371    0.4512    0.4752    0.4853    0.4769    0.4636
Affinity Product                          0.4542   0.5712    0.6152    0.6505    0.6130    0.4532    0.2885    0.1794    0.1383    0.1344
Alternating Diffusion                     0.5807   0.5983    0.5892    0.5788    0.5478    0.4712    0.4265    0.3638    0.2862    0.2283
Powered Alternating Diffusion             0.6089   0.6336    0.6728    0.6449    0.6079    0.4573    0.5897    0.5065    0.1824    0.2605
Locally Corrected Alternating Diffusion   0.6908   0.6521    0.6177    0.6835    0.7860    0.8802    0.8985    0.8986    0.8830    0.8788
Integrated Diffusion (our method)         0.7351   0.6533    0.7187    0.7700    0.8354    0.8919    0.9007    0.8769    0.8635    0.8266

Table 6: DeMAP correlations comparing geodesic distances on ground truth noiseless high dimensional artificial tree data with multimodal embeddings created by a variety of techniques on three noisy data modalities.

A.2 Denoising

Here, we compare our integrated diffusion approach to other approaches for data denoising. As before, we created trimodal MNIST data by adding differing amounts of global noise. After computing the joint diffusion operator with each of the comparison methods, we filter the noisier MNIST modality as done previously Van Dijk et al. (2018), as can be seen in Figure 3. To obtain quantitative results, we train a kNN-classifier on the denoised pixel values to determine how well each operator recovers the ground truth. As shown in Table 7, across denoising comparisons, classification accuracy on increasingly noisy MNIST digits was best recovered by integrated diffusion followed by powered alternating diffusion, both of which account for global information within each noisy modality.

Global Noise Ratio                        1        2        3        4        5        6        7        8        9        10
Feature Concatenation                     0.9529   0.8316   0.4596   0.2626   0.1919   0.1616   0.1229   0.1448   0.1094   0.1313
Joint PCA                                 0.4829   0.4852   0.2422   0.0962   0.0541   0.0185   0.0085   0.0072   0.0053   0.0020
Distance Addition                         0.9663   0.8283   0.6667   0.4192   0.2929   0.2172   0.2138   0.1599   0.1633   0.1448
Canonical Correlation Analysis            0.8922   0.6481   0.4074   0.2659   0.2239   0.2003   0.1835   0.1432   0.1448   0.1178
Autoencoder                               0.0926   0.0926   0.0926   0.0926   0.0926   0.0926   0.0926   0.0926   0.0926   0.0926
Affinity Addition                         0.9209   0.7828   0.5943   0.4074   0.3552   0.2778   0.2508   0.2037   0.1869   0.1852
Affinity Product                          0.8569   0.5067   0.3030   0.1902   0.1734   0.1835   0.1481   0.1448   0.1616   0.1330
Alternating Diffusion                     0.9293   0.8754   0.8064   0.7374   0.6852   0.6566   0.5774   0.5690   0.5471   0.5286
Powered Alternating Diffusion             0.9074   0.8872   0.8721   0.8822   0.8535   0.8300   0.8300   0.8098   0.8013   0.8013
Locally Corrected Alternating Diffusion   0.8569   0.5067   0.3030   0.1902   0.1734   0.1835   0.1481   0.1448   0.1616   0.1330
Integrated Diffusion (our method)         0.9024   0.8788   0.8737   0.8754   0.8603   0.8771   0.8721   0.8721   0.8552   0.8569

Table 7: Comparing MNIST denoising pixel classification accuracy with differing multimodal denoising strategies on three noisy data modalities.

A.3 Clustering

Global Noise Ratio                        1        2        3        4        5        6        7         8        9         10
Feature Concatenation                     0.0002   0.0375   0.0001   0.0000   0.0000   0.0000   -0.0001   0.0000   0.0000    -0.0001
Joint PCA                                 0.5774   0.5044   0.2399   0.1009   0.0566   0.0197   0.0094    0.0072   0.0029    0.0011
Distance Addition                         0.0001   0.0023   0.0000   0.0000   0.0002   0.0002   0.0000    0.0000   -0.0001   0.0000
Canonical Correlation Analysis            0.4551   0.1904   0.0873   0.0402   0.0270   0.0071   0.0112    0.0062   0.0042    0.0033
Autoencoder                               0.0000   0.0000   0.0000   0.0000   0.0000   0.0000   0.0000    0.0000   0.0000    0.0000
Affinity Addition                         0.0386   0.0768   0.0002   0.0040   0.0003   0.0000   0.0007    0.0001   0.0002    0.0001
Affinity Product                          0.3451   0.2963   0.0261   0.0009   0.0100   0.0012   0.0017    0.0017   0.0021    0.0021
Alternating Diffusion                     0.6436   0.4292   0.0817   0.0376   0.0067   0.0235   0.0558    0.0093   0.0006    0.0601
Powered Alternating Diffusion             0.5752   0.5887   0.3289   0.3766   0.2483   0.0804   0.2397    0.2640   0.1081    0.1599
Locally Corrected Alternating Diffusion   0.3869   0.6265   0.5719   0.5371   0.3334   0.2446   0.1551    0.0875   0.0654    0.0738
Integrated Diffusion (our method)         0.4592   0.7651   0.6832   0.6722   0.7130   0.5919   0.5614    0.5290   0.5676    0.4907

Table 8: Adjusted Rand index between known MNIST labels and cluster labels; the ratio of global noise added is gradually increased to demonstrate the invariance of integrated spectral clustering to differing amounts of noise on three noisy data modalities.

Finally, we compared spectral clustering accuracy across methods by running k-means on both the joint diffusion operators and the compressed feature spaces of the non-diffusion-based methods. As before, we compared the computed cluster labels against ground truth MNIST labels using the adjusted Rand index (ARI). Using the noisy trimodal MNIST data modalities previously generated, we computed 10 clusters with each method and measured clustering accuracy. While clustering accuracy remained poor throughout all comparisons, it remained highest when clustering the powered alternating diffusion and integrated diffusion operators, both of which account for differences in global levels of information in each modality. Furthermore, as noise increased, integrated diffusion, which also corrects local noise, outperformed powered alternating diffusion (Table 8).

Appendix B Methods Details

B.1 Fast Noise Decay, Slow Signal Decay

As discussed in Burkhardt et al. (2020); Stanley et al. (2020); Shuman et al. (2013) among other sources, the eigenvectors of the diffusion operator (closely related to the Graph Laplacian) form frequency harmonics of the associated graph. In the case of the diffusion operator, the high-eigenvalued eigenvectors are low frequency harmonics and the low-eigenvalued eigenvectors represent high-frequency components which are often noise in data (causing nearby points to seem as if they have significantly different features). Therefore, the diffusion operator is often powered to reduce the noise.

To see this, we can first inspect the spectral properties of the diffusion operator defined on our data manifold as it is increasingly powered. By observing the distribution of eigenvalues as the diffusion timescale changes, we can see that smaller eigenvalues are gradually "shaved" off of the spectrum (Figure 7). Thus, powering the diffusion operator can be viewed as a low-pass filter which removes high-frequency noise present in the data.

For numeric stability, we choose a symmetric operator $\mathbf{M}$ that is conjugate to the asymmetric operator $\mathbf{P}$ when powering the diffusion operator:

$$\mathbf{M} = \mathbf{D}^{-1/2} \mathbf{K} \mathbf{D}^{-1/2}, \qquad (5)$$

where $\mathbf{D}$ is the diagonal matrix containing the node degrees (i.e., row sums of the affinity matrix $\mathbf{K}$). Note that the matrix $\mathbf{M}$ is also positive semidefinite and has the same set of non-negative eigenvalues $\lambda_i$ as $\mathbf{P}$. We can then normalize these eigenvalues to obtain a probability distribution $\eta$, such that $\eta_i = \lambda_i / \sum_j \lambda_j$.
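A sketch of this symmetric conjugation trick; the point of the construction is that a symmetric eigensolver can be used (the function name is ours):

```python
import numpy as np

def powered_spectrum(K, t):
    """Eigenvalue distribution of P^t via the symmetric conjugate M of Eq. (5)."""
    d_inv_sqrt = 1.0 / np.sqrt(K.sum(axis=1))
    M = d_inv_sqrt[:, None] * K * d_inv_sqrt[None, :]   # M = D^{-1/2} K D^{-1/2}
    lam = np.linalg.eigvalsh(M)                         # symmetric solver: real eigenvalues
    lam_t = np.abs(lam) ** t                            # spectrum after powering
    return lam_t / lam_t.sum()                          # normalized distribution eta
```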

Figure 7: Spectra of the diffusion operator after increasing timescales; here t = 1,2,3,4,5

For a particular dataset, we determine the powering necessary to denoise the data while retaining signal by quantifying the rate of information decay. We hypothesize that the noise dimensions, which have low eigenvalues (and do not explain much of the data), decay quickly, while signal values are harder to "shave off." Because of this difference in the rates of eigenvalue decay, we can estimate the optimal timescale at which to power our diffusion operator such that maximal signal is preserved while the largest sources of global noise are smoothed out.

In order to compute the amount of information contained in the operator after any powering, we compute its associated spectral entropy. When we plotted the spectral entropy, our hypothesis of fast noise decay and slow signal decay was confirmed by the negative exponential shape of the spectral entropy curve (Figure 6B). We generally choose the "elbow point" of the spectral entropy versus timescale curve.

B.2 Spectral entropy as a measure of operator information

To measure the information contained in an operator, we compute the entropy of its eigenvalue distribution. This measures how information is spread through the eigenspace of the diffusion operator:

$$H = -\sum_i \eta_i \log \eta_i. \qquad (6)$$

Note that this can also be thought of as the classic Shannon entropy calculated on the normalized spectrum of the diffusion operator. Since $-\eta \log \eta$ vanishes as $\eta \to 0$, the spectral entropy is largely determined by the significant eigenvalues corresponding to the "signal-rich" diffusion dimensions.

B.3 Neural Network Methods

We compared our diffusion map-based approaches to several neural network models: an autoencoder trained on concatenated features from both views (the "joint" model), an autoencoder adapted for domain transfer, and a cycle GAN.

For our joint autoencoders, our architecture for the classification task (handwritten digits) was as follows: an input layer of 128 (2 x 64) nodes, followed by layers of 55, 32, 32, and 20 nodes (at the bottleneck). On the tree dataset, our architecture was: an input layer of 200 (2 x 100) nodes, followed by layers of 128, 64, 32, and 20 nodes at the bottleneck. We chose a 20-dimensional representational space to stay consistent with our choice to draw the first 20 diffusion components when assessing diffusion based methods.
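A PyTorch sketch of the joint autoencoder for the digit task, matching the layer sizes stated above (the activations and the mirrored decoder are assumptions, as the text does not specify them):

```python
import torch.nn as nn

class JointAutoencoder(nn.Module):
    """Autoencoder over concatenated views: 128 -> 55 -> 32 -> 32 -> 20 bottleneck."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(128, 55), nn.ReLU(),
            nn.Linear(55, 32), nn.ReLU(),
            nn.Linear(32, 32), nn.ReLU(),
            nn.Linear(32, 20),              # 20-d code, matching the 20 diffusion components
        )
        self.decoder = nn.Sequential(       # mirror of the encoder for reconstruction
            nn.Linear(20, 32), nn.ReLU(),
            nn.Linear(32, 32), nn.ReLU(),
            nn.Linear(32, 55), nn.ReLU(),
            nn.Linear(55, 128),
        )

    def forward(self, x):
        z = self.encoder(x)
        return self.decoder(z), z           # reconstruction and latent embedding
```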

Our domain transfer-adapted autoencoder followed largely the same architecture as the joint model; however, instead of learning to reconstruct a joint input, the domain transfer model aimed to reconstruct the same points in the other view. For example, corrupted digits would serve as input to the model, which would aim to learn a representation from which the other view of the digits could be reproduced.

The final neural model tested was a cycle GAN, adopted from the field of domain transfer. Briefly, this model aims to learn two mappings, from a "source" to a "target" distribution and vice versa. Because there is no shared latent space between the views in this architecture, we assessed classification accuracy by classifying "uncorrupted" mappings from the "noisy" view (we took this approach when comparing against canonical correlation analysis as well).

Note that due to the complexity of making numerous pairwise latent spaces for the domain transfer autoencoders and cycle GANs, we opted to omit these methods from comparisons on the trimodal data.