1 Introduction
There are many examples of data that can be thought of as a composition of basic features. For such data, an efficient description can often be constructed by a – possibly weighted – enumeration of the basic features that are present in a single observation.
As a first example, we could describe the genomes of single organisms as compositions of genes and gene clusters, where the presence or absence of specific genes is determined by the evolutionary history and further reflected in the presence or absence of functions and biochemical pathways the organism has at its disposal [20, 1]. Depending on the level of description, such a composition is not necessarily a linear superposition of the basic features. It has recently been estimated, for example, that due to horizontal gene transfer the genome of Homo sapiens outside of Africa is partly composed of Neanderthal DNA [23], but no single genomic locus is actually a superposition. Nonetheless, such a description conveys a lot of information: a machine learning algorithm that could be trained in an unsupervised manner on a large number of genomes and automatically output such coefficients would be very valuable in the field of comparative genomics [16].

As a further example we can take the gene expression signature of a single cell, which is determined by the activity of modules of genes that are activated depending on cell identity and environmental conditions [25]. Since there are far fewer such gene modules than genes, the activity of these modules can be used as an efficient description of the state of the cell. The inference of such modules based on single-cell genomic data, and downstream tasks like clustering cells into subtypes, is an ongoing field of research [26].
On an even more fine-grained level, there have recently been several successful efforts to model protein sequence data as a composition of features that arise from structural and functional constraints and are also influenced by phylogeny [29, 27, 28]. This leads to several possible patterns of amino acids making up functional groups or contacts between amino acids, and the presence or absence of these patterns can be used as features and inferred from aligned sequence data of homologous proteins.
There are also many examples of composite data outside of biology. An immediate example is images that contain multiple objects. The efficient extraction of such objects, which can be seen as basic features, has important applications, for example for self-driving cars [24]. In such applications one is of course also interested in the number and locations of the objects, but a basic description of an image, as an enumeration of the objects present, can be part of a general pipeline.
As a final example, we note that in natural language processing documents are often modeled as a mixture of topics, each of which gives a contribution to different aspects of the document: for example, in ref. [6] each topic contributes to the distribution of words. As in the case of genomes, the actual document is far from being a superposition of the topics, but such a description is nonetheless useful in fields like text classification.

A natural candidate model for finding efficient representations are undercomplete, sparse autoencoders [18]. These are multilayer neural networks that are trained (in an unsupervised fashion, i.e. on unlabeled data) to realize the identity function. Their goal is to learn a compressed parametrization of the data distribution: to this end, the training is performed under the constraint that the internal representation in a specific layer, called the bottleneck, is sparse and low-dimensional. Under the assumption that only a few basic features contribute to any given observation and that the number of basic features is smaller than the dimension of the bottleneck, such an internal representation can be expected to identify the basic features that describe the data.

In this work, we present evidence that it is indeed possible to find representations of composite data in terms of basic features, but that this process is very sensitive to both overfitting and underfitting: if the imposed sparsity is not strong enough, the resulting representation does not correspond to the basic features; if it is too strong, the dictionary of basic features is not represented completely.
We therefore present a modified version of the autoencoder, the replicated autoencoder, which is designed to find good solutions in cases where overfitting is a danger. We test this approach on synthetic data and on real, biological data. In both cases we find that the replicated autoencoder yields more natural representations of the data.
2 Results
2.1 Natural representations of composite data
By composite data we mean data in which a single observation $x$, represented by a vector of $N$ real numbers, can be roughly seen as a function of basic features $f_i$ and real weights $c_i$ for $i = 1, \dots, D$:

$$x \approx \varphi\Big(\sum_{i=1}^{D} c_i f_i\Big).$$

The basic features $f_i$ could be either one-hot encodings for categorical features or real-valued vectors. We call the set of all $f_i$ the dictionary and $D$ the size of the dictionary. The weight $c_i$ determines how strongly the basic feature $f_i$ contributes to the observation and could be either binary, encoding presence or absence of a basic feature, or a positive real number that quantifies the contribution. The easiest version of composite data would be a linear superposition, but we do not limit ourselves to this case. In fact, the synthetic data shown below is not a linear superposition of the basic features. We also do not assume that the equality holds exactly but allow for noise and other factors to influence the observation $x$.

In this work we study multilayer neural networks trained in an unsupervised fashion on composite data. In such a network, a natural way of representing data that is a composition of basic features is to use the activations of one of the hidden layers as a representation of the input and match each of the hidden units of this layer with one basic feature. For a specific input coming from this data, only the neurons corresponding to the basic features present in that input should then show a significant activation, and the activation should be correlated with the corresponding weight. Under the assumption that the number of features included in each example is much lower than the total number of possible features, we expect such a representation of composite data to be sparse. We do not assume that the size of the dictionary $D$ or the basic features are known, but infer them from the data.

2.2 Model
We train feedforward autoencoders (AE) with stochastic gradient descent (SGD), minimizing the reconstruction error $L_{\mathrm{rec}}$. The basic AE model we consider is made of an input layer, three hidden layers and an output layer, made respectively of $N$, $M$, $H$, $M$ and $N$ units, with $H < M$, see fig. 1. The smallest layer, with size $H$, is the bottleneck. We use the activations $h_j$ in the bottleneck as the representation of the input.

In each layer except the last one we use as activation function the rectified linear unit (ReLU), defined as $\mathrm{ReLU}(x) = \max(0, x)$. In order to obtain a sparse representation in the bottleneck, we add an L1 penalty for the activations of the neurons of the central hidden layer: $\sum_{j=1}^{H} |h_j|$. To summarize, for a single autoencoder (SAE) we use the loss function

$$L_{\mathrm{SAE}} = L_{\mathrm{rec}} + \lambda \sum_{j=1}^{H} |h_j| \qquad (1)$$

where $\lambda$ is the regularizer strength. Higher values of $\lambda$ enforce lower activations of the units in the hidden layer and higher overall sparsity (for a detailed discussion of the general effects of this regularizer, see for example ref. [14]).

We observe that with higher $\lambda$ there are more units that show little to no activation on any input pattern in the training set: the autoencoder shuts down some units in a trade-off between the two terms in the loss function. We call the number $D^*$ of active units the inferred dictionary size; $H - D^*$ is then the number of deactivated units. We infer $D^*$, and therefore the dictionary size, from the data (see below).
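As an illustration, the bookkeeping behind Eq. (1) and the inferred dictionary size $D^*$ can be sketched as follows. This is a minimal numpy sketch with hypothetical layer sizes, random (untrained) weights and an arbitrary activation threshold; none of these choices are the settings used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy sizes: input N, intermediate M, bottleneck H.
N, M, H = 20, 12, 8
W1 = rng.normal(0, 0.1, (M, N))
W2 = rng.normal(0, 0.1, (H, M))
W3 = rng.normal(0, 0.1, (M, H))
W4 = rng.normal(0, 0.1, (N, M))

def relu(z):
    return np.maximum(0.0, z)

def forward(x):
    """Return the reconstruction and the bottleneck activations h."""
    h = relu(W2 @ relu(W1 @ x))        # bottleneck activations
    return W4 @ relu(W3 @ h), h

def sae_loss(x, lam):
    """Eq. (1): reconstruction error plus the L1 penalty on the bottleneck."""
    x_hat, h = forward(x)
    return np.sum((x - x_hat) ** 2) + lam * np.sum(np.abs(h))

def inferred_dictionary_size(X, eps=1e-6):
    """D*: units with significant activation on at least one input."""
    acts = np.stack([forward(x)[1] for x in X])
    return int(np.sum(acts.max(axis=0) > eps))
```

Training would minimize `sae_loss` over the weights with SGD; here the weights are random, so the functions only illustrate how the two loss terms and the active-unit count are computed.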
2.3 Replicated systems and overfitting
If the data is indeed composite in nature, we expect a representation that captures the true underlying contribution of features to generalize well. We therefore take special measures to avoid overfitting, which would decrease generalization performance and would constitute evidence that the current representation in the hidden units does not correspond to the basic features in the data. We note that in neural networks, overfitting has been connected to sharp minima of the loss function [3, 7, 4]. To avoid such minima, we modify the optimization procedure to prefer weights that lie in the vicinity of other weights with low loss, which is a measure of the flatness of the loss landscape. Borrowing physics terminology, we call the number of low-loss weight configurations around a specific set of weights the "local entropy" of those weights. The analogy is that while in physics the entropy is a measure of how many states a system in equilibrium can occupy when the states are weighted with their probabilities, in our case the local entropy determines how many sets of weights lie around some specific solution when weighted with the likelihood on the training set. Intuitively, one expects weights with a high local entropy to generalize well, since they are more robust with respect to perturbations of the parameters and the data, and therefore less likely to be an artifact of overfitting. In fact, such flat minima have already been found to have better generalization properties for some deep architectures [7]. We call the optimization procedure that finds such minima robust optimization.

More precisely, the local (free) entropy of a certain configuration $\tilde{W}$ of the weights is defined as [3]:

$$\Phi(\tilde{W}; \beta, \gamma) = \log \int dW \, \exp\big(-\beta L(W) - \gamma\, d(W, \tilde{W})\big) \qquad (2)$$

where the function $d$ measures the distance between the weights: several choices are possible, but in the rest of the work we use exclusively the Euclidean distance. The parameter $\gamma$ controls indirectly the locality, i.e. the size of the portion of landscape around $\tilde{W}$ that we are considering (a larger $\gamma$ corresponds to a smaller radius). The parameter $\beta$ has the role of an inverse temperature in physics, and it controls indirectly the amount of flatness required of the local landscape (a larger $\beta$ corresponds to flatter landscapes).
Computing the local entropy is expensive and impractical in most cases. However, as described in detail in ref. [3], if we use the negative local entropy as an energy function (i.e. as the objective function that we wish to optimize) with an associated fictitious "inverse temperature" $y$ that we choose to be a positive integer, the canonical partition function of the system is amenable to an equivalent description that can be implemented in a straightforward way: we add $y$ replicas $W_1, \dots, W_y$ of our model, and we add an interaction between each replica and the central (original) configuration $W_c$ that forces them to be at a certain distance. We thus end up with the new replicated objective function

$$L_{\mathrm{RAE}} = \sum_{a=1}^{y} \Big( L(W_a) + \gamma\, d(W_a, W_c) \Big) \qquad (3)$$

where $L(W_a)$ is the total loss of the replica $a$. It is important at this stage to observe that the canonical physical description presupposes a noisy optimization process in which the amount of noise is regulated by some inverse temperature $\beta$, while in this work (following ref. [3]) we rely on the noise provided by SGD instead, thereby using the minibatch size and the learning rate as "equivalent" control parameters. Relatedly, we should also note that, although the interaction term is purely attractive, the replicas will not collapse unless the coupling coefficient $\gamma$ is very large, due to the presence of noise in the optimization. Thus, in our protocol, the coefficient $\gamma$ is initialized to some small value and gradually increased at each training epoch.
Besides the analytical argument, the intuitive reason why this procedure achieves more robust optimization results is that the interaction prevents replicas from remaining trapped in bad minima of the loss: if one of the replicas finds an overfitted solution and this overfitting is associated with a sharp minimum, it is likely that the other replicas will not be at the same minimum, but at a higher point of the loss function. The overfitted replica will then be pulled out of the sharp minimum as the interaction term grows (see figure 1 for a sketch).
The robust optimization protocol that we have used throughout this work can then be summarized as follows (additional details can be found in the Materials and Methods section 5.3 Learning algorithm). We train $y$ autoencoders (the replicas) with different initializations, coupled with a central autoencoder $W_c$, which we call the center. Every replica is trained on batches from the training set with normal SGD, but we add a gradually increasing coupling term between every replica and the central autoencoder, see Eq. (3). At the end of the training procedure, we have trained $y$ replicas and one center. All of these models are autoencoders that can be used for prediction or representation. We typically discard the replicas and only use the center. We call an autoencoder that is trained using this procedure a replicated autoencoder (RAE).

In the rest of this work, we ask whether this robust optimization is helpful for finding natural representations of composite data. We test this idea first on synthetic data, where we control the generative process, and then extend the approach to protein sequence data. In the latter case the exact generative process is unknown, but a coarse approximation to the basic features can be found in the taxonomic labels.
2.4 Synthetic data
Following ref. [21], we generate synthetic datasets of examples obtained as superpositions of basic features, modeled as follows. We consider a dictionary of basic features $f_1, \dots, f_D$, where $D$ is the size of the dictionary. In this setup, we choose each $f_i$ as a random binary ($0$ or $1$) sparse vector of length $N$. We use binary weights $c_i \in \{0, 1\}$ to control the contribution of the basic feature $f_i$ to the observation, and set only a small number of the weights to $1$ for each observation. The final observation is defined to be

$$x = \varphi\Big(\sum_{i=1}^{D} c_i f_i\Big) \qquad (4)$$

where $\varphi$ is a nonlinear function applied elementwise. Note that this is not a simple linear superposition due to the elementwise function $\varphi$. The purpose of this way of generating data is to let all basic features have a potential impact on every observation while keeping the task of inferring their contributions, and the basic features themselves, nontrivial. A possible representation of the data is one where each feature in the dictionary corresponds to a single hidden unit in the central layer of the autoencoder. We call this the natural representation of the synthetic dataset. This representation needs $D$ hidden units. For this reason we expect the autoencoder to be able to find the natural representation when $D \leq H$, given that an appropriate value for $\lambda$ has been used. Additional details on synthetic data generation can be found in the Materials and Methods section 5.1 Synthetic data.
2.5 RAE versus SAE on synthetic data
We train a single autoencoder (SAE) and a replicated autoencoder (RAE) on the synthetic data. We compare the reconstruction performance on unseen examples, the regularization loss and the ability to infer the basic features based on the hidden units of the bottleneck.
The striking difference between the RAE and the SAE is that the RAE is able to achieve a better reconstruction performance at high sparsity, i.e. at large values of $\lambda$ (see fig. 2A). This is connected to the observation that the RAE has a number $D^*$ of active units that is significantly larger than that of the SAE, while keeping a similar L1 norm for most inputs. This might sound paradoxical, but we recall here that $D^*$ is the number of units in the bottleneck that show a significant activation for at least one input from the training set. This is not directly suppressed by the L1 regularization on the bottleneck, which penalizes cases in which many units are activated for a single input. There are thus different ways to realize the same overall L1 norm. The SAE deactivates more units completely, while using a larger fraction of the remaining active units for each input on average. The RAE, on the other hand, deactivates fewer units completely, but uses a smaller fraction of the active units for every input. Another way of stating this is that the RAE uses representations that are more distributed over all available units and keeps $D^*$ closer to $H$ (see S2 Fig. for an example of this behaviour).
Both SAE and RAE are able to retrieve all the features of a sufficiently small dictionary if $\lambda$ is chosen appropriately. At the same time we observe that for the RAE, the range of $\lambda$ for which this is true is significantly wider than for the SAE (fig. 2A, solid line). If we plot the loss as a function of $\lambda$, we observe that it grows slowly up to a certain knee point $\lambda^*$ (fig. 2A, dashed line). This point coincides with the maximum number of retrieved features. This can be interpreted as a phase transition between overfitting and underfitting, and for $\lambda > \lambda^*$ the performance deteriorates quickly.

In general, the retrieval of features is better for smaller dictionaries for both models, but for larger dictionary sizes the RAE retrieves a higher number of features, see fig. 2B: for the smallest dictionaries both the SAE and the RAE retrieve 100% of the features, for intermediate sizes only the RAE is able to do so, and for the largest dictionaries the RAE finds more features than the SAE.
2.6 Biological data: protein families
In this section we test the capability of the RAE to infer basic features on real data. We use sequence data of homologous proteins because it allows a reasonable interpretation in terms of composition: due to the coevolution of residues that are part of structural contacts or functional groups, certain patterns of amino acids arise. These patterns can be exploited to predict contacts within the structure of a single protein [22, 8], infer protein interaction networks [9, 11] and paralogs [15, 5], model evolutionary landscapes [13] and predict the pathogenicity of mutations in humans [17, 12]. Since these patterns are inheritable, we expect their presence to be partly determined by the phylogenetic history of the organism and therefore to be correlated with its taxonomy. We therefore argue that a 'natural' representation of an amino acid sequence should be correlated with the taxonomy of the organism.
We thus proceed as follows. We consider a wide variety of protein families (see the Materials and Methods section 5.2 Protein data), and we use aligned sequences in a one-hot encoding as the input of the autoencoders. Each family is partitioned into a train set, a test set and a validation set in the proportions 80% / 10% / 10%. We then test two different measures of correlation between the representations that the SAE and the RAE produce for the sequences and their taxonomic labels. Note that, analogously to the case of synthetic data, the training of the autoencoders is agnostic about these labels.
The behavior of the autoencoders trained on protein sequence data is qualitatively similar to what we saw for synthetic data: there is always a knee point in the curve of the loss (both train and test) as a function of $\lambda$, see S3 Fig., S4 Fig. and S5 Fig. We expect that the range of values around the knee point corresponds to a representation that is close to the underlying biology.
We determine the knee point for a given protein family by fitting the error curve (not directly the loss) on the test set with two connected line segments and then using the point where they intersect as $\lambda^*$. All the subsequent analysis is done on the validation set, using the autoencoder trained with the identified $\lambda^*$. See Materials and Methods, section 5.4 Knee Point Identification, for a more detailed description of the procedure.
We measure the taxonomic information captured by the hidden layer in two ways. First, in analogy with the synthetic data, we test the very hopeful hypothesis that each taxonomic label corresponds to a single hidden unit; we examine this idea in the next paragraph. Secondly, we ask how well a clustering of the sequences based on the hidden representations correlates with the taxonomic labels, in comparison to a clustering based directly on the amino acid sequences.
Since the taxonomic classification is modeled by a tree, we consider the labels as organized by their depth $d$, that is, their distance from the root of the tree. For example, the root has $d = 0$, the label 'Bacteria' has $d = 1$, and 'Proteobacteria' has $d = 2$. Every label is associated with one or more sequences in the training set and every sequence corresponds to several labels. The labels near the root are the most populated, while the labels deeper in the tree are sparsely populated. We expect the labels in the first few levels to be more correlated with the hidden units, since deeper labels correspond to only a few sequences in the training set. For these reasons we restricted the following analysis to labels up to a maximum depth, with the additional condition that they must contain at least 20 sequences from the training set.
2.6.1 Neuron-taxon correlation
Given a taxonomic label indexed by $l$, we consider the binary variable $t_l$ that, for each sequence in the MSA, is equal to $1$ if the sequence belongs to that taxon and is equal to $0$ otherwise; then, after the AE has been trained, we compute (on the training set) the correlation matrix $C_{lj}$ between the variables $t_l$ and the activations $h_j$ of the hidden units. For every label $l$ we select the most correlated unit $j(l)$. Then we define a score $S$ as the average correlation (on the test set) of the most correlated units over all labels:

$$S = \frac{1}{L} \sum_{l=1}^{L} C_{l\, j(l)} \qquad (5)$$

where $L$ is the total number of labels considered in an MSA.
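The score of Eq. (5) can be computed with a short sketch like the following. Names and shapes are our own; correlations are Pearson coefficients, and labels or units with zero variance are skipped. In the paper the most correlated unit is selected on the training set and the correlation re-evaluated on the test set; here, for brevity, both happen on the same data.

```python
import numpy as np

def neuron_taxon_score(T, A):
    """Score S of Eq. (5).

    T : (n_sequences, n_labels) binary membership variables t_l
    A : (n_sequences, n_units) bottleneck activations h_j
    For each label, find the most correlated unit; return the mean
    absolute correlation over labels.
    """
    n_labels, n_units = T.shape[1], A.shape[1]
    C = np.zeros((n_labels, n_units))
    for l in range(n_labels):
        for j in range(n_units):
            if T[:, l].std() > 0 and A[:, j].std() > 0:
                C[l, j] = np.corrcoef(T[:, l], A[:, j])[0, 1]
    return np.abs(C).max(axis=1).mean()
```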
The results are shown in Fig. 3: the RAE consistently achieves a higher score than the SAE. It is useful to note the general trend of this score: the more sequences in the dataset, the worse the score. Additionally, the protein families of ribosomal domains have a much higher score, which is probably due to the fact that ribosomes are well sampled (see the 3 Discussion section for more on this).
2.6.2 Clustering data in the latent space
For a given label $l$ at depth $d$, we consider the sublabels at depth $d + 1$ branching from $l$; we select the subset of the training set corresponding to the label $l$, then we compute the centroids of the clusters corresponding to the sublabels by averaging the sequences with each sublabel. Given a new sequence from the test set belonging to $l$, we assign the sublabel according to the closest centroid. In order to perform this clustering procedure on disjoint subsets, in such a way that the accuracies are independent from each other, we fix the depth $d$ and consider only labels found at that depth. We choose the depth that provides the largest variety of sublabels with a high number of examples in the protein families we considered.
First we run this procedure using the original sequences, the same ones on which we trained the AEs. Then we repeat the clustering using, for each sequence, its representation in terms of the hidden units of the AEs. We ask whether this representation improves the accuracy of the clustering, depending on whether we use the representation from RAE or SAE.
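A minimal sketch of the centroid-based sublabel assignment described above (function and variable names are our own):

```python
import numpy as np

def assign_sublabels(X_train, sub_train, X_test):
    """Compute one centroid per sublabel from the training vectors, then
    assign each test vector the sublabel of the closest centroid."""
    sub_train = np.asarray(sub_train)
    labels = sorted(set(sub_train.tolist()))
    centroids = np.stack([X_train[sub_train == l].mean(axis=0) for l in labels])
    dists = np.linalg.norm(X_test[:, None, :] - centroids[None, :, :], axis=2)
    return [labels[i] for i in dists.argmin(axis=1)]
```

Running this once on the one-hot sequences and once on the bottleneck representations, and comparing the two accuracies, gives the kind of comparison reported in fig. 4.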
The results are shown in fig. 4: the representation learned by the RAE does improve the accuracy for the majority of labels, both with respect to the SAE (bottom-left panel) and with respect to the input space (bottom-right panel).
3 Discussion
In this work we have presented a method to extract representations of composite data that connect to the structure of the underlying generative process. To this end, we combined two techniques that allowed us to recover such representations from the bottleneck of autoencoders trained on the composite data: the first is a regularization that forces the autoencoder to use a sparse representation; the second is the replication of the autoencoder, which changes the properties of the solutions found. We showcased the method on two different datasets. On the first dataset, where we controlled the generative process, we showed that replication allows the extraction of the underlying basic features even in cases where the sparsity constraints are too strong for a single autoencoder. On closer analysis, we found that replication enables the system to effectively disentangle basic features and to specialize parts of the internal representations.
In a second step, we applied the same method to protein sequence data. Since patterns of amino acids are inheritable, we used the correlation between the extracted representations and the phylogenetic labels as a metric for assessing the quality of the representation. We found that the replication of the autoencoder resulted in representations that are closer to biological reality and that the qualitative characteristics of the loss function and internal representations are similar to autoencoders trained on synthetic data.
One intriguing observation is that the point where the internal representation becomes correlated with the basic features is identifiable: the knee point of the loss curve as a function of the regularization parameter $\lambda$ corresponds to the peak performance in feature retrieval. For synthetic data we were able to verify this directly. Near this knee point, each hidden unit represented one basic feature. This also allowed us to infer the number of basic features present in the data (i.e. its inherent dimensionality). For protein sequence data, we observed that the internal representation becomes correlated with taxonomic labels at the knee point. After this knee point, the loss deteriorates quickly, indicating that the autoencoder starts dropping important information from the internal representation.
Interestingly, we found that feature retrieval on synthetic data became more difficult for increasing dictionary sizes. This could be addressed either by using a larger and therefore more expressive architecture or by using a larger training set. We generally expect the size of the training set necessary for the extraction of the basic features to scale with the size of the dictionary [10, 2].
Regarding the difficulty of the feature extraction task, we found a similar behavior on the protein families: Families with more sequences also contain a higher number of labels and are expected to have a wider variety of features. It is further noteworthy that the correlation between taxonomic labels and internal representations was more pronounced for ribosomal domains than for other families with a similar number of sequences (see fig.
3, circled markers). This is probably due to the fact that ribosomes are well-studied systems in many species and that in the databases we used there are more species per sequence for these domains (see tab. 1). This indicates that a well-balanced dataset is an additional factor in the inference of basic features from composite data.

4 Conclusion
In conclusion, we have shown that replicated autoencoders are capable of finding representations of composite data that are meaningful in terms of the underlying biological system. We believe the approach to be very general, since we used no prior knowledge about the biology involved. The work we presented here encourages us to believe that the method could be useful for other data in biology, where the representations and basic features extracted might lead to new biological insights. As one example of a possible direction for future research we point to the increasing number of measurements coming from single-cell transcriptomics [30]. In these data, the basic features are conceptually clearer than in protein sequence data, and we suspect that representations of cell states in terms of gene expression modules could be found. Such representations could in turn be used to cluster cell types or analyze pathologies like Alzheimer's disease [19].
5 Materials and Methods
5.1 Synthetic data
We generate synthetic data points according to eq. (4) with the following characteristics: each example has $N$ components, and we generate training sets and a test set of fixed sizes. We used four different datasets with four different dictionary sizes $D$.
The architecture (fig. 1) is fixed: we used intermediate hidden layers of size $M$ and a bottleneck of size $H$ for all our experiments with synthetic data.
We chose the feature vectors $f_i$ to be random with binary ($0$ or $1$) independent entries, with a fixed average fraction of nonzero components. The coefficients $c_i$ are also binary, sparse and random: they were generated with a fixed probability of being nonzero. However, in order to make the retrieval problem sufficiently difficult, for each pattern we ensured that it contained at least three features (i.e. we discarded and resampled those that did not meet the criterion $\sum_i c_i \geq 3$). The generation of a dataset is therefore parametrized by $N$, $D$, the sparsity of the features, the sparsity of the coefficients and the number of examples. In this work, we fixed the sparsity of the features and the sparsity of the coefficients to constant values. We always chose $D \leq H$.
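The generative process above can be sketched as follows. The sparsity values, the dataset sizes and the elementwise nonlinearity (here taken to be a clip to [0, 1]) are illustrative assumptions of this sketch, not the exact settings of the paper, which leaves $\varphi$ generic.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_composite_data(n_examples, N=50, D=10, p_f=0.1, p_c=0.3):
    """Generate composite observations as in eq. (4).

    F   : dictionary of D random binary sparse features of length N
    c   : binary sparse coefficients, resampled until sum(c) >= 3
    phi : elementwise nonlinearity; a clip to [0, 1] is used here
    """
    F = (rng.random((D, N)) < p_f).astype(float)
    X, C = [], []
    for _ in range(n_examples):
        c = (rng.random(D) < p_c).astype(float)
        while c.sum() < 3:  # ensure at least three features per pattern
            c = (rng.random(D) < p_c).astype(float)
        X.append(np.minimum(1.0, c @ F))  # phi applied elementwise
        C.append(c)
    return np.array(X), np.array(C), F

X, C, F = make_composite_data(100)
```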
Since we work with binary patterns, the activation function of the output layer is chosen to be the sigmoid $\sigma(x) = 1/(1 + e^{-x})$, which sets the range of each output unit between $0$ and $1$. The loss function of choice for these datasets is the mean square error (MSE), which is simply the squared difference between a unit in the input layer and the corresponding unit in the output layer, summed over all the units.
5.2 Protein data
We considered 18 protein families from the PFAM database (tab. 1), selected according to a number of criteria: we want many types of proteins represented, as well as families covering many different partitions of the tree of life; additionally, we chose families with a sufficient number of sequences and species, varying the ratio between these two numbers.
DATASET      n. seq   n. species   n. amm
PF01978.19     4531      1806        68
PF09278.11     6117      2867        65
PF00444.18     6551      5971        38
PF03459.17     8823      4066        64
PF00831.23     9782      9209        57
PF00253.21    10577      8650        54
PF03793.19    20495      4026        63
PF10531.9     22080      7683        58
PF02954.19    35339      5079        42
PF04545.16    35976      8384        50
PF00805.22    38453      3485        40
PF07676.12    48848      6060        38
PF00356.21    49284      5450        46
PF03989.13    60674      8153        48
PF01381.22    72011      9760        55
PF00196.19    85219      6666        57
PF00353.19   101177      2304        36
PF04542.14   110168      8385        71
Source: https://pfam.xfam.org/
Given an input sequence of length $\ell$, we represent each amino acid with a 21-component one-hot encoding: each input sequence is thus a binary vector of length $N = 21\ell$, and the entire dataset is a binary matrix with $N$ columns. The architecture is rescaled according to the sequence length $\ell$: we set the size $M$ of the intermediate layers and the number $H$ of units in the bottleneck accordingly.
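A sketch of the one-hot encoding step (the ordering of the 21-symbol alphabet is a convention we choose here):

```python
import numpy as np

# 20 standard amino acids plus the alignment gap symbol: 21 states per site.
ALPHABET = "ACDEFGHIKLMNPQRSTVWY-"

def one_hot_sequence(seq):
    """Encode an aligned sequence of length l as a binary vector of length 21*l."""
    x = np.zeros((len(seq), len(ALPHABET)))
    for site, aa in enumerate(seq):
        x[site, ALPHABET.index(aa)] = 1.0
    return x.ravel()

# Hypothetical toy alignment of two sequences of length 4.
msa = ["AC-D", "ACGD"]
X = np.stack([one_hot_sequence(s) for s in msa])
```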
Since each amino acid is a categorical variable represented by a one-hot encoding, a common way to compute the reconstruction error for a single amino acid is the cross entropy between the input and the output. To do this, we consider the 21 units $z_s(a)$ in the output layer that describe the site $s$, then we apply a softmax operation so that each unit can be interpreted as a probability

$$p_s(a) = \frac{e^{z_s(a)}}{\sum_{b=1}^{21} e^{z_s(b)}} \qquad (6)$$

and finally we compute the cross entropy

$$L_{\mathrm{rec}}^{(s)} = -\log p_s(a_s^*) \qquad (7)$$

where $a_s^*$ is the index corresponding to the true value of the amino acid at site $s$. The complete loss function is the summation of the cross entropies for each amino acid of the sequence:

$$L_{\mathrm{rec}} = \sum_{s=1}^{\ell} L_{\mathrm{rec}}^{(s)} \qquad (8)$$
Here we choose a linear activation function for the units in the output layer.
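Eqs. (6)-(8) amount to the following computation on the linear outputs (a numpy sketch; `Z` is assumed to hold the 21 output values for each site):

```python
import numpy as np

def site_probabilities(z):
    """Eq. (6): softmax over the 21 output units describing one site."""
    z = z - z.max()  # subtract the max for numerical stability
    e = np.exp(z)
    return e / e.sum()

def reconstruction_loss(Z, true_indices):
    """Eqs. (7)-(8): sum over sites of the cross entropy -log p_s(a_s*)."""
    return sum(-np.log(site_probabilities(z)[a])
               for z, a in zip(Z, true_indices))
```

As a sanity check, with all-zero logits every residue has probability 1/21, so the loss per site is log 21.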
5.3 Learning algorithm
The algorithm we use to train the RAE consists in iterating two alternating steps: a step of SGD on each replica, computed on its own reconstruction loss, followed by a step in which each replica is pushed towards the center and the center towards the replicas. In practice this procedure is similar to elastic-averaging SGD [31], which in turn is related to the optimization of local entropy [3]. The pseudocode for the algorithm is sketched in alg. 1.
We impose an exponential schedule on the coupling between the replicas and the center, namely we take $\gamma(t) = \gamma_0 (1 + \gamma_1)^t$, where $t$ is the time step of the training in units of epochs and $\gamma_0$, $\gamma_1$ are hyperparameters.
The training of the SAE is performed with the same procedure, with just one replica and setting $\gamma = 0$.
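The two alternating steps can be sketched on a toy problem as follows. The toy loss, the learning rate and the cap on the coupling are illustrative choices for this demo; the paper's alg. 1 operates on the autoencoder weights with SGD minibatches and an uncapped schedule.

```python
import numpy as np

rng = np.random.default_rng(0)

def replicated_descent(grad_loss, y=3, dim=2, steps=200, lr=0.1,
                       gamma0=0.01, gamma1=0.05):
    """Alternate a gradient step on each replica's own loss with an elastic
    pull of the replicas towards the center and of the center towards the
    replicas, under an exponentially increasing coupling gamma(t)."""
    replicas = [rng.normal(size=dim) for _ in range(y)]
    center = np.mean(replicas, axis=0)
    for t in range(steps):
        # exponential schedule; capped here only to keep the toy demo stable
        gamma = min(gamma0 * (1.0 + gamma1) ** t, 1.0)
        for a in range(y):
            replicas[a] = (replicas[a]
                           - lr * grad_loss(replicas[a])           # own loss
                           - lr * gamma * (replicas[a] - center))  # pull in
        center = center + lr * gamma * np.mean(
            [r - center for r in replicas], axis=0)                # pull out
    return center, replicas

# Toy convex loss |w - w0|^2 with gradient 2 (w - w0).
w0 = np.array([1.0, -2.0])
center, replicas = replicated_descent(lambda w: 2.0 * (w - w0))
```

On this convex toy problem all replicas and the center end up at the minimum; the interesting behavior described in the text appears only on rugged, non-convex losses.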
In order to set the values of the many hyperparameters of these algorithms, we selected one prototype case among the synthetic datasets and one among the protein datasets, and then proceeded by trial and error to find a regime in which the training converges and performs well. Once we found these values, we assumed that the general performance would not be sensitive to the fine-tuning of the hyperparameters: for this reason we used one set of hyperparameters for every synthetic dataset and another set for all the protein families. We observed them to work well in the majority of cases.
We used one set of hyperparameter values and number of training epochs for all synthetic data, and another for all protein data. In both cases the number of training epochs was sufficient for reaching convergence with respect to the training loss. The batch size was fixed to 50 for all trainings.
We did not use any momentum, since it led to a deterioration of performance across every region of parameters, and even to non-convergence. The reason for this behavior could be that the loss landscape of this optimization problem appears to be highly non-convex, especially when the regularization strength approaches the knee point; momentum-related techniques, on the other hand, are designed to work well when the loss landscape is sufficiently smooth [14].
5.4 Knee Point Identification
Part of our approach is identifying the knee point in the loss curve. To this end, we consider the reconstruction error curve on the test set as a function of the regularization strength $\lambda$. The curve has two parts, separated by the knee point: a slow increase in reconstruction performance (decrease in error), followed by a drastic decrease in reconstruction performance (drastic increase in error) when $\lambda$ becomes too high. We fit the region around the knee point with the function
$$f(\lambda) = \begin{cases} y^* + m_1\,(\lambda - \lambda^*) & \text{if } \lambda \le \lambda^* \\ y^* + m_2\,(\lambda - \lambda^*) & \text{if } \lambda > \lambda^* \end{cases} \qquad (9)$$

which is simply the equation of two straight lines passing through the same point at $\lambda = \lambda^*$. From the fit over the four parameters $(\lambda^*, y^*, m_1, m_2)$ we obtain an estimate of the position $\lambda^*$ of the knee point.
We use the error curve (the number of wrong amino acids in the reconstruction) rather than the loss directly, since the error curve is better approximated by two line segments and therefore more easily fitted by our approach, leading to better estimates of the knee point.
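The fit of Eq. (9) can be sketched, for instance, with `scipy.optimize.curve_fit`; the data below are synthetic and all parameter values are purely illustrative:

```python
import numpy as np
from scipy.optimize import curve_fit

def two_lines(lam, lam_star, y_star, m1, m2):
    """Two straight lines meeting at (lam_star, y_star), as in Eq. (9)."""
    return np.where(lam < lam_star,
                    y_star + m1 * (lam - lam_star),
                    y_star + m2 * (lam - lam_star))

# Illustrative data: regularizer values and noisy test-set errors with a
# knee at lambda* = 0.6 (slopes and noise level chosen arbitrarily).
lams = np.linspace(0.0, 1.0, 21)
rng = np.random.default_rng(0)
errors = two_lines(lams, 0.6, 5.0, 1.0, 40.0) + 0.01 * rng.standard_normal(lams.size)

# Fit the four parameters; the first component of popt is the knee estimate.
popt, _ = curve_fit(two_lines, lams, errors, p0=[0.5, errors.min(), 0.0, 10.0])
lam_star_est = popt[0]
```

The initial guess `p0` only needs to place the knee somewhere inside the scanned range of $\lambda$ for the fit to converge reliably.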
The knee point is different for each protein, and we expected it to also differ between SAE and RAE. Empirically, however, we obtained the best results across all the protein families by using the $\lambda^*$ of the SAE also for the RAE.
6 Supporting information
S1 Fig.
Example trajectories of the loss during the training. Ten trajectories are shown for the SAE and three for the RAE. The panels on the left show the training loss, the ones on the right show the test loss. We show one case where SAE and RAE have the same performance (D=160, top row) and one case where RAE performs much better (D=240, bottom row). Note that the improvement is greater for the test loss, showing that the RAE generalizes better. These trajectories refer to Fig. 2B in the main text. The regularizer strength $\lambda$ is set in the proximity of the knee point.
S2 Fig.
Examples of average activation of the bottleneck neurons. There are different ways to realize the same overall L1 norm of the units in the bottleneck layer. The figure shows rank plots for different autoencoders. On the x-axis the hidden units are ranked by their average activation: the units on the left are the most active on average, while the ones on the far right are always deactivated (their signal is close to zero across the whole dataset). On the y-axis there is the average activation of the units. In the high-sparsity case we can see that the SAE deactivates more units completely, while the RAE deactivates fewer units completely. This effect disappears at lower sparsity, far from the knee point of the loss curve. The dataset used for these results is PF01978.19.
S3 Fig.
Loss and score curves for all the proteins considered (part 1 of 3). The behavior of the autoencoders trained on protein sequence data is qualitatively similar to what we saw for synthetic data: there is always a knee point in the curve of the loss as a function of $\lambda$, corresponding to the maximum correlation with the taxonomic labels.
S4 Fig.
Loss and score curves for all the proteins considered (part 2 of 3).
S5 Fig.
Loss and score curves for all the proteins considered (part 3 of 3).
7 Funding Acknowledgements
CB and RZ acknowledge ONR Grant N000141712569.
References
 [1] (2016) Evolution by gene loss. Nature Reviews Genetics 17 (7), pp. 379. Cited by: §1.
 [2] (2015) Simple, efficient, and neural algorithms for sparse coding. CoRR abs/1503.00778. External Links: 1503.00778 Cited by: §3.
 [3] (2016) Unreasonable effectiveness of learning neural networks: from accessible states and robust ensembles to basic algorithmic schemes. Proceedings of the National Academy of Sciences 113 (48), pp. E7655–E7662. Cited by: §2.3, §2.3, §2.3, §5.3.
 [4] (2019) Shaping the learning landscape in neural networks around wide flat minima. arXiv preprint arXiv:1905.07833. Cited by: §2.3.
 [5] (2016) Inferring interaction partners from protein sequences. Proceedings of the National Academy of Sciences 113 (43), pp. 12180–12185. Cited by: §2.6.
 [6] (2003) Latent Dirichlet allocation. Journal of Machine Learning Research 3 (Jan), pp. 993–1022. Cited by: §1.
 [7] (2016) Entropy-SGD: biasing gradient descent into wide valleys. arXiv preprint arXiv:1611.01838. Cited by: §2.3.
 [8] (2018) Inverse statistical physics of protein sequences: a key issues review. Reports on Progress in Physics 81 (3), pp. 032601. Cited by: §2.6.
 [9] (2019) Protein interaction networks revealed by proteome coevolution. Science 365 (6449), pp. 185–189. Cited by: §2.6.
 [10] (2016) An overview of low-rank matrix recovery from incomplete observations. IEEE Journal of Selected Topics in Signal Processing 10 (4), pp. 608–622. Cited by: §3.
 [11] (2016) Inter-protein sequence coevolution predicts known physical interactions in bacterial ribosomes and the trp operon. PloS One 11 (2), pp. e0149166. Cited by: §2.6.
 [12] (2017) Context-aware prediction of pathogenicity of missense mutations involved in human disease. arXiv preprint arXiv:1701.07246. Cited by: §2.6.
 [13] (2015) Coevolutionary landscape inference and the context-dependence of mutations in beta-lactamase TEM-1. Molecular Biology and Evolution 33 (1), pp. 268–280. Cited by: §2.6.
 [14] (2016) Deep learning. MIT press. Cited by: §2.2, §5.3.
 [15] (2016) Simultaneous identification of specifically interacting paralogs and inter-protein contacts by direct coupling analysis. Proceedings of the National Academy of Sciences 113 (43), pp. 12186–12191. Cited by: §2.6.
 [16] (2003) Comparative genomics. PLoS biology 1 (2), pp. e58. Cited by: §1.
 [17] (2017) Mutation effects predicted from sequence covariation. Nature biotechnology 35 (2), pp. 128. Cited by: §2.6.
 [18] (2015) Deep learning. Nature 521 (7553), pp. 436. Cited by: §1.
 [19] (2019) Single-cell transcriptomic analysis of Alzheimer's disease. Nature, pp. 1. Cited by: §4.
 [20] (2018) Statistics of shared components in complex component systems. Physical Review X 8 (2), pp. 021023. Cited by: §1.
 [21] (2017) Mean-field message-passing equations in the Hopfield model and its generalizations. Physical Review E 95 (2), pp. 022117. Cited by: §2.4.
 [22] (2011) Direct-coupling analysis of residue coevolution captures native contacts across many protein families. Proceedings of the National Academy of Sciences 108 (49), pp. E1293–E1301. Cited by: §2.6.
 [23] (2014) The complete genome sequence of a Neanderthal from the Altai Mountains. Nature 505 (7481), pp. 43. Cited by: §1.

 [24] (2016) You only look once: unified, real-time object detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 779–788. Cited by: §1.
 [25] (2003) Module networks: identifying regulatory modules and their condition-specific regulators from gene expression data. Nature Genetics 34 (2), pp. 166. Cited by: §1.
 [26] (2015) Defining cell types and states with singlecell genomics. Genome research 25 (10), pp. 1491–1498. Cited by: §1.

 [27] (2019) Learning compositional representations of interacting systems with restricted Boltzmann machines: comparative study of lattice proteins. arXiv preprint arXiv:1902.06495. Cited by: §1.
 [28] (2019) Learning protein constitutive motifs from sequence data. eLife 8, pp. e39397. Cited by: §1.
 [29] (2017) Emergence of compositional representations in restricted Boltzmann machines. Physical Review Letters 118 (13), pp. 138301. Cited by: §1.
 [30] (2015) Cell types in the mouse cortex and hippocampus revealed by single-cell RNA-seq. Science 347 (6226), pp. 1138–1142. Cited by: §4.
 [31] (2015) Deep learning with elastic averaging SGD. In Advances in Neural Information Processing Systems, pp. 685–693. Cited by: §5.3.