The process from drug discovery to market costs, on average, well over $1 billion and can span 12 years or more (dimasi2016innovation; Deloitte2019; wouters2020estimated); due to high attrition rates, a drug rarely progresses to market in less than ten years (martin2017clinical; paul2010improve). The high levels of attrition throughout the process not only make investments uncertain but also require market-approved drugs to pay for the earlier failures. Despite an industry-wide focus on efficiency for over a decade, spurred on by publications and annual reports highlighting revenue cliffs from expiring exclusivity and falling productivity, significant improvements have proved elusive against the backdrop of scientific, technological and regulatory change (Deloitte2019). For these reasons, there is now greater interest in applying computational methodologies to expedite various parts of the drug discovery and development pipeline (reda2020machine), see Figure 1.
Digital technologies have transformed the drug development process, generating enormous volumes of data. Changes range from the move to electronic lab notebooks (nishida2020description) and electronic regulatory submissions, through increasing volumes of laboratory, experimental, and clinical trial data collection (SuraeDDW), including the use of devices (coran2019advancing; marquis2019technology), to precision medicine and the use of “big data” (hulsen2019big). The data collected about therapies extend well beyond research and development to include hospital, specialist and primary care medical professionals’ patient records, including observations taken from social media, e.g. for pharmacovigilance (sloane2015social; sarker2015utilizing). There are innumerable online databases and other sources of information, from the scientific literature and clinical trials information through to databases of repurposable drugs (Corsello2017; pantziarka2018redo_db). Technological advances now allow for greater -omic profiling beyond genotyping and whole genome sequencing (WGS); the standardisation of microfluidics and antibody tagging has made single-cell technologies widely available to study the transcriptome, e.g. using RNA-seq (heath2016single), the (targeted) proteome, e.g. via mass cytometry (spitzer2016mass), or even multiple modalities together (mcginnis2019multi).
One of the key characteristics of biomedical data produced and used in the drug discovery process is its inter-connected nature. Such data can be represented as a graph, a mathematical abstraction used ubiquitously across disciplines and fields in biology to model the diverse interactions between biological entities that intervene at different scales. At the molecular scale, proteins and other biomolecules can be represented as graphs capturing spatial and structural relationships between their amino acid residues (fout2017protein; zamora2019structural), and small molecule drugs as graphs relating their constituent atoms and chemical bonding structure (duvenaud2015convolutional; klicpera2020directional). At an intermediary scale, interactomes are graphs that capture specific types of interactions between biomolecular species (e.g. metabolites, mRNA, proteins) (han2008understanding), with protein–protein interaction (PPI) graphs being perhaps the most commonplace. Finally, at a higher level of abstraction, knowledge graphs can represent the complex relationships between drugs, side effects, diagnoses, associated treatments, and test results (zhu2019graph; choi2020learning) as found in electronic medical records (EMRs).
Within the last decade, two emerging trends have reshaped the data modelling community: network analysis and deep learning. The “network medicine” paradigm has long been recognised in the biomedical field (barabasi2011network), with multiple approaches borrowed from graph theory and complex network science applied to biological graphs such as PPIs and gene regulatory networks (GRNs). Most approaches in this field were limited to handcrafted graph features such as centrality measures and clustering. In contrast, deep neural networks, a particular type of machine learning algorithm, are used to learn optimal task-specific features. The impact of deep learning was ground-breaking in computer vision (voulodimos2018deep) and natural language processing (young2018recent) but was limited to specific domains by its requirements on the regularity of data structures. At the convergence of these two fields is Graph Machine Learning (GML), a new class of ML methods exploiting the structure of graphs and other irregular datasets (point clouds, meshes, manifolds, etc.).
The essential idea of GML methods is to learn effective feature representations of nodes (perozzi2014deepwalk; sun2018rotate) (e.g. users in social networks), edges (e.g. predicting future interactions in recommender systems), or entire graphs (sun2019infograph) (e.g. predicting properties of molecular graphs). In particular, graph neural networks (GNNs) (kipf2016semi; velivckovic2018graph; gilmer2017neural), deep neural network architectures specifically designed for graph-structured data, are attracting growing interest. GNNs iteratively update the features of the nodes of a graph by propagating information from their neighbours. These methods have already been successfully applied to a variety of tasks and domains such as recommendation in social media and E-commerce (pal2020pinnersage; yang2019aligraph; rossi2020sign; rossi2020temporal), traffic estimation in Google Maps (google2020), misinformation detection in social media (monti2019fake), and various domains of the natural sciences, including the modelling of glass (cranmer2020discovering) and event classification in particle physics (shlomi2020graph; choma2018graph).
In the biomedical domain, GML has now set the state of the art for mining graph-structured data, including drug–target–indication interaction and relationship prediction through knowledge graph embedding (sun2018rotate; schlichtkrull2018modeling; balazevic2019tucker); molecular property prediction (duvenaud2015convolutional; klicpera2020directional), including the prediction of absorption, distribution, metabolism, and excretion (ADME) profiles (feinberg2020improvement); and early work spanning target identification (pittala2020relation) to de novo molecule design (zhavoronkov2019deep; gainza2020deciphering). Most notably, Stokes et al. (stokes2020deep) used directed message-passing graph neural networks operating on molecular structures to propose repurposing candidates for antibiotic development, validating in vivo predictions that were remarkably structurally distinct from known antibiotics. GML methods therefore appear extremely promising in applications across the drug development pipeline.
The rest of this paper is organised as follows. Section 2 introduces notation and mathematical preliminaries to simplify the exposition. Section 3 introduces the diverse tasks addressed with machine learning approaches on graphs as well as the main classes of existing methods. Section 4 focuses on the application of machine learning on graphs within drug discovery and development before providing a closing discussion in Section 5.
2.1. Notations and preliminaries of graph theory
We denote a graph $G = (V, E)$, where $V$ is a set of nodes, or vertices, and $E \subseteq V \times V$ is a set of edges. Let $v_i \in V$ denote a node and $e_{ij} = (v_i, v_j) \in E$ denote an edge from node $v_i$ to node $v_j$. When multiple edges can connect the same pair of nodes, the graph is called a multigraph. Node features are represented by $X \in \mathbb{R}^{n \times d}$, where $x_i \in \mathbb{R}^d$ are the features of node $v_i$. Edge features, or attributes, are similarly represented by $x_{ij} \in \mathbb{R}^{c}$, where $e_{ij} \in E$. We may also denote different nodes as $u$ and $v$, such that $e_{uv}$ is the edge from $u$ to $v$ with attributes $x_{uv}$. Note that under this definition, undirected graphs are defined as directed graphs with each undirected edge represented by two directed edges.
The neighbourhood of node $v_i$, sometimes referred to as the one-hop neighbourhood, is the set of nodes connected to it by an edge, $\mathcal{N}(v_i) = \{v_j : e_{ij} \in E\}$, with the shorthand $\mathcal{N}_i$ used for compactness. The cardinality of a node’s neighbourhood is called its degree, and the diagonal degree matrix $D$ has elements $D_{ii} = |\mathcal{N}_i|$.
Two nodes $u$ and $v$ in a graph $G$ are connected if there exists a path in $G$ starting at one and ending at the other, i.e. there exists a sequence of consecutive edges of $G$ connecting the two nodes. A graph is connected if there exists a path between every pair of nodes in the graph. The shortest path distance between $u$ and $v$ is defined as the number of edges in the shortest path between the two nodes and is denoted by $d(u, v)$.
A graph $G' = (V', E')$ is a subgraph of $G = (V, E)$ if and only if $V' \subseteq V$ and $E' \subseteq E$. If it also holds that $E' = \{(u, v) \in E : u, v \in V'\}$, then $G'$ is called an induced subgraph of $G$.
The adjacency matrix $A \in \{0, 1\}^{n \times n}$ typically represents the relations between nodes, such that the entry $A_{ij}$ on the $i$-th row and $j$-th column indicates whether there is an edge from node $v_i$ to node $v_j$, with $A_{ij} = 1$ representing that there is an edge and $A_{ij} = 0$ that there is not (i.e. $e_{ij} \notin E$). Most commonly, the adjacency matrix is square (from a set of nodes to itself), but the concept extends to bipartite graphs, where an $n \times m$ matrix can represent the edges from one set of $n$ nodes to another set of size $m$, and it is sometimes used to store scalar edge weights. The Laplacian matrix of a simple (unweighted) graph is $L = D - A$. The normalised Laplacian is often preferred, with a variant defined as $\tilde{L} = I - D^{-1/2} A D^{-1/2}$.
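These matrix definitions translate directly into code. The following sketch (illustrative only; it uses NumPy and an arbitrary 4-node path graph) assembles the degree matrix, Laplacian, and normalised Laplacian from an adjacency matrix:

```python
import numpy as np

# Adjacency matrix of an undirected 4-node path graph: 0-1-2-3
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)

D = np.diag(A.sum(axis=1))                       # diagonal degree matrix
L = D - A                                        # (unnormalised) Laplacian

# Normalised Laplacian: I - D^{-1/2} A D^{-1/2}
D_inv_sqrt = np.diag(1.0 / np.sqrt(np.diag(D)))
L_norm = np.eye(4) - D_inv_sqrt @ A @ D_inv_sqrt
```

Each row of the Laplacian sums to zero, a defining property that underlies the spectral methods discussed later.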
2.2. Knowledge graph
The term knowledge graph is used to qualify a graph that captures multiple types of relationships between a set of entities. In this case, the edge set $E$ includes relationship types as edge features. Knowledge graphs are commonly introduced as sets of triplets $(u, r, v) \in V \times R \times V$, where $R$ represents the set of relationship types. Note that multiple edges of different types can connect any two given nodes. As such, the standard adjacency matrix is ill-suited to capture the complexity of a knowledge graph. Instead, a knowledge graph is often represented as a collection of adjacency matrices $\{A_r\}_{r \in R}$, forming an adjacency tensor, in which each adjacency matrix $A_r$ captures one type of relationship.
2.3. Random walks
A random walk is a sequence of nodes selected at random during an iterative process. A random walk is constructed by considering a random walker that moves through the graph starting from a node $v_i$. At each step, the walker either moves to a neighbouring node with probability $p$ or stays on the current node with probability $1 - p$. The sequence of nodes visited after a fixed number of steps $k$ gives a random walk of length $k$. Graph diffusion is a related notion that models the propagation of a signal on a graph. A classic example is heat diffusion (kondor2002diffusion), which studies the propagation of heat in a graph starting from some initial distribution.
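A lazy random walk of this kind can be simulated in a few lines (a minimal sketch; the adjacency-list representation and function names here are our own, not from a particular library):

```python
import random

def random_walk(neighbours, start, length, p_move=1.0, seed=0):
    """Walk for `length` steps; with probability 1 - p_move the walker
    stays on its current node (a 'lazy' walk)."""
    rng = random.Random(seed)
    walk = [start]
    current = start
    for _ in range(length):
        if rng.random() < p_move and neighbours[current]:
            current = rng.choice(neighbours[current])
        walk.append(current)
    return walk

# Toy graph as an adjacency list: a triangle 0-1-2 plus a pendant node 3
graph = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]}
walk = random_walk(graph, start=0, length=5)
```

With `p_move=1.0` the walk is the standard (non-lazy) random walk, moving to a uniformly chosen neighbour at every step.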
2.4. Graph isomorphism
Two graphs $G_1 = (V_1, E_1)$ and $G_2 = (V_2, E_2)$ are said to be isomorphic if there exists a bijective function $f : V_1 \to V_2$ such that $(u, v) \in E_1$ if and only if $(f(u), f(v)) \in E_2$. Determining whether two graphs are isomorphic is a recurrent problem in graph analysis that has deep ramifications for machine learning on graphs. For instance, in graph classification tasks, it is assumed that a model needs to capture the similarities between pairs of graphs to classify them accurately.
The Weisfeiler-Lehman (WL) graph isomorphism test (wl1968) is a classical polynomial-time algorithm in graph theory. It is based on iterative graph recolouring, starting with all nodes of identical “colour” (label). At each step, the algorithm aggregates the colours of nodes and their neighbourhoods and hashes the aggregated colour into unique new colours. The algorithm stops upon reaching a stable colouring. If at that point, the colourings of the two graphs differ, the graphs are deemed non-isomorphic. However, if the colourings are the same, the graphs are possibly (but not necessarily) isomorphic. In other words, the WL test is a necessary but insufficient condition for graph isomorphism. There exist non-isomorphic graphs for which the WL test produces identical colouring and thus considers them possibly isomorphic; the test is said to fail in this case (berkholz2015limitations).
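The WL colour-refinement loop is short enough to sketch directly (an illustrative implementation; graphs are plain adjacency-list dictionaries). It distinguishes a triangle from a 3-node path but, illustrating the classic failure case, assigns identical colour histograms to two disjoint triangles and a 6-cycle, which are non-isomorphic:

```python
from collections import Counter

def wl_colours(adj, iterations=3):
    """Weisfeiler-Lehman colour refinement on an adjacency-list graph,
    returning the final colour histogram."""
    colours = {v: 0 for v in adj}              # identical starting colours
    for _ in range(iterations):
        # Aggregate each node's colour with the multiset of its
        # neighbours' colours ...
        sigs = {v: (colours[v], tuple(sorted(colours[u] for u in adj[v])))
                for v in adj}
        # ... then hash every distinct signature to a fresh colour
        palette = {s: i for i, s in enumerate(sorted(set(sigs.values())))}
        colours = {v: palette[sigs[v]] for v in adj}
    return Counter(colours.values())

triangle = {0: [1, 2], 1: [0, 2], 2: [0, 1]}
path3 = {0: [1], 1: [0, 2], 2: [1]}
two_triangles = {0: [1, 2], 1: [0, 2], 2: [0, 1],
                 3: [4, 5], 4: [3, 5], 5: [3, 4]}
hexagon = {i: [(i - 1) % 6, (i + 1) % 6] for i in range(6)}
```

Both graphs in the failure pair are 2-regular, so every node keeps the same colour at every iteration and the test cannot separate them.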
3. Machine learning on graphs
Most machine learning methods that operate on graphs can be decomposed into two parts: a general-purpose encoder and a task-specific decoder (chami2020machine). The encoder embeds a graph’s nodes, or the graph itself, in a low-dimensional feature space. To embed entire graphs, it is common to first embed the nodes and then apply a permutation-invariant pooling function (e.g. a sum, max or mean over node embeddings) to produce a graph-level representation, analogous to global pooling in convolutional neural networks. The decoder computes an output for the associated task. The two components can either be combined in a two-step framework, with the encoder pre-trained in an unsupervised setting, or trained in an end-to-end fashion. The end tasks can be classified following multiple dichotomies: supervised/unsupervised, inductive/transductive, and node-level/graph-level.
The supervised/unsupervised distinction is the classical dichotomy found in machine learning (murphy2012machine). Supervised tasks aim to learn a mapping function from labelled data, such that the function maps each data point to its label and generalises to unseen data points. In contrast, unsupervised tasks highlight unknown patterns and uncover structure in unlabelled datasets.
Inductive tasks correspond to the supervised learning discussed above. Transductive tasks expect all data points to be available when learning a mapping function, including unlabelled data points (kipf2016semi). Hence, in the transductive setting, the model learns from both unlabelled and labelled data. In this respect, inductive learning is more general than transductive learning, as it extends to unseen data points.
The node-level/graph-level dichotomy is based on the object of interest. A task can either focus on the nodes within a graph, e.g. classifying nodes within the context set by the graph, or focus on whole graphs, i.e. each data point corresponds to an entire graph (sun2019infograph). Note that node-level tasks can be further decomposed into node attribute prediction tasks (kipf2016semi) and link inference tasks (sun2018rotate). The former focus on predicting properties of nodes, while the latter infer missing links in the graph.
As an illustration, consider the task of predicting the chemical properties of small molecules based on their chemical structures. This is a graph-level task in a supervised (inductive) setting whereby labelled data is used to learn a mapping from chemical structure to chemical properties. Alternatively, the task of identifying groups of proteins that are tightly associated in a PPI graph is an unsupervised node-level task. However, predicting proteins’ biological functions using their interactions in a PPI graph corresponds to a node-level transductive task.
Further types of tasks can be identified, e.g. based on whether graphs are static or varying. Biological graphs can vary and evolve along a temporal dimension, resulting in changes to composition, structure and attributes (othmer1971instability; praktiknjo2020tracing). However, the classifications detailed above are the most commonly found in the literature. We review below the existing classes of GML methods.
3.1. Traditional approaches
3.1.1. Graph statistics
Over the past decades, a wealth of heuristics and statistics have been developed to characterise graphs and their nodes. For instance, the diverse centrality measures capture different aspects of graph connectivity: the closeness centrality quantifies how closely a node is connected to all other nodes, while the betweenness centrality counts on how many shortest paths between pairs of other nodes in the graph a given node lies. Furthermore, graph sub-structures can be used to derive topological descriptors of the wiring patterns around each node in a graph. For instance, motifs (milo2002network) and graphlets (prvzulj2004modeling) correspond to sets of small graphs used to characterise the local wiring patterns of nodes. Specifically, for each node, one can derive a feature vector of length $M$, corresponding to the number of motifs (graphlets) considered, where the $i$-th entry gives the number of occurrences in the graph of motif $m_i$ that contain the node.
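As an illustration, the sketch below computes one centrality measure and one motif count per node (minimal implementations of our own over adjacency lists; breadth-first search gives closeness centrality, and shared-neighbour counting gives the number of triangles a node participates in):

```python
from collections import deque

def closeness(adj, v):
    """Closeness centrality: (number of reachable nodes - 1) divided by the
    total shortest-path distance from v, computed by breadth-first search."""
    dist = {v: 0}
    queue = deque([v])
    while queue:
        u = queue.popleft()
        for w in adj[u]:
            if w not in dist:
                dist[w] = dist[u] + 1
                queue.append(w)
    total = sum(dist.values())
    return (len(dist) - 1) / total if total else 0.0

def triangles(adj, v):
    """Number of triangles containing v -- a simple motif count."""
    nbrs = set(adj[v])
    return sum(len(nbrs & set(adj[u])) for u in nbrs) // 2

star = {0: [1, 2, 3], 1: [0], 2: [0], 3: [0]}   # hub node 0
tri = {0: [1, 2], 1: [0, 2], 2: [0, 1]}
```

On the star graph, the hub achieves the maximal closeness of 1.0 while participating in no triangle, showing how the two descriptors capture complementary aspects of local structure.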
These handcrafted features can provide node, or graph, representations that can be used as input to machine learning algorithms. A popular approach has been the definition of kernels based on graph statistics that can be used as input to Support Vector Machines (SVMs). For instance, the graphlet kernel (shervashidze2009efficient) captures node wiring-pattern similarity, and the WL kernel (shervashidze2011weisfeiler) captures graph similarity based on the Weisfeiler-Lehman algorithm discussed in Section 2.
3.1.2. Random walks
Random-walk-based methods have been a popular, and successful, approach to embedding a graph’s nodes in a low-dimensional space such that node proximities are preserved. The underlying idea is that the distance between node representations in the embedding space should correspond to a measure of distance on the graph, measured here by how often a given node is visited in random walks starting from another node. DeepWalk (perozzi2014deepwalk) and node2vec (grover2016node2vec) are arguably the most famous methods in this category.
In practice, DeepWalk simulates multiple random walks for each node in the graph. Then, given the embedding $z_u$ of a node $u$, the objective is to maximise the log probability $\log \Pr(v \mid z_u)$ for all nodes $v$ that appear in a random walk within a fixed window of $u$. The method draws its inspiration from the SkipGram model developed for natural language processing (mikolov2013efficient).
DeepWalk uses uniformly random walks, but several follow-up works analyse how to bias these walks to improve the learned representations. For example, node2vec biases the walks to behave more or less like certain search algorithms over the graph, interpolating between breadth-first and depth-first exploration. The authors report embeddings of higher quality with respect to information content when compared to DeepWalk.
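A sketch of node2vec’s second-order walk illustrates this biasing (an illustrative implementation; `p` and `q` are the paper’s return and in-out parameters, while the function names and toy graph are our own):

```python
import random

def biased_step(adj, prev, curr, p, q, rng):
    """One node2vec step: returning to `prev` is weighted 1/p, staying at
    distance one from `prev` is weighted 1, and moving further away 1/q."""
    weights = []
    for nxt in adj[curr]:
        if nxt == prev:
            weights.append(1.0 / p)      # return (encouraged when p < 1)
        elif nxt in adj[prev]:
            weights.append(1.0)          # stay close to prev (BFS-like)
        else:
            weights.append(1.0 / q)      # move outward (DFS-like when q < 1)
    return rng.choices(adj[curr], weights=weights, k=1)[0]

def node2vec_walk(adj, start, length, p=1.0, q=1.0, seed=0):
    rng = random.Random(seed)
    walk = [start, rng.choice(adj[start])]
    while len(walk) < length:
        walk.append(biased_step(adj, walk[-2], walk[-1], p, q, rng))
    return walk

graph = {0: [1, 2], 1: [0, 2, 3], 2: [0, 1], 3: [1]}
walk = node2vec_walk(graph, start=0, length=8, p=0.25, q=4.0)
```

With `p = q = 1` the procedure reduces to DeepWalk’s uniform first-order walk.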
3.2. Geometric approaches
Geometric models for knowledge graph embedding posit each relation type as a geometric transformation from source to target in the embedding space. Consider a triplet $(u, r, v)$, with $u$ denoting the source node and $v$ denoting the target node. A geometric model learns a transformation $\tau_r$ such that $\delta(\tau_r(z_u), z_v)$ is small, where $\delta$ is some notion of distance (e.g. the Euclidean distance) and $z_w$ denotes the embedding of entity $w$. The key differentiating choice between these approaches is the form of the geometric transformation $\tau_r$.
TransE (bordes2013translating) is a purely translational approach, where $\tau_r(z_u) = z_u + z_r$ corresponds to the sum of the source node and relation embeddings. In essence, the model enforces that the motion from the embedding of the source node in the direction given by the relation embedding terminates close to the target node’s embedding, as quantified by the chosen distance metric. Due to its formulation, TransE cannot effectively account for symmetric relationships or one-to-many interactions.
Alternatively, RotatE (sun2018rotate) represents relations as rotations in a complex latent space. Thus, $\tau_r$ applies a rotation, parameterised by the relation embedding $z_r$, to the complex embedding vector of the source node such that the rotated vector $\tau_r(z_u)$ lies close to the embedding vector of the target node in terms of the Manhattan distance. The authors demonstrate that rotations can correctly capture diverse relation classes, including symmetry/anti-symmetry.
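The scoring functions of both models are only a few lines each (a NumPy sketch under the notation above; the hand-picked embeddings are arbitrary and exist only to show that an exactly translated or rotated target scores perfectly):

```python
import numpy as np

def transe_score(z_u, z_r, z_v):
    """TransE: distance between the translated source and the target
    (lower means more plausible)."""
    return np.linalg.norm(z_u + z_r - z_v)

def rotate_score(z_u, theta_r, z_v):
    """RotatE: element-wise rotation of a complex source embedding by the
    relation phases theta_r, compared to the target in L1 distance."""
    return np.abs(z_u * np.exp(1j * theta_r) - z_v).sum()

# Triplets each model should score as maximally plausible
z_u = np.array([0.1, 0.2])
z_r = np.array([0.3, -0.1])
z_v = z_u + z_r                               # exact translation

zc_u = np.array([1.0 + 0.0j, 0.0 + 1.0j])
theta = np.array([np.pi / 2, 0.0])
zc_v = zc_u * np.exp(1j * theta)              # exact rotation
```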
3.3. Matrix/tensor factorisation
Matrix factorisation is a common problem in mathematics that aims to approximate a matrix $X$ by the product of low-dimensional latent factors, $X \approx U V^\top$. The general problem can be written as
$$\min_{U, V} \; \delta(X, U V^\top),$$
where $\delta$ represents a measure of the distance between the two inputs, such as the Euclidean distance or the Kullback-Leibler divergence.
In machine learning, matrix factorisation has been extensively used for unsupervised applications such as dimensionality reduction, missing data imputation, and clustering. These approaches are especially relevant to the knowledge graph embedding problem and have set state-of-the-art (SOTA) results on standard benchmarks (rossi2020knowledge).
For graphs, the objective is to factorise the adjacency matrix $A$, or a derivative of the adjacency matrix (e.g. the Laplacian matrix). It can effectively be seen as finding embeddings for all entities in the graph on a low-dimensional latent manifold, under user-defined constraints (e.g. the latent space dimension), such that the adjacency relationships are preserved under dot products.
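For the Euclidean case, the best rank-$k$ factorisation can be computed exactly with a truncated singular value decomposition (by the Eckart-Young theorem); a minimal NumPy sketch on a synthetic low-rank matrix:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(8, 2)) @ rng.normal(size=(2, 6))   # a rank-2 matrix

# Best rank-k approximation X ~ U V^T via truncated SVD
k = 2
u, s, vt = np.linalg.svd(X, full_matrices=False)
U = u[:, :k] * s[:k]                  # n x k latent factors
V = vt[:k].T                          # m x k latent factors

rel_err = np.linalg.norm(X - U @ V.T) / np.linalg.norm(X)
```

Because the synthetic matrix is exactly rank 2, the two factors reconstruct it to numerical precision; on a real adjacency matrix the truncation error measures how much structure the latent space discards.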
Laplacian eigenmaps, introduced by Belkin et al. (belkin2002laplacian), is a fundamental approach designed to embed entities based on a similarity-derived graph. Laplacian eigenmaps uses the eigendecomposition of the Laplacian matrix of a graph to embed each of its nodes in a low-dimensional latent manifold. The spectral decomposition of the Laplacian is given by $L = \Phi \Lambda \Phi^\top$, where $\Lambda$ is a diagonal matrix with entries corresponding to the eigenvalues of $L$, and column $i$ of $\Phi$ gives the eigenvector $\phi_i$ associated to the $i$-th eigenvalue (i.e. $L \phi_i = \lambda_i \phi_i$). Given a user-defined dimension $d$, the embedding of node $v_j$ is given by the vector $(\phi_1[j], \ldots, \phi_d[j])$, where $\phi_i[j]$ indicates the $j$-th entry of vector $\phi_i$.
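A compact NumPy sketch of this embedding (illustrative only): for a graph made of two triangles joined by a single bridge edge, the first non-trivial eigenvector (the Fiedler vector) already separates the two communities by sign.

```python
import numpy as np

def laplacian_eigenmaps(A, d):
    """Embed nodes with the eigenvectors of L = D - A associated to the
    d smallest non-zero eigenvalues (the trivial constant one is skipped)."""
    L = np.diag(A.sum(axis=1)) - A
    eigvals, eigvecs = np.linalg.eigh(L)     # eigenvalues in ascending order
    return eigvecs[:, 1:d + 1]               # rows are node embeddings

# Two triangles (0,1,2) and (3,4,5) joined by the bridge edge 2-3
A = np.zeros((6, 6))
for i, j in [(0, 1), (0, 2), (1, 2), (2, 3), (3, 4), (3, 5), (4, 5)]:
    A[i, j] = A[j, i] = 1.0

Z = laplacian_eigenmaps(A, d=1)
```

The embedding coordinates sum to zero because every non-trivial eigenvector is orthogonal to the constant eigenvector of a connected graph.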
Nickel et al. (nickel2011three) introduced RESCAL to address the knowledge graph embedding problem. RESCAL’s objective function is defined as
$$\min_{U, \{R_k\}} \; \sum_k \left\| A_k - U R_k U^\top \right\|_F^2 + g(U, R_k),$$
where $U \in \mathbb{R}^{n \times d}$ and $R_k \in \mathbb{R}^{d \times d}$, with $d$ denoting the latent space dimension. The function $g$ denotes a regulariser, i.e. a function applying constraints on the free parameters of the model, on the factors $U$ and $R_k$. Intuitively, the factor $U$ learns the embedding of each entity in the graph, while the factor $R_k$ specifies the interactions between entities under relation $k$. Yang et al. (yang2014embedding) proposed DistMult, a variation of RESCAL that constrains each factor $R_k$ to be a diagonal matrix. Trouillon et al. (trouillon2016complex) proposed ComplEx, a method extending DistMult to the complex space, taking advantage of the Hermitian product to represent asymmetric relationships.
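The bilinear scoring at the heart of these models is easy to sketch (illustrative NumPy with arbitrary toy embeddings). Note that DistMult’s diagonal form makes its score symmetric in the two entities, which is precisely the limitation ComplEx removes:

```python
import numpy as np

def rescal_score(a_i, R_k, a_j):
    """RESCAL plausibility of triplet (i, k, j): a_i^T R_k a_j."""
    return a_i @ R_k @ a_j

def distmult_score(a_i, r_k, a_j):
    """DistMult restricts R_k to a diagonal matrix, stored as vector r_k."""
    return np.sum(a_i * r_k * a_j)

a_i = np.array([1.0, 2.0])
a_j = np.array([0.5, -1.0])
r_k = np.array([2.0, 3.0])
R_diag = np.diag(r_k)          # DistMult is RESCAL with diagonal factors
```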
Alternatively, some existing frameworks leverage both a graph’s structural information and the node’s semantic information to embed each entity in a way that preserves both sources of information. One such approach is to use a graph’s structural information to regularise embeddings derived from the factorisation of the feature matrix (cai2011graph; chang2015heterogeneous). The idea is to penalise adjacent entities in the graph to have closer embeddings in the latent space, according to some notion of distance. Another approach is to jointly factorise both data sources, for instance, introducing a kernel defined on the feature matrix (huang2017label; huang2017accelerated).
3.4. Graph neural networks
Graph Neural Networks (GNNs) were first introduced in the late 1990s (sperduti1997supervised; gori2005new; merkwirth2005automatic) but have attracted considerable attention in recent years, with the number of variants rising steadily (kipf2016semi; hamilton2017inductive; gilmer2017neural; velivckovic2018graph; xu2018powerful; xu2018representation; schlichtkrull2018modeling; maron2018invariant; chami2019hyperbolic). From a high-level perspective, GNNs are a class of models that learn how to diffuse information on graph-structured datasets for representation learning.
A typical GNN layer comprises three functions: 1) a message-passing function Msg that permits information exchange between nodes over edges; 2) an aggregation function Agg that combines the collection of received messages into a single, fixed-length representation; and 3) an update function Update that produces node-level representations given the previous representation and the aggregated messages. Common choices are a simple linear transformation for Msg and a summation or a simple or weighted average for Agg, although it is not uncommon for the Msg or Update function to be absent or reduced to an activation function only. Where the node representations after layer $l$ are $h_i^{(l)}$, we have
$$h_i^{(l+1)} = \mathrm{Update}\left(h_i^{(l)},\; \mathrm{Agg}\left(\left\{\mathrm{Msg}\left(h_i^{(l)}, h_j^{(l)}, x_{ij}\right) : j \in \mathcal{N}_i\right\}\right)\right)$$
or, more compactly,
$$h_i^{(l+1)} = \gamma\left(h_i^{(l)},\; \square_{j \in \mathcal{N}_i}\, \phi\left(h_i^{(l)}, h_j^{(l)}, x_{ij}\right)\right),$$
where $\gamma$, $\square$ and $\phi$ are the update, aggregation and message-passing functions, respectively, and $l$ indicates the layer index (Fey/Lenssen/2019). The initial node representations, $h_i^{(0)}$, are typically set to the node features, $x_i$. Figure 2 gives a schematic representation of this operation.
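A single layer of this scheme can be written out explicitly (a NumPy sketch of the general framework, not a specific published architecture: the message is a linear transform, the aggregation a mean over neighbours, and the update a ReLU over the self term plus the aggregated messages):

```python
import numpy as np

def mp_layer(H, edges, W):
    """One message-passing layer over node features H (n x d) with a
    shared weight matrix W (d x d')."""
    n = H.shape[0]
    agg = np.zeros((n, W.shape[1]))
    deg = np.zeros(n)
    for i, j in edges:                       # Msg: message W h_j sent to i
        agg[i] += H[j] @ W
        deg[i] += 1
    agg /= np.maximum(deg, 1)[:, None]       # Agg: mean over neighbours
    return np.maximum(H @ W + agg, 0)        # Update: self term, then ReLU

rng = np.random.default_rng(0)
H0 = rng.normal(size=(4, 3))                 # 4 nodes, 3 features each
W = rng.normal(size=(3, 3))
edges = [(0, 1), (1, 0), (1, 2), (2, 1), (2, 3), (3, 2)]  # undirected path
H1 = mp_layer(H0, edges, W)
```

Stacking calls to `mp_layer` reproduces the multi-layer propagation described below, with each layer widening the set of nodes a representation depends on.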
3.4.1. Graph convolutional network
The graph convolutional network (GCN) (kipf2016semi) can be decomposed in this framework as
$$h_i^{(l+1)} = \sigma\left(W^{(l)} h_i^{(l)} + \sum_{j \in \mathcal{N}_i} \frac{1}{\sqrt{d_i d_j}}\, W^{(l)} h_j^{(l)}\right),$$
where $\sigma$ is some activation function, usually a rectified linear unit (ReLU). The scheme is simplified further if we consider the addition of self-loops, that is, an edge from a node to itself, commonly expressed as the modified adjacency $\tilde{A} = A + I$, where the aggregation includes the self-message and the update reduces to
$$h_i^{(l+1)} = \sigma\left(\sum_{j \in \mathcal{N}_i \cup \{i\}} \frac{1}{\sqrt{\tilde{d}_i \tilde{d}_j}}\, W^{(l)} h_j^{(l)}\right).$$
With respect to the notation of Figure 2, for GCN we have $\phi\left(h_i^{(l)}, h_j^{(l)}\right) = \frac{1}{\sqrt{\tilde{d}_i \tilde{d}_j}} W^{(l)} h_j^{(l)}$, $\square = \sum$ and $\gamma = \sigma$.
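In dense matrix form, the self-loop variant above is one line of linear algebra (an illustrative NumPy sketch; one-hot features and identity weights are chosen only to make the propagation visible):

```python
import numpy as np

def gcn_layer(A, H, W):
    """Dense GCN layer with self-loops:
    ReLU(D~^{-1/2} (A + I) D~^{-1/2} H W)."""
    A_tilde = A + np.eye(A.shape[0])                 # add self-loops
    d = A_tilde.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    A_hat = D_inv_sqrt @ A_tilde @ D_inv_sqrt        # symmetric normalisation
    return np.maximum(A_hat @ H @ W, 0)              # ReLU activation

A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]], dtype=float)               # 3-node path graph
H = np.eye(3)                                        # one-hot node features
W = np.eye(3)                                        # identity weights
H_next = gcn_layer(A, H, W)                          # equals ReLU(A_hat)
```

With one-hot inputs and identity weights, the output is just the normalised adjacency itself, making the degree-weighted mixing of neighbour features explicit.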
As the update depends only on a node’s local neighbourhood, these schemes are also commonly referred to as neighbourhood aggregation. Indeed, taking a broader perspective, a single-layer GNN updates a node’s features based on its immediate, or one-hop, neighbourhood. Adding a second GNN layer allows information from the two-hop neighbourhood to propagate via intermediary neighbours. By further stacking GNN layers, node features can come to depend on the initial values of more distant nodes, analogous to the broadening of the receptive field in later layers of convolutional neural networks: the deeper the network, the broader the receptive field (see Figure 3). However, this process is diffusive and leads to features washing out as the graph thermalises. This problem is solved in convolutional networks with pooling layers, but an equivalent canonical coarsening operation does not exist for irregular graphs.
3.4.2. Graph attention network
Graph Attention Networks (GATs) (velivckovic2018graph) weight incoming messages with an attention mechanism, using multi-headed attention for training stability. Including self-loops, the update takes the form
$$h_i^{(l+1)} = \sigma\left(\sum_{j \in \mathcal{N}_i \cup \{i\}} \alpha_{ij}\, W^{(l)} h_j^{(l)}\right);$$
the message, aggregation and update functions are otherwise unchanged. Although the authors suggest the attention mechanism is decoupled from the architecture and should be task-specific, in practice, their original formulation is most widely used. The attention weights $\alpha_{ij}$ are softmax normalised, that is,
$$\alpha_{ij} = \frac{\exp(e_{ij})}{\sum_{k \in \mathcal{N}_i \cup \{i\}} \exp(e_{ik})},$$
where $e_{ij}$ is the output of a single-layer feed-forward neural network without a bias (a projection) with LeakyReLU activations, that takes the concatenation of the transformed source- and target-node features as input,
$$e_{ij} = \mathrm{LeakyReLU}\left(a^\top \left[W^{(l)} h_i^{(l)} \,\Vert\, W^{(l)} h_j^{(l)}\right]\right).$$
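The attention computation for a single node can be sketched directly from these formulas (illustrative NumPy for one attention head; the random parameters stand in for learned ones):

```python
import numpy as np

def leaky_relu(x, slope=0.2):
    return np.where(x > 0, x, slope * x)

def gat_attention(H, W, a, i, neighbours):
    """GAT attention coefficients alpha_ij for node i over `neighbours`
    (which should include i itself for the self-loop)."""
    Wh = H @ W
    logits = np.array([leaky_relu(a @ np.concatenate([Wh[i], Wh[j]]))
                       for j in neighbours])
    weights = np.exp(logits - logits.max())          # stable softmax
    return weights / weights.sum()

rng = np.random.default_rng(0)
H = rng.normal(size=(4, 3))          # 4 nodes, 3 input features
W = rng.normal(size=(3, 2))          # projection to 2 output features
a = rng.normal(size=4)               # attention vector (2 * output dim)
alpha = gat_attention(H, W, a, i=0, neighbours=[0, 1, 2])
```

The coefficients are positive and sum to one, so the subsequent aggregation is a convex combination of the neighbours’ transformed features.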
3.4.3. Relational graph convolutional networks
At many scales of systems biology, the relationships between entities have a type, a direction, or both. For instance, the type of bonds between atoms, binding of two proteins, and gene regulatory interactions are essential to understanding the systems in which they exist. This idea is expressed in the message passing framework with messages that depend on edge attributes. Relational Graph Convolutional Networks (R-GCNs) (schlichtkrull2018modeling) learn separate linear transforms for each edge type, which can be viewed as casting the graph as a multiplex graph and operating GCN-like models independently on each layer, as shown in Figure 4.
The R-GCN model decomposes to
$$h_i^{(l+1)} = \sigma\left(W_0^{(l)} h_i^{(l)} + \sum_{r \in R} \sum_{j \in \mathcal{N}_i^r} \frac{1}{c_{i,r}}\, W_r^{(l)} h_j^{(l)}\right)$$
for edge types $r \in R$, with a separate transform $W_0^{(l)}$ for self-loops, and a problem-specific normalisation constant $c_{i,r}$.
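A minimal sketch of this layer (illustrative NumPy; the relation names and sizes are arbitrary, and $c_{i,r}$ is taken to be the per-relation neighbourhood size) makes the per-relation weights explicit:

```python
import numpy as np

def rgcn_layer(H, typed_edges, W_rel, W_self):
    """R-GCN layer: a separate weight matrix per relation type, messages
    normalised by the per-relation neighbourhood size, plus a self-loop."""
    n, d_out = H.shape[0], W_self.shape[1]
    out = H @ W_self                                 # self-loop transform
    for r, edges in typed_edges.items():
        agg = np.zeros((n, d_out))
        count = np.zeros(n)
        for i, j in edges:                           # message j -> i, type r
            agg[i] += H[j] @ W_rel[r]
            count[i] += 1
        out += agg / np.maximum(count, 1)[:, None]   # c_{i,r} = |N_i^r|
    return np.maximum(out, 0)                        # ReLU update

rng = np.random.default_rng(0)
H = rng.normal(size=(3, 2))
typed_edges = {"binds": [(0, 1), (1, 0)], "regulates": [(2, 0)]}
W_rel = {r: rng.normal(size=(2, 2)) for r in typed_edges}
W_self = rng.normal(size=(2, 2))
H_rel = rgcn_layer(H, typed_edges, W_rel, W_self)
```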
3.4.4. Graph Pooling
Geometric deep learning approaches machine learning with graph-structured data as the generalisation of methods designed for learning with grid and sequence data (images, time-series; Euclidean data) to non-Euclidean domains, i.e. graphs and manifolds (bronstein2017geometric). This is also reflected in the derivation and naming conventions of popular graph neural network layers as generalised convolutions (kipf2016semi; schlichtkrull2018modeling). Modern convolutional neural networks have settled on the combination of layers of kernels interspersed with max-pooling. Developing a corresponding pooling workhorse for GNNs is an active area of research. The difficulty is that, unlike Euclidean data structures, there are no canonical up- and down-sampling operations for graphs. As a result, there are many proposed methods that centre around learning to pool or prune based on features (ying2018hierarchical; cangea2018towards; gao2019graph; lee2019self; bodnar2020deep), and learned or non-parametric structural pooling (boykov2006graph; luzhnica2019clique; bianchi2020mincutpool; jin2020hierarchical). However, the distinction between featural and structural methods is blurred when topological information is included in the node features.
The most successful feature-based methods extract representations from graphs directly either for cluster assignments (ying2018hierarchical; bodnar2020deep) or top-k pruning (cangea2018towards; gao2019graph; lee2019self). DiffPool uses GNNs both to produce a hierarchy of representations for overall graph classification and to learn intermediate representations for soft cluster assignments to a fixed number of pools (ying2018hierarchical). Figure 5 presents an example of this kind of pooling. top-k pooling takes a similar approach, but instead of using an auxiliary learning process to pool nodes, it is used to prune nodes (cangea2018towards; gao2019graph). In many settings, this simpler method is competitive with DiffPool at a fraction of the memory cost.
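The pruning step of top-k pooling is particularly compact (an illustrative NumPy sketch following the cited papers’ recipe of projection scores and a tanh gate; the projection vector and random graph are arbitrary stand-ins for learned quantities):

```python
import numpy as np

def topk_pool(H, A, p, k):
    """Top-k pooling: score nodes by projecting features onto a learnable
    vector p, keep the k highest-scoring nodes, gate features by tanh."""
    scores = H @ p / np.linalg.norm(p)
    keep = np.sort(np.argsort(scores)[-k:])          # retained node indices
    H_pool = H[keep] * np.tanh(scores[keep])[:, None]
    A_pool = A[np.ix_(keep, keep)]                   # induced subgraph
    return H_pool, A_pool

rng = np.random.default_rng(0)
H = rng.normal(size=(5, 3))                          # 5 nodes, 3 features
A = (rng.random((5, 5)) < 0.4).astype(float)
A = np.triu(A, 1) + np.triu(A, 1).T                  # symmetrise
H_pool, A_pool = topk_pool(H, A, p=rng.normal(size=3), k=2)
```

The tanh gate keeps the scoring vector inside the computational graph, which is what makes the projection learnable when the layer is trained end to end.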
Structure-based pooling methods aggregate nodes based on the graph topology and are often inspired by the processes developed by chemists and biochemists for understanding molecules through their parts. For example, describing a protein in terms of its secondary structure ($\alpha$-helices, $\beta$-sheets) and the connectivity between these elements can be seen as a pooling operation over the protein’s molecular graph. Figure 6 shows how a small molecule can be converted to a junction tree representation, with the carbon ring (in pink) being aggregated into a single node. Work on decomposing molecules into motifs bridges the gap between handcrafted secondary structures and unconstrained learning methods (jin2020hierarchical). Motifs are extracted based on a combined statistical and chemical analysis, whereby motif templates (i.e. graph substructures) are selected based on how frequently they occur in the training corpus, and molecules are then decomposed into motifs according to chemical rules. More general methods look to concepts from graph theory, such as minimum cuts (boykov2006graph; bianchi2020mincutpool) and maximal cliques (luzhnica2019clique), on which to base pooling. Minimum cuts are graph partitions that minimise some objective and have obvious connections to graph clustering, whilst cliques (subsets of nodes that are fully connected) are in some sense at the limit of node community density.
4. Drug development applications
The process of discovering a drug and making it available to patients can take up to 10 years and is characterised by failures, or attrition, see Figure 1. The early discovery stage involves target identification and validation, hit discovery, lead molecule identification, and then optimisation to achieve the desired characteristics of a drug candidate (hughes2011principles). Pre-clinical research typically comprises both in vitro and in vivo assessments of the toxicity, pharmacokinetics (PK), pharmacodynamics (PD), and efficacy of the drug. Provided good pre-clinical evidence is presented, the drug then progresses to human clinical trials, normally through three distinct phases. In the following subsections, we explore how GML can be applied to distinct stages within the drug discovery and development process.
4.1. Target identification
Target identification is the search for a molecular target with a significant functional role(s) in the pathophysiology of a disease, such that a hypothetical drug could modulate said target with beneficial effect (schenone2013target; titov2012identification). Early targets included G-protein coupled receptors (GPCRs), kinases, and proteases, which formed the major target protein families among first-in-class drugs (eder2014discovery); targeting other classes of biomolecules is also possible, e.g. nucleic acids. For an organ-specific therapy, an ideal target should be strongly and preferentially expressed in the tissue of interest, and preferably a three-dimensional structure should be obtainable for biophysical simulations.
There is a range of complementary lines of experimental evidence that could support target identification. For example, a phenomenological approach to target identification could consider the imaging, histological, or -omic presentation of diseased tissue when compared to matched healthy samples. Typical differential presentation includes chromosomal aberrations (e.g., from WGS), differential expression (e.g., via RNA-seq) and protein translocation (e.g., from histopathological analysis) (paananen2019omics). As the availability of -omic technologies increases, computational and statistical advances must be made to integrate and interpret large quantities of high dimensional, high-resolution data on a comparatively small number of samples, occasionally referred to as panomics (sandhu2018panomics; matthews2016omics).
In contrast to a static picture, in vitro and in vivo models are built to examine the dynamics of the disease phenotype and study mechanism. In particular, genes are manipulated in disease models to understand key drivers of a disease phenotype. For example, random mutagenesis can be induced by chemicals or transposons in cell clones or mice to observe the phenotypic effect of perturbing certain cellular pathways at the putative target protein (titov2012identification). As a targeted approach, bioengineering techniques have been developed to either silence mRNA or remove the gene entirely through genetic editing. More recently, CRISPR has been used to knock out genes more cleanly than prior technologies, e.g. siRNA, shRNA, TALEN (boettcher2015choosing; smith2017evaluation; peretz2018combined). Furthermore, innovations have led to CRISPR interference (CRISPRi) and CRISPR activation (CRISPRa), which allow for suppression or overexpression of target genes (le2017dual).
To complete the picture, biochemical experiments observe chemical and protein interactions to inform on possible drug mechanisms of action (schenone2013target), examples include: affinity chromatography, a range of mass spectrometry techniques for proteomics, and drug affinity responsive target stability (DARTS) assays (cuatrecasas1968selective; lomenick2011target; ong2005mass). X-ray crystallography and cryogenic electron microscopy (cryo-EM) can be used to detail structures of proteins to identify druggable pockets (shoemaker2018x); computational approaches can be used to assess the impacts of mutations in cancer resulting in perturbed crystal structures (malhotra2019understanding). Yeast two-hybrid or three-hybrid systems can be employed to detail genomic protein–protein or RNA–protein interactions (hamdi2012yeast; licitra1996three).
Systems biology aims to unify phenomenological observations on disease biology (the “-omics view”), genetic drivers of phenotypes (driven by bioengineering) through a network view of interacting biomolecules (butcher2004systems). The ultimate goal is to pin down a “druggable” point of intervention that could hopefully reverse the disease condition. One of the outcomes of this endeavour is the construction of signalling pathways; for example, the characterisation of the TGF-β, PI3K/AKT and Wnt-dependent signalling pathways have had profound impacts on oncology drug discovery (akhurst2012targeting; hennessy2005exploiting; janssens2006wnt).
In contrast to complex diseases, target identification for infectious disease requires a different philosophy. After eliminating pathogenic targets structurally similar to those within the human proteome, one aims to assess the druggability of the remaining targets. This may be achieved using knowledge of the genome to model the constituent proteins when 3D structures are not already available experimentally. The Blundell group has shown that 70-80% of the proteins from Mycobacterium tuberculosis and a related organism, Mycobacterium abscessus (infecting cystic fibrosis patients), can be modelled via homology (ochoa2015chopin; skwark2019mabellini). By examining the potential binding sites, such as the catalytic site of an enzyme or an allosteric regulatory site, the binding hotspot can be identified and the potential value as a target estimated (blundell2020personal). Of course, target identification is also dependent on the accessibility of the target to a drug, as well as the presence of efflux pumps — and metabolism of any potential drug by the infectious agent.
4.1.1. From systems biology to machine learning on graphs
Organisms, or biological systems, consist of complex and dynamic interactions between entities at multiple scales. At the submolecular level, proteins are chains of amino acid residues which fold to adopt highly specific conformational structures. At the molecular scale, proteins and other biomolecules physically interact through transient and long-timescale binding events to carry out regulatory processes and perform signal amplification through cascades of chemical reactions. By starting with a low-resolution understanding of these biomolecular interactions, canonical sequences of interactions associated with specific processes become labelled as pathways that ultimately control cellular functions and phenotypes. Within multicellular organisms, cells interact with each other forming diverse tissues and organs. A reductionist perspective of disease is to view it as being the result of perturbations of the cellular machinery at the molecular scale that manifest through aberrant phenotypes at the cellular and organismal scales. Within target identification, one is aiming to find nodes that upon manipulation lead to a causal sequence of events resulting in the reversion from a diseased to a healthy state.
It seems plausible that target identification will be the greatest area of opportunity for machine learning on graphs. From a genetics perspective, drug targets whose coding variants are linked to Mendelian traits or genome-wide association studies (GWAS) have a greater chance of success in the clinic (king2019drug; nelson2015support). However, when examining protein–protein interaction networks, Fang et al. (fang2019genetics) found that various protein targets were not themselves “genetically associated”, but interacted with other proteins with genetic associations to the disease in question. For example, in the case of rheumatoid arthritis (RA), tumour necrosis factor (TNF) inhibition is a popular drug mechanism of action with no genetic association — but the interacting proteins of TNF, including CD40, NFKBIA, REL, BIRC3, and CD8A, are known to exhibit a genetic predisposition to RA.
Oftentimes, systems biology has focused on networks with static nodes and edges, ignoring faithful characterisation of the underlying biomolecules that the nodes represent. With GML, we can account for much richer representations of biology across multiple relevant scales, for example, graph representations of molecular structures (discussed in Sections 4.2 and 4.3), functional relationships within a knowledge graph (discussed in Section 4.4), and the expression of biomolecules. Furthermore, GML can learn graphs from data as opposed to relying on pre-existing incomplete knowledge (wang2019dynamic; kazi2020differentiable). Early work utilising GML for target identification includes Pittala et al. (pittala2020relation), whereby a knowledge graph link prediction approach was used to beat the in-house algorithms of Open Targets (carvalho2019open) in rediscovering drug targets within clinical trials for Parkinson’s disease.
The utilisation of multi-omic expression data capturing instantaneous multimodal snapshots of cellular states will play a significant role in target identification as costs decrease (nicora2020integrated; sanchez2020interpreting) — particularly in a precision medicine framework (sandhu2018panomics). Currently, however, only a few panomic datasets are publicly available. A small number of early adopters have spotted the clear utility in employing GML (wang2020moronet), occasionally in a multimodal learning (nguyen2020multiview; ma2019integrate), or causal inference setting (pfister2019stabilizing). These approaches have helped us move away from the classical Mendelian “one gene – one disease” philosophy and appreciate the true complexity of biological systems.
4.2. Design of small molecule therapies
Drug design broadly falls into two categories: phenotypic drug discovery and target-based drug discovery. Phenotypic drug discovery (PDD) begins with a disease’s phenotype without requiring knowledge of the drug target. Without the bias of a known target, PDD has yielded many first-in-class drugs with novel mechanisms of action (swinney2011were). It has been suggested that PDD could provide a new paradigm of drug design, saving substantial costs and increasing productivity (moffat2017opportunities). However, drugs found by PDD are often pleiotropic and impose greater safety risks when compared to target-oriented drugs. In contrast, best-in-class drugs are usually discovered by a target-based approach.
For target-based drug discovery, after target identification and validation, “hit” molecules are identified via high-throughput screening of compound libraries against the target (hughes2011principles), typically resulting in a large number of possible hits. These are grouped into “hit series” and further refined in functional in vitro assays. Ultimately, only those selected via secondary in vitro assays and in vivo models become the drug “leads”. With each layer of screening and assays, the remaining compounds should be more potent and selective against the therapeutic target. Finally, lead compounds are optimised by structural modifications to improve properties such as PKPD, typically using heuristics, e.g. Lipinski’s rule of five (lipinski1997experimental). In addition to such structure-based approaches, fragment-based (FBDD) (blundell2002high; murray2010structural) and ligand-based drug discovery (LBDD) have also been popular (erlanson2004fragment; acharya2011recent). FBDD enhances ligand efficiency and binding affinity starting from fragment-like leads of approximately 150 Da, whilst LBDD does not require 3D structures of the therapeutic targets.
Both phenotypic and target-based drug discovery come with their own risks and merits. While PDD avoids the operational costs of target identification, developing suitable phenotypic screening assays for the disease can be more time-consuming and costly (zheng2013phenotypic; moffat2017opportunities). Hence, the overall timeline and capital costs are roughly the same (zheng2013phenotypic).
In this review, we make no distinction between new chemical entities (NCE), new molecular entities (NME), or new active substances (NAS) (branch2014new).
4.2.1. Modelling philosophy
For a drug, the base graph representation is obtained from the molecule’s SMILES signature and captures bonds between atoms, i.e. each node of the graph corresponds to an atom and each edge to a bond between two atoms (duvenaud2015convolutional; klicpera2020directional). The features associated with atoms typically include their element, valence, and degree. Edge features include the associated bond’s type (single, double, or triple), its aromaticity, and whether it is part of a ring. Additionally, Klicpera et al. (klicpera2020directional) consider the geometric length of a bond and the geometric angles between bonds as additional features. This representation is used in most applications, sometimes complemented or augmented with heuristic approaches.
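To make this concrete, the bonds-as-edges representation can be assembled by hand for a tiny molecule. In practice a cheminformatics toolkit such as RDKit would parse the SMILES string and supply the atom and bond attributes; the minimal sketch below hand-codes ethanol (SMILES “CCO”) with an illustrative one-hot featurisation, not a canonical feature set.

```python
# Minimal sketch: graph representation of ethanol (SMILES "CCO").
# Atom/bond feature choices are illustrative, not canonical.
ELEMENTS = ["C", "O", "N"]                     # toy element vocabulary
BOND_TYPES = ["single", "double", "triple"]

atoms = ["C", "C", "O"]                        # node list for CCO
bonds = [(0, 1, "single"), (1, 2, "single")]   # (i, j, bond type)

def one_hot(value, vocab):
    return [1.0 if value == v else 0.0 for v in vocab]

# Node features: one-hot element (valence, degree etc. would be appended).
node_features = [one_hot(a, ELEMENTS) for a in atoms]

# Symmetric adjacency matrix and edge features keyed by (i, j).
n = len(atoms)
adjacency = [[0] * n for _ in range(n)]
edge_features = {}
for i, j, btype in bonds:
    adjacency[i][j] = adjacency[j][i] = 1
    edge_features[(i, j)] = edge_features[(j, i)] = one_hot(btype, BOND_TYPES)
```

The same arrays (node feature matrix, adjacency, edge features) are what GNN libraries consume as a molecular graph.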
To model a graph structure, Jin et al. (jin2018junction) used the base graph representation in combination with a junction tree derived from it. To construct the junction tree, the authors first define a set of molecule substructures, such as rings. The graph is then decomposed into overlapping components, each corresponding to a specific substructure. Finally, the junction tree is defined with each node corresponding to an identified component and each edge associates overlapping components.
Jin et al. then extended their previous work by using a hierarchical representation with varying coarseness of the small molecule (jin2020hierarchical). The proposed representation has three levels: 1) an atom layer, 2) an attachment layer, and 3) a motif layer. The first level is simply the basic graph representation. The following levels provide the fine- and coarse-grain connection patterns between a molecule’s motifs. Specifically, the attachment layer describes at which atoms two motifs connect, while the motif layer only captures whether two motifs are linked. Considering a molecule’s base graph representation $\mathcal{G} = (\mathcal{V}, \mathcal{E})$, a motif $\mathcal{M}_i = (\mathcal{V}_i, \mathcal{E}_i)$ is defined as a subgraph of $\mathcal{G}$ induced on atoms in $\mathcal{V}_i$ and bonds in $\mathcal{E}_i$. Motifs are extracted from a molecule’s graph by breaking bridge bonds.
Kajino (kajino2019molecular) opted for a hypergraph representation of small molecules. A hypergraph is a generalisation of graphs in which an edge, called a hyperedge, can connect any number of nodes. In this setting, a node of the hypergraph corresponds to a bond between two atoms of the small molecule. In contrast, a hyperedge then represents an atom and connects all its bonds (i.e. nodes).
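The role reversal in this hypergraph view can be sketched concretely: starting from a molecule’s bond list, each bond becomes a hypergraph node and each atom becomes a hyperedge collecting its incident bonds. The snippet below is a minimal data-structure sketch of that construction (a toy three-atom chain, not Kajino’s implementation).

```python
# Sketch of a Kajino-style hypergraph: hypergraph nodes are bonds,
# hyperedges are atoms grouping their incident bonds. Toy molecule: propane
# backbone with atoms 0-1-2 and two bonds.
bonds = [(0, 1), (1, 2)]                 # bonds between atom indices

# Hypergraph nodes: one per bond (indexed by position in the bond list).
hyper_nodes = list(range(len(bonds)))

# Hyperedges: atom index -> set of incident bond indices.
hyperedges = {}
for bond_idx, (i, j) in enumerate(bonds):
    hyperedges.setdefault(i, set()).add(bond_idx)
    hyperedges.setdefault(j, set()).add(bond_idx)
```

The central atom (index 1) yields a hyperedge connecting both bonds, which is exactly how an atom with many bonds becomes a single hyperedge over all of them.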
4.2.2. Molecular property prediction
Pharmaceutical companies may screen millions of small molecules against a specific target, e.g., see GlaxoSmithKline’s DNA-encoded small molecule library of 800 million entries (clark2009design). However, as the end result will be optimised by a skilled medicinal chemist, one should aim to substantially cut down the search space by screening only a representative selection of molecules for optimisation later. One route towards this is to select molecules with heterogeneous chemical properties using GML approaches. This is a popular task with well-established benchmarks such as QM9 (ramakrishnan2014quantum) and MD17 (chmiela2017machine). Top-performing methods are based on graph neural networks.
For instance, using a graph representation of drugs, Duvenaud et al. (duvenaud2015convolutional) have shown substantial improvements over non-graph-based approaches for molecule property prediction tasks. Specifically, the authors used GNNs to embed each drug and tested the predictive capabilities of the model on diverse benchmark datasets. They demonstrated improved interpretability of the model and predictive superiority over previous approaches which relied on circular fingerprints (glen2006circular). The authors use a simple GNN layer with a read-out function on the output of each GNN layer that updates the global drug embedding.
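The sum-and-read-out pattern described above can be illustrated numerically: at each layer, every node aggregates its neighbours’ features through the adjacency matrix, applies a learned transformation, and a read-out adds each layer’s node embeddings into a global molecule vector. The sketch below uses fixed toy weights on a three-atom chain; it mirrors the pattern, not the published neural fingerprint architecture.

```python
import numpy as np

# Toy sketch of a neural-fingerprint-style GNN: per layer, each node sums
# self + neighbour features, applies a linear map and nonlinearity, and a
# read-out pools every layer's node embeddings into one molecule vector.
A = np.array([[0, 1, 0],                 # adjacency of a 3-atom chain
              [1, 0, 1],
              [0, 1, 0]], dtype=float)
H = np.eye(3)                            # initial one-hot node features

rng = np.random.default_rng(0)           # fixed toy weights, 2 layers
W1, W2 = rng.normal(size=(3, 3)), rng.normal(size=(3, 3))

fingerprint = np.zeros(3)
for W in (W1, W2):
    H = np.tanh((A + np.eye(3)) @ H @ W)  # aggregate self + neighbours
    fingerprint += H.sum(axis=0)          # read-out: pool nodes per layer
```

Because the read-out sums over nodes at every layer, the molecule-level vector captures substructure information at several neighbourhood radii, analogous to circular fingerprints.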
Alternatively, Schütt et al. (schutt2018schnet) introduced SchNet, a model that characterises molecules based on their representation as a list of atoms with interatomic distances, which can be viewed as a fully connected graph. SchNet learns embeddings for each atom using two modules: 1) an atom-wise module, and 2) an interaction module. The former applies a simple MLP transformation to each atom representation, while the latter updates the atom representation based on the representations of the other atoms of the molecule, using relative distances to modulate contributions. The final molecule representation is obtained with a global sum pooling layer over all atoms’ embeddings.
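The distance-based input that SchNet operates on can be sketched directly: every ordered atom pair in the fully connected view gets its interatomic distance expanded on a grid of Gaussian radial basis functions. The centres and width below are illustrative choices, not SchNet’s published hyperparameters.

```python
import math

# Sketch of SchNet-style input featurisation: interatomic distances of a
# fully connected atom graph, expanded on Gaussian radial basis functions.
coords = [(0.0, 0.0, 0.0), (1.5, 0.0, 0.0), (0.0, 1.5, 0.0)]  # toy 3D positions

def dist(p, q):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

distances = [[dist(p, q) for q in coords] for p in coords]

centres = [0.0, 1.0, 2.0, 3.0]          # illustrative RBF centres (Angstrom)
def rbf(d, gamma=10.0):
    return [math.exp(-gamma * (d - c) ** 2) for c in centres]

# One distance-expansion vector per ordered atom pair (fully connected view).
edge_rbf = {(i, j): rbf(distances[i][j])
            for i in range(3) for j in range(3) if i != j}
```

Smooth distance expansions like this are what lets the interaction module modulate each neighbour’s contribution continuously rather than through a hard cutoff.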
With the same objective in mind, Klicpera et al. (klicpera2020directional) recently introduced DimeNet, a novel GNN architecture that diverges from the standard message passing framework presented in Section 3. DimeNet defines message coefficients between atoms based on their relative positioning in 3D space. Specifically, the message $m_{ji}$ from node $j$ to node $i$ is iteratively updated based on $j$’s incoming messages, the distances between atoms, and the angles between atomic bonds. By relying on these richer geometric features, the authors report substantial improvements over SOTA models for the prediction of molecule properties on two benchmark datasets.
Most relevant to the later stages of preclinical work, Feinberg et al. extended previous work on molecular property prediction (feinberg2018potentialnet) to include ADME properties (feinberg2020improvement). In this scenario, predictions were made using only the structures of drugs across a diverse range of experimental observables, including half-lives in in vivo models (rat, dog), hERG protein interactions, and IC50 values for common liver enzymes predictive of drug toxicity.
4.2.3. Enhanced high throughput screens
Within the previous section, chemical properties were a priori defined. In contrast, Stokes et al. (stokes2020deep) leveraged results from a small phenotypic growth inhibition assay of 2,335 molecules against Escherichia coli to infer antibiotic properties of the ZINC15 collection of 107 million molecules. After ranking and curating hits, only 23 compounds were experimentally tested — leading to halicin being identified. Of particular note, the Tanimoto similarity of halicin when compared to its nearest-neighbour antibiotic, metronidazole, was only 0.21, demonstrating the ability of the underlying ML to generalise to diverse structures.
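The Tanimoto similarity quoted here is the Jaccard index over fingerprint bit sets: shared on-bits divided by total on-bits. Real pipelines compute the bit sets with cheminformatics fingerprints (e.g. Morgan fingerprints via RDKit); the bit positions below are made up purely for illustration.

```python
# Tanimoto (Jaccard) similarity on fingerprint bit sets. The bit positions
# here are illustrative; real fingerprints come from a cheminformatics
# toolkit such as RDKit.
def tanimoto(fp_a, fp_b):
    a, b = set(fp_a), set(fp_b)
    return len(a & b) / len(a | b)

fp_query = {1, 4, 7, 9}                  # hypothetical on-bits, molecule A
fp_neighbour = {1, 4, 8, 10, 12}         # hypothetical on-bits, molecule B
similarity = tanimoto(fp_query, fp_neighbour)   # 2 shared bits / 7 total
```

A value as low as 0.21 against the nearest known antibiotic means the model ranked a structure far outside the training chemotypes, which is the generalisation claim being made.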
Testing halicin against a range of bacterial infections, including Mycobacterium tuberculosis, demonstrated broad-spectrum activity through selective dissipation of the pH component of the proton motive force. In a world first, Stokes et al. showed efficacy of an AI-identified molecule in vivo (Acinetobacter baumannii infected neutropenic BALB/c mice) to beat the standard of care treatment (metronidazole) (stokes2020deep).
4.2.4. De novo design
A more challenging task than those previously discussed is de novo design of small molecules from scratch; that is, for a fixed target (typically represented via 3D structure) can one design a suitable and selective drug-like entity?
In a landmark paper, Zhavoronkov et al. (zhavoronkov2019deep) created novel chemical matter against discoidin domain receptor 1 (DDR1) using a variational-autoencoder-style architecture. Notably, they penalised the model for generating structures similar to drugs disclosed in the patent literature. As the approach was designed to find small molecules for a well-known target, crystal structures were available and subsequently utilised. Additionally, the ZINC dataset containing hundreds of millions of structures was used (unlabelled data), along with confirmed positive and negative hits for DDR1.
In total, six compounds were synthesised, with four showing activity against DDR1. Whilst selectivity for DDR1 over DDR2 was shown for two molecules, selectivity against a larger panel of off-targets was not demonstrated. Although further development (e.g., PK or toxicology testing) was not shown, Zhavoronkov et al. demonstrated de novo small molecule design in an experimental setting (zhavoronkov2019deep). Arguably, the recommendation of an existing molecule is a simpler task than designing one from scratch.
4.3. Design of new biological entities
New biological entities (NBEs) refer to biological products, or biologics, that are produced in living systems (shire2009formulation). Biologics are highly diverse, ranging from proteins (more than 40 amino acids) and peptides to antibodies and cell and gene therapies. Therapeutic proteins tend to be large and complex in structure, and are unstable in contrast to small molecules (patel2015biologics). Biologic therapies typically use cell-based production systems that are prone to post-translational modification and are thus sensitive to environmental conditions, requiring mass spectrometry to characterise the resulting heterogeneous collection of molecules (mo2012structural).
In general, the target-to-hit-to-lead pathway also applies to NBE discovery, with similar procedures such as high-throughput screening assays. Typically, an affinity-based high-throughput screening method is used to select from a large library of candidates using one target. One must then separately study off-target binding to similar proteins and peptides, as well as immune surveillance (kumar2020characterization).
4.3.1. Modelling philosophy
Focusing on proteins, the consensus to derive the protein graph representation is to use pairwise spatial distances between amino acid residues, i.e. the protein’s contact map, and to apply an arbitrary cut-off or Gaussian filter to derive adjacency matrices (fout2017protein; zamora2019structural; gligorijevic2020structure), see Figure 8.
However, protein structures are substantially more complex than small molecules and, as such, there are several resulting graph construction schemes. For instance, residue-level graphs can be constructed by representing the intramolecular interactions, such as hydrogen bonds, that compose the structure as edges joining their respective residues. This representation has the advantage of explicitly encoding the internal chemistry of the biomolecule, which determines structural aspects such as dynamics and conformational rearrangements. Other edge constructions can be distance-based, such as K-NN (where a node is joined to its K most proximal neighbours) (fout2017protein; zamora2019structural) or based on Delaunay triangulation. Node features can include structural descriptors, such as solvent accessibility metrics, encoding the secondary structure, distance from the centre or surface of the structure and low-dimensional embeddings of physicochemical properties of amino acids. It is also possible to represent proteins as large molecular graphs at the atomic level in a similar manner to small molecules. Due to the plethora of graph construction and featurisation schemes available, tools are being made available to facilitate the pre-processing of such protein structures (Jamasb2020).
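Two of the constructions above, the distance-cutoff contact map and the K-NN graph, can be sketched in a few lines of numpy. The residue coordinates below are toy values standing in for C-alpha positions parsed from a PDB structure; the 8 Angstrom cutoff is a common but not universal choice.

```python
import numpy as np

# Sketch of residue-graph construction from C-alpha coordinates: a distance
# cutoff adjacency (thresholded contact map) and a K-NN variant.
coords = np.array([[0.0, 0.0, 0.0],     # toy coordinates for 4 residues
                   [3.8, 0.0, 0.0],
                   [7.6, 0.0, 0.0],
                   [7.6, 3.8, 0.0]])
D = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)

cutoff = 8.0                            # Angstrom threshold on the contact map
A_cutoff = ((D < cutoff) & (D > 0)).astype(int)

k = 2                                   # join each residue to its k nearest
order = np.argsort(D, axis=1)[:, 1:k + 1]   # skip self (distance 0)
A_knn = np.zeros_like(A_cutoff)
for i, neighbours in enumerate(order):
    A_knn[i, neighbours] = 1            # note: K-NN graphs may be asymmetric
```

The cutoff graph is symmetric by construction, whereas the K-NN graph can be directed (residue i may be among j's nearest neighbours without the converse holding), a distinction that matters when choosing a GNN operator.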
One should note that sequences can be considered as special cases of graphs and are compatible with graph-based methods. However, in practice, language models are preferred to derive protein embeddings from amino acids sequences (rao2019evaluating; rives2019biological). Recent works suggest that combining the two can increase the information content of the learnt representations (ingraham2019generative; gligorijevic2020structure). Several recurrent challenges in the scientific community aim to push the limit of current methods. For instance, the CAFA (radivojac2013large) and CAPRI (lensink_capri2020) challenges aim to improve protein functional classification and protein–protein interaction prediction.
4.3.2. ML-assisted directed evolution
Display technologies have driven the modern development of NBEs; in particular, phage display and yeast display are widely used for the generation of therapeutic antibodies. In general, a peptide or protein library with diverse sequence variety is generated by PCR, or other recombination techniques (galan2016library). The library is “displayed” for genotype-phenotype linkage such that the protein is expressed and fused to surface proteins while the encoding gene is still encapsulated within the phage or cell. Therefore, the library could be screened and selected, in a process coined “biopanning”, against the target (e.g. antigen) according to binding affinity. Thereafter, the selected peptides are further optimised by repeating the process with a refined library. In phage display, selection works by physical capture and elution (nixon2014drugs); for cell-based display technologies (like yeast display), fluorescence-activated cell sorting (FACS) is utilised for selection (bradbury2011beyond).
Due to the repeated iteration of experimental screens and the high number of outputs, such display technologies are now being coupled to ML systems for greater speed, affinity and further in silico selection (rickerby2020machine; yang2019machine). As of yet, it does not appear that advanced ML architectures have been applied in this domain, but promising routes forward have been recently developed. For example, Hawkins-Hooker et al. (hawkins2020generating) trained multiple variational autoencoders on the amino acid sequences of 70,000 luciferase-like oxidoreductases to generate new functional variants of the luxA bacterial luciferase. Testing these experimentally led to variants with increased enzyme solubility without disrupting function. Using this philosophy, one has a grounded approach to refine libraries for a directed evolution screen.
4.3.3. Protein engineering
Some proteins have reliable scaffolds that one can build upon, for example, antibodies whereby one could modify the variable region but leave the constant region intact. For example, Deac et al. (deac2019attentive) used dilated (à trous) convolutions and self-attention on antibody sequences to predict the paratope (the residues on the antibody that interact with the antigen) as well as a cross-modal attention mechanism between the antibody and antigen sequences. Crucially, the attention mechanisms also provide a degree of interpretability to the model.
In a protein-agnostic context, Fout et al. (fout2017protein) used Siamese architectures (bromley1994signature) based on GNNs to predict which amino acid residues are involved in the interface of a protein–protein complex. Each protein is represented as a graph where nodes correspond to amino acid residues and edges connect each residue to its closest neighbouring residues. The authors propose multiple aggregation functions of varying complexity, following the general principles of DCNN (atwood2016diffusion). The output embeddings of the residues of both proteins are concatenated all-to-all, and the objective is to predict whether two residues are in contact in the protein complex based on their concatenated embeddings. The authors report a significant improvement over the method without the GNN layers, i.e. directly using the amino acid residue sequence and structural properties (e.g. solvent accessibility, distance from the surface).
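The all-to-all concatenation step is simple but easy to misread, so a small sketch may help: every residue embedding from one protein is paired with every residue embedding from the other, yielding one feature vector per candidate contact. The embeddings below are toy vectors standing in for GNN outputs; in the actual model each pair vector is then scored by an MLP.

```python
import itertools

# Sketch of the all-to-all pairing step in interface prediction: residue
# embeddings from two partner proteins (toy values standing in for GNN
# outputs) are concatenated pairwise, one vector per candidate contact.
emb_a = [[0.1, 0.2], [0.3, 0.4], [0.5, 0.6]]   # 3 residues, protein A
emb_b = [[0.7, 0.8], [0.9, 1.0]]               # 2 residues, protein B

pairs = [(i, j, ea + eb)                       # list concatenation
         for (i, ea), (j, eb) in itertools.product(
             enumerate(emb_a), enumerate(emb_b))]
# Each concatenated vector would be fed to an MLP for a contact score.
```

Note the quadratic blow-up: an n-residue protein paired with an m-residue partner produces n * m candidate contacts, which bounds the protein sizes this scheme scales to.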
Gainza et al. (gainza2020deciphering) recently introduced Molecular Surface Interaction Fingerprinting (MaSIF) for tasks such as protein–ligand prediction, protein pocket classification, or protein interface site prediction. The approach is based on GML applied on mesh representations of the solvent-excluded protein surface, abstracting the underlying sequence and internal chemistry of the biomolecule. In practice, MaSIF first discretises a protein surface with a mesh where each point (vertex) is considered as a node in the graph representation. Then the protein surface is decomposed into overlapping small patches based on the geodesic radius, i.e. clusters of the graph. For each patch, geometric and chemical features are handcrafted for all nodes within the patch. The patches serve as bases for learnable Gaussian kernels (monti2017geometric) that locally average node-wise patch features and produce an embedding for each patch. The resulting embeddings are fed to task-dependent decoders that, for instance, give patch-wise scores indicating if a patch overlaps with an actual protein binding site.
4.3.4. De novo design
One of the great ambitions of bioengineering is to design proteins from scratch. In this case, one may have an approximate structure in mind, e.g., to inhibit the function of another endogenous biomolecule. This motivates the inverse protein folding problem, identifying a sequence that can produce a pre-determined protein structure. For instance, Ingraham et al. (ingraham2019generative) leveraged an autoregressive self-attention model using graph-based representations of structures to predict corresponding sequences.
Strokach et al. (strokach2020fast) leveraged a deep graph neural network to tackle protein design as a constraint satisfaction problem. Predicted structures resulting from novel sequences were initially assessed in silico using molecular dynamics and energy-based scoring. Subsequent in vitro synthesis of sequences led to structures that matched the secondary structure composition of serum albumin evaluated using circular dichroism.
With a novel amino acid sequence that could generate the desired shape of an arbitrary protein, one would then want to identify potential wanted and unwanted effects via functional annotations. These include enzyme commission (EC) numbers, a hierarchy of labels to characterise the reactions that an enzyme catalyses — previously studied by the ML community (ryu2019deep; dalkiran2018ecpred).
Zamora et al. (zamora2019structural) developed a pipeline based on graph representations of a protein’s amino acid residues for structure classification. The model consists of the sequential application of graph convolutional blocks. A block takes as input two matrices corresponding to residue features and coordinates, respectively. The block first uses a layer to learn Gaussian filters applied on the proteins’ spatial distance kernel, hence deriving multiple graph adjacency matrices. These are then used as input graphs in a GNN layer which operates on the residue features. The block then performs a 1D average pooling operation on both the feature matrix and the input coordinate matrix, yielding the outputs of the block. After the last block, the final feature matrix is fed to a global attention pooling layer which computes the final embedding of the protein used as input to an MLP for classification. The model performs on par with existing 3D-CNN-based models. However, the authors observe that it is more interpretable, enabling the identification of substructures that are characteristic of structural classification.
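The first step of this pipeline, deriving several adjacency matrices by applying Gaussian filters to the residue distance kernel, can be sketched with fixed filter parameters (in the model the centres and widths are learned, so the values below are illustrative only).

```python
import numpy as np

# Sketch of deriving multiple adjacency matrices from a residue distance
# kernel with Gaussian filters, as in the first block of Zamora et al.'s
# pipeline. Filter centres/width are fixed here; the model learns them.
D = np.array([[0.0, 4.0, 9.0],          # toy residue distance kernel
              [4.0, 0.0, 5.0],
              [9.0, 5.0, 0.0]])

centres = [4.0, 8.0]                    # one adjacency matrix per filter
sigma = 2.0
adjacencies = [np.exp(-((D - mu) ** 2) / (2 * sigma ** 2)) for mu in centres]
# adjacencies[0] emphasises ~4 Angstrom contacts, adjacencies[1] ~8 Angstrom.
```

Each filter thus carves out a different distance band of the structure, and the downstream GNN layer operates on all of the resulting graphs.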
Recently, Gligorijevic et al. (gligorijevic2020structure) proposed DeepFRI, a model that predicts a protein’s functional annotations based on structure and sequence information. The authors define the graph representation of a protein based on the contact map between its residues. They first use a pre-trained language module to derive residue-level sequence features; each protein’s graph is then fed to multiple GCN layers (kipf2016semi). The outputs of each GCN layer are concatenated to give the final embedding of the protein, which is fed to an MLP giving the final functional predictions. The authors report substantial improvements over SOTA methods. Furthermore, they highlight the interpretability of their approach, demonstrating the ability to associate specific annotations with particular structures.
4.4. Drug repurposing
The term drug repurposing describes the use of an existing drug, whether approved or in development, for an indication other than that originally intended. Considering that only a small fraction of drugs that reach clinical trials receive FDA approval, repurposed drugs offer an attractive alternative to new therapies as they are likely to have shorter development times and higher success rates, with the early stages of drug discovery already completed. It has been estimated that repurposed treatments account for approximately 30% of newly FDA approved drugs and their associated revenues (pillaiyar2020medicinal), and that up to 75% of entities could be repurposed (nosengo2016new). Note that we incorporate product line extensions (PLEs) within drug repurposing, whereby one wishes to identify secondary indications, different formulations for an entity, or partner drugs for combination therapies.
As such, there is a major interest in using in silico methods to screen and infer new treatment hypotheses (hodos2016silico). Drug repurposing relies on finding new indications for existing molecules, either by identifying actionable pleiotropic activity (off-target repurposing), similarities between diseases (on-target repurposing), or by identifying synergistic combinations of therapies (combination repurposing). Well-known examples include: ketoconazole (Nizoral) used to treat fungal infections via enzyme CYP51A1 and now used to treat Cushing syndrome via off-target interactions with CYP17A1 (steroid synthesis/degradation), NR3C4 (androgen receptor), and NR3C1 (glucocorticoid receptor); sildenafil (Viagra) originally intended for pulmonary arterial hypertension on-target repurposed to treat erectile dysfunction; and Genvoya, a combination of emtricitabine, tenofovir alafenamide (both reverse transcriptase inhibitors), elvitegravir (an integrase inhibitor), and cobicistat (a CYP3A inhibitor to improve PK) to treat human immunodeficiency virus (HIV).
4.4.1. Off-target repurposing
Estimates suggest that each small molecule may interact with tens, if not hundreds of proteins (zhou2015comprehensive). Due to small molecule pleiotropy — particularly from first-in-class drugs (moffat2017opportunities) — off-targets of existing drugs can be a segue to finding new indications.
A variety of traditional techniques are used to identify missing drug–target interactions. For instance, when the structure of the protein is known, these stem from biophysical simulations, i.e. molecular docking or molecular dynamics. Depending on how strictly one thresholds sequence similarity between the roughly 21,000 human proteins, structural coverage, whereby a 3D structure of the protein exists, ranges from 30% (under strict thresholds) to 70% (under permissive thresholds) (somody2017structural). However, 34% of proteins are classified as intrinsically disordered proteins (IDPs) with no fixed 3D structure (deiana2019intrinsically; uversky2019intrinsically). Moreover, drugs seldom have a fixed shape due to the presence of rotatable bonds. There is now a growing body of GML literature on inferring missing drug–target interactions both with and without relying on the availability of a 3D protein structure.
Requiring protein structures, Torng et al. (torng2019graph) focused on the task of associating drugs with protein pockets they can bind to. Drugs are represented based on their atomic structures, and protein pockets are characterised with a set of key amino acid residues connected based on Euclidean distance. Drug embeddings are obtained with the GNN operator from Duvenaud et al. (duvenaud2015convolutional). To derive embeddings of protein pockets, the authors first use two successive graph autoencoders with the purposes of 1) deriving a compact feature vector for each residue, and 2) deriving a graph-level representation of the protein pocket itself. These autoencoders are pre-trained, with the encoder of the first serving as input to the second. Both encoders are then used as input layers of the final model. The association prediction between a drug and a protein pocket is then obtained by feeding the concatenation of the drug and pocket representations to an MLP layer. The authors report improved performance against the previous SOTA model based on a 3D-CNN operating on a grid-structure representation of the protein pocket (ragoza2017protein).
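The final scoring step, concatenating a drug embedding with a pocket embedding and passing the result through an MLP, can be sketched as follows (a minimal NumPy illustration; the random, untrained weights and the dimensions are arbitrary stand-ins for the learned parameters):

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp_score(drug_emb, pocket_emb, W1, b1, W2, b2):
    """Score a drug-pocket pair from the concatenated embeddings (sketch)."""
    x = np.concatenate([drug_emb, pocket_emb])
    h = np.maximum(W1 @ x + b1, 0.0)      # ReLU hidden layer
    logit = W2 @ h + b2                   # scalar logit
    return 1.0 / (1.0 + np.exp(-logit))   # binding probability

# Hypothetical embedding sizes and randomly initialised weights.
d, p, hdim = 8, 8, 16
W1 = rng.normal(size=(hdim, d + p)); b1 = np.zeros(hdim)
W2 = rng.normal(size=hdim); b2 = 0.0
score = mlp_score(rng.normal(size=d), rng.normal(size=p), W1, b1, W2, b2)
assert 0.0 < score < 1.0
```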
A range of GML methods for drug–target interaction do not require protein structure. For instance, Gao et al. (gao2018interpretable) use two encoders to derive embeddings for proteins and drugs, respectively. For the first encoder, recurrent neural networks are used to derive an embedding matrix of the protein based on its sequence and functional annotations. For the second encoder, each drug is represented by its underlying graph of atoms, and the authors use GNNs to extract an embedding matrix of the graph; specifically, they use three layers of GIN (xu2018powerful) to build their architecture. Finally, a global attention pooling mechanism is used to extract vector embeddings for both drugs and proteins based on their matrix embeddings. The two resulting vectors are fed into a Siamese neural network (bromley1994signature) to predict their association score. The proposed approach is especially successful compared to baselines on cold-start problems, where the protein and/or drug are not present in the training set.
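Global attention pooling, which collapses a node-embedding matrix into a single graph-level vector, can be sketched as follows (a minimal NumPy version; the gate vector `w` stands in for the learned attention projection):

```python
import numpy as np

def global_attention_pool(H, w):
    """Pool an embedding matrix H (nodes x dim) into one graph vector.

    Each node receives an attention score (here a linear projection of
    its embedding); the softmax of the scores weights a sum over nodes.
    """
    scores = H @ w                        # one score per node
    alpha = np.exp(scores - scores.max())
    alpha /= alpha.sum()                  # softmax over nodes
    return alpha @ H                      # weighted sum of node embeddings

H = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
w = np.array([1.0, -1.0])
g = global_attention_pool(H, w)
assert g.shape == (2,)
```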
Alternatively, Nascimento et al. (nascimento2016multiple) introduce KronRLS-MKL, a method that casts drug–target interaction prediction as a link prediction task on a bi-partite graph capturing drug–protein binding. The authors define multiple kernels capturing either drug similarities or protein similarities based on multiple sources of data. The optimisation problem is posed as a multiple kernel learning problem. Specifically, the authors use the Kronecker operator to obtain a kernel between drug–protein pairs. The kernel is then used to predict a drug–protein association based on its similarity to existing drug–target links. Crichton et al. (crichton2018neural) cast the task in the same setting. However, the authors use existing embedding methods, including node2vec (grover2016node2vec), DeepWalk (perozzi2014deepwalk), and LINE (tang2015line), to embed nodes in a low-dimensional space such that the embeddings capture the local graph topology. The authors feed these embeddings to a machine learning model trained to predict interactions. The underlying assumption is that a drug will be embedded closer to its protein targets.
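The core of the Kronecker construction can be illustrated in a few lines (a toy NumPy sketch with hypothetical similarity matrices for two drugs and three proteins):

```python
import numpy as np

# Toy kernels: 2 drugs, 3 proteins (symmetric similarity matrices).
K_drug = np.array([[1.0, 0.5],
                   [0.5, 1.0]])
K_prot = np.array([[1.0, 0.2, 0.0],
                   [0.2, 1.0, 0.4],
                   [0.0, 0.4, 1.0]])

# Kronecker product: entry ((i,j),(k,l)) = K_drug[i,k] * K_prot[j,l],
# i.e. a kernel defined over drug-protein *pairs*.
K_pair = np.kron(K_drug, K_prot)
assert K_pair.shape == (6, 6)
# Similarity between pair (drug0, prot0) and pair (drug1, prot1):
assert K_pair[0, 4] == K_drug[0, 1] * K_prot[0, 1]
```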
Similarly, Olayan et al. (olayan2018ddr) propose DDR to predict drug–target interactions. The authors first build a graph where each node represents either a drug or a protein. In addition to drug–protein edges, an edge between two drugs (or two proteins) represents their similarity according to a predefined heuristic from multiple data sources. DDR embeds each drug–protein pair based on the number of paths of predefined types that connect them within the graph. The resulting embeddings are fed to a random forest algorithm for drug–target prediction.
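The path-type features used by DDR can be sketched with adjacency-matrix products, where each product counts paths of a given type between drug–protein pairs (a toy NumPy illustration with hypothetical edges):

```python
import numpy as np

# Toy heterogeneous graph: 3 drugs, 2 proteins.
A_dd = np.array([[0, 1, 0],       # drug-drug similarity edges
                 [1, 0, 1],
                 [0, 1, 0]])
A_dp = np.array([[1, 0],          # known drug-protein interactions
                 [0, 0],
                 [0, 1]])
A_pp = np.array([[0, 1],          # protein-protein similarity edges
                 [1, 0]])

# Path counts per (drug, protein) pair for two predefined path types;
# stacking such counts yields a feature vector per pair, as in DDR.
paths_d_d_p = A_dd @ A_dp         # drug -> similar drug -> protein
paths_d_p_p = A_dp @ A_pp         # drug -> protein -> similar protein
features = np.stack([paths_d_d_p, paths_d_p_p], axis=-1)
assert features.shape == (3, 2, 2)
assert paths_d_d_p[1, 0] == 1     # drug1 -similar- drug0 -targets- prot0
```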
4.4.2. On-target repurposing
On-target repurposing takes a holistic perspective and uses known targets of a drug to infer new putative indications based on diverse data. For instance, one can identify functional relationships between a drug’s targets and genes associated with a disease. Also, one may look for similarities between diseases — especially those occurring in different tissues. Hypothetically, one could prospectively find repurposing candidates in the manner of Fang et al. (fang2019genetics) by finding a missing protein–protein interaction between a genetically validated target and a drug’s primary target. Knowledge graph completion approaches have been particularly effective in addressing these tasks.
For instance, Yang et al. (yang2019drug) introduced BNNR. The authors build a block matrix from a drug similarity matrix, a disease similarity matrix, and a disease–drug indication matrix. The method is based on the matrix completion property of the Singular Value Thresholding (SVT) algorithm applied to the block matrix. BNNR incorporates regularisation terms to balance approximation error against matrix rank, in order to handle noisy drug–drug and disease–disease similarities. It also adds a constraint that clips the association scores to the interval [0, 1]. The authors report performance improvements when compared to competing approaches.
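The SVT building block, soft-thresholding the singular values of a matrix to obtain a low-rank approximation, can be sketched as follows (a minimal NumPy illustration on a synthetic matrix; BNNR's regularisation and clipping are omitted):

```python
import numpy as np

def svt_step(M, tau):
    """One singular value thresholding step: soft-threshold the
    singular values of M, shrinking its (approximate) rank."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    s_shrunk = np.maximum(s - tau, 0.0)
    return U @ np.diag(s_shrunk) @ Vt

# A rank-1 block plus small noise: thresholding recovers a low-rank estimate.
rng = np.random.default_rng(0)
M = np.outer(np.ones(4), np.ones(4)) + 0.01 * rng.normal(size=(4, 4))
M_low = svt_step(M, tau=0.5)
# The small noise singular values are shrunk to (near) zero.
assert np.linalg.matrix_rank(M_low, tol=0.1) == 1
```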
Alternatively, Wang et al. (wang2020toward) recently proposed an approach to predict new drug indications based on two bipartite graphs, capturing drug–target interactions and disease–gene associations, and a PPI graph. Their algorithm is composed of an encoder module relying on graph attention networks (GAT; (velivckovic2018graph)) and an MLP decoder module. The encoder derives drug and disease embeddings through the distillation of information along the edges of the graphs. The input features for drugs and diseases are based on similarity measures: a drug's features correspond to the Tanimoto similarities between its SMILES representation and those of the other drugs, while a disease's features are defined by its similarity to other diseases computed from MeSH-associated terms.
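Tanimoto similarity itself is straightforward to compute once molecules are encoded as fingerprint bit sets (a minimal sketch with hypothetical fingerprints; in practice the fingerprints are derived from the SMILES strings):

```python
def tanimoto(fp_a, fp_b):
    """Tanimoto (Jaccard) similarity between two fingerprint bit sets."""
    inter = len(fp_a & fp_b)
    union = len(fp_a | fp_b)
    return inter / union if union else 0.0

# Hypothetical fingerprints: sets of on-bit indices for two drugs.
fp1 = {0, 3, 7, 9}
fp2 = {0, 3, 8}
print(tanimoto(fp1, fp2))  # 2 shared bits / 5 total bits = 0.4
```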
4.4.3. Combination repurposing
Combination drugs have been particularly effective in diseases with complex aetiology or an evolutionary component where resistance to treatment is common, such as infectious diseases. If synergistic drugs are found, one can reduce doses whilst improving efficacy (keith2005multicomponent; he2018methods). Strategically, combination therapies provide an additional way to extend the indications and efficacy of available entities. They can be used for a range of purposes, for example, convenience and compliance as a fixed-dose formulation (e.g. valsartan and hydrochlorothiazide for hypertension (dipette2019fixed)), to achieve synergies (e.g. co-trimoxazole: trimethoprim and sulfamethoxazole for bacterial infections), to broaden spectrum (e.g. for treatment of infections by an unknown pathogen), or to combat disease resistance (e.g. multi-drug regimens for drug-sensitive and drug-resistant tuberculosis). The number of potential combinations of even just two drugs makes a brute-force empirical laboratory testing approach a lengthy and daunting prospect. To give a rough number, there exist around 4,000 approved drugs, which would require around 8 million experiments to test all possible pairs at a single dose. Besides, there are limitless ways to change the dosage and the timing of treatments, as well as the delivery method.
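The arithmetic behind this estimate is simple (the five-dose multiplier below is a hypothetical illustration, not a figure from the text):

```python
import math

n_drugs = 4_000
# Unordered pairs of distinct drugs, tested at a single dose each:
pairs = math.comb(n_drugs, 2)
print(pairs)  # 7998000, i.e. roughly 8 million experiments

# Testing each pair at, say, 5 doses per drug multiplies this further:
print(pairs * 5 * 5)  # 199950000
```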
Arguably some of the first work using GML to model combination therapy was DECAGON by Zitnik et al. (zitnik2018modeling), used to model polypharmacy side-effects via a multi-modal graph capturing drug–side effect–drug triplets in addition to PPI interactions. In contrast, Deac et al. (deac2019drug) forwent incorporating a knowledge graph, instead modelling drug structures directly and using a co-attention mechanism to achieve a similar level of accuracy. Typically, architectures predicting drug–drug antagonism can be minimally adapted for the prediction of synergy. However, more nuanced architectures are emerging that combine partial knowledge of drug–target interactions with target–disease machine learning modules (jin2020modeling).
In the last year, to address the unprecedented COVID-19 global health crisis, multiple research groups have explored graph-based approaches to identify drugs that could be repurposed to treat SARS-CoV-2 (zhou2020network; morselli2020network; zeng2020repurpose; ioannidis2020few). For instance, Morselli et al. (morselli2020network) proposed an ensemble approach combining three different graph-based association strategies. The first two are similar in principle: each drug and each disease is represented by the set of proteins that it targets, and the association between a drug and a disease is quantified by the distance between the two sets on a PPI graph. The two approaches differ on whether the distance is quantified with shortest paths or random walks. The last strategy relies on GNNs for multimodal graphs (knowledge graphs). The graph contains PPIs, drug–target interactions, disease–protein associations, and drug indications. The formulation of the GNN layer is taken from the DECAGON model (zitnik2018modeling), an architecture similar to the R-GCN model (schlichtkrull2018modeling).
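The set-to-set proximity idea behind the first two strategies can be sketched in pure Python using breadth-first search on a toy PPI graph (protein names and edges are hypothetical, and only one of several proximity variants from the network-medicine literature is shown):

```python
from collections import deque

# Toy PPI graph as an adjacency dict (hypothetical proteins).
ppi = {"A": {"B"}, "B": {"A", "C"}, "C": {"B", "D"}, "D": {"C"}}

def bfs_distances(graph, source):
    """Hop distances from one protein to all reachable proteins."""
    dist, queue = {source: 0}, deque([source])
    while queue:
        u = queue.popleft()
        for v in graph[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

def set_proximity(graph, drug_targets, disease_proteins):
    """Mean, over drug targets, of the shortest distance to the
    disease module (one simple network-proximity variant)."""
    total = 0
    for t in drug_targets:
        d = bfs_distances(graph, t)
        total += min(d[p] for p in disease_proteins if p in d)
    return total / len(drug_targets)

print(set_proximity(ppi, {"A"}, {"C", "D"}))  # A -> B -> C: 2.0
```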
Alternatively, Zeng et al. (zeng2020repurpose) use RotatE to identify repurposing hypotheses to treat SARS-CoV-2 from a large knowledge graph constructed from multiple data sources and capturing diverse relationships between entities such as drugs, diseases, genes, and anatomies. Additionally, Ioannidis et al. (ioannidis2020few) proposed a modification of the R-GCN architecture to handle few-shot learning settings in which some relations only connect a handful of nodes.
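The RotatE scoring function itself is compact: each relation acts as an element-wise rotation in the complex plane, and a triple is plausible when rotating the head embedding lands near the tail embedding (a minimal NumPy sketch with a two-dimensional toy embedding):

```python
import numpy as np

def rotate_score(h, r_phase, t):
    """RotatE plausibility score for a triple (h, r, t): the relation is
    a vector of phases, applied as an element-wise complex rotation, and
    the score is the negative distance between rotated head and tail."""
    rotation = np.exp(1j * r_phase)       # unit-modulus complex vector
    return -np.linalg.norm(h * rotation - t)

h = np.array([1.0 + 0.0j, 0.0 + 1.0j])
r_phase = np.array([np.pi / 2, 0.0])      # rotate dimension 0 by 90 degrees
t = np.array([0.0 + 1.0j, 0.0 + 1.0j])
# The rotation maps h exactly onto t, so the score is maximal (zero).
assert abs(rotate_score(h, r_phase, t)) < 1e-9
```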
In the rare disease arena, Sosa et al. (sosa2019literature) introduced a knowledge graph embedding approach to identify repurposing hypotheses for rare diseases. The problem is cast as a link prediction task in a knowledge graph. The authors use the Global Network of Biological Relationships (GNBR) (percha2018global), a knowledge graph built through literature mining that contains diverse relationships between diseases, drugs, and genes. Due to the uncertainty associated with associations obtained from literature mining, the authors use a knowledge graph embedding approach designed to account for uncertainty (chen2019embedding). Finally, the highest-ranking associations between drugs and rare diseases are investigated, highlighting literature and biological support.
Drug repurposing is emerging as a first use case of GML methods likely to lead to new therapies within the coming years. Outside of the pharmaceutical industry, GML methods recommending nutraceuticals (veselkov2019hyperfoods) may also offer fast routes to market through generally recognized as safe (GRAS) regulatory designations.
We have discussed how GML has produced state-of-the-art results on both graph-level problems, for the description of drugs and other biomolecules, and node-level problems, for the navigation of knowledge graphs and the representation of disease biology. With the design, synthesis, and testing of de novo small molecules (zhavoronkov2019deep), the in vitro and in vivo testing of drug repurposing hypotheses (stokes2020deep), and target identification frameworks being conceptualised (pittala2020relation), we are potentially entering a golden age of validation for GML within drug discovery and development.
A few key hurdles limit lossless representation of biology. At the molecular level, bonds can be either rotatable single bonds or fixed bonds; accurately representing the degrees of freedom of a molecule is a topic of active research (flam2020neural). At the cellular level, expression of mRNA and proteins exhibits stochastic dynamics (kholodenko2006cell; raj2008nature). A pathway is not expressed in a binary fashion: some proteins may only have the potential to be expressed, e.g. via unspliced pre-mRNA, while proteases actively recycle unwanted proteins. Historically, most -omic platforms have recorded an average “bulk” signal; however, with the recent single-cell revolution, GML offers a principled approach to the characterisation of signalling cascades.
GML is still in its infancy, and underlying theoretical guarantees and limitations are under active research. For instance, deeper GNNs suffer from oversmoothing of features and oversquashing of information. Oversmoothing is the phenomenon of features washing out through repeated rounds of message passing and aggregation (oono2019graph); oversquashing refers to the distortion that arises when information from an exponentially growing neighbourhood is compressed into a fixed-size vector. The inverse problem, having too few layers to exchange information globally, is referred to as under-reaching (barcelo2019logical). These issues have limited the expressivity of traditional GNNs (dehmamy2019understanding; chen2020can). To alleviate this, a promising direction is to incorporate global information in the model, for instance by using contrastive approaches (velickovic2019deep; sun2019infograph), by augmenting the framework with relative positioning to anchor nodes (you2019position; li2020distance), or by implementing long-range information flow (flam2020neural).
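Oversmoothing is easy to demonstrate numerically: repeatedly applying mean aggregation over a connected graph drives all node features towards a common value (a minimal NumPy sketch on a four-node path graph with self-loops):

```python
import numpy as np

# Path graph on 4 nodes with self-loops, row-normalised (mean aggregation).
A = np.array([[1, 1, 0, 0],
              [1, 1, 1, 0],
              [0, 1, 1, 1],
              [0, 0, 1, 1]], dtype=float)
P = A / A.sum(axis=1, keepdims=True)

X = np.array([[1.0], [0.0], [0.0], [1.0]])   # initial node features
spread = [float(X.std())]
for _ in range(20):                          # 20 rounds of mean aggregation
    X = P @ X
    spread.append(float(X.std()))

# Feature variance across nodes collapses: the oversmoothing effect.
assert spread[-1] < 0.01 * spread[0]
```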
Due to the problem of missing data within biomedical knowledge graphs, we envisage opportunities for active learning to be deployed to label critical missing data points to explore experimentally and thereby reduce model uncertainty (sverchkov2017review). Due to the significant expense associated with drug discovery and development, integrating in silico modelling and experimental research is of great strategic importance. While active learning has previously led to biased datasets in other settings, modern techniques are addressing these drawbacks (gudovskiy2020deep; aggarwal2020active).
Finally, because GML allows for the representation of unstructured multimodal datasets, one can expect to see tremendous advances made within data integration. Most notably, highly multiplexed single-cell omic technologies are now being expanded in spatial settings (burgess2019spatial; baharlou2019mass). In addition, CRISPR screening data with associated RNA sequencing readouts are emerging as promising tools to identify key genes controlling a cellular phenotype (daniloski2020identification).
We gratefully acknowledge support from William L. Hamilton, Benjamin Swerner, Lyuba V. Bozhilova, and Andrew Anighoro.