[ICMI'21] [Pytorch] Graph Capsule Aggregation
Humans express their opinions and emotions through multiple modalities which mainly consist of textual, acoustic and visual modalities. Prior works on multimodal sentiment analysis mostly apply Recurrent Neural Networks (RNN) to model aligned multimodal sequences. However, it is impractical to align multimodal sequences due to the different sample rates of different modalities. Moreover, RNN is prone to gradient vanishing or exploding and has a limited capacity for learning long-range dependency, which is the major obstacle to modeling unaligned multimodal sequences. In this paper, we introduce Graph Capsule Aggregation (GraphCAGE) to model unaligned multimodal sequences with a graph-based neural model and Capsule Network. By converting sequence data into graphs, the previously mentioned problems of RNN are avoided. In addition, the aggregation capability of Capsule Network and the graph-based structure enable our model to be interpretable and to better solve the problem of long-range dependency. Experimental results show that GraphCAGE achieves state-of-the-art performance on two benchmark datasets with representations refined by Capsule Network and interpretation provided.
Humans analyze sentiment using the rich information from spoken words, facial attributes and tone of voice, which correspond to textual, visual and acoustic modalities, respectively (Latour et al., 1994; Manning et al., 2014). It is natural that multimodal sources provide more reliable information for a model to predict sentiment labels. Nevertheless, there are two fundamental challenges for multimodal sentiment analysis. One is the “unaligned” nature of multimodal sequences. For instance, streams from audio and vision are created by receptors with different receiving frequencies, so elements that interact with each other may lie far apart across modalities. As a result, successfully inferring long-range dependency is the key to tackling the issue of the “unaligned” nature. The other challenge is how to effectively and efficiently model long sequences. As common methods to model sequences, RNN and its variants are susceptible to gradient vanishing or exploding and have high time complexity due to their recurrent nature (Mai et al., 2020b). Therefore, it is critical to propose a model which can process sequential data appropriately without a recurrent architecture.
Existing models commonly implement forced word-alignment before training (Zadeh et al., 2018a, c; Tsai et al., 2019; Pham et al., 2019; Gu et al., 2018; Mai et al., 2020a) to deal with the “unaligned” nature, aligning the visual and acoustic features to the resolution of words before inputting them into the model. However, such word-alignment (Yuan and Liberman, 2008) is time-consuming and not always feasible because it requires detailed meta-information about the datasets. Moreover, it may lead to inadequate interactions between modalities, as the interactions are not limited to the span of one word, so the issue of long-range dependency still exists. In addition, owing to heavy reliance on RNN, previous models are usually difficult to train and require plenty of time for inference. Recently, some transformer-based models (Tsai et al., 2019; Zadeh et al., 2019; Huang et al., 2020) which can compute in parallel in the time dimension have been proposed to avoid the problems of RNN and better explore long-range dependency. Nevertheless, they fail to obtain highly expressive and refined representations of sequences because the transformer (Vaswani et al., 2017) is a sequence model which cannot sufficiently fuse information from all time steps.
In this paper, we propose an end-to-end model called Graph Capsule Aggregation (GraphCAGE) that can compute in parallel in the time dimension by converting unaligned multimodal sequential data into graphs, and that explicitly learns long-range dependency through the aggregation capability of Capsule Network and a graph-based neural model. GraphCAGE consists of two stages: graph construction and graph aggregation. The former first implements modality fusion by cross-modal transformers, then applies Dynamic Routing of Capsule Network and self-attention to create nodes and edges, respectively. This module significantly alleviates the problem of long-range dependency because the nodes proportionally absorb information from every time step through the routing mechanism. The latter stage combines Graph Convolutional Network (GCN) with Capsule Network to further aggregate information from the nodes and finally produces a high-level and refined representation of the graph. We illustrate the aggregation capability of our model in Figure 1. Additionally, the routing mechanism equips GraphCAGE with interpretability, because we can observe the values of the routing coefficients to figure out the contributions of different elements. We discuss the interpretability in Section 4.4.3.
In brief, the main contributions of this work are listed below:
We propose a novel architecture called GraphCAGE to model unaligned multimodal sequences. GraphCAGE applies Dynamic Routing of Capsule Network to construct nodes, which enables the model to process longer sequences with a stronger ability of learning long-range dependency. Taking advantage of the aggregation capability of Capsule Network, GraphCAGE produces highly expressive representations of graphs without any loss of information.
With sequences transformed into graphs, GraphCAGE can model sequences without RNN, which prevents gradient vanishing or exploding during training. Moreover, computing in parallel greatly reduces inferring time.
Applying Capsule Network in node construction and graph aggregation, GraphCAGE is interpretable owing to the routing mechanism. With larger routing coefficients indicating greater contributions, we can figure out what information our model focuses on to make predictions.
Multimodal language learning aims at learning representations from multimodal sequences including textual, visual and acoustic modalities (Liang et al., 2018; Tsai et al., 2019). Many previous studies (Dai et al., 2020; Ghosal et al., 2018; Pham et al., 2018; Zadeh et al., 2018c, b, a) regard RNNs such as LSTM and GRU as the default architecture for sequence modeling, and they focus on exploring intra- and inter-modal dynamics for word-aligned multimodal sequences. For example, Zadeh et al. propose the Memory Fusion Network, which is constructed from LSTMs and a gated memory network to explore view-specific and cross-view interactions (Zadeh et al., 2018a). In (Zadeh et al., 2018b), the Multi-attention Recurrent Network is composed of LSTMs and a multi-attention block in order to model both dynamics above. With RNNs as the main modules, these models are confronted with difficult training and long inferring time. Recently, (Tsai et al., 2019; Mai et al., 2020b; Yang et al., 2020) propose alternative networks to model unaligned multimodal sequences. Tsai et al. (Tsai et al., 2019) use cross-modal transformers and a self-attention transformer to learn long-range dependency. However, the temporal information is collected by the self-attention transformer, which is a sequence model, implying that fusion among different time steps is not sufficient. In contrast, our proposed GraphCAGE replaces the self-attention transformer with a graph-based model which produces more refined and high-level representations of sequences. In (Mai et al., 2020b) and (Yang et al., 2020), sequences are transformed into graphs and GCN is applied to learn long-range dependency, which not only avoids the problems of RNN but also successfully models unaligned multimodal sequences. Nevertheless, they implement graph pooling and edge pruning to drop some nodes in order to obtain the final representation of the graph, leading to information loss.
In contrast, GraphCAGE effectively retains all information with Capsule Network which applies Dynamic Routing instead of pooling to aggregate features.
Capsule Network is first proposed in (Sabour et al., 2017) for image feature extraction. In general, Capsule Network can not only effectively fuse information from numerous elements into highly expressive representations without information loss, but also reveal the contributions of different elements to the representations by the routing mechanism. In (Sabour et al., 2017), the authors argue that pooling destroys the robustness of the model because some valuable features are ignored by the pooling layer. In order to retain these features, the pooling layer is replaced with Dynamic Routing for the transmission of information between layers, bringing the benefit of no information loss. In (Tsai et al., 2020), the proposed Multimodal Routing is designed based on Capsule Network and provides both local and global interpretation, verifying that Dynamic Routing can equip a model with interpretability. Inspired by Dynamic Routing, our proposed GraphCAGE uses Capsule Network to construct nodes from features containing inter-modal dynamics. In addition, the final representations of the graphs are also created by Capsule Network. As a result of the efficient transmission of information and the great aggregation capability of Capsule Network, GraphCAGE can effectively learn long-range dependency and explicitly model unaligned multimodal sequences, with interpretation provided and no information loss.
As graph-structured data is widely used in many research fields, a series of Graph Neural Networks (GNN) have been introduced in recent years (Xu et al., 2019; Micheli, 2009; Scarselli et al., 2009; Zhang and Chen, 2018). Among them, Graph Convolutional Network (GCN) (Kipf and Welling, 2016) is the most popular because of its superior performance on various tasks. Informed by the fact that GCN can effectively aggregate information of related nodes, we apply GCN to integrate related nodes which contain information from various time steps. In this way, the issue of long-range dependency is solved, because even the information from two distant time steps can directly communicate. In most cases, the final representation of a graph is obtained by graph pooling (Mai et al., 2020b; Hamilton et al., 2017; Ying et al., 2018). Similarly, in order to obtain a high-level graph representation, edge pruning (Yang et al., 2020) is usually applied in each GCN layer. However, pooling and pruning may rudely drop some important nodes, leading to the loss of information. As we conduct Dynamic Routing of Capsule Network instead of pooling or pruning after GCN, our proposed GraphCAGE model produces high-level and refined representations of sequences without the loss of information.
In this section, we elaborate our proposed GraphCAGE, whose diagram is illustrated in Figure 2. Our GraphCAGE consists of two stages: graph construction and graph aggregation. In the first stage, multimodal sequences are transformed into graphs with nodes and edges created by Capsule Network and self-attention respectively, which enables our model to compute in parallel in the time dimension. In the second stage, each graph is condensed into a representative vector via Graph Convolutional Network (GCN) and Capsule Network. Fundamentally, the Capsule Network in the first stage integrates information of every time step into each node, then the GCN and the Capsule Network in the second stage further aggregate information of the nodes, which equips our model with an excellent capability of learning long-range dependency.
To construct a graph, we need to first create nodes from sequence, then define edges based on these created nodes. All the nodes and edges comprise the graph which contains sufficient information about sentiment and long-range dependency.
In order to create nodes containing information about the interactions between different modalities, we first input the features of the textual, acoustic and visual modalities into cross-modal transformers (Tsai et al., 2019) (more detail about the cross-modal transformer can be found at https://github.com/kenford953/GraphCAGE):
where $X_m \in \mathbb{R}^{T_m \times d_m}$ denotes the input unimodal sequence, with $d_m$ being the dimensionality of features and $T_m$ being the sequence length. $CT_{m_1 \to m_2}(\cdot)$ is the cross-modal transformer translating modality $m_1$ into modality $m_2$, with $\mathrm{Concat}(\cdot)$ being the operation of concatenation. For conciseness, we denote $m \in \{t, a, v\}$ as a specific modality in the rest of this paper. The outputs $Z_m$ contain inter-modal dynamics, but long-range dependency is still understudied because the output of the cross-modal transformer is still a sequence, which requires adequate fusion at the time dimension to explore the interactions between distant time steps. Capsule Network is an excellent model for figuring out relations among various elements. Therefore, we apply Capsule Network to construct nodes from the output sequence in order to properly fuse information from a large number of time steps. We illustrate the node definition in Figure 3. As shown in Figure 3, we first create capsules as:
$c_{ij}^m = W_{ij}^m z_i^m$, where $z_i^m$ denotes the features of the $i$-th time step of the sequence $Z_m$, with $W_{ij}^m$ being the trainable parameters. $c_{ij}^m$ means the capsule from the $i$-th time step used for constructing the $j$-th node. Then, we define nodes based on these capsules and Dynamic Routing, as Algorithm 1 shows. Specifically, a node is defined by the weighted sum of its corresponding capsules, as shown below:
$n_j^m = \sum_{i} r_{ij}^m c_{ij}^m$, where $n_j^m$ denotes the embedding of the $j$-th node and $r_{ij}^m$ is the routing coefficient assigned to capsule $c_{ij}^m$. It is worth noting that for a total of $K$ iterations, all routing coefficients are normalized by softmax and updated based on the inner product between the embeddings of capsule and node in every iteration step. The equations for updating are shown below:
$r_{ij}^m = \mathrm{softmax}(b_{ij}^m), \qquad b_{ij}^m \leftarrow b_{ij}^m + \langle c_{ij}^m, n_j^m \rangle,$ where $b_{ij}^m$ means the routing coefficient before normalization, which is initialized to zero before the iteration begins, and $\langle \cdot, \cdot \rangle$ denotes the operation of inner product. By comparing the values of the routing coefficients, we can understand how much information from a specific time step flows into a node, which provides interpretation. With Capsule Network applied to construct nodes, our model can effectively learn long-range dependency, because nodes contain information from the whole range of the sequence and more informative time steps are assigned larger routing coefficients.
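The routing procedure for node construction can be sketched as follows. This is a minimal PyTorch sketch, not the released implementation: the tensor shapes, the direction of the softmax normalization (here over nodes, as in Sabour et al.) and the function name are our assumptions.

```python
import torch
import torch.nn.functional as F

def dynamic_routing_nodes(capsules, num_iters=3):
    """Dynamic Routing for node construction (a sketch; shapes assumed).

    capsules: (T, J, d) tensor holding one capsule c_ij per pair of time
    step i and node j, computed beforehand as c_ij = W_ij @ z_i.
    Returns node embeddings n (J, d) and routing coefficients r (T, J).
    """
    T, J, _ = capsules.shape
    b = torch.zeros(T, J)                  # routing logits, initialized to zero
    for _ in range(num_iters):
        r = F.softmax(b, dim=1)            # normalize: each capsule spreads over nodes
        n = (r.unsqueeze(-1) * capsules).sum(dim=0)        # n_j = sum_i r_ij c_ij
        b = b + (capsules * n.unsqueeze(0)).sum(dim=-1)    # b_ij += <c_ij, n_j>
    return n, r
```

Inspecting `r` after routing shows how much each time step contributes to each node, which is the basis of the interpretability discussed later.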
After node construction, edges are created by the self-attention mechanism over the nodes:
$A^m = \mathrm{ReLU}\big((N^m W_q)(N^m W_k)^\top\big),$ where $A^m \in \mathbb{R}^{p \times p}$ is the adjacency matrix and $N^m \in \mathbb{R}^{p \times d}$ denotes the overall node embeddings, with $p$ being the number of nodes. $W_q$ and $W_k$ are learnable parameters and $(\cdot)^\top$ means the matrix transpose operation. With ReLU as our activation function, the negative links between nodes can be effectively filtered out (Mai et al., 2020b) (a negative link implies that a direct connection between the two nodes is not necessary).
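The edge definition above amounts to a single attention-score computation followed by ReLU; a hedged sketch (the projection names $W_q$, $W_k$ and shapes are assumptions):

```python
import torch

def build_edges(nodes, w_q, w_k):
    """Edge definition by self-attention over nodes (a sketch; names assumed).

    nodes: (J, d) node embeddings; w_q, w_k: (d, d) learnable projections.
    Returns the (J, J) adjacency matrix with negative links filtered by ReLU.
    """
    scores = (nodes @ w_q) @ (nodes @ w_k).T   # pairwise attention scores
    return torch.relu(scores)                  # negative links are dropped
```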
It is worth noting that Capsule Network has a large number of trainable parameters. As a result, we apply L2 Regularization on these parameters to alleviate overfitting as:
$\mathcal{L}_{reg} = \lambda \sum_{W \in \Theta_{caps}} \lVert W \rVert_2^2,$ where $\Theta_{caps}$ is the set of Capsule Network parameters and $\lambda$ is a hyper-parameter which reflects the importance of this loss term. Therefore, during training, the total loss function is the Mean Absolute Error (MAE) plus $\mathcal{L}_{reg}$.
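The combined training objective can be sketched as below; the value of the weighting hyper-parameter is an assumption, since the paper selects it by grid search.

```python
import torch

def total_loss(pred, target, capsule_params, lam=1e-4):
    """Total training objective: MAE plus an L2 penalty on capsule weights.

    lam plays the role of the importance hyper-parameter (value assumed).
    """
    mae = torch.mean(torch.abs(pred - target))
    reg = lam * sum(p.pow(2).sum() for p in capsule_params)
    return mae + reg
```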
As we finish constructing nodes and edges, a graph which contains rich inter-modal dynamics and reliable connections between related nodes has been created. Our graph construction method is informed by recent graph-based architectures for sequential data, but is distinct from all of them in the method of node construction. For example, in (Mai et al., 2020b) and (Yang et al., 2020), the authors define nodes and edges based on multimodal sequences processed only by a Feed-Forward Network, so the created graph is not highly expressive because the node embeddings are not built on high-level features. Moreover, they regard every time step as a node and depend only on GCN to learn long-range dependency, leading to insufficient learning. Contrary to them, our model first uses cross-modal transformers to obtain high-level features which contain inter-modal dynamics, then constructs nodes based on these features by Capsule Network, which enables each node to properly gain information from a great quantity of capsules. Note that the number of nodes here is significantly smaller than the length of the input sequence. In this way, each node is built on various time steps and the created graph is highly expressive and also easier to process because of the small number of nodes. In the next stage, we illustrate how we conduct message passing between nodes and extract a high-level representation from the graph.
In most cases, the representation of a graph is extracted by GCN followed by graph pooling or edge pruning to discard redundant nodes. However, it is hard to avoid dropping valuable nodes, which causes the loss of information. To prevent this problem, we retain GCN for its excellent capability of exchanging information among nodes, and replace pooling or pruning with Capsule Network so that no information is lost. The graph aggregation consists of an inner loop and an outer loop: in every iteration of the outer loop, all iterations of the inner loop are performed. Specifically, graph convolution is performed in the outer loop and Dynamic Routing is performed in the inner loop, so in every iteration of graph convolution we perform $K$ iterations of Dynamic Routing to obtain a graph representation. The equation for one iteration of graph convolution is shown below:
$N^{m,(t)} = \sigma\big((A^m + I)\, N^{m,(t-1)} W_g\big),$ where $N^{m,(t)}$ denotes the node embeddings at the $t$-th iteration and $N^{m,(0)}$ is the output node embedding of the graph construction stage.
$I$ denotes the identity matrix, which is used to perform the self-loop operation, and $\sigma(\cdot)$ is chosen to be the activation function. Note that the weights in this stage carry no modality superscripts because we share all weights across the three modalities in the graph aggregation stage. When all nodes are updated, we generate the final representation of the graph at the $t$-th iteration using Capsule Network. The Capsule Network consists of $K$ iterations (i.e., the inner loop) to update the routing coefficients of the nodes. The equation is shown below:
$g^{m,(t)} = \mathrm{DR}\big(N^{m,(t)}\big),$ where $g^{m,(t)}$ denotes the final representation at the $t$-th iteration and the details of $\mathrm{DR}(\cdot)$ are shown in Algorithm 2. Specifically, Dynamic Routing (i.e., the inner loop) consists of the normalization of routing coefficients, the construction of the representation and the update of routing coefficients, as shown below: $r_i = \mathrm{softmax}(b_i), \qquad g^{m,(t)} = \sum_i r_i c_i^{(t)}, \qquad b_i \leftarrow b_i + \langle c_i^{(t)}, g^{m,(t)} \rangle,$
where $c_i^{(t)}$ means the capsule created by the $i$-th node at the $t$-th graph convolution iteration (see Algorithm 2). Note that, different from the graph construction stage, each node only owns one capsule at each graph convolution iteration, so a single subscript is enough for denoting the capsule.
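The two-loop aggregation above can be sketched as follows; this is a hedged sketch, with shapes, weight names ($W_g$ for the GCN, a per-node capsule projection $W_c$) and iteration counts as assumptions.

```python
import torch
import torch.nn.functional as F

def graph_aggregation(nodes, adj, w_g, w_c, outer_iters=2, inner_iters=3):
    """Two-loop graph aggregation (a sketch; shapes and names assumed).

    Outer loop: GCN update with self-loops, N <- ReLU((A + I) N Wg).
    Inner loop: Dynamic Routing over per-node capsules c_i = n_i Wc,
    reading out one graph representation per graph convolution iteration.
    nodes: (J, d); adj: (J, J); w_g, w_c: (d, d). Returns a list of (d,) reps.
    """
    num_nodes = nodes.shape[0]
    eye = torch.eye(num_nodes)
    reps = []
    for _ in range(outer_iters):
        nodes = torch.relu((adj + eye) @ nodes @ w_g)   # message passing + self-loop
        caps = nodes @ w_c                              # one capsule per node
        b = torch.zeros(num_nodes)
        for _ in range(inner_iters):                    # Dynamic Routing readout
            r = F.softmax(b, dim=0)
            g = (r.unsqueeze(-1) * caps).sum(dim=0)     # g = sum_i r_i c_i
            b = b + caps @ g                            # b_i += <c_i, g>
        reps.append(g)
    return reps
```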
As stated above, graph convolution enables related nodes to communicate with each other and update their embeddings, which helps our model further learn long-range dependency because nodes contain information from related time steps. Moreover, intra-modal dynamics are explored effectively because the nodes of one graph belong to the same target modality. After the nodes are updated, Capsule Network is applied to aggregate all the nodes into a highly expressive representation with complete information transmission. More importantly, this representation proportionally absorbs information from all nodes by Dynamic Routing, where a larger routing coefficient is assigned if the information of the node is more valuable. In this way, interpretation is provided, indicating which node contributes most to the final representation. In contrast, many graph-based architectures roughly drop nodes by pooling or pruning to obtain the final representation, leading to the loss of information. In addition, the interpretation of those models depends on the edges between related nodes, which reflect relations among different elements, but the contribution of each element to the prediction is not interpretable.
As intra- and inter-modal dynamics are effectively explored and long-range dependency is explicitly learned, we concatenate the graph representations of all the modalities at each iteration and apply fully-connected layers to predict sentiment labels.
In this section, we evaluate our proposed model GraphCAGE on two frequently-used datasets: CMU-MOSI (Zadeh et al., 2016) and CMU-MOSEI (Zadeh et al., 2018c). We first give details about the datasets, baseline models and experimental settings, and then present the results with a comparison between GraphCAGE and the baseline models. The remaining parts of this section illustrate long-range dependency and interpretability.
Table 2. Inferring time (s) of different models.
Table 3. Ablation study on CMU-MOSI.

| Models | Acc2 | F1 |
| --- | --- | --- |
| Graph Construction without Capsule Network | 76.1 | 76.7 |
| Graph Aggregation with GAT | 75.9 | 77.0 |
| Graph Aggregation with mean pooling | 77.0 | 77.0 |
| Graph Aggregation with LSTM | 79.0 | 79.2 |
CMU-MOSI is a popular dataset for multimodal sentiment analysis which contains 2199 video clips. Each video clip is labeled with a real number within [-3, +3] which reflects the sentiment intensity, where +3 means strongly positive sentiment and -3 means strongly negative sentiment. In accordance with most prior works, various metrics are reported, including 7-class classification accuracy ($Acc_7$), binary classification accuracy ($Acc_2$), Mean Absolute Error (MAE), F1 score and the correlation (Corr) of the model's predictions with human annotations. The total numbers of video clips for the training, validation and testing sets are 1284, 229 and 686, respectively.
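The reported metrics can be computed from real-valued predictions roughly as follows. This follows a common convention rather than the authors' exact evaluation script; in particular, some works exclude zero labels from the binary accuracy, which this sketch does not do.

```python
import numpy as np

def mosi_metrics(pred, target):
    """Common CMU-MOSI metrics from real-valued predictions in [-3, 3].

    Acc7 rounds scores into 7 integer classes; Acc2 thresholds at zero;
    MAE and Pearson correlation are computed on the raw scores.
    """
    pred = np.asarray(pred, dtype=float)
    target = np.asarray(target, dtype=float)
    acc7 = np.mean(np.round(np.clip(pred, -3, 3)) == np.round(np.clip(target, -3, 3)))
    acc2 = np.mean((pred >= 0) == (target >= 0))
    mae = np.mean(np.abs(pred - target))
    corr = np.corrcoef(pred, target)[0, 1]
    return acc7, acc2, mae, corr
```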
CMU-MOSEI consists of 22856 video clips and we use 16326, 1871 and 4659 segments as training, validation and testing set. The reported metrics and sentiment label are the same as those of CMU-MOSI.
We separate baseline models into two groups including recurrent models and parallel computing models.
Recurrent models include Early Fusion LSTM (EF-LSTM), Late Fusion LSTM (LF-LSTM), Tensor Fusion Network (TFN) (Zadeh et al., 2017) and Memory Fusion Network (MFN) (Zadeh et al., 2018a). EF-LSTM and LF-LSTM simply concatenate features at the input and output levels, respectively, applying LSTM (Hochreiter and Schmidhuber, 1997) to extract features and infer predictions. As stated in (Zadeh et al., 2017), these approaches fail to explore intra- and inter-modal dynamics due to the simple concatenation. TFN effectively explores both dynamics with the outer product adopted to learn a joint representation of the three modalities. MFN depends on systems of LSTMs to learn interactions among modalities. However, EF-LSTM and MFN are word-level fusion methods which study aligned multimodal sequences, and thus we combine connectionist temporal classification (CTC) (Graves et al., 2006)
with them to process the unaligned sequences. The CTC module we use comprises two components: an alignment predictor and the CTC loss. The alignment predictor is chosen to be a recurrent network and is trained by minimizing the CTC loss. Then, we multiply the probability outputs of the alignment predictor onto the source signals. The recurrent nature of the above models brings about several disadvantages, including gradient vanishing or exploding, long inferring time and insufficient learning of long-range dependency.
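The "multiply the probability outputs onto the source signals" step can be sketched as below. This is a hedged sketch of such an alignment predictor: the GRU architecture, hidden size and class name are assumptions, and the CTC loss used to train it is omitted.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SoftAligner(nn.Module):
    """Sketch of a CTC-style alignment predictor (architecture assumed).

    A recurrent network predicts, for each source time step, a distribution
    over target positions; multiplying these probabilities onto the source
    signal yields a pseudo-aligned sequence of the target length.
    """
    def __init__(self, d_in, target_len, hidden=32):
        super().__init__()
        self.rnn = nn.GRU(d_in, hidden, batch_first=True)
        self.proj = nn.Linear(hidden, target_len)

    def forward(self, x):                         # x: (B, T_src, d_in)
        h, _ = self.rnn(x)                        # (B, T_src, hidden)
        probs = F.softmax(self.proj(h), dim=-1)   # (B, T_src, T_tgt)
        return probs.transpose(1, 2) @ x          # (B, T_tgt, d_in)
```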
Parallel computing models include Multimodal Transformer (MulT) (Tsai et al., 2019), Multimodal Graph (Mai et al., 2020b) and MTAG (Yang et al., 2020), which disuse RNN to better explore long-range dependency within multimodal sequences. MulT extends the Transformer network (Vaswani et al., 2017) to model unaligned multimodal sequences by cross-modal transformers. Nevertheless, it utilizes a self-attention transformer to integrate information from different time steps, which causes inadequate fusion at the time dimension because the self-attention transformer is a sequence-to-sequence model and cannot fuse sequences at the time dimension. Multimodal Graph and MTAG both creatively adapt GCN to explore long-range dependency while avoiding the problems of RNN. However, they are confronted with information loss because of the operations of pooling and pruning.
Our model is developed on Pytorch and we choose Mean Absolute Error (MAE) as the loss function for the sentiment prediction task on the CMU-MOSI and CMU-MOSEI datasets. Note that the total loss during training is MAE plus the L2 Regularization loss. The optimizer is RMSprop and all hyper-parameters are selected by grid search. The textual, acoustic and visual features are extracted by GloVe word embeddings (Pennington et al., 2014), COVAREP (Degottex et al., 2014) and Facet (iMotions, 2017) respectively, with more details in https://github.com/A2Zadeh/CMU-MultimodalSDK. We specify the hyper-parameters and the features in our github repository (https://github.com/kenford953/GraphCAGE).
The overall results are shown in Table 1, which indicates that our model outperforms both recurrent and parallel computing models on most of the metrics for the two popular datasets. In general, based on the observation that parallel computing models achieve better performance than recurrent models, we can infer that it is practical to apply models without a recurrent structure to multimodal sequence modeling.
Compared with recurrent models, GraphCAGE outperforms them by a considerable margin, which implies that our model processes sequential data better than canonical RNNs. The low performance of RNN-based models on unaligned sequences verifies the incompetence of recurrent networks in modeling excessively long sequences, which requires a strong capability of learning long-range dependency. With the aggregation capability of Capsule Network and the graph-based structure, GraphCAGE can effectively link remote but related time steps, which contributes to the explicit exploration of long-range dependency. Moreover, as Table 2 shows, the inferring time of our model is significantly reduced, demonstrating the high efficiency of our model, which can compute in parallel in the time dimension.
As for parallel computing models, GraphCAGE achieves the best performance due to its substantially longer memory and efficient transmission of information. Specifically, because the GCN and Capsule Network in the graph aggregation stage realize more sufficient fusion at the time dimension than a self-attention transformer, GraphCAGE explicitly explores long-range dependency and outperforms MulT. In addition, with Capsule Network applied to transmit information, our model achieves better performance than MTAG and Multimodal Graph, which suffer from the loss of information.
In order to verify the effectiveness of our graph construction and graph aggregation stages, we conduct an ablation study on the CMU-MOSI dataset, as Table 3 shows. Generally, the absence of Capsule Network in either stage of our model leads to a drastic decline in performance, which indicates that it is critical for improving the ability of learning long-range dependency and enables our GraphCAGE to better model unaligned multimodal sequences.
For the model without Capsule Network in graph construction, we directly define each node embedding as the feature of one time step and construct edges by self-attention. Apparently, each node only contains information from one time step, which causes insufficient learning of long-range dependency. Moreover, owing to the long sequence length, the number of nodes is excessively large. As a result, the subsequent GCN and Capsule Network can hardly figure out the relations among these nodes. In contrast, our model first condenses information from the sequence into a moderate number of nodes by Capsule Network, then models their relations by later layers, which improves the capability of linking remote but related time steps.
For graph aggregation without Capsule Network, we retain the GCN part and design three aggregation methods, including Graph Attention Network (GAT) (Veličković et al., 2018)
, mean pooling and LSTM, to replace the Capsule Network. Note that GAT applies an attention mechanism to aggregate nodes and achieves excellent performance on various node classification tasks. However, based on its lower performance here, we argue that GAT is not suitable for our model because we need to decode the nodes to predict a label rather than classify them. As for mean pooling, the final representation is the average of the embeddings of all nodes. Obviously, mean pooling is too simple to obtain a highly expressive graph representation and it causes the loss of information. LSTM performs slightly better than mean pooling because of more learnable parameters. However, its final representation is the last element of the output sequence. As a result, the input order may heavily affect the performance, and we cannot figure out the best order because the information of a node changes dynamically. In conclusion, applying Capsule Network to aggregate the information of nodes is more suitable than other frequently-used aggregation methods, because the final representation is refined by absorbing the more important information through Dynamic Routing.
As stated above, because of the adaptation of Capsule Network, GraphCAGE is skilled at modeling long sequences, which requires an excellent capability of learning long-range dependency. To present this ability in detail, as shown in Figure 4, we pick an example from CMU-MOSEI and observe its routing coefficients in the graph construction stage, which reflect how much attention the model pays to specific information. Specifically, the sentiment of this example is obviously negative because of the word “but” and the phrase “not up to” in the last part of the sentence. However, models with a weak ability of learning long-range dependency may predict positive for this example based on the word “enjoyed” in the front part of the sentence. In contrast, our model attends to both parts of the sentence and pays more attention to the last part, assigning it larger routing coefficients. Moreover, we find that the information tends to flow into the fifth and eighteenth nodes, which communicate by GCN and are integrated by Capsule Network later. Presumably this is because the distance between these two nodes is moderate, which prevents our model from overly focusing on a specific part of the sequence, while the later GCN and Capsule Network enable our model to figure out the relations among the important parts of the sentence. So we believe that even if the exact sentiment requires contextual information, our model can correctly predict it with its excellent capability of connecting remote but related elements.
Interpretation helps us to figure out how the model comes to a prediction from a large number of time steps, which is useful for improving performance on different datasets. To provide interpretation, we adapt Capsule Network into our model where the routing coefficients reflect how much information from the corresponding time step flows into the next layer. As shown in Figure 5, we observe the values of routing coefficients from textual modality of two examples with different sentiments. For the left example, information of the word ”disappointment” is highlighted by the largest routing coefficient, indicating our model predicts the negative sentiment mostly depending on it. As for the right example, our model successfully catches the important positive words ”best”, ”showered”, ”dressed” and ”organized” by assigning larger routing coefficients to them. Based on the analysis above, we can safely draw a conclusion that GraphCAGE actually understands which element leads to specific sentiment and it provides interpretation for us to find out what information contributes to the prediction.
In this paper, we develop a model called GraphCAGE for multimodal sentiment analysis using unaligned multimodal sequences, which are too long to be modeled by recurrent networks. For the purpose of explicitly learning long-range dependency, our model adapts Capsule Network to transmit information and applies Graph Convolutional Network to explore relations among different time steps, which avoids the loss of information and contributes to interpretability. Moreover, modeling sequences with a graph-based structure instead of RNN prevents problems such as gradient vanishing or exploding. Extensive experiments with routing coefficients verify the effectiveness of the adaptation of Capsule Network and GCN. Experiments on two popular datasets show that GraphCAGE achieves SOTA performance on modeling unaligned multimodal sequences.