Deep neural networks trained on single- or multi-view images have enabled 3D reconstruction of objects and scenes using RGB and RGBD approaches for robotics and other 3D vision-based applications. These models generate 3D geometry volumetrically [6, 11, 64] and in the form of point clouds [18, 44, 46]. With these reconstructions, additional networks have been developed to use the 3D geometry as inputs for object detection, classification, and segmentation in 3D environments [3, 47]. However, existing methods still encounter a few challenging scenarios for 3D shape reconstruction.
One such challenge is occlusion in cluttered environments with multiple agents/objects in a scene. Another is spatial resolution. Volumetric methods such as voxelized reconstructions are primarily limited by grid resolution. Point cloud representations of shape avoid issues of grid resolution, but instead must cope with issues of point set size and approximation. Existing methods are also challenged by transparent and highly reflective or textured surfaces. Self-occlusions and occlusions from other objects can further hinder image-based networks, motivating the adoption of multimodal neural networks.
To address these limitations, we propose to use audio-visual input for 3D shape and material reconstruction. A single view of an object is insufficient for 3D reconstruction, as only one projection of the object can be seen, while multi-view input does not intrinsically model the spatial relationships between views. By providing a temporal sequence of video frames, we strengthen the relationships between views, aiding reconstruction. We also include audio as an input, in particular impact sounds resulting from interactions between the object to be reconstructed and the surrounding environment. Impact sounds provide information about the material and internal structure of an object, offering cues complementary to the object's visual appearance. We choose to represent our final 3D shape using a voxel representation due to its state-of-the-art performance in classification tasks. To the best of our knowledge, our audio-visual network is the first to reconstruct multiple 3D objects from a single video.
Main Results: In this paper, we introduce a new method to reconstruct high-quality 3D objects from video, as a sequence of images and sounds. The main contributions of this work can be summarized as follows.
A multimodal LSTM autoencoder neural network for both geometry and material reconstruction from audio and visual data is introduced;
The resulting implementation has been tested on voxel, audio, and image datasets of objects over a range of different geometries and materials;
Experimental results of our approach demonstrate the reconstruction of single sounding objects and multiple colliding objects in a virtual scene;
Audio-augmented datasets with ground-truth objects and their tracking bounding boxes are made available for research in audio-visual reconstruction from video.
II Related Work
Computer vision research continues to push state-of-the-art reconstruction and segmentation of objects in a scene. However, research opportunities remain in 3D reconstruction. Wide baselines limit the accuracy of feature correspondences between views. Challenging objects for reconstruction include thin or small objects (e.g. table legs) and classes of objects that are transparent, occluded, or have much higher shape variation than other classes (e.g. lamps, benches, and tables compared to cabinets, cars, and speakers). In this section, we review previous work relating to 3D reconstruction, multimodal neural networks, and reconstruction network structures.
II-A 3D Reconstruction
Deep learning techniques have produced state-of-the-art 3D scene and object reconstructions. These models take an image or series of images and generate a reconstructed output shape. Some methods produce a transformed image of the input, intrinsically representing the 3D object structure [42, 56, 36, 38, 35]. 3D voxel grids provide a shape representation which is easy to visualize and works well with convolution operations [11, 16, 51, 45, 22, 61]. In more recent work, point clouds have also been found to be a viable shape representation for reconstructed objects [21, 15].
II-B Multimodal Neural Networks
Neural networks with multiple modalities of input cover a broader range of experimental setups and environments. Common examples include visual question answering, vision and touch, and other multisensory interactions. Multiple modes may also take the form of image-to-image translation, e.g. domain transfer. Using local and global cropped parts of images (i.e. bounding boxes) has also been shown to serve as a mode of context to supervise learning.
Audio-visual multimodal neural networks have also proven effective for speech separation as well as sound localization [69, 43, 28, 2]. Audio synthesis conditioned on images is also enabled as a result of these combined audio-visual datasets. Please see the surveys and taxonomies on multimodal machine learning and multimodal deep learning for more information.
II-C Reconstruction Network Structures
While single-view networks perform relatively well for most object classes, objects with concave structures or classes of objects with large variations in shape tend to require more views. 3D-R2N2 allows for both single- and multi-view reconstruction within a single network. Other recurrent models include learning in video sequences [10, 19], Point Set Generation, and the Pixel Recurrent Neural Network (PixelRNN). Methods have also been developed to ensure temporal consistency and to use generative techniques. The T-L Network and 3D-R2N2 are most similar to our 3D-MOV reconstruction neural network. Building on these related works, we fuse audio as an additional input and enforce temporal consistency in the form of LSTM layers (Fig. 2).
III Technical Approach
In this work, we reconstruct the 3D shape and material of sounding objects given images and impact sounds. Using audio and visual information, we present a method for reconstructing, from video, single ModelNet objects augmented with audio as well as multiple objects colliding in a Sound20K scene. In this section, we cover visual representations from object tracking (Section III-A) and audio obtained from sound source separation of impact sounds (Section III-B), which serve as inputs to our 3D-MOV reconstruction network (Section IV).
III-A Object Tracking and Visual Representation
Since an entire video frame may contain too much background, we use object tracking to track and segment individual objects. This tracking is performed using the Audio-Visual Object Tracker (AVOT). Similar to the Single Shot MultiBox Detector (SSD), AVOT is a feed-forward convolutional neural network that classifies and scales a fixed number of anchor bounding boxes to track objects in a video. While 3D-MOV aggregates audio-visual features before decoding, AVOT fuses audio-visual inputs before its base network. With the additional information from audio, AVOT defines an object based on both its geometry and material.
We use AVOT over other algorithms, such as YOLO or Faster R-CNN, because of the availability of audio and the need for higher object-tracking accuracy given occlusions caused by multiple colliding objects. Unlike CSR-DCF, AVOT automatically detects objects in the video without initial markup of bounding boxes. For future work, a scheduler network or a combination of object trackers is worth considering, as is the use of the Common Objects in Context (COCO) and SUN RGB-D [54, 53, 26, 63] datasets for initialization and transfer learning.
The output from tracking is a series of segmented image frames for each object, consisting of the contents of its tracked bounding box throughout the video. These segmented frames are grayscaled and resized to a consistent input size of 88 by 88 pixels. While resizing, we maintain aspect ratio and pad to square the image. These dimensions were chosen to account for the size of objects in our Sound20K dataset and to capture their semantic information. Scenes included one, two, and three colliding objects with materials such as granite, slate, oak, and marble. For our single-frame, single impact sound evaluations, we resized ShapeNet's 224 x 224 images. For comparison, other image sizes from related work include MNIST, 28 x 28; 3D-R2N2, 127 x 127; and ImageNet, 256 x 256.
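As an illustrative sketch of this preprocessing step (not the paper's actual implementation; the function name and the nearest-neighbor resize are our own choices), the grayscale, pad-to-square, and resize pipeline might look like:

```python
import numpy as np

def preprocess_frame(frame, out_size=88):
    """Grayscale, pad to square, and resize a tracked bounding-box crop.

    frame: (H, W, 3) uint8 RGB crop from the object tracker.
    Returns a (out_size, out_size, 1) float array in [0, 1].
    """
    # Luminance grayscale conversion.
    gray = frame.astype(np.float32) @ np.array([0.299, 0.587, 0.114], dtype=np.float32)
    # Zero-pad the shorter side so the aspect ratio is preserved.
    h, w = gray.shape
    side = max(h, w)
    padded = np.zeros((side, side), dtype=np.float32)
    y0, x0 = (side - h) // 2, (side - w) // 2
    padded[y0:y0 + h, x0:x0 + w] = gray
    # Nearest-neighbor resize to the network input resolution.
    idx = (np.arange(out_size) * side / out_size).astype(int)
    resized = padded[idx][:, idx]
    return (resized / 255.0)[..., None]
```

A production pipeline would typically use an image library for the resize, but the steps (grayscale, pad, resize) are the same.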
III-B Sound Source Separation of Impact Sounds and Audio Representation
For single frame reconstruction, we synthesize impact sounds on ShapeNet, illustrated in Fig. 3. For multiple frames, we take as input a Sound20K video showing one or more objects moving around a scene. These objects strike one another or the environment, producing impact sounds, which can be heard in the audio track of the video. We refer to these objects, dynamically moving through the scene and generating sound due to impact and collision, as sounding objects. Sound20K provides mixed and unmixed audio which can be used directly or to train algorithms for sound source separation [59, 29, 52]. While prior work to localize objects using audio-visual data exists [2, 69], automatically associating separated sounds with corresponding visual object tracks in the context of the reconstruction task remains an area of future work.
Initially, Sound20K and ShapeNet audio are available as time series data, sampled at 44.1 kHz to cover the full audible range. The audio is converted to mel-scaled spectrograms for neural network inputs, which effectively represent the spectral distribution of energy over time. Each spectrogram is 3 seconds for a single frame (ShapeNet) and 0.03 seconds per multi-frame (Sound20K) with an overlap of 25%. Audio spectrograms are aligned temporally with their corresponding image frames from video, forming the audio-visual input for queries. They are generated with discrete short-time Fourier transforms (STFTs) using a Hann window function.
$$\mathcal{X}(m, k) = \sum_{n=0}^{N-1} x(n + mH)\, w(n)\, e^{-2\pi i k n / N}$$
for time frame $m$ and Fourier coefficient $k$, with real-valued discrete-time signal $x$, sampled window function $w(n)$ of length $N$, and hop size $H$.
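The spectrogram computation can be sketched in plain numpy. This hedged example computes a linear-frequency magnitude spectrogram with a Hann window and 25% overlap; the mel scaling applied afterward (e.g. via a mel filterbank) is omitted for brevity:

```python
import numpy as np

def stft_spectrogram(x, n_fft=2048, overlap=0.25):
    """Magnitude spectrogram from windowed DFT frames (Hann window).

    x: 1-D real-valued signal. Returns (freq_bins, time_frames).
    """
    hop = int(n_fft * (1 - overlap))          # 25% overlap -> hop of 0.75 * n_fft
    window = np.hanning(n_fft)
    n_frames = 1 + (len(x) - n_fft) // hop
    frames = np.stack([x[m * hop : m * hop + n_fft] * window
                       for m in range(n_frames)])
    return np.abs(np.fft.rfft(frames, axis=1)).T
```

For a 440 Hz tone sampled at 44.1 kHz, the energy concentrates near bin 440 * n_fft / 44100.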
III-B1 Single View, Single Impact Sound
Single-view inputs are based on ShapeNet, a repository of 3D CAD models organized by WordNet categories. Evaluations were performed on voxelized versions of ShapeNet's ModelNet10 and ModelNet40 models and on image views of these datasets from 3D-R2N2. To generate audio for these objects for our multimodal 3D-MOV neural network, we use data from the Impact Sound Neural Network. This work synthesized impact sounds for voxelized ModelNet10 and ModelNet40 models using modal analysis and sound synthesis. Modal analysis is precomputed to obtain the modes of vibration for each object, and sound is synthesized with an amplitude determined at run time given the hit point location on the object and the impulse force. The modes are represented as damped sinusoidal waves, where each mode has the form
$$q_i(t) = a_i\, e^{-d_i t} \sin\!\left(2\pi f_i t + \theta_i\right)$$
where $f_i$ is the frequency of the mode, $d_i$ is the damping coefficient, $a_i$ is the excited amplitude, and $\theta_i$ is the initial phase.
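A minimal sketch of this modal synthesis, summing damped sinusoids of the form above (the parameter values below are hypothetical, not taken from the dataset):

```python
import numpy as np

def synthesize_modes(freqs, dampings, amps, phases, sr=44100, duration=1.0):
    """Sum of damped sinusoids: a_i * exp(-d_i * t) * sin(2*pi*f_i*t + theta_i).

    freqs, dampings, amps, phases: per-mode parameter lists (Hz, 1/s, -, rad).
    Returns a 1-D signal of length sr * duration.
    """
    t = np.arange(int(sr * duration)) / sr
    signal = np.zeros_like(t)
    for f, d, a, th in zip(freqs, dampings, amps, phases):
        signal += a * np.exp(-d * t) * np.sin(2 * np.pi * f * t + th)
    return signal
```

In the full pipeline, the excited amplitudes depend on the hit point and impulse force computed by the rigid-body simulation.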
III-B2 Multi-Frame, Multi-Impact
Multi-frame inputs to our system consist of Sound20K videos that may contain multiple sounding objects, possibly of similar sizes, shapes, and/or materials. This synthetic video dataset contains audio and video data for multiple objects colliding in a scene. Sound20K consists of 20,378 videos generated by a rigid-body simulation and impact sound synthesis pipeline. Visually, Sound20K objects can be separated from one another through tracking of bounding boxes. However, audio source separation can be more challenging, particularly for unknown objects. While Sound20K provides separate audio files for each object, the audio data can also be used to train sound source separation techniques [59, 29, 52] to learn to unmix audio into individual objects by geometry and material. As future work, we will compare the impact on reconstruction quality and performance of using mixed versus unmixed audio for each object, as well as the impact of using source-separated sounds versus ground-truth unmixed audio.
IV 3D-MOV Network Structure
Our 3D-MOV network is a multimodal LSTM autoencoder optimized for 3D reconstruction of multiple objects from video. Like 3D-R2N2, it is recurrent and generates a 3D voxel representation. However, to the best of our knowledge, our 3D-MOV network is the first audio-visual network for 3D object reconstruction. After object tracking and sound source separation, we separately train autoencoders to extract visual and audio features from each frame (Section IV-A). While the 2D encoder weights are reused, the 2D decoders are discarded (blue rectangles in Fig. 5) and replaced with 3D decoders that learn to reconstruct voxel outputs of the tracked objects from the given 2D images and spectrograms. Using a merge layer such as addition, concatenation, or a bilinear model, 3D-MOV fuses the results of the audio and visual subnetworks, each comprised of LSTM autoencoders.
IV-A Single Frame Feature Extraction
The autoencoder consists of two convolutional layers for spatial encoding followed by an LSTM convolutional layer for temporal encoding. As a general rule of thumb, we use small filters (3x3 and at most 5x5), except for the very first convolutional layer connected to the input, and strides of four and two for the two convolutional layers. The decoder mirrors the encoder to reconstruct the image (Fig. 4). After each convolutional layer, we employ layer normalization, which is equivalent to batch normalization for recurrent networks. It normalizes the inputs across features and is defined as:
$$\mathrm{LN}(x)_{i,j} = \frac{x_{i,j} - \mu_i}{\sqrt{\sigma_i^2 + \epsilon}}, \qquad \mu_i = \frac{1}{m}\sum_{j=1}^{m} x_{i,j}, \qquad \sigma_i^2 = \frac{1}{m}\sum_{j=1}^{m}\left(x_{i,j} - \mu_i\right)^2$$
where $x_{i,j}$ is batch $i$, feature $j$ of the input $x$ across $m$ features.
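A hedged numpy sketch of layer normalization as described above, assuming a (batch, features) input (deep learning frameworks also add learnable gain and bias terms, omitted here):

```python
import numpy as np

def layer_norm(x, eps=1e-5):
    """Normalize each sample across its own features: (x - mu) / sqrt(var + eps).

    x: (batch, features) array. Statistics are per-sample, unlike batch norm.
    """
    mu = x.mean(axis=-1, keepdims=True)
    var = x.var(axis=-1, keepdims=True)
    return (x - mu) / np.sqrt(var + eps)
```

Because the statistics are computed per sample rather than per batch, the operation behaves identically at train and test time, which is what makes it suitable for recurrent layers.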
IV-B Frame Aggregation
In chronological order, the training video frames form a temporal sequence. LSTM convolutional layers are used to preserve content and spatial information. To generate more training sequences, we perform data augmentation by concatenating frames with strides 1, 2, and 3. For example, a skipping stride of 2 generates a sequence of every other frame. We use a 10-frame sliding window to aggregate the encodings. The encoder weights learned here are then used to learn 3D decoder weights that output a 3D voxel reconstruction based on audio-visual inputs from audio-augmented ModelNet with impact sound synthesis and Sound20K video.
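The stride-based augmentation and 10-frame sliding window can be sketched as follows (the function name and structure are our own; the paper's actual data pipeline may differ):

```python
def strided_sequences(frames, window=10, strides=(1, 2, 3)):
    """Generate fixed-length training sequences from a frame list.

    For each skipping stride s, keep every s-th frame, then slide a
    fixed-size window over the subsampled sequence.
    """
    sequences = []
    for s in strides:
        sub = frames[::s]                       # e.g. stride 2 -> every other frame
        for start in range(len(sub) - window + 1):
            sequences.append(sub[start:start + window])
    return sequences
```

For a 30-frame video this yields 21 sequences at stride 1, 6 at stride 2, and 1 at stride 3.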
IV-C Modality Fusion and 3D Decoder
After encoding our inputs with LSTM convolutional layers, we flatten to a fully connected layer for each of the audio and visual subnetworks. These dense layers are fused together prior to multiple Conv3D transpose layers for the 3D decoder. Prior work in multimodal deep learning, such as visual question answering systems, has merged modalities for classification tasks using addition and MFB. The 3D decoder accepts the fusion of audio-visual LSTM encodings and maps it to a voxel grid with five deconvolutional layers, similar to the T-L Network. Unlike the T-L Network's voxel grid, we use a higher-resolution voxel grid and apply a single, audio-based material classification to all voxels. Deconvolution, also known as fractionally-strided or transposed convolution, produces a 3D voxel shape by broadcasting the input through the kernel.
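A simplified sketch of the fusion step on flattened per-modality encodings (numpy stand-ins for the network's dense layers; the bilinear MFB variant is omitted):

```python
import numpy as np

def fuse(audio_feat, visual_feat, mode="add"):
    """Fuse two dense modality encodings before the 3D decoder.

    Both inputs are (batch, dim) feature vectors. 'add' keeps the feature
    dimension unchanged; 'concat' doubles it, so the first decoder layer
    must be sized accordingly.
    """
    if mode == "add":
        return audio_feat + visual_feat
    if mode == "concat":
        return np.concatenate([audio_feat, visual_feat], axis=-1)
    raise ValueError(f"unknown fusion mode: {mode}")
```

The choice of merge layer trades parameter count against expressiveness: addition is cheapest, concatenation preserves both encodings, and bilinear pooling models cross-modal interactions.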
In this section, we present our implementation, training, and evaluation metrics along with 3D-MOV reconstructed objects (Fig. 6). Please see the accompanying supplementary materials for more comparative analysis of loss and accuracy against baseline methods by dataset and number of views. For each of ShapeNet and Sound20K, we evaluate the network architecture in Section IV against audio, visual, and audio-visual methods using binary cross entropy loss and intersection over union (IoU) reconstruction accuracy.
Our framework was implemented using TensorFlow and Keras. Training was run on Ubuntu 16.04.6 LTS with a single Titan X GPU. Voxel representations were rendered using MATLAB visualization code from 3D-GAN. From Sound20K videos, images were grayscale with dimensions 84 x 84 x 1, and audio spectrograms were 64 x 25 x 1, zero-padded to equivalent dimensions. Visual data was augmented with resizing, cropping, and skipping strides.
Since joint optimization can be difficult to perform, we train our reconstruction autoencoder and fused audio-visual networks separately and then jointly optimize to fine-tune the final network. Mean square error is used for the 2D reconstruction loss to train the encoder to reconstruct input images and audio spectrograms. Binary cross entropy loss is calculated between ground truth and reconstructed 3D voxel grids. During testing, we reconstruct from encoded vector representation of audio-visual inputs to a 3D voxel reconstruction output.
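For illustration, the 3D voxel reconstruction loss can be sketched in numpy as elementwise binary cross entropy (a simplified stand-in for the framework's built-in loss; `voxel_bce` is our own naming):

```python
import numpy as np

def voxel_bce(gt, pred, eps=1e-7):
    """Binary cross entropy between ground-truth occupancy and predictions.

    gt: voxel grid of {0, 1} occupancies; pred: predicted probabilities
    in (0, 1). Clipping avoids log(0). Returns the mean over all voxels.
    """
    p = np.clip(pred, eps, 1 - eps)
    return float(-np.mean(gt * np.log(p) + (1 - gt) * np.log(1 - p)))
```

A confident correct prediction drives the loss toward zero, while an uninformative prediction of 0.5 everywhere yields ln 2 per voxel.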
Previous work has used symmetry-induced volume refinement to constrain and finalize GAN volumetric outputs. Other methods have used multiple views to continuously refine the output. Furthermore, most adversarial generation methods create examples by perturbing existing data, limiting the solution space. Our approach constrains the space of possible 3D reconstructions for objects in the scene through temporal consistency, aggregation, and fusion of audio and visual inputs.
| Method | Input | 1 view | 5 views |
| T-L Network | AV | 18.0%* | N/A |
V-C Evaluation Metrics
Methods were evaluated using voxel Intersection-over-Union (IoU), also known as the Jaccard index, between the 3D reconstruction and the ground-truth voxels, as well as cross-entropy loss. IoU can be represented as the area of overlap divided by the area of union. More formally:
$$\mathrm{IoU} = \frac{\sum_{(i,j,k)} I\!\left(p_{(i,j,k)} > t\right)\, I\!\left(y_{(i,j,k)}\right)}{\sum_{(i,j,k)} I\!\left( I\!\left(p_{(i,j,k)} > t\right) + I\!\left(y_{(i,j,k)}\right) \right)}$$
where $y$ is the ground-truth occupancy, $p$ the Bernoulli distribution output at each voxel, $I(\cdot)$ an indicator function, and $t$ the threshold. Higher IoU means better reconstruction quality.
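The IoU metric can be sketched in a few lines of numpy (a hedged version; `voxel_iou` is our own naming):

```python
import numpy as np

def voxel_iou(pred_probs, gt, threshold=0.5):
    """Voxel IoU: |pred AND gt| / |pred OR gt| after thresholding.

    pred_probs: per-voxel Bernoulli outputs in [0, 1]; gt: {0, 1} occupancy.
    """
    pred = pred_probs > threshold
    gt = gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return inter / union if union else 1.0
```

A perfect reconstruction scores 1.0; spurious or missing occupied voxels reduce the score by inflating the union or shrinking the intersection.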
To the best of our knowledge, this work is the first method to use audio and visual inputs from ShapeNet objects and Sound20K video of multiple objects in a scene to generate 3D object reconstructions with material from video. While multi-view approaches can improve reconstruction accuracy, transparent objects, interior concave structures, self-occlusions, and multiple objects remain a challenge. As objects collide, audio provides a complementary sensory cue that can enhance the reconstruction model and improve results. In this paper, we demonstrate that augmenting image encodings with corresponding impact sounds refines the reconstructions produced by our multimodal LSTM autoencoder neural network.
Limitations: Our approach is currently implemented and evaluated with fixed-grid shapes. Further experimentation with residual architectures, adaptive grids, and multi-scale reasoning is worth exploring, though each will introduce different sets of constraints and complexity. Material classification is predicted from audio alone, given the textureless image renderings of the datasets used. Also, only a single material is inferred for the entire geometry rather than a per-voxel classification. Finally, the trade-off between additional views and additional auditory inputs could be further explored.
Future Work: Evaluation of other real-time object trackers, such as YOLO and Faster R-CNN, can be performed, trained on other existing datasets such as COCO and SUN RGB-D. Further investigations can also examine how the error introduced by object tracking propagates to reconstruction error; the same applies to errors from sound source separation and from associating unmixed sounds with their corresponding visual object tracks. Next, while audio helps classify the material of the reconstructed geometry, we assume a single material classification based on audio alone and apply it to all voxels. Research on classifying material per voxel using both audio and visual data could extend part-segmentation research toward reconstructing objects with multiple materials. Rather than being fully deterministic, fusing audio and visual information in generative models that reconstruct geometry and material may also be of interest to the research community; there may then be more than one plausible 3D reconstruction for a given image or sound. Beyond reconstruction, audio may also enhance image and sound generation, as well as memory and attention models. For instance, image generation using an audio-conditioned GAN and sound generation based on image conditioning could be explored, similar to WaveNet's local and global conditioning techniques. Finally, testing on real data in the wild and on larger datasets of annotated audio and visual data would enable future research.
References

- (2015) TensorFlow: large-scale machine learning on heterogeneous systems. Software available from tensorflow.org.
- (2017) Look, listen and learn.
- (2018) Point convolutional neural networks by extension operators. CoRR abs/1803.10091.
- (2016) Layer normalization.
- (2017) Multimodal machine learning: a survey and taxonomy. CoRR abs/1705.09406.
- (2016) Learning shape correspondence with anisotropic convolutional neural networks. CoRR abs/1605.06437.
- (2019) MUREL: multimodal relational reasoning for visual question answering.
- (2015) ShapeNet: an information-rich 3D model repository.
- (2015) Keras. https://keras.io
- (2017) Abnormal event detection in videos using spatiotemporal autoencoder.
- (2016) 3D-R2N2: a unified approach for single and multi-view 3D object reconstruction. In Proceedings of the European Conference on Computer Vision (ECCV).
- (2017) ScanComplete: large-scale scene completion and semantic segmentation for 3D scans. CoRR abs/1712.10215.
- (2015) Deep generative image models using a Laplacian pyramid of adversarial networks. CoRR abs/1506.05751.
- (2018) Looking to listen at the cocktail party: a speaker-independent audio-visual model for speech separation. CoRR abs/1804.03619.
- A point set generation network for 3D object reconstruction from a single image. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 605–613.
- (2016) Learning a predictable and generative vector representation for objects. CoRR abs/1603.08637.
- Weakly supervised generative adversarial networks for 3D reconstruction. CoRR abs/1705.10904.
- (2019) Multi-angle point cloud-VAE: unsupervised feature learning for 3D point clouds from multiple angles by joint self-reconstruction and half-to-half prediction.
- (2016) Learning temporal regularity in video sequences.
- (2015) Deep residual learning for image recognition. CoRR abs/1512.03385.
- (2017) Casual 3D photography. SIGGRAPH Asia.
- (2018) Predictive and generative neural networks for object functionality. ACM Transactions on Graphics 37, pp. 1–13.
- (2018) Multimodal unsupervised image-to-image translation. ECCV.
- (1901) Distribution de la flore alpine dans le bassin des Dranses et dans quelques régions voisines.
- (2006) Precomputed acoustic transfer: output-sensitive, accurate sound generation for geometrically complex vibration sources. In ACM Transactions on Graphics.
- (2011) A category-level 3-D object dataset: putting the Kinect to work. ICCV.
- (2012) Current perspectives and methods in studying neural mechanisms of multisensory interactions. Neuroscience & Biobehavioral Reviews 36(1), pp. 111–133.
- (2020) Audio-visual 3D reconstruction framework for dynamic scenes. In Proceedings of the 2020 IEEE/SICE International Symposium on System Integration (SII 2020), Honolulu, Hawaii, USA, pp. 802–807.
- (2017) Real-time adaptive audio source separation.
- (2019) Making sense of vision and touch: learning multimodal representations for contact-rich tasks.
- (2020) Convolutional neural networks for visual recognition.
- (2014) Microsoft COCO: common objects in context. CoRR abs/1405.0312.
- (2015) SSD: single shot multibox detector. CoRR abs/1512.02325.
- (2016) Discriminative correlation filter with channel and spatial reliability. CoRR abs/1611.08461.
- (2017) 3D shape reconstruction from sketches via multi-view convolutional networks. In 2017 International Conference on 3D Vision (3DV), pp. 67–77.
- (2017) AlignGAN: learning to align cross-domain images with conditional generative adversarial networks. CVPR.
- (2015) VoxNet: a 3D convolutional neural network for real-time object recognition. In IEEE/RSJ International Conference on Intelligent Robots and Systems, pp. 922–928.
- (2014) Conditional generative adversarial nets. CoRR abs/1411.1784.
- (2015) Fundamentals of music processing: audio, analysis, algorithms, applications. 1st edition, Springer Publishing Company, Incorporated.
- (2011) Multimodal deep learning. In Proceedings of the 28th International Conference on Machine Learning (ICML'11), Madison, WI, USA, pp. 689–696.
- (2018) Im2Struct: recovering 3D shape structure from a single RGB image. CoRR abs/1804.05469.
- (2017) Conditional image synthesis with auxiliary classifier GANs. CVPR.
- (2018) Audio-visual scene analysis with self-supervised multisensory features. CoRR abs/1804.03641.
- (2016) PointNet: deep learning on point sets for 3D classification and segmentation. arXiv preprint arXiv:1612.00593.
- (2016) Volumetric and multi-view CNNs for object classification on 3D data. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 5648–5656.
- (2017) PointNet++: deep hierarchical feature learning on point sets in a metric space.
- (2017) Frustum PointNets for 3D object detection from RGB-D data. CoRR abs/1711.08488.
- (2015) You only look once: unified, real-time object detection. CoRR abs/1506.02640.
- (2016) Learning what and where to draw. NIPS.
- (2015) Faster R-CNN: towards real-time object detection with region proposal networks. CoRR abs/1506.01497.
- (2017) OctNet: learning deep 3D representations at high resolutions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition.
- (2017) Virtual music experiences.
- (2012) Indoor segmentation and support inference from RGBD images. ECCV.
- SUN RGB-D: a RGB-D scene understanding benchmark suite. CVPR.
- (2018) ISNN: impact sound neural network for audio-visual object classification. In Proceedings of the European Conference on Computer Vision (ECCV).
- (2018) Customizing an adversarial example generator with class-conditional GANs. CVPR.
- (2016) WaveNet: a generative model for raw audio.
- (2016) Pixel recurrent neural networks. CoRR abs/1601.06759.
- (2014) Real-time method for implementing deep neural network based speech separation.
- (2020) AVOT: audio-visual object tracking of multiple objects for robotics. In ICRA 2020.
- (2016) Learning a probabilistic latent space of object shapes via 3D generative-adversarial modeling. CoRR abs/1610.07584.
- (2015) 3D ShapeNets: a deep representation for volumetric shapes. In CVPR.
- (2013) SUN3D: a database of big spaces reconstructed using SfM and object labels. ICCV.
- (2018) tempoGAN: a temporally coherent, volumetric GAN for super-resolution fluid flow. ACM Transactions on Graphics (TOG) 37(4), pp. 95. Also CoRR abs/1801.09710.
- (2017) Multi-modal factorized bilinear pooling with co-attention learning for visual question answering. In IEEE International Conference on Computer Vision (ICCV).
- (2020) Dive into deep learning. GitHub. https://d2l.ai
- (2017) Generative modeling of audible shapes for object perception. In IEEE International Conference on Computer Vision (ICCV).
- (2018) The sound of pixels. In European Conference on Computer Vision (ECCV).
- (2018) Visual to sound: generating natural sound for videos in the wild. CVPR.