
GeNet: Deep Representations for Metagenomics
We introduce GeNet, a method for shotgun metagenomic classification from raw DNA sequences that exploits the known hierarchical structure between labels for training. We provide a comparison with the state-of-the-art methods Kraken and Centrifuge on datasets obtained from several sequencing technologies, in which dataset shift occurs. We show that GeNet obtains competitive precision and good recall, with orders of magnitude lower memory requirements. Moreover, we show that a linear model trained on top of representations learned by GeNet achieves recall comparable to state-of-the-art methods on the aforementioned datasets, and achieves over 90% accuracy. This provides evidence of the usefulness of the representations learned by GeNet for downstream biological tasks.
01/30/2019 ∙ by Mateo Rojas-Carulla, et al.
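The hierarchical training idea (one classification head per taxonomic rank, with the per-rank losses summed) can be sketched in plain NumPy. This is a hypothetical illustration: the function names, rank sizes, and uniform loss weighting are assumptions, not GeNet's actual architecture.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def hierarchical_loss(logits_per_rank, labels_per_rank):
    """Sum of cross-entropy losses over taxonomic ranks.

    logits_per_rank: list of (batch, n_classes_at_rank) arrays
    labels_per_rank: list of (batch,) integer label arrays
    """
    total = 0.0
    for logits, labels in zip(logits_per_rank, labels_per_rank):
        probs = softmax(logits)
        total += -np.mean(np.log(probs[np.arange(len(labels)), labels] + 1e-12))
    return total

# Toy example: a batch of 2 reads, 3 classes at the coarse rank,
# 5 classes at the fine rank.
rng = np.random.default_rng(0)
logits = [rng.normal(size=(2, 3)), rng.normal(size=(2, 5))]
labels = [np.array([0, 2]), np.array([1, 4])]
loss = hierarchical_loss(logits, labels)
print(loss)  # a positive scalar; low when every rank is predicted correctly
```

Training against such a summed loss lets errors at coarse ranks (e.g. phylum) propagate learning signal even when the finest rank is uncertain.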

Convolutional neural networks: a magic bullet for gravitational-wave detection?
In the last few years, machine learning techniques, in particular convolutional neural networks, have been investigated as a method to replace or complement the traditional matched-filtering techniques used to detect the gravitational-wave signature of merging black holes. However, to date, these methods have not yet been successfully applied to the analysis of long stretches of data recorded by the Advanced LIGO and Virgo gravitational-wave observatories. In this work, we critically examine the use of convolutional neural networks as a tool to search for merging black holes. We identify the strengths and limitations of this approach, highlight some common pitfalls in translating between machine learning and gravitational-wave astronomy, and discuss the interdisciplinary challenges. In particular, we explain in detail why convolutional neural networks alone cannot be used to claim a statistically significant gravitational-wave detection. However, we demonstrate how they can still be used to rapidly flag the times of potential signals in the data for a more detailed follow-up. Our convolutional neural network architecture, as well as the proposed performance metrics, is better suited for this task than a standard binary classification scheme. A detailed evaluation of our approach on Advanced LIGO data demonstrates the potential of such systems as trigger generators. Finally, we sound a note of caution by constructing adversarial examples, which showcase interesting "failure modes" of our model, where inputs with no visible resemblance to real gravitational-wave signals are identified as such by the network with high confidence.
04/18/2019 ∙ by Timothy D. Gebhard, et al.
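The trigger-generation step described above can be sketched as thresholding a network's per-time-step score series and reporting the peak time of each contiguous exceedance. This is a minimal hypothetical sketch, not the authors' pipeline; the function name and threshold are illustrative.

```python
import numpy as np

def triggers_from_scores(scores, times, threshold):
    """Return the time of the peak score within each contiguous
    segment where scores exceed threshold."""
    above = scores > threshold
    trigger_times = []
    start = None
    # Append a trailing False so a segment running to the end is closed.
    for i, flag in enumerate(np.append(above, False)):
        if flag and start is None:
            start = i
        elif not flag and start is not None:
            seg = slice(start, i)
            trigger_times.append(times[seg][np.argmax(scores[seg])])
            start = None
    return trigger_times

# Toy score series with two bumps above threshold 0.5.
times = np.arange(10) / 10
scores = np.array([0.1, 0.7, 0.9, 0.2, 0.1, 0.1, 0.6, 0.8, 0.3, 0.1])
print(triggers_from_scores(scores, times, 0.5))  # two triggers, at t=0.2 and t=0.7
```

A real trigger generator would additionally enforce a minimum segment length and a dead time between triggers, but the clustering logic is the same.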

From Variational to Deterministic Autoencoders
Variational Autoencoders (VAEs) provide a theoretically-backed framework for deep generative models. However, they often produce "blurry" images, which is linked to their training objective. Sampling in the most popular implementation, the Gaussian VAE, can be interpreted as simply injecting noise into the input of a deterministic decoder. In practice, this simply enforces a smooth latent space structure. We challenge the adoption of the full VAE framework on this specific point in favor of a simpler, deterministic one. Specifically, we investigate how substituting stochasticity with other explicit and implicit regularization schemes can lead to a meaningful latent space without having to force it to conform to an arbitrarily chosen prior. To retrieve a generative mechanism for sampling new data points, we propose to employ an efficient ex-post density estimation step that can be readily adopted both for the proposed deterministic autoencoders and to improve the sample quality of existing VAEs. We show in a rigorous empirical study that regularized deterministic autoencoding achieves state-of-the-art sample quality on the common MNIST, CIFAR-10 and CelebA datasets.
03/29/2019 ∙ by Partha Ghosh, et al.
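A minimal sketch of the ex-post density estimation step, assuming a full-covariance Gaussian fitted to the latent codes (the paper's density estimator of choice, e.g. a mixture model, may differ); the decoder call is left as a comment since it depends on the trained model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for the latent codes z = encoder(x) of the training data.
latents = rng.normal(loc=[1.0, -2.0], scale=[0.5, 0.1], size=(1000, 2))

# Fit a simple density to the codes after training is done.
mu = latents.mean(axis=0)
cov = np.cov(latents, rowvar=False)

# Sample new latent codes from the fitted density...
new_z = rng.multivariate_normal(mu, cov, size=5)
# ...and decode them with the trained deterministic decoder:
# samples = decoder(new_z)
print(new_z.shape)  # (5, 2)
```

The appeal is that the density fit is decoupled from training, so the same step can be bolted onto an already-trained VAE to sharpen its samples.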

Witnessing Adversarial Training in Reproducing Kernel Hilbert Spaces
Modern implicit generative models such as generative adversarial networks (GANs) are generally known to suffer from instability and lack of interpretability, as it is difficult to diagnose what aspects of the target distribution are missed by the generative model. In this work, we propose a theoretically grounded solution to these issues by augmenting the GAN's loss function with a kernel-based regularization term that magnifies local discrepancy between the distributions of generated and real samples. The proposed method relies on so-called witness points in the data space, which are jointly trained with the generator and provide an interpretable indication of where the two distributions locally differ during the training procedure. In addition, the proposed algorithm is scaled to higher dimensions by learning the witness locations in the latent space of an autoencoder. We theoretically investigate the dynamics of the training procedure, prove that a desirable equilibrium point exists, and show that the dynamical system is locally stable around this equilibrium. Finally, we demonstrate different aspects of the proposed algorithm through numerical simulations of analytical solutions and empirical results on low- and high-dimensional datasets.
01/26/2019 ∙ by Arash Mehrjou, et al.
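The witness intuition can be illustrated with a fixed Gaussian kernel: the kernel witness function is positive where the real distribution has unmatched mass and negative where the generator over-produces. The kernel and the evaluation points below are illustrative assumptions; in the paper, the witness locations are trained jointly with the generator rather than fixed.

```python
import numpy as np

def gaussian_kernel(x, y, bandwidth=1.0):
    """Pairwise Gaussian kernel matrix between rows of x and y."""
    d2 = ((x[:, None, :] - y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * bandwidth ** 2))

def witness(points, real, fake, bandwidth=1.0):
    """f(w) = mean_x k(w, x_real) - mean_y k(w, y_fake)."""
    return (gaussian_kernel(points, real, bandwidth).mean(axis=1)
            - gaussian_kernel(points, fake, bandwidth).mean(axis=1))

rng = np.random.default_rng(0)
real = rng.normal(0.0, 1.0, size=(500, 1))
fake = rng.normal(3.0, 1.0, size=(500, 1))  # generator misses the mode at 0
w = np.array([[0.0], [3.0]])
vals = witness(w, real, fake)
print(vals)  # positive near the missed real mode, negative near the fake one
```

Reading off where the witness is large in magnitude is exactly the kind of interpretable diagnostic the abstract describes.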

Interventional Robustness of Deep Latent Variable Models
The ability to learn disentangled representations that split underlying sources of variation in high-dimensional, unstructured data is of central importance for data-efficient and robust use of neural networks. Various approaches aiming towards this goal have been proposed recently; validating existing work is hence a crucial task to guide further development. Previous validation methods focused on shared information between generative factors and learned features. The effects of rare events, or of cumulative influences from multiple factors on encodings, however, remain uncaptured. Our experiments show that this already becomes noticeable in a simple, noise-free dataset. This is why we introduce the interventional robustness score, which provides a quantitative evaluation of robustness in learned representations with respect to interventions on generative factors and changing nuisance factors. We show how this score can be estimated from labeled observational data, which may be confounded, and further provide an efficient algorithm that scales linearly in the dataset size. The benefits of our causally motivated framework are illustrated in extensive experiments.
10/31/2018 ∙ by Raphael Suter, et al.
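A toy version of the underlying intuition (simplified, not the paper's actual estimator): a latent code assigned to a generative factor g is robust if it barely varies when only nuisance factors change while g is held fixed.

```python
import numpy as np

rng = np.random.default_rng(0)
g = rng.integers(0, 4, size=2000)   # generative factor of interest
n = rng.integers(0, 4, size=2000)   # nuisance factor

robust_code = g + 0.01 * rng.normal(size=2000)             # ignores n
fragile_code = g + 0.5 * n + 0.01 * rng.normal(size=2000)  # leaks n

def deviation_within_factor_groups(code, g):
    """Mean absolute deviation of the code from its per-g mean; within a
    group with fixed g, any variation is driven by the nuisance factor."""
    groups = np.unique(g)
    dev = sum(np.abs(code[g == gv] - code[g == gv].mean()).mean() for gv in groups)
    return dev / len(groups)

d_robust = deviation_within_factor_groups(robust_code, g)
d_fragile = deviation_within_factor_groups(fragile_code, g)
print(d_robust, d_fragile)  # the robust code barely moves under nuisance changes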

AReS and MaRS: Adversarial and MMD-Minimizing Regression for SDEs
Stochastic differential equations are an important modeling class in many disciplines. Consequently, there exist many methods relying on various discretization and numerical integration schemes. In this paper, we propose a novel, probabilistic model for estimating the drift and diffusion given noisy observations of the underlying stochastic system. Using state-of-the-art adversarial and moment-matching inference techniques, we circumvent the discretization schemes used in classical approaches. This yields significant improvements in parameter estimation accuracy and robustness given random initial guesses. On four commonly used benchmark systems, we demonstrate the performance of our algorithms compared to state-of-the-art solutions based on extended Kalman filtering and Gaussian processes.
02/22/2019 ∙ by Gabriele Abbati, et al.

Disentangling Factors of Variation Using Few Labels
Learning disentangled representations is considered a cornerstone problem in representation learning. Recently, Locatello et al. (2019) demonstrated that unsupervised disentanglement learning without inductive biases is theoretically impossible and that existing inductive biases and unsupervised methods do not allow one to consistently learn disentangled representations. However, in many practical settings, one might have access to a very limited amount of supervision, for example through manual labeling of training examples. In this paper, we investigate the impact of such supervision on state-of-the-art disentanglement methods and perform a large-scale study, training over 29000 models under well-defined and reproducible experimental conditions. We first observe that a very limited number of labeled examples (0.01-0.5% of the data set) is sufficient to perform model selection on state-of-the-art unsupervised models. Yet, if one has access to labels for supervised model selection, this raises the natural question of whether they should also be incorporated into the training process. As a case study, we test the benefit of introducing (very limited) supervision into existing state-of-the-art unsupervised disentanglement methods, exploiting both the values of the labels and the ordinal information that can be deduced from them. Overall, we empirically validate that with very little and potentially imprecise supervision it is possible to reliably learn disentangled representations.
05/03/2019 ∙ by Francesco Locatello, et al.
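The model-selection idea can be sketched as scoring each trained model with a supervised metric computed on a small labeled set and keeping the best. The metric below (mean absolute correlation between each factor and its best-matching latent) is a stand-in for illustration, not one of the metrics used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
factors = rng.normal(size=(50, 2))  # 50 labeled examples, 2 ground-truth factors

def toy_metric(latents, factors):
    """Best absolute latent-factor correlation per factor, averaged."""
    k = latents.shape[1]
    corr = np.abs(np.corrcoef(latents.T, factors.T)[:k, k:])
    return corr.max(axis=0).mean()

# Two candidate "models", represented by the latents they would assign
# to the labeled examples: one disentangled, one entangled.
disentangled = factors + 0.1 * rng.normal(size=factors.shape)
entangled = factors @ np.array([[1.0, 1.0], [1.0, -1.0]]) + 0.1 * rng.normal(size=factors.shape)

scores = [toy_metric(z, factors) for z in (disentangled, entangled)]
best = int(np.argmax(scores))
print(best)  # selects the disentangled model
```

With only 50 labels the ranking is already stable here, mirroring the paper's observation that a tiny labeled set suffices for model selection.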

Deconfounding Reinforcement Learning in Observational Settings
We propose a general formulation for addressing reinforcement learning (RL) problems in settings with observational data. That is, we consider the problem of learning good policies solely from historical data in which unobserved factors (confounders) affect both observed actions and rewards. Our formulation allows us to extend a representative RL algorithm, the Actor-Critic method, to its deconfounding variant, with the methodology for this extension being easily applied to other RL algorithms. In addition to this, we develop a new benchmark for evaluating deconfounding RL algorithms by modifying the OpenAI Gym environments and the MNIST dataset. Using this benchmark, we demonstrate that the proposed algorithms are superior to traditional RL methods in confounded environments with observational data. To the best of our knowledge, this is the first time that confounders are taken into consideration for addressing full RL problems with observational data. Code is available at https://github.com/CausalRL/DRL.
12/26/2018 ∙ by Chaochao Lu, et al.

Bayesian Online Detection and Prediction of Change Points
Online detection of instantaneous changes in the generative process of a data sequence generally focuses on retrospective inference of such change points without considering their future occurrences. We extend the Bayesian Online Change Point Detection algorithm to also infer the number of time steps until the next change point (i.e., the residual time). This enables us to handle observation models that depend on the total segment duration, which is useful for modeling data sequences with temporal scaling. In addition, we extend the model by removing the i.i.d. assumption on the observation model parameters. The resulting inference algorithm for segment detection can be deployed in an online fashion, and we illustrate applications to synthetic data and to two medical real-world data sets.
02/12/2019 ∙ by Diego Agudelo-España, et al.
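For context, the standard Bayesian Online Change Point Detection recursion that the paper extends can be sketched as follows; Gaussian observations with known noise variance and a conjugate Gaussian prior on the mean are assumed here, and the residual-time extension itself is not shown.

```python
import numpy as np

def bocpd_map_run_lengths(data, hazard=1/50, mu0=0.0, var0=1.0, var=0.1):
    """Return the MAP run length after each observation."""
    log_r = np.array([0.0])                  # p(r_0 = 0) = 1
    mus, vars_ = np.array([mu0]), np.array([var0])
    map_runs = []
    for x in data:
        # Predictive probability of x under each run-length hypothesis.
        pred_var = vars_ + var
        logpred = -0.5 * (np.log(2 * np.pi * pred_var) + (x - mus) ** 2 / pred_var)
        # Growth (run continues) and changepoint (run resets) probabilities.
        growth = log_r + logpred + np.log(1 - hazard)
        cp = np.logaddexp.reduce(log_r + logpred) + np.log(hazard)
        log_r = np.append(cp, growth)
        log_r -= np.logaddexp.reduce(log_r)  # normalize
        # Conjugate update of the per-run-length posteriors over the mean.
        post_var = 1.0 / (1.0 / vars_ + 1.0 / var)
        post_mu = post_var * (mus / vars_ + x / var)
        mus = np.append(mu0, post_mu)
        vars_ = np.append(var0, post_var)
        map_runs.append(int(np.argmax(log_r)))
    return map_runs

rng = np.random.default_rng(0)
data = np.concatenate([rng.normal(0, 0.3, 30), rng.normal(3, 0.3, 30)])
runs = bocpd_map_run_lengths(data)
print(runs[29], runs[35])  # the run length resets shortly after the change at t=30
```

The paper's extension additionally tracks the residual time to the next change point, which this vanilla recursion does not represent.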

ODIN: ODE-Informed Regression for Parameter and State Inference in Time-Continuous Dynamical Systems
Parameter inference in ordinary differential equations is an important problem in many applied sciences and in engineering, especially in a data-scarce setting. In this work, we introduce a novel generative modeling approach based on constrained Gaussian processes and use it to create a computationally and data-efficient algorithm for state and parameter inference. In an extensive set of experiments, our approach outperforms its competitors both in terms of accuracy and computational cost for parameter inference. It also shows promising results for the much more challenging problem of model selection.
02/17/2019 ∙ by Philippe Wenk, et al.
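The gradient-matching idea behind such methods can be illustrated with finite differences standing in for the paper's constrained Gaussian process regression: estimate state derivatives from the observed trajectory, then fit the ODE parameters by regression, without ever integrating the system. The toy model x' = -theta * x below is an illustrative assumption, not one of the paper's benchmarks.

```python
import numpy as np

rng = np.random.default_rng(0)
theta_true = 0.8
t = np.linspace(0, 5, 200)
x_obs = np.exp(-theta_true * t) + 0.001 * rng.normal(size=t.size)  # noisy states

# Derivative estimate from the data (the paper would use a GP here).
dx = np.gradient(x_obs, t)

# Least-squares fit of theta in x' = -theta * x:
# minimize sum_i (dx_i + theta * x_i)^2  =>  theta = -sum(dx*x) / sum(x^2).
theta_hat = -np.sum(dx * x_obs) / np.sum(x_obs ** 2)
print(theta_hat)  # close to 0.8
```

Because no numerical integration is needed, each candidate parameter value is scored cheaply, which is the source of the computational savings the abstract claims.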

The Incomplete Rosetta Stone Problem: Identifiability Results for Multi-View Nonlinear ICA
We consider the problem of recovering a common latent source with independent components from multiple views. This applies to settings in which a variable is measured with multiple experimental modalities, and where the goal is to synthesize the disparate measurements into a single unified representation. We consider the case in which the observed views are nonlinear mixings of componentwise corruptions of the sources. When each view is considered separately, this reduces to nonlinear Independent Component Analysis (ICA), for which it is provably impossible to undo the mixing. We present novel identifiability proofs showing that this becomes possible when the multiple views are considered jointly, so that the mixing can theoretically be undone using function approximators such as deep neural networks. In contrast to known identifiability results for nonlinear ICA, we prove that independent latent sources with arbitrary mixing can be recovered as long as multiple, sufficiently different noisy views are available.
05/16/2019 ∙ by Luigi Gresele, et al.
Bernhard Schölkopf
Director at Max Planck Institute for Intelligent Systems