
Resource-Efficient Neural Networks for Embedded Systems
While machine learning is traditionally a resource-intensive task, embed...

Efficient and Robust Machine Learning for Real-World Systems
While machine learning is traditionally a resource-intensive task, embed...

Einsum Networks: Fast and Scalable Learning of Tractable Probabilistic Circuits
Probabilistic circuits (PCs) are a promising avenue for probabilistic mo...

DynamicPPL: Stan-like Speed for Dynamic Probabilistic Models
We present the preliminary high-level design and features of DynamicPPL....

Learning Continuous Treatment Policy and Bipartite Embeddings for Matching with Heterogeneous Causal Effects
Causal inference methods are widely applied in the fields of medicine, p...

Antithetic and Monte Carlo kernel estimators for partial rankings
In the modern age, rankings data is ubiquitous and it is useful for a va...

Variational Gaussian Dropout is not Bayesian
Gaussian multiplicative noise is commonly used as a stochastic regularis...

Interpolated Policy Gradient: Merging On-Policy and Off-Policy Gradient Estimation for Deep Reinforcement Learning
Off-policy model-free deep reinforcement learning methods using previous...

General Latent Feature Modeling for Data Exploration Tasks
This paper introduces a general Bayesian nonparametric latent feature ...

Improving Output Uncertainty Estimation and Generalization in Deep Learning via Neural Network Gaussian Processes
We propose a simple method that combines neural networks and Gaussian pr...

One-Shot Learning in Discriminative Neural Networks
We consider the task of one-shot learning of visual categories. In this ...

Adversarial Examples, Uncertainty, and Transfer Testing Robustness in Gaussian Process Hybrid Deep Networks
Deep neural networks (DNNs) have excellent representative power and are ...

Lost Relatives of the Gumbel Trick
The Gumbel trick is a method to sample from a discrete probability distr...
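The Gumbel-max trick mentioned above can be sketched in a few lines: adding i.i.d. standard Gumbel noise to the log-probabilities and taking the argmax yields an exact categorical sample. A minimal numpy sketch, not the paper's generalisations; the distribution here is illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def gumbel_max_sample(log_p, size):
    """Draw categorical samples via argmax(log p + Gumbel noise)."""
    g = rng.gumbel(size=(size, len(log_p)))  # i.i.d. standard Gumbel noise
    return np.argmax(log_p + g, axis=1)

p = np.array([0.2, 0.5, 0.3])               # illustrative probabilities
samples = gumbel_max_sample(np.log(p), 100_000)
freq = np.bincount(samples, minlength=3) / len(samples)
# empirical frequencies closely match p
```

The trick only needs the unnormalised log-probabilities, which is what makes it attractive when the normalising constant is expensive to compute.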

General Latent Feature Models for Heterogeneous Datasets
Latent feature modeling allows capturing the latent structure responsibl...

Bayesian inference on random simple graphs with power law degree distributions
We present a model for random simple graphs with a degree distribution t...

GPflow: A Gaussian process library using TensorFlow
GPflow is a Gaussian process library that uses TensorFlow for its core c...

Deep Bayesian Active Learning with Image Data
Even though active learning forms an important pillar of machine learnin...

Magnetic Hamiltonian Monte Carlo
Hamiltonian Monte Carlo (HMC) exploits Hamiltonian dynamics to construct...

The Mondrian Kernel
We introduce the Mondrian kernel, a fast random feature approximation to...

Distributed Flexible Nonlinear Tensor Factorization
Tensor factorization is a powerful tool to analyse multi-way data. Compa...

A Theoretically Grounded Application of Dropout in Recurrent Neural Networks
Recurrent neural networks (RNNs) stand at the forefront of many recent d...

A General Framework for Constrained Bayesian Optimization using Information-based Search
We present an information-theoretic framework for solving global black-b...

Parallel Predictive Entropy Search for Batch Global Optimization of Expensive Objective Functions
We develop parallel predictive entropy search (PPES), a novel algorithm ...

Sandwiching the marginal likelihood using bidirectional Monte Carlo
Computing the marginal likelihood (ML) of a model requires marginalizing...

Dirichlet Fragmentation Processes
Tree structures are ubiquitous in data across many domains, and many dat...

A study of the effect of JPG compression on adversarial images
Neural network image classifiers are known to be vulnerable to adversari...

Scalable Discrete Sampling as a Multi-Armed Bandit Problem
Drawing a sample from a discrete distribution is one of the building com...

An Empirical Study of Stochastic Variational Algorithms for the Beta Bernoulli Process
Stochastic variational inference (SVI) is emerging as the most promising...

MCMC for Variationally Sparse Gaussian Processes
Gaussian process (GP) models form a core part of probabilistic machine l...

Neural Adaptive Sequential Monte Carlo
Sequential Monte Carlo (SMC), or particle filtering, is a popular class ...

Bayesian Convolutional Neural Networks with Bernoulli Approximate Variational Inference
Convolutional neural networks (CNNs) work well on large datasets. But la...

Dropout as a Bayesian Approximation: Appendix
We show that a neural network with arbitrary depth and nonlinearities, ...

Dropout as a Bayesian Approximation: Representing Model Uncertainty in Deep Learning
Deep learning tools have gained tremendous attention in applied machine ...
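The core idea of MC dropout, as described in the abstract above, can be sketched as follows: keep dropout active at test time and average several stochastic forward passes, using their spread as a rough uncertainty estimate. A minimal numpy sketch with illustrative random weights, not the paper's experimental setup:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative fixed weights for a tiny one-hidden-layer regressor.
W1 = rng.standard_normal((1, 50))
W2 = rng.standard_normal((50, 1))

def forward(x, p=0.5):
    """One stochastic forward pass with dropout kept ON at test time."""
    h = np.maximum(x @ W1, 0.0)       # ReLU hidden layer
    mask = rng.random(h.shape) > p    # Bernoulli dropout mask
    h = h * mask / (1.0 - p)          # inverted dropout scaling
    return h @ W2

def mc_dropout_predict(x, T=200):
    """Average T stochastic passes: mean ~ prediction, std ~ uncertainty."""
    samples = np.stack([forward(x) for _ in range(T)])
    return samples.mean(axis=0), samples.std(axis=0)

x = np.array([[0.3]])
mean, std = mc_dropout_predict(x)    # std > 0 reflects model uncertainty
```

The appeal of this view is that an already-trained dropout network yields approximate Bayesian predictive uncertainty at no extra training cost.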

Training generative neural networks via Maximum Mean Discrepancy optimization
We consider training a deep neural network to generate samples from an u...

A Linear-Time Particle Gibbs Sampler for Infinite Hidden Markov Models
Infinite Hidden Markov Models (iHMM's) are an attractive, nonparametric ...

On Sparse variational methods and the Kullback-Leibler divergence between stochastic processes
The variational framework for learning inducing variables (Titsias, 2009...

Latent Gaussian Processes for Distribution Estimation of Multivariate Categorical Data
Multivariate categorical data occur in many applications of machine lear...

Predictive Entropy Search for Bayesian Optimization with Unknown Constraints
Unknown constraints arise in many types of expensive black-box optimizat...

Scalable Variational Gaussian Process Classification
Gaussian process classification is a popular method with a number of app...

Sublinear-Time Approximate MCMC Transitions for Probabilistic Programs
Probabilistic programming languages can simplify the development of mach...

SiGMa: Simple Greedy Matching for Aligning Large Knowledge Bases
The Internet has enabled the creation of a growing number of large-scale...

Warped Mixtures for Nonparametric Cluster Shapes
A mixture of Gaussians fit to a single curved or heavy-tailed cluster wi...

Classification using log Gaussian Cox processes
McCullagh and Yang (2006) suggest a family of classification algorithms ...

A reversible infinite HMM using normalised random measures
We present a nonparametric prior over reversible Markov chains. We use c...

Avoiding pathologies in very deep networks
Choosing appropriate architectures and regularization strategies for dee...

Student-t Processes as Alternatives to Gaussian Processes
We investigate the Student-t process as an alternative to the Gaussian p...

Automatic Construction and Natural-Language Description of Nonparametric Regression Models
This paper presents the beginnings of an automatic statistician, focusin...

The Random Forest Kernel and other kernels for big data from random partitions
We present Random Partition Kernels, a new class of kernels derived by d...

Gaussian Process Volatility Model
The accurate prediction of time-changing variances is an important task ...

Randomized Nonlinear Component Analysis
Classical methods such as Principal Component Analysis (PCA) and Canonic...
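Randomised methods of this kind typically build on random feature maps. As a rough illustration, not the paper's algorithm, a minimal random Fourier features sketch that approximates an RBF kernel:

```python
import numpy as np

rng = np.random.default_rng(0)

def rff_features(X, D=2000, gamma=1.0):
    """Random Fourier features approximating k(x, y) = exp(-gamma * ||x - y||^2)."""
    d = X.shape[1]
    W = rng.normal(scale=np.sqrt(2 * gamma), size=(d, D))  # frequencies ~ N(0, 2*gamma*I)
    b = rng.uniform(0.0, 2 * np.pi, size=D)                # random phases
    return np.sqrt(2.0 / D) * np.cos(X @ W + b)

X = rng.standard_normal((5, 3))
Z = rff_features(X)
K_approx = Z @ Z.T                                         # inner products in feature space
K_exact = np.exp(-((X[:, None, :] - X[None, :, :]) ** 2).sum(-1))
# K_approx entries are close to K_exact for moderately large D
```

Working with the explicit D-dimensional map Z lets linear methods such as PCA or CCA stand in for their kernelised counterparts at much lower cost.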
Zoubin Ghahramani
Zoubin Ghahramani FRS is a British-Iranian researcher and Professor of Information Engineering at the University of Cambridge. He holds joint appointments at University College London and the Alan Turing Institute, and since 2009 he has been a fellow of St John's College, Cambridge. From 2003 to 2012 he was Associate Research Professor at the Carnegie Mellon University School of Computer Science. He is also Chief Scientist of Uber and Deputy Director of the Leverhulme Centre for the Future of Intelligence.
Ghahramani was educated at the American School of Madrid in Spain and received degrees in cognitive science and computer science from the University of Pennsylvania in 1990. He obtained his PhD from the Department of Brain and Cognitive Sciences at the Massachusetts Institute of Technology, working with Michael I. Jordan and Tomaso Poggio.
After his PhD, Ghahramani moved to the University of Toronto in 1995, where he was an ITRC Postdoctoral Fellow in the Artificial Intelligence Lab, working with Geoffrey Hinton. He was a faculty member at the Gatsby Computational Neuroscience Unit at University College London from 1998 to 2005.
Ghahramani has made significant contributions to Bayesian machine learning, as well as to graphical models and computer science more broadly. His current research focuses on Bayesian nonparametric modelling and statistical machine learning. He has also worked on artificial intelligence, information retrieval, bioinformatics and statistics, as foundations for managing uncertainty, decision-making and the design of learning systems. He has published more than 200 papers, which have received over 30,000 citations. In 2014 he cofounded Geometric Intelligence with Gary Marcus, Doug Bemis and Ken Stanley. After Uber acquired the startup in 2016, he moved to Uber's AI Labs, becoming Chief Scientist just four months later, replacing Gary Marcus.
In 2015, Ghahramani was elected a Fellow of the Royal Society. His election certificate reads:
Zoubin Ghahramani is a world leader in machine learning who has made significant progress in algorithms that can learn from data. In particular, he is known for his fundamental contributions to probabilistic modelling and Bayesian nonparametric approaches to machine learning systems, and for the development of approximate algorithms for scalable learning. He is a pioneer of semi-supervised learning methods, active learning algorithms and sparse Gaussian processes. His development of novel nonparametric latent variable models, such as the infinite latent feature model, was highly influential.