Deep Residual Mixture Models

06/22/2020 ∙ by Perttu Hämäläinen, et al.

We propose Deep Residual Mixture Models (DRMMs) which share the many desirable properties of Gaussian Mixture Models (GMMs), but with a crucial benefit: The modeling capacity of a DRMM can grow exponentially with depth, while the number of model parameters only grows quadratically. DRMMs allow for extremely flexible conditional sampling, as the conditioning variables can be freely selected without re-training the model, and it is easy to combine the sampling with priors and (in)equality constraints. DRMMs should be applicable where GMMs are traditionally used, but as demonstrated in our experiments, DRMMs scale better to complex, high-dimensional data. We demonstrate the approach in constrained multi-limb inverse kinematics and image completion.
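The "extremely flexible conditional sampling" claimed above builds on a classical property of Gaussian mixtures: conditioning a GMM on any subset of its variables yields another GMM in closed form, with no retraining. As background (this is a sketch of the standard GMM identity, not the paper's DRMM algorithm; all function and variable names are illustrative), conditioning reweights each component by its marginal likelihood of the observed values and shifts its mean and covariance by the usual Gaussian conditioning formulas:

```python
import numpy as np

def condition_gmm(weights, means, covs, idx_obs, x_obs):
    """Condition a GMM on observed dimensions idx_obs taking values x_obs.

    Returns (weights, means, covs) of the conditional GMM over the
    remaining (free) dimensions. Any subset of variables can be chosen
    as conditioning variables, without refitting the model.
    """
    d = means.shape[1]
    idx_free = np.array([i for i in range(d) if i not in set(idx_obs)])
    new_w, new_mu, new_cov = [], [], []
    for w, mu, S in zip(weights, means, covs):
        Saa = S[np.ix_(idx_obs, idx_obs)]   # observed-observed block
        Sba = S[np.ix_(idx_free, idx_obs)]  # free-observed block
        Sbb = S[np.ix_(idx_free, idx_free)] # free-free block
        Saa_inv = np.linalg.inv(Saa)
        diff = x_obs - mu[idx_obs]
        # Reweight the component by its likelihood of the observed values.
        log_norm = np.log(np.linalg.det(2.0 * np.pi * Saa))
        loglik = -0.5 * (diff @ Saa_inv @ diff + log_norm)
        new_w.append(w * np.exp(loglik))
        # Standard Gaussian conditional mean and covariance.
        new_mu.append(mu[idx_free] + Sba @ Saa_inv @ diff)
        new_cov.append(Sbb - Sba @ Saa_inv @ S[np.ix_(idx_obs, idx_free)])
    new_w = np.array(new_w)
    return new_w / new_w.sum(), np.array(new_mu), np.array(new_cov)
```

Sampling then proceeds by drawing a component index from the reweighted mixture and sampling the corresponding conditional Gaussian. The paper's contribution, per the abstract, is making this kind of conditioning scale to deep, high-capacity models and combining it with priors and (in)equality constraints.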
