Concept-Oriented Deep Learning: Generative Concept Representations

11/15/2018
by Daniel T Chang, et al.

Generative concept representations have three major advantages over discriminative ones: they can represent uncertainty, they support the integration of learning and reasoning, and they are well suited to unsupervised and semi-supervised learning. We discuss probabilistic and generative deep learning, on which generative concept representations are based, and the use of variational autoencoders and generative adversarial networks for learning generative concept representations, particularly for concepts whose data are sequences, structured data, or graphs.
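The variational autoencoder mentioned above learns a latent representation by encoding each input as a Gaussian distribution and sampling from it via the reparameterization trick, with a KL term regularizing the latent space toward a standard normal prior. As a minimal numpy sketch (the linear encoder, weight shapes, and toy data below are illustrative assumptions, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(x, W_mu, W_logvar):
    # Hypothetical linear encoder: maps input x to the mean and
    # log-variance of the approximate posterior q(z|x).
    return x @ W_mu, x @ W_logvar

def reparameterize(mu, logvar, rng):
    # Reparameterization trick: z = mu + sigma * eps with eps ~ N(0, I),
    # so sampling stays differentiable with respect to mu and logvar.
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * logvar) * eps

def kl_divergence(mu, logvar):
    # Closed-form KL(q(z|x) || N(0, I)) per example, summed over latent dims.
    return -0.5 * np.sum(1.0 + logvar - mu**2 - np.exp(logvar), axis=1)

# Toy batch: 4 examples with 8 features, 2 latent dimensions.
x = rng.standard_normal((4, 8))
W_mu = 0.1 * rng.standard_normal((8, 2))
W_logvar = 0.1 * rng.standard_normal((8, 2))

mu, logvar = encode(x, W_mu, W_logvar)
z = reparameterize(mu, logvar, rng)   # latent codes, shape (4, 2)
kl = kl_divergence(mu, logvar)        # non-negative KL term per example
```

In a full VAE this KL term would be added to a reconstruction loss from a decoder network; the same encode/sample/regularize pattern extends to sequence, structured-data, and graph encoders.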


Related research:

- 06/05/2018: Concept-Oriented Deep Learning. "Concepts are the foundation of human deep learning, understanding, and k..."
- 05/04/2018: Unsupervised learning for concept detection in medical images: a comparative analysis. "As digital medical imaging becomes more prevalent and archives increase ..."
- 11/01/2019: Variational Autoencoders for Generative Modelling of Water Cherenkov Detectors. "Matter-antimatter asymmetry is one of the major unsolved problems in phy..."
- 04/14/2021: Is Disentanglement all you need? Comparing Concept-based Disentanglement Approaches. "Concept-based explanations have emerged as a popular way of extracting h..."
- 04/22/2020: R-VGAE: Relational-variational Graph Autoencoder for Unsupervised Prerequisite Chain Learning. "The task of concept prerequisite chain learning is to automatically dete..."
- 11/22/2016: Inducing Interpretable Representations with Variational Autoencoders. "We develop a framework for incorporating structured graphical models in ..."
- 12/10/2020: Generative Deep Learning Techniques for Password Generation. "Password guessing approaches via deep learning have recently been invest..."
