On Need for Topology Awareness of Generative Models

09/07/2019 ∙ by Uyeong Jang, et al. ∙ University of Wisconsin-Madison ∙ SRI International

The manifold assumption in learning states that data lie approximately on a manifold of much lower dimension than the input space. Generative models learn to generate data according to the underlying data distribution, and are used in various tasks such as data augmentation and generating variations of images. This paper addresses the following question: do generative models need to be aware of the topology of the underlying data manifold on which the data lie? We suggest that the answer is yes, and demonstrate that a mismatch can have ramifications for security-critical applications, such as generative-model-based defenses against adversarial examples. We provide theoretical and experimental results to support our claims.
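The topological obstruction behind this question can be illustrated with a minimal sketch (the toy generator `g` and the cluster positions are illustrative assumptions, not the paper's construction): a continuous generator maps a connected latent space to a connected set, so if the data manifold has two disconnected components, some generated samples must fall in the gap between them.

```python
import numpy as np

# Toy continuous "generator" from a connected latent interval [-1, 1]
# toward data concentrated near two disjoint clusters at -1 and +1.
def g(z):
    # Smooth map that pushes latent points toward the two modes.
    return np.tanh(4.0 * z)

z = np.linspace(-1.0, 1.0, 1001)  # connected latent space
x = g(z)

# Because the image of a connected set under a continuous map is connected,
# some samples necessarily land in the "gap" (|x| < 0.5) between the two
# clusters, i.e. off the disconnected data manifold.
gap_fraction = np.mean(np.abs(x) < 0.5)
print(gap_fraction)
```

Under these assumptions the gap fraction is strictly positive for any continuous generator and connected latent space, which is the kind of topological mismatch the paper argues generative models must account for.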


