Task Specific Adversarial Cost Function

09/27/2016
by Antonia Creswell, et al.

The cost function used to train a generative model should fit the purpose of the model. If the model is intended for tasks such as generating perceptually correct samples, it is beneficial to maximise the likelihood that a sample drawn from the model, Q, comes from the same distribution as the training data, P. This is equivalent to minimising the Kullback-Leibler (KL) divergence, KL[Q||P]. However, if the model is intended for tasks such as retrieval or classification, it is beneficial to maximise the likelihood that a sample drawn from the training data is captured by the model, which is equivalent to minimising KL[P||Q]. The cost function used in adversarial training optimises the Jensen-Shannon divergence, which can be seen as an even interpolation between KL[Q||P] and KL[P||Q]. Here, we propose an alternative adversarial cost function which allows easy tuning of the model for either task. Our task-specific cost function is evaluated on a dataset of hand-written characters on the following tasks: generation, retrieval and one-shot learning.
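For reference, the quantities the abstract contrasts can be written out as follows; P denotes the training-data distribution and Q the model distribution, matching the abstract's notation, while the mixture M is introduced here only to state the standard definition of the Jensen-Shannon divergence and is not named in the abstract.

```latex
% Reverse KL (favours perceptually plausible samples) and forward KL
% (favours covering the data distribution, useful for retrieval/classification):
\[
  \mathrm{KL}[Q \,\|\, P] = \mathbb{E}_{x \sim Q}\!\left[\log \frac{Q(x)}{P(x)}\right],
  \qquad
  \mathrm{KL}[P \,\|\, Q] = \mathbb{E}_{x \sim P}\!\left[\log \frac{P(x)}{Q(x)}\right].
\]
% The Jensen-Shannon divergence optimised by standard adversarial training
% weights both directions evenly via the mixture M = (P + Q)/2:
\[
  \mathrm{JS}[P \,\|\, Q]
    = \tfrac{1}{2}\,\mathrm{KL}[P \,\|\, M] + \tfrac{1}{2}\,\mathrm{KL}[Q \,\|\, M],
  \qquad
  M = \tfrac{1}{2}\,(P + Q).
\]
```

The proposed task-specific cost function replaces this fixed, even weighting with one that can be tuned towards either KL direction depending on whether the model is intended for generation or for retrieval and classification.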
