1 Introduction
Generative models provide a powerful mechanism for learning data distributions and simulating samples. Recent years have seen remarkable advances, especially in deep approaches (Goodfellow et al., 2016; Hu et al., 2018b) such as Generative Adversarial Networks (GANs) (Goodfellow et al., 2014), Variational Autoencoders (VAEs) (Kingma and Welling, 2013), autoregressive networks (Larochelle and Murray, 2011; Oord et al., 2016), and so forth. However, it is usually difficult to exploit rich problem structures and domain knowledge in these various deep generative models (e.g., the human body structure in image generation, Figure 1). Many times we have to hope the deep networks can discover the structures from massive data by themselves, leaving much valuable domain knowledge unused. Recent efforts in designing specialized network architectures or learning disentangled representations (Chen et al., 2016; Hu et al., 2017) are usually applicable only to specific knowledge, models, or tasks. It is therefore highly desirable to have a general means of incorporating arbitrary structured knowledge with any type of deep generative model in a principled way.

On the other hand, posterior regularization (PR) (Ganchev et al., 2010) is a principled framework for imposing knowledge constraints on posterior distributions of probabilistic models, and has shown effectiveness in regulating the learning of models in different contexts. For example, Hu et al. (2016a)
extends PR to incorporate structured logic rules with neural classifiers. However, the previous approaches are not directly applicable to the general case of deep generative models, as many of the models (e.g., GANs, many autoregressive networks) are not straightforwardly formulated within the probabilistic Bayesian framework and do not possess a posterior distribution or even meaningful latent variables. Moreover, PR has required a priori fixed constraints: users have to fully specify the constraints beforehand, which can be impractical due to heavy engineering, or suboptimal without adaptivity to the data and models. To extend the scope of applicable knowledge and reduce the engineering burden, it is necessary to allow users to specify only partial or fuzzy structures, while learning the remaining parts of the constraints jointly with the regulated model.

To this end, we establish formal connections between the PR framework and a broad set of algorithms in the control and reinforcement learning (RL) domains, and, based on these connections, transfer well-developed RL techniques to constraint learning in PR. In particular, though PR and RL are apparently distinct paradigms applied in different contexts, we show a mathematical correspondence between the model and constraints in PR and the policy and reward in entropy-regularized policy optimization (Peters et al., 2010; Schulman et al., 2015; Abdolmaleki et al., 2018), respectively. This naturally inspires us to leverage relevant approaches from the RL domain (specifically, maximum entropy inverse RL (Ziebart et al., 2008; Finn et al., 2016b)) to learn the PR constraints from data (i.e., demonstrations in RL).
Based on the unified perspective, we derive a practical algorithm with efficient estimations and moderate approximations. The algorithm is efficient at regularizing large target spaces with arbitrary constraints, flexible in coupling the adaptation of constraints with the learning of the model, and model-agnostic, applying to diverse deep generative models, including implicit models where the generative density cannot be evaluated (Mohamed and Lakshminarayanan, 2016; Goodfellow et al., 2014). We demonstrate the effectiveness of the proposed approach in both image and text generation (Figure 1). Leveraging domain knowledge of structure-preserving constraints, the resulting models improve over base generative models.

2 Related Work
It is of increasing interest to incorporate problem structures and domain knowledge in machine learning approaches
(Taskar et al., 2004; Ganchev et al., 2010; Hu et al., 2016a). The added structure helps to facilitate learning, enhance generalization, and improve interpretability. For deep neural models, one of the common ways is to design specialized network architectures or features for specific tasks (e.g., Andreas et al. (2016); Liang et al. (2018); Kusner et al. (2017); Liang et al. (2017)). Such a method typically has a limited scope of applicable tasks, models, or knowledge. On the other hand, for structured probabilistic models, posterior regularization (PR) and related frameworks (Ganchev et al., 2010; Liang et al., 2009; Bellare et al., 2009) provide a general means to impose knowledge constraints during model estimation. Hu et al. (2016a) develops iterative knowledge distillation based on PR to regularize neural networks with any logic rules. However, the application of PR to the broad class of deep generative models has been hindered, as many of the models do not even possess meaningful latent variables or explicit density evaluation (i.e., implicit models). Previous attempts are thus limited to applying simple max-margin constraints (Li et al., 2015). The requirement of a priori fixed constraints has also made PR impractical for complex, uncertain knowledge. Previous efforts to alleviate the issue either require additional manual supervision (Mei et al., 2014) or are limited to regularizing small label spaces (Hu et al., 2016b). This paper develops a practical algorithm that is generally applicable to any deep generative models and any learnable constraints on arbitrary (large) target spaces.

Our work builds connections between the Bayesian PR framework and reinforcement learning. A relevant, broad research topic of formalizing RL as a probabilistic inference problem has been explored in the RL literature (Dayan and Hinton, 1997; Deisenroth et al., 2013; Neumann et al., 2011; Levine, 2018; Abdolmaleki et al., 2018; Tan et al., 2018), where rich approximate inference tools are used to improve the modeling and reasoning of various RL algorithms. The link between RL and PR has not been previously studied. We establish the mathematical correspondence, and, differing from the RL literature, we in turn transfer the tools from RL to expand the probabilistic PR framework. Inverse reinforcement learning (IRL) seeks to learn a reward function from expert demonstrations. Recent approaches based on maximum-entropy IRL (Ziebart et al., 2008) are developed to learn both the reward and the policy (Finn et al., 2016b, a; Fu et al., 2017). We adopt the maximum-entropy IRL formulation to derive the constraint learning objective in our algorithm, and leverage the unique structure of PR for efficient importance sampling estimation, which differs from these previous approaches.
Table 1:
Components | PR | Entropy-Reg RL | MaxEnt IRL | (Energy) GANs
x | data/generations | action-state samples | demonstrations | data/generations
p_θ(x) | generative model | (old) policy | — | generator
f(x) / R(x) | constraint | reward | reward | discriminator
q(x) | variational distr. q, Eq.(3) | (new) policy | policy | —
3 Connecting Posterior Regularization to Reinforcement Learning
3.1 PR for Deep Generative Models
PR (Ganchev et al., 2010)
was originally proposed to provide a principled framework for incorporating constraints on posterior distributions of probabilistic models with latent variables. The formulation is not generally applicable to deep generative models, as many of them (e.g., GANs and autoregressive models) are not formulated within the Bayesian framework and do not possess a valid posterior distribution or even semantically meaningful latent variables. Here we adopt a slightly adapted formulation that makes minimal assumptions on the specification of the model to regularize. It is worth noting that, though we present in the generative model context, the formulations, including the algorithm developed later (sec 4), can straightforwardly be extended to other settings such as discriminative models.

Consider a generative model x ∼ p_θ(x) with parameters θ. Note that generation of x can condition on arbitrary other elements (e.g., the source image for image transformation), which are omitted for simplicity of notation. Denote the original objective of p_θ(x) with L(θ). PR augments the objective by adding a constraint term encoding the domain knowledge. Without loss of generality, consider a constraint function f(x), such that a higher f(x) value indicates a better x in terms of the particular knowledge. Note that f can also involve other factors such as latent variables and extra supervisions, and can include a set of multiple constraints.
A straightforward way to impose the constraint on the model is to maximize E_{p_θ(x)}[f(x)]. Such a method is efficient only when p_θ is a GAN-like implicit generative model or an explicit distribution that can be efficiently reparameterized (e.g., a Gaussian; Kingma and Welling (2013)). For other models, such as the large set of non-reparameterizable explicit distributions, the gradient is usually computed with the log-derivative trick and can suffer from high variance. For broad applicability and efficient optimization, PR instead imposes the constraint on an auxiliary variational distribution q(x), which is encouraged to stay close to p_θ(x) through a KL divergence term:

L(θ, q) = KL(q(x) ‖ p_θ(x)) − α E_{q(x)}[f(x)],   (1)
where α is the weight of the constraint term. The PR objective for learning the model is written as:
min_{θ, q} L(θ) + λ L(θ, q),   (2)
where λ is the balancing hyperparameter. As optimizing the original model objective L(θ) is straightforward and depends on the specific generative model of choice, in the following we omit the discussion of L(θ) and focus on the term L(θ, q) introduced by the framework.

The problem is solved using an EM-style algorithm (Ganchev et al., 2010; Hu et al., 2016a). Specifically, the E-step optimizes Eq.(1) w.r.t. q, which is convex and has a closed-form solution at each iteration given θ:
q*(x) = p_θ(x) exp{α f(x)} / Z,   (3)
where Z is the normalization term. We can see q* as an energy-based distribution with the negative energy defined by α f(x). With q* from the E-step fixed, the M-step optimizes Eq.(1) w.r.t. θ with:
min_θ KL(q*(x) ‖ p_θ(x)).   (4)
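The E- and M-steps above can be sketched numerically. The following is a minimal toy illustration (not the paper's implementation), assuming a 1-D Gaussian base model p_θ = N(θ, 1) and a hand-picked constraint f(x) = −(x − 2)²: the E-step represents q*(x) ∝ p_θ(x) exp{α f(x)} via self-normalized importance weights on samples from p_θ, and the M-step reduces to a weighted MLE for the Gaussian mean.

```python
import numpy as np

rng = np.random.default_rng(0)
alpha, theta = 1.0, 0.0

def f(x):
    # Knowledge constraint (toy): higher score for samples near x = 2.
    return -(x - 2.0) ** 2

# E-step: q*(x) ∝ p_theta(x) exp{alpha f(x)} (Eq. 3), represented by
# self-normalized importance weights on samples drawn from p_theta.
x = rng.normal(theta, 1.0, size=200_000)
w = np.exp(alpha * f(x))
w /= w.sum()

# M-step: min_theta KL(q* || p_theta) (Eq. 4); for a fixed-variance Gaussian
# mean, this is the q*-weighted sample mean.
theta_new = np.sum(w * x)
print(theta_new)  # pulled from 0 toward the constraint's preferred region
```

For this toy case, q* is itself Gaussian with mean 4/3, so the M-step moves θ from 0 toward roughly 1.33 in a single iteration.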
The constraint f in PR has to be fully specified a priori and is fixed throughout the learning. It would be desirable, or even necessary, to enable learnable constraints so that practitioners can specify only the known components of f while leaving any unknown or uncertain components to be learned automatically. For example, for human image generation (Figure 1, left panel), users can specify structures on the parsed human parts, while it is impractical to also manually engineer the human part parser that recognizes parts from raw image pixels. It is favorable to instead cast the parser as a learnable module in the constraint. Though it is possible to pretrain the module and simply fix it in PR, the lack of adaptivity to the data and model can lead to suboptimal results, as shown in the empirical study (Table 2). This necessitates expanding the PR framework to enable joint learning of constraints with the model.
Denote the constraint function with learnable components φ as f_φ(x), where φ can take various optimizable forms, such as the free parameters of a structural model, or a graph structure to optimize.
Simple way of learning the constraint. A straightforward way to learn the constraint is to directly optimize Eq.(1) w.r.t. φ in the M-step, yielding
max_φ E_{q(x)}[α f_φ(x)].   (5)
That is, the constraint is trained to fit the samples from the current regularized model q. However, such an objective can be problematic, as the generated samples can be of low quality, e.g., due to the poor state of the generative parameters θ at initial stages, or the insufficient capability of the generative model per se.
In this paper, we propose to treat the constraint as an extrinsic reward to be learned, motivated by the connections between PR and the reinforcement learning domain presented below.
3.2 PR and RL
RL or optimal control has been studied primarily for determining optimal action sequences or strategies, which is significantly different from the context of PR that aims at regulating generative models. However, formulations very similar to PR (e.g., Eqs.1 and 3) have been developed and widely used, in both the (forward) RL for policy optimization and the inverse RL for reward learning.
To make the mathematical correspondence clearer, we intentionally reuse most of the notations from PR. Table 1 lists the correspondence. Specifically, consider a stationary Markov decision process (MDP). An agent in state s draws an action a following the policy p(a|s). The state subsequently transfers to s′ (with some transition probability of the MDP), and a reward R(s, a) is obtained. Let x = (s, a) denote the state-action pair, and p(x) = μ(s) p(a|s), where μ(s) is the stationary state distribution (Sutton and Barto, 1998).

3.2.1 Entropy regularized policy optimization
The goal of policy optimization is to find the optimal policy that maximizes the expected reward. The rich research line of entropy regularized policy optimization augments the objective with information-theoretic regularizers, such as a KL divergence between the new policy and the old policy, for stabilized learning. With a slight abuse of notation, let q(x) denote the new policy and p(x) the old one. A prominent algorithm, for example, is relative entropy policy search (REPS) (Peters et al., 2010), which follows the objective:
max_q α E_{q(x)}[R(x)] − KL(q(x) ‖ p(x)),   (6)
where the KL divergence prevents the policy from changing too rapidly. Similar objectives have also been widely used in other workhorse algorithms such as trust-region policy optimization (TRPO) (Schulman et al., 2015), soft Q-learning (Haarnoja et al., 2017; Schulman et al., 2017), and others.
We can see the close resemblance between Eq.(6) and the PR objective in Eq.(1), where the generative model p_θ in PR corresponds to the reference (old) policy p, while the constraint f corresponds to the reward R. The new policy q can be either a parametric distribution (Schulman et al., 2015) or a nonparametric distribution (Peters et al., 2010; Abdolmaleki et al., 2018). For the latter, the optimization of Eq.(6) precisely corresponds to the E-step of PR, yielding the optimal policy that takes the same form as q* in Eq.(3), with p_θ and f replaced with their respective counterparts p and R. The parametric policy is subsequently updated with samples from the nonparametric one, which is exactly equivalent to the M-step in PR (Eq.4).
While the above policy optimization algorithms have assumed a reward function given by the external environment, just as the predefined constraint function in PR, the strong connections above inspire us to treat the PR constraint as an extrinsic reward, and utilize the rich tools in RL (especially the inverse RL) for learning the constraint.
3.2.2 Maximum entropy inverse reinforcement learning
Maximum entropy (MaxEnt) IRL (Ziebart et al., 2008) is among the most widely used methods for inducing the reward function from expert demonstrations, with p_d(x) denoting the empirical demonstration (data) distribution. MaxEnt IRL adopts the same principle as the above entropy regularized RL (Eq.6) of maximizing the expected reward regularized by the relative entropy (i.e., the KL), except that, in MaxEnt IRL, the reference policy p is replaced with a uniform distribution and the regularization reduces to the entropy of q. Therefore, same as above, the optimal policy takes the form q(x) ∝ exp{R(x)}. MaxEnt IRL assumes the demonstrations are drawn from the optimal policy. Learning the reward function R_φ with unknown parameters φ is then cast as maximizing the likelihood of the distribution q_φ(x) = exp{R_φ(x)} / Z under the demonstrations:
max_φ E_{x∼p_d}[ log q_φ(x) ].   (7)
Given the direct correspondence between the policy q_φ in MaxEnt IRL and the policy optimization solution of Eq.(6), plus the connection between the regularized distribution q* of PR (Eq.3) and that solution as built in sec 3.2.1, we can readily link q* and q_φ. This motivates plugging the constraint f_φ, which parallels the reward function R_φ, into the above maximum likelihood objective for learning. We present the resulting full algorithm in the next section. Table 1 summarizes the correspondence between PR, entropy regularized policy optimization, and maximum entropy IRL.
4 Algorithm
We have formally related PR to the RL methods. With this unified view, we derive a practical algorithm for arbitrary learnable constraints on any deep generative models. The algorithm alternates the optimization of the constraint f_φ and the generative model p_θ.
4.1 Learning the Constraint
As motivated in section 3.2, instead of directly optimizing φ with the original PR objective (Eq.5), which can be problematic, we treat f_φ as the reward function to be induced within the MaxEnt IRL framework. That is, we maximize the data likelihood of q (Eq.3) w.r.t. φ, yielding the gradient:
∇_φ E_{x∼p_d}[ log q(x) ] = α ( E_{p_d(x)}[ ∇_φ f_φ(x) ] − E_{q(x)}[ ∇_φ f_φ(x) ] ).   (8)
The second term involves estimating an expectation w.r.t. the energy-based distribution q, which is in general very challenging. However, we can exploit the special structure of q for efficient approximation. Specifically, we use the generative model p_θ as the proposal distribution, and obtain the importance sampling estimate of the second term as follows:
E_{q(x)}[ ∇_φ f_φ(x) ] = (1/Z) E_{p_θ(x)}[ exp{α f_φ(x)} ∇_φ f_φ(x) ].   (9)
Note that the normalization Z can also be estimated efficiently with MC sampling: Z ≈ (1/N) Σ_i exp{α f_φ(x_i)}, where x_i ∼ p_θ(x). The base generative distribution p_θ is a natural choice for the proposal, as it is in general amenable to efficient sampling, and is close to q as forced by the KL divergence in Eq.(1). Our empirical study shows low variance of the learning process (sec 5). Moreover, using p_θ as the proposal distribution allows p_θ to be an implicit generative model (as no likelihood evaluation of p_θ is needed). Note that the importance sampling estimation is consistent yet biased.
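A hedged numerical sketch of the estimate in Eq.(9), under assumptions chosen so the answer is known in closed form: a 1-D Gaussian base model p_θ = N(0, 1) and a linear learnable constraint f_φ(x) = φ·x, so that ∇_φ f_φ(x) = x and the second term of Eq.(8) becomes E_q[x].

```python
import numpy as np

rng = np.random.default_rng(1)
alpha, phi = 1.0, 0.5

# Proposal samples from the base model p_theta = N(0, 1).
x = rng.normal(0.0, 1.0, size=500_000)

# Unnormalized importance weights exp{alpha f_phi(x)} and the MC estimate of Z.
w = np.exp(alpha * phi * x)
Z_hat = w.mean()

# Eq.(9): E_q[grad_phi f_phi(x)] = E_q[x] estimated by weighted averaging.
grad_est = (w * x).mean() / Z_hat
print(grad_est)
```

For this toy case, q is the exponentially tilted Gaussian N(αφ, 1), so the true value of E_q[x] is 0.5, and the self-normalized estimate should land very close to it.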
4.2 Learning the Generative Model
Given the current parameter state (θ, φ), and q evaluated at these parameters, we continue to update the generative model. Recall that optimization of the generative parameters θ is performed by minimizing the KL divergence in Eq.(4), which we replicate here:
min_θ KL(q(x) ‖ p_θ(x)) = min_θ − E_{q(x)}[ log p_θ(x) ] + const.   (10)
The expectation w.r.t. q can be estimated as above (Eq.9). A drawback of this objective is the requirement of evaluating the generative density p_θ(x), which is incompatible with the emerging implicit generative models (Mohamed and Lakshminarayanan, 2016) that only permit simulating samples but not evaluating the density.
To address the restriction, when it comes to regularizing implicit models, we propose to instead minimize the reverse KL divergence:
min_θ KL(p_θ(x) ‖ q(x)).   (11)
By noting that the gradient of KL(p_θ(x) ‖ p_{θ_t}(x)) w.r.t. θ vanishes at θ = θ_t, we obtain the gradient w.r.t. θ:
∇_θ KL(p_θ(x) ‖ q(x)) |_{θ=θ_t} = −α ∇_θ E_{p_θ(x)}[ f_φ(x) ] |_{θ=θ_t}.   (12)
That is, the gradient of minimizing the reversed KL divergence equals the gradient of maximizing E_{p_θ}[f_φ(x)]. Intuitively, the objective encourages the generative model to generate samples to which the constraint function assigns high scores. Though the objective for implicit models deviates from the original PR framework, reversing the KL divergence for computational tractability was also used previously, such as in the classic wake-sleep method (Hinton et al., 1995). The resulting algorithm also resembles the adversarial learning in GANs, as we discuss in the next section. Empirical results on implicit models show the effectiveness of the objective.
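The implicit-model update in Eq.(12) can be sketched as follows. This is a minimal assumed setup (not the paper's models): a reparameterized "implicit" generator x = g_θ(ε) = θ + 0.1ε and a fixed stand-in constraint, with the pathwise gradient ascending E_{p_θ}[f_φ(x)].

```python
import numpy as np

rng = np.random.default_rng(2)
theta, lr = 0.0, 0.1

def f(x):
    # Stand-in for the learned constraint: scores samples near x = 3 highly.
    return -(x - 3.0) ** 2

for _ in range(200):
    eps = rng.normal(size=256)
    x = theta + 0.1 * eps                 # pathwise (reparameterized) samples
    grad = (-2.0 * (x - 3.0)).mean()      # df/dx averaged; dx/dtheta = 1
    theta += lr * grad                    # gradient *ascent* on E[f]

print(theta)  # driven toward the constraint's high-score region
```

Since only samples (never densities) of the generator are used, the same update applies unchanged when g_θ is a deep implicit network trained by backpropagation.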
The resulting algorithm is summarized in Alg.1.
Connections to adversarial learning
For implicit generative models, the two objectives w.r.t. φ and θ (Eq.8 and Eq.12) are conceptually similar to the adversarial learning in GANs (Goodfellow et al., 2014) and variants such as energy-based GANs (Kim and Bengio, 2016; Zhao et al., 2016; Zhai et al., 2016; Wang and Liu, 2016). Specifically, the constraint f_φ can be seen as being optimized to assign lower energy (with the energy-based distribution q) to real examples from p_d, and higher energy to fake samples from q, which is the regularized model of the generator p_θ. In contrast, the generator is optimized to generate samples that confuse f_φ and obtain lower energy. Such an adversarial relation links the PR constraint to the discriminator in GANs (Table 1). Note that here fake samples are generated from q and p_θ in the two learning phases, respectively, which differs from previous adversarial methods for energy-based model estimation that simulate only from a generator. Besides, distinct from the discriminator-centric view of the previous work (Kim and Bengio, 2016; Zhai et al., 2016; Wang and Liu, 2016), we primarily aim at improving the generative model by incorporating learned constraints. Last but not least, as discussed in sec 3.1, the proposed framework and algorithm are more generally and efficiently applicable not only to implicit generative models as in GANs, but also to (non-)reparameterizable explicit generative models.

5 Experiments
We demonstrate the applications and effectiveness of the algorithm in two tasks related to image and text generation (Hu et al., 2018a), respectively.
Table 2:
  | Method | SSIM | Human
1 | Ma et al. (2018) | 0.614 | —
2 | Pumarola et al. (2018) | 0.747 | —
3 | Ma et al. (2017) | 0.762 | —
4 | Base model | 0.676 | 0.03
5 | With fixed constraint | 0.679 | 0.12
6 | With learned constraint | 0.727 | 0.77
5.1 Pose Conditional Person Image Generation
Given a person image and a new body pose, the goal is to generate an image of the same person under the new pose (Figure 1, left). The task is challenging due to body self-occlusions and many cloth and shape ambiguities. Complete end-to-end generative networks have previously failed (Ma et al., 2017), and existing work designed specialized generative processes or network architectures (Ma et al., 2017; Pumarola et al., 2018; Ma et al., 2018). We show that, with an added body part consistency constraint, a plain end-to-end generative model can also be trained to produce highly competitive results, significantly improving over base models that do not incorporate the problem structure.
Setup. We follow the previous work (Ma et al., 2017) and obtain from DeepFashion (Liu et al., 2016) a set of triples (source image, pose keypoints, target image) as supervision data. The base generative model is an implicit model that transforms the input source and pose directly to the pixels of the generated image (and hence defines a Dirac-delta distribution). We use the residual block architecture (Wang et al., 2017) widely used in image generation for the generative model. The base model is trained to minimize the L1 distance loss between the real and generated pixel values, as well as to confuse a binary discriminator that distinguishes between the generation and the true target image.
Knowledge constraint. Neither the pixel-wise distance nor the binary discriminator loss encodes any task structure. We introduce a structured consistency constraint that encourages each of the body parts (e.g., head, legs) of the generated image to match the respective part of the true image. Specifically, the constraint includes a human parsing module that classifies each pixel of a person image into possible body parts. The constraint then evaluates cross entropies of the per-pixel part distributions between the generated and true images. The average negative cross entropy serves as the constraint score. The parsing module is parameterized as a neural network with parameters φ, pretrained on an external parsing dataset (Gong et al., 2017), and subsequently adapted within our algorithm jointly with the generative model.
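The structured consistency score can be sketched as follows, assuming hypothetical parser outputs: per-pixel distributions over K body parts for the generated and true images, shaped (H, W, K). The score is the average negative cross entropy between corresponding pixel distributions (higher means more consistent); the real parser is a learned network, which this sketch does not include.

```python
import numpy as np

def consistency_score(parts_gen, parts_true, eps=1e-8):
    # Per-pixel cross entropy between the true-image part distribution and
    # the generated-image part distribution; negate and average for the score.
    ce = -np.sum(parts_true * np.log(parts_gen + eps), axis=-1)
    return -ce.mean()

rng = np.random.default_rng(3)
logits = rng.normal(size=(8, 8, 5))                      # toy parse map, K = 5
probs = np.exp(logits) / np.exp(logits).sum(-1, keepdims=True)

same = consistency_score(probs, probs)                   # identical parses
diff = consistency_score(np.roll(probs, 1, axis=0), probs)  # mismatched parses
print(same > diff)  # matching parses score higher
```

By Gibbs' inequality, the cross entropy is minimized when the two distributions agree, so matching parses always receive the higher (less negative) score.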
Results. Table 2 compares the full model (with the learned constraint, Row 6) with the base model (Row 4) and the one regularized with a constraint that is fixed after pretraining (Row 5). The human survey is performed by asking annotators to rank the quality of images generated by the three models on each of 200 test cases, and the percentages of being ranked as the best are reported (tied rankings are treated as negative results). We can see a large improvement by the proposed algorithm. The model with the fixed constraint fails, partially because pretraining on external data does not necessarily fit the current problem domain. This highlights the necessity of constraint learning. Figure 3 shows examples further validating the effectiveness of the algorithm.
In sec 4, we have discussed the close connection between the proposed algorithm and (energy-based) GANs. The conventional discriminator in GANs can be seen as a special type of constraint. With this connection, and given that the generator in this task is an implicit generative model, here we can also apply and learn the structured consistency constraint using GANs, which is equivalent to replacing q in Eq.(8) with p_θ. Such a variant produces an SSIM score of 0.716, slightly inferior to the result of the full algorithm (Row 6). We suspect this is because fake samples from q (instead of p_θ) can help with better constraint learning. It would be interesting to explore this in more applications.
To give a sense of the state of the task, Table 2 also lists the performance of previous work. It is worth noting that these results are not directly comparable, as discussed in (Pumarola et al., 2018), due to different settings (e.g., the test splits) between them. We mostly follow (Ma et al., 2017, 2018), while our generative model is much simpler than these works with their specialized, multi-stage architectures. The proposed algorithm learns constraints with moderate approximations. Figure 2 validates that the training is stable and converges as smoothly as the base models.
5.2 Template Guided Sentence Generation
The task is to generate a text sentence that follows a given template (Figure 1, right). Each missing part in the template can contain an arbitrary number of words. This differs from previous sentence completion tasks (Fedus et al., 2018; Zweig and Burges, 2011), which designate each masked position to hold a single word. Thus directly applying these approaches to the task can be problematic.
Setup. We use an attentional sequencetosequence (seq2seq) (Bahdanau et al., 2014) model as the base generative model for the task. Paired (template, sentence) data is obtained by randomly masking out different parts of sentences from the IMDB corpus (Diao et al., 2014). The base model is trained in an endtoend supervised manner, which allows it to memorize the words in the input template and repeat them almost precisely in the generation. However, the main challenge is to generate meaningful and coherent content to fill in the missing parts.
Knowledge constraint. To tackle the issue, we add a constraint that enforces matching between the generated sentence and the ground-truth text in the missing parts. Specifically, let t be the masked-out true text; that is, plugging t into the template recovers the true complete sentence. The constraint is defined as f(x, t), which returns a high score if the generated sentence x matches t well. The actual implementation of the matching strategy can vary. Here we simply specify f as another seq2seq network that takes as input a sentence x and evaluates the likelihood of recovering t. This is all we have to specify, while the unknown parameters φ are learned jointly with the generative model. Despite the simplicity, the empirical results show the usefulness of the constraint.
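To make the interface f(x, t) concrete, here is a toy stand-in scorer (the paper uses a learnable seq2seq network; this fixed heuristic is only an illustration and all names are hypothetical): it scores a generated infill by the smoothed log-probability of "recovering" each masked-out reference token from the generated content.

```python
import math

def match_score(generated_infill, true_infill, smooth=0.1):
    # Toy matching constraint: sum of smoothed log-probabilities of drawing
    # each reference token from the generated infill's token counts.
    gen = generated_infill.split()
    score = 0.0
    for tok in true_infill.split():
        p = (gen.count(tok) + smooth) / (len(gen) + smooth)
        score += math.log(min(p, 1.0))
    return score  # higher = the infilled content matches the reference better

good = match_score("really enjoyed the movie", "enjoyed the movie")
bad = match_score("it was it was", "enjoyed the movie")
print(good > bad)
```

The learned seq2seq scorer plays the same role, but its recovery likelihood is parameterized by φ and adapted jointly with the generator rather than fixed.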
Results. Table 4 shows the results. Row 2 is the base model with an additional binary discriminator that adversarially distinguishes between the generated sentence and the ground truth (i.e., a GAN model). Row 3 is the base model with the constraint learned in the direct way through Eq.(5). We see that the improper learning method for the constraint harms the model performance, partially because of the relatively low-quality model samples the constraint is trained to fit. In contrast, the proposed algorithm effectively improves the model results. Its superiority over the binary discriminator (Row 2) shows the usefulness of incorporating problem structures. Table 4 also demonstrates samples by the base and constrained models. Without an explicit constraint forcing the infilled content to match, the base model tends to generate less meaningful content (e.g., duplications, short and general expressions).
6 Discussions: Combining Structured Knowledge with Black-box NNs
We revealed the connections between posterior regularization and reinforcement learning, which motivates learning the knowledge constraints in PR as reward learning in RL. The resulting algorithm is generally applicable to any deep generative models, and flexible enough to learn the constraints and the model jointly. Experiments on image and text generation showed the effectiveness of the algorithm.
The proposed algorithm, along with previous work (e.g., Hu et al. (2016a, b); Hinton et al. (2015); Lopez-Paz et al. (2015); Hu et al. (2017)), represents a general means of adding (structured) knowledge to black-box neural networks by devising knowledge-inspired losses/constraints that drive the model to learn the desired structures. This differs from the other popular way of embedding domain knowledge into specifically designed neural architectures (e.g., the knowledge of translation invariance in image classification is hard-coded in the conv-pooling architecture of ConvNets). While specialized neural architectures can usually be very effective at capturing the designated knowledge, incorporating knowledge via specialized losses enjoys the advantages of generality and flexibility:


Model-agnostic. The learning framework is applicable to neural models with any architecture, e.g., ConvNets, RNNs, and other specialized ones (Hu et al., 2016a).

Richer supervisions. Compared to conventional end-to-end maximum likelihood learning, which usually requires fully annotated or paired data, the knowledge-aware losses provide additional supervisions based on, e.g., structured rules (Hu et al., 2016a), other models (Hinton et al., 2015; Hu et al., 2016b; Yang et al., 2018; Holtzman et al., 2018), and datasets for other related tasks (e.g., the human image generation method in Figure 1, and (Hu et al., 2017)). In particular, Hu et al. (2017) leverages datasets of sentence sentiment and phrase tense to learn to control both attributes (sentiment and tense) when generating sentences.

Modularized design and learning. With the rich sources of supervisions, the design and learning of the model can still be simple and efficient, because each of the supervision sources can be formulated independently of the others and forms a separate loss term. For example, Hu et al. (2017) separately learns two classifiers, one for sentiment and the other for tense, on two separate datasets. The two classifiers carry their respective semantic knowledge, and are then jointly applied to a text generation model for attribute control. In comparison, mixing and hard-coding multiple pieces of knowledge in a single neural architecture can be difficult, and quickly becomes infeasible as the amount of knowledge increases.

Generation with discrimination knowledge. In generation tasks, it can sometimes be difficult to incorporate knowledge directly in the generative process (or model architecture), i.e., defining how to generate. In contrast, it is often easier to instead specify an evaluation metric that measures the quality of a given sample in terms of the knowledge, i.e., defining what desired generation is. For example, in the human image generation task (Figure 1), evaluating the structured human part consistency could be easier than designing a generator architecture that hard-codes the structured generation process for the human parts.
It is worth noting that the two paradigms are not mutually exclusive. A model with a knowledge-inspired specialized architecture can still be learned by optimizing knowledge-inspired losses. Different types of knowledge may be best suited to either architecture hard-coding or loss optimization. It would be interesting to explore the combination of both in the above tasks and others.
Acknowledgment
This material is based upon work supported by the National Science Foundation grant IIS-1563887. Any opinions, findings and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation.
References
 Abdolmaleki et al. (2018) A. Abdolmaleki, J. T. Springenberg, Y. Tassa, R. Munos, N. Heess, and M. Riedmiller. Maximum a posteriori policy optimisation. In ICLR, 2018.
 Andreas et al. (2016) J. Andreas, M. Rohrbach, T. Darrell, and D. Klein. Learning to compose neural networks for question answering. arXiv preprint arXiv:1601.01705, 2016.
 Bahdanau et al. (2014) D. Bahdanau, K. Cho, and Y. Bengio. Neural machine translation by jointly learning to align and translate. arXiv preprint arXiv:1409.0473, 2014.
 Bellare et al. (2009) K. Bellare, G. Druck, and A. McCallum. Alternating projections for learning with expectation constraints. In UAI, pages 43–50. AUAI Press, 2009.
 Chen et al. (2016) X. Chen, Y. Duan, R. Houthooft, J. Schulman, I. Sutskever, and P. Abbeel. InfoGAN: Interpretable representation learning by information maximizing generative adversarial nets. In NIPS, 2016.

 Dayan and Hinton (1997) P. Dayan and G. E. Hinton. Using expectation-maximization for reinforcement learning. Neural Computation, 9(2):271–278, 1997.
 Deisenroth et al. (2013) M. P. Deisenroth, G. Neumann, J. Peters, et al. A survey on policy search for robotics. Foundations and Trends® in Robotics, 2(1–2):1–142, 2013.
 Diao et al. (2014) Q. Diao, M. Qiu, C.-Y. Wu, A. J. Smola, J. Jiang, and C. Wang. Jointly modeling aspects, ratings and sentiments for movie recommendation (JMARS). In KDD, pages 193–202. ACM, 2014.
 Fedus et al. (2018) W. Fedus, I. Goodfellow, and A. M. Dai. MaskGAN: Better text generation via filling in the _. arXiv preprint arXiv:1801.07736, 2018.
 Finn et al. (2016a) C. Finn, P. Christiano, P. Abbeel, and S. Levine. A connection between generative adversarial networks, inverse reinforcement learning, and energy-based models. arXiv preprint arXiv:1611.03852, 2016a.
 Finn et al. (2016b) C. Finn, S. Levine, and P. Abbeel. Guided cost learning: Deep inverse optimal control via policy optimization. In ICML, pages 49–58, 2016b.
 Fu et al. (2017) J. Fu, K. Luo, and S. Levine. Learning robust rewards with adversarial inverse reinforcement learning. arXiv preprint arXiv:1710.11248, 2017.
 Ganchev et al. (2010) K. Ganchev, J. Gillenwater, B. Taskar, et al. Posterior regularization for structured latent variable models. JMLR, 11(Jul):2001–2049, 2010.
 Gong et al. (2017) K. Gong, X. Liang, X. Shen, and L. Lin. Look into person: Self-supervised structure-sensitive learning and a new benchmark for human parsing. In CVPR, pages 6757–6765, 2017.
 Goodfellow et al. (2014) I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio. Generative adversarial nets. In NIPS, pages 2672–2680, 2014.
 Goodfellow et al. (2016) I. Goodfellow, Y. Bengio, and A. Courville. Deep Learning. MIT Press, 2016. http://www.deeplearningbook.org.
 Haarnoja et al. (2017) T. Haarnoja, H. Tang, P. Abbeel, and S. Levine. Reinforcement learning with deep energy-based policies. arXiv preprint arXiv:1702.08165, 2017.
 Hinton et al. (2015) G. Hinton, O. Vinyals, and J. Dean. Distilling the knowledge in a neural network. arXiv preprint arXiv:1503.02531, 2015.
 Hinton et al. (1995) G. E. Hinton, P. Dayan, B. J. Frey, and R. M. Neal. The “wake-sleep” algorithm for unsupervised neural networks. Science, 268(5214):1158, 1995.
 Holtzman et al. (2018) A. Holtzman, J. Buys, M. Forbes, A. Bosselut, D. Golub, and Y. Choi. Learning to write with cooperative discriminators. In ACL, 2018.
 Hu et al. (2016a) Z. Hu, X. Ma, Z. Liu, E. Hovy, and E. Xing. Harnessing deep neural networks with logic rules. In ACL, 2016a.
 Hu et al. (2016b) Z. Hu, Z. Yang, R. Salakhutdinov, and E. P. Xing. Deep neural networks with massive learned knowledge. In EMNLP, 2016b.
 Hu et al. (2017) Z. Hu, Z. Yang, X. Liang, R. Salakhutdinov, and E. P. Xing. Toward controlled generation of text. In ICML, 2017.
 Hu et al. (2018a) Z. Hu, H. Shi, Z. Yang, B. Tan, T. Zhao, J. He, W. Wang, X. Yu, L. Qin, D. Wang, et al. Texar: A modularized, versatile, and extensible toolkit for text generation. arXiv preprint arXiv:1809.00794, 2018a.
 Hu et al. (2018b) Z. Hu, Z. Yang, R. Salakhutdinov, and E. P. Xing. On unifying deep generative models. In ICLR, 2018b.
 Kim and Bengio (2016) T. Kim and Y. Bengio. Deep directed generative models with energy-based probability estimation. arXiv preprint arXiv:1606.03439, 2016.
 Kingma and Welling (2013) D. P. Kingma and M. Welling. Auto-encoding variational Bayes. arXiv preprint arXiv:1312.6114, 2013.
 Kusner et al. (2017) M. J. Kusner, B. Paige, and J. M. Hernández-Lobato. Grammar variational autoencoder. arXiv preprint arXiv:1703.01925, 2017.
 Larochelle and Murray (2011) H. Larochelle and I. Murray. The neural autoregressive distribution estimator. In AISTATS, 2011.
 Levine (2018) S. Levine. Reinforcement learning and control as probabilistic inference: Tutorial and review. arXiv preprint arXiv:1805.00909, 2018.
 Li et al. (2015) C. Li, J. Zhu, T. Shi, and B. Zhang. Max-margin deep generative models. In NIPS, pages 1837–1845, 2015.
 Liang et al. (2009) P. Liang, M. I. Jordan, and D. Klein. Learning from measurements in exponential families. In ICML, pages 641–648. ACM, 2009.
 Liang et al. (2017) X. Liang, Z. Hu, H. Zhang, C. Gan, and E. P. Xing. Recurrent topic-transition GAN for visual paragraph generation. In ICCV, 2017.
 Liang et al. (2018) X. Liang, Z. Hu, and E. Xing. Symbolic graph reasoning meets convolutions. In NIPS, 2018.
 Liu et al. (2016) Z. Liu, P. Luo, S. Qiu, X. Wang, and X. Tang. Deepfashion: Powering robust clothes recognition and retrieval with rich annotations. In CVPR, pages 1096–1104, 2016.
 Lopez-Paz et al. (2015) D. Lopez-Paz, L. Bottou, B. Schölkopf, and V. Vapnik. Unifying distillation and privileged information. arXiv preprint arXiv:1511.03643, 2015.
 Ma et al. (2017) L. Ma, X. Jia, Q. Sun, B. Schiele, T. Tuytelaars, and L. Van Gool. Pose guided person image generation. In NIPS, pages 405–415, 2017.
 Ma et al. (2018) L. Ma, Q. Sun, S. Georgoulis, L. Van Gool, B. Schiele, and M. Fritz. Disentangled person image generation. In CVPR, 2018.
 Mei et al. (2014) S. Mei, J. Zhu, and J. Zhu. Robust RegBayes: Selectively incorporating first-order logic domain knowledge into Bayesian models. In ICML, pages 253–261, 2014.
 Mohamed and Lakshminarayanan (2016) S. Mohamed and B. Lakshminarayanan. Learning in implicit generative models. arXiv preprint arXiv:1610.03483, 2016.
 Neumann et al. (2011) G. Neumann et al. Variational inference for policy search in changing situations. In ICML, pages 817–824, 2011.
 Oord et al. (2016) A. v. d. Oord, N. Kalchbrenner, and K. Kavukcuoglu. Pixel recurrent neural networks. arXiv preprint arXiv:1601.06759, 2016.
 Peters et al. (2010) J. Peters, K. Mülling, and Y. Altun. Relative entropy policy search. In AAAI, pages 1607–1612. Atlanta, 2010.
 Pumarola et al. (2018) A. Pumarola, A. Agudo, A. Sanfeliu, and F. Moreno-Noguer. Unsupervised person image synthesis in arbitrary poses. In CVPR, 2018.
 Schulman et al. (2015) J. Schulman, S. Levine, P. Abbeel, M. Jordan, and P. Moritz. Trust region policy optimization. In ICML, pages 1889–1897, 2015.
 Schulman et al. (2017) J. Schulman, X. Chen, and P. Abbeel. Equivalence between policy gradients and soft Q-learning. arXiv preprint arXiv:1704.06440, 2017.
 Sutton and Barto (1998) R. S. Sutton and A. G. Barto. Reinforcement learning: An introduction, volume 1. MIT Press, Cambridge, 1998.
 Tan et al. (2018) B. Tan, Z. Hu, Z. Yang, R. Salakhutdinov, and E. Xing. Connecting the dots between MLE and RL for text generation. 2018.
 Taskar et al. (2004) B. Taskar, C. Guestrin, and D. Koller. Max-margin Markov networks. In NIPS, pages 25–32, 2004.
 Wang and Liu (2016) D. Wang and Q. Liu. Learning to draw samples: With application to amortized MLE for generative adversarial learning. arXiv preprint arXiv:1611.01722, 2016.
 Wang et al. (2017) T.-C. Wang, M.-Y. Liu, J.-Y. Zhu, A. Tao, J. Kautz, and B. Catanzaro. High-resolution image synthesis and semantic manipulation with conditional GANs. arXiv preprint arXiv:1711.11585, 2017.
 Wang et al. (2004) Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli. Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing, 13(4):600–612, 2004.
 Yang et al. (2018) Z. Yang, Z. Hu, C. Dyer, E. Xing, and T. Berg-Kirkpatrick. Unsupervised text style transfer using language models as discriminators. In NIPS, 2018.
 Zhai et al. (2016) S. Zhai, Y. Cheng, R. Feris, and Z. Zhang. Generative adversarial networks as variational training of energy based models. arXiv preprint arXiv:1611.01799, 2016.
 Zhao et al. (2016) J. Zhao, M. Mathieu, and Y. LeCun. Energybased generative adversarial network. arXiv preprint arXiv:1609.03126, 2016.
 Ziebart et al. (2008) B. D. Ziebart, A. L. Maas, J. A. Bagnell, and A. K. Dey. Maximum entropy inverse reinforcement learning. In AAAI, volume 8, pages 1433–1438. Chicago, IL, USA, 2008.
 Zweig and Burges (2011) G. Zweig and C. J. Burges. The Microsoft Research sentence completion challenge. Technical report, Citeseer, 2011.