Deep Generative Models with Learnable Knowledge Constraints

06/26/2018 · by Zhiting Hu, et al. · Petuum, Inc. and Carnegie Mellon University

The broad set of deep generative models (DGMs) has achieved remarkable advances. However, it is often difficult to incorporate rich structured domain knowledge into end-to-end DGMs. Posterior regularization (PR) offers a principled framework to impose structured constraints on probabilistic models, but has limited applicability to the diverse DGMs that can lack a Bayesian formulation or even explicit density evaluation. PR also requires constraints to be fully specified a priori, which is impractical or suboptimal for complex knowledge with learnable uncertain parts. In this paper, we establish a mathematical correspondence between PR and reinforcement learning (RL), and, based on the connection, expand PR to learn constraints as the extrinsic reward in RL. The resulting algorithm is model-agnostic, applying to any DGMs, and flexibly adapts arbitrary constraints jointly with the model. Experiments on human image generation and templated sentence generation show that models with knowledge constraints learned by our algorithm greatly improve over base generative models.


1 Introduction

Generative models provide a powerful mechanism for learning data distributions and simulating samples. Recent years have seen remarkable advances, especially in deep approaches (Goodfellow et al., 2016; Hu et al., 2018b) such as Generative Adversarial Networks (GANs) (Goodfellow et al., 2014), Variational Autoencoders (VAEs) (Kingma and Welling, 2013), auto-regressive networks (Larochelle and Murray, 2011; Oord et al., 2016), and so forth. However, it is usually difficult to exploit rich problem structures and domain knowledge in these diverse deep generative models (e.g., the human body structure in image generation, Figure 1). We often have to hope that the deep networks discover the structures from massive data on their own, leaving much valuable domain knowledge unused. Recent efforts on designing specialized network architectures or learning disentangled representations (Chen et al., 2016; Hu et al., 2017) are usually applicable only to specific knowledge, models, or tasks. It is therefore highly desirable to have a general means of incorporating arbitrary structured knowledge with any type of deep generative model in a principled way.

On the other hand, posterior regularization (PR) (Ganchev et al., 2010) is a principled framework for imposing knowledge constraints on the posterior distributions of probabilistic models, and has shown effectiveness in regulating the learning of models in different contexts. For example, Hu et al. (2016a) extend PR to incorporate structured logic rules with neural classifiers. However, the previous approaches are not directly applicable to the general case of deep generative models, as many of the models (e.g., GANs, many auto-regressive networks) are not straightforwardly formulated within the probabilistic Bayesian framework and do not possess a posterior distribution or even meaningful latent variables. Moreover, PR has required a priori fixed constraints: users have to fully specify the constraints beforehand, which can be impractical due to heavy engineering, or suboptimal without adaptivity to the data and models. To extend the scope of applicable knowledge and reduce the engineering burden, it is necessary to allow users to specify only partial or fuzzy structures, while learning the remaining parts of the constraints jointly with the regulated model.

To this end, we establish formal connections between the PR framework and a broad set of algorithms in the control and reinforcement learning (RL) domains, and, based on the connections, transfer well-developed RL techniques to constraint learning in PR. In particular, though PR and RL are apparently distinct paradigms applied in different contexts, we show a mathematical correspondence between the model and constraints in PR and the policy and reward in entropy-regularized policy optimization (Peters et al., 2010; Schulman et al., 2015; Abdolmaleki et al., 2018), respectively. This naturally inspires us to leverage a relevant approach from the RL domain, specifically maximum entropy inverse RL (Ziebart et al., 2008; Finn et al., 2016b), to learn the PR constraints from data (i.e., demonstrations in RL).

Based on the unified perspective, we derive a practical algorithm with efficient estimations and moderate approximations. The algorithm efficiently regularizes large target spaces with arbitrary constraints, flexibly couples adapting the constraints with learning the model, and is model-agnostic, applying to diverse deep generative models, including implicit models whose generative density cannot be evaluated (Mohamed and Lakshminarayanan, 2016; Goodfellow et al., 2014). We demonstrate the effectiveness of the proposed approach in both image and text generation (Figure 1). Leveraging domain knowledge through structure-preserving constraints, the resulting models improve over base generative models.

Figure 1: Two example applications of imposing learnable knowledge constraints on generative models. Left: Given a person image and a target pose (defined by key points), the goal is to generate an image of the person under the new pose. The constraint forces the human parts (e.g., head) of the generated image to match those of the true target image. Right: Given a text template, the goal is to generate a complete sentence following the template. The constraint forces the infilled content of the generated sentence to match the true content. (See sec 5 for more details.)

2 Related Work

It is of increasing interest to incorporate problem structures and domain knowledge in machine learning approaches (Taskar et al., 2004; Ganchev et al., 2010; Hu et al., 2016a). The added structure helps to facilitate learning, enhance generalization, and improve interpretability. For deep neural models, one of the common ways is to design specialized network architectures or features for specific tasks (e.g., Andreas et al. (2016); Liang et al. (2018); Kusner et al. (2017); Liang et al. (2017)). Such a method typically has a limited scope of applicable tasks, models, or knowledge. On the other hand, for structured probabilistic models, posterior regularization (PR) and related frameworks (Ganchev et al., 2010; Liang et al., 2009; Bellare et al., 2009) provide a general means to impose knowledge constraints during model estimation. Hu et al. (2016a) develop iterative knowledge distillation based on PR to regularize neural networks with logic rules. However, the application of PR to the broad class of deep generative models has been hindered, as many of the models do not even possess meaningful latent variables or explicit density evaluation (i.e., implicit models). Previous attempts are thus limited to applying simple max-margin constraints (Li et al., 2015). The requirement of a priori fixed constraints has also made PR impractical for complex, uncertain knowledge. Previous efforts to alleviate this issue either require additional manual supervision (Mei et al., 2014) or are limited to regularizing a small label space (Hu et al., 2016b). This paper develops a practical algorithm that is generally applicable to any deep generative model and any learnable constraint on arbitrary (large) target spaces.

Our work builds connections between the Bayesian PR framework and reinforcement learning. A related, broad research topic of formalizing RL as a probabilistic inference problem has been explored in the RL literature (Dayan and Hinton, 1997; Deisenroth et al., 2013; Neumann et al., 2011; Levine, 2018; Abdolmaleki et al., 2018; Tan et al., 2018), where rich approximate inference tools are used to improve the modeling and reasoning of various RL algorithms. The link between RL and PR has not been previously studied. We establish the mathematical correspondence, and, differing from the RL literature, we in turn transfer tools from RL to expand the probabilistic PR framework. Inverse reinforcement learning (IRL) seeks to learn a reward function from expert demonstrations. Recent approaches based on maximum-entropy IRL (Ziebart et al., 2008) have been developed to learn both the reward and the policy (Finn et al., 2016b, a; Fu et al., 2017). We adopt the maximum-entropy IRL formulation to derive the constraint learning objective in our algorithm, and leverage the unique structure of PR for efficient importance sampling estimation, which differs from these previous approaches.

Components      | PR                     | Entropy-Reg RL        | MaxEnt IRL     | (Energy) GANs
x               | data/generations       | action-state samples  | demonstrations | data/generations
p_\theta(x)     | generative model       | (old) policy          | --             | generator
f_\phi(x)       | constraint             | reward                | reward         | discriminator
q(x), Eq.(3)    | variational distr.     | (new) policy          | policy         | --
Table 1: Unified perspective of the different approaches, showing the mathematical correspondence of PR with entropy-regularized RL (sec 3.2.1) and maximum entropy IRL (sec 3.2.2), and its (conceptual) relation to (energy-based) GANs (sec 4).

3 Connecting Posterior Regularization to Reinforcement Learning

3.1 PR for Deep Generative Models

PR (Ganchev et al., 2010) was originally proposed to provide a principled framework for incorporating constraints on the posterior distributions of probabilistic models with latent variables. The formulation is not generally applicable to deep generative models, as many of them (e.g., GANs and auto-regressive models) are not formulated within the Bayesian framework and do not possess a valid posterior distribution or even semantically meaningful latent variables. Here we adopt a slightly adapted formulation that makes minimal assumptions on the specification of the model to regularize. It is worth noting that though we present the formulations in the context of generative models, they, including the algorithm developed later (sec 4), can straightforwardly be extended to other settings such as discriminative models.

Consider a generative model x ~ p_\theta(x) with parameters \theta. Note that the generation of x can condition on arbitrary other elements (e.g., the source image in image transformation), which are omitted for simplicity of notation. Denote the original objective of p_\theta(x) with \mathcal{L}(\theta). PR augments the objective by adding a constraint term encoding the domain knowledge. Without loss of generality, consider a constraint function f(x) \in \mathbb{R}, such that a higher f(x) value indicates a better x in terms of the particular knowledge. Note that f can also involve other factors such as latent variables and extra supervisions, and can include a set of multiple constraints.

A straightforward way to impose the constraint on the model is to maximize \mathbb{E}_{p_\theta(x)}[f(x)]. Such a method is efficient only when p_\theta(x) is a GAN-like implicit generative model or an explicit distribution that can be efficiently reparameterized (e.g., a Gaussian (Kingma and Welling, 2013)). For other models, such as the large set of non-reparameterizable explicit distributions, the gradient is usually computed with the log-derivative trick and can suffer from high variance. For broad applicability and efficient optimization, PR instead imposes the constraint on an auxiliary variational distribution q(x), which is encouraged to stay close to p_\theta(x) through a KL divergence term:

\mathcal{L}(\theta, q) = \mathrm{KL}\big(q(x)\,\|\,p_\theta(x)\big) - \alpha\,\mathbb{E}_{q(x)}\big[f(x)\big],   (1)

where \alpha is the weight of the constraint term. The PR objective for learning the model is written as:

\min_{\theta,\,q}\ \mathcal{L}(\theta) + \lambda\,\mathcal{L}(\theta, q),   (2)

where \lambda is the balancing hyperparameter. As optimizing the original model objective \mathcal{L}(\theta) is straightforward and depends on the specific generative model of choice, in the following we omit its discussion and focus on \mathcal{L}(\theta, q), the term introduced by the framework.

The problem is solved with an EM-style algorithm (Ganchev et al., 2010; Hu et al., 2016a). Specifically, the E-step optimizes Eq.(1) w.r.t q, which is convex and has a closed-form solution at each iteration given \theta:

q^*(x) = \frac{p_\theta(x)\,\exp\{\alpha f(x)\}}{Z},   (3)

where Z is the normalization term. We can see q^*(x) as an energy-based distribution with the negative energy defined by \alpha f(x) + \log p_\theta(x). With q(x) from the E-step fixed, the M-step optimizes Eq.(1) w.r.t \theta with:

\min_\theta\ \mathrm{KL}\big(q(x)\,\|\,p_\theta(x)\big) = \min_\theta\ -\mathbb{E}_{q(x)}\big[\log p_\theta(x)\big] + \mathrm{const}.   (4)
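
For concreteness, below is a minimal PyTorch-style sketch of one E-step/M-step iteration for an explicit, samplable model with a fixed constraint f. The interfaces (`model.sample`, `model.log_prob`, `constraint`) are assumptions for illustration, not the paper's code; the expectation under q is estimated with self-normalized importance sampling from p_\theta, the same device used later in sec 4.1.

```python
import torch

def pr_em_step(model, optimizer, constraint, alpha, n_samples=64):
    """One EM-style PR iteration with a fixed constraint f (Eqs. 3-4), sketched.

    E-step: q(x) is proportional to p_theta(x) * exp{alpha f(x)}; on samples
    x_i drawn from p_theta this reduces to weights softmax(alpha f(x_i)).
    M-step: minimize -E_q[log p_theta(x)], i.e., weighted maximum likelihood.
    """
    with torch.no_grad():
        x = model.sample(n_samples)                       # x_i ~ p_theta (current params)
        w = torch.softmax(alpha * constraint(x), dim=0)   # E-step weights representing q

    loss = -(w * model.log_prob(x)).sum()                 # M-step objective (Eq. 4)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```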

The constraint f in PR has to be fully specified a priori and is fixed throughout learning. It would be desirable, or even necessary, to enable learnable constraints so that practitioners can specify only the known components of f while leaving any unknown or uncertain components to be learned automatically. For example, in the human image generation task (Figure 1, left panel), users can specify structures over the parsed human parts, while it is impractical to also manually engineer the human part parser, which involves recognizing parts from raw image pixels. It is preferable to instead cast the parser as a learnable module in the constraint. Though it is possible to pre-train the module and simply fix it in PR, the lack of adaptivity to the data and model can lead to suboptimal results, as shown in the empirical study (Table 2). This necessitates expanding the PR framework to enable joint learning of the constraints with the model.

Denote the constraint function with learnable components as f_\phi(x), where \phi can take various optimizable forms, such as the free parameters of a structured model or a graph structure to optimize.

Simple way of learning the constraint. A straightforward way to learn the constraint is to directly optimize Eq.(1) w.r.t \phi in the M-step, yielding:

\max_\phi\ \mathbb{E}_{q(x)}\big[\alpha f_\phi(x)\big].   (5)

That is, the constraint is trained to fit to samples from the current regularized model q(x). However, such an objective can be problematic, as the generated samples can be of low quality, e.g., due to a poor state of the generative parameters \theta at initial stages, or insufficient capability of the generative model per se.

In this paper, we propose to treat the constraint as an extrinsic reward to be learned, motivated by the connections between PR and the reinforcement learning domain presented below.

3.2 PR and RL

RL or optimal control has been studied primarily for determining optimal action sequences or strategies, which is significantly different from the context of PR that aims at regulating generative models. However, formulations very similar to PR (e.g., Eqs.1 and 3) have been developed and widely used, in both the (forward) RL for policy optimization and the inverse RL for reward learning.

To make the mathematical correspondence clearer, we intentionally re-use most of the notations from PR; Table 1 lists the correspondence. Specifically, consider a stationary Markov decision process (MDP). An agent in state s draws an action a following the policy p(a|s). The state subsequently transitions to s' (with some transition probability of the MDP), and a reward R(s, a) is obtained. Let x = (s, a) denote the state-action pair, and p(x) = \mu(s)\,p(a|s), where \mu(s) is the stationary state distribution (Sutton and Barto, 1998).

3.2.1 Entropy regularized policy optimization

The goal of policy optimization is to find the optimal policy that maximizes the expected reward. The rich line of research on entropy-regularized policy optimization augments the objective with information-theoretic regularizers, such as a KL divergence between the new policy and the old policy, for stabilized learning. With a slight abuse of notation, let q(x) denote the new policy and p_\theta(x) the old one. A prominent example is relative entropy policy search (REPS) (Peters et al., 2010), which follows the objective:

\min_q\ \mathrm{KL}\big(q(x)\,\|\,p_\theta(x)\big) - \alpha\,\mathbb{E}_{q(x)}\big[R(x)\big],   (6)

where the KL divergence prevents the policy from changing too rapidly. Similar objectives have also been widely used in other workhorse algorithms such as trust-region policy optimization (TRPO) (Schulman et al., 2015), soft Q-learning (Haarnoja et al., 2017; Schulman et al., 2017), and others.

We can see the close resemblance between Eq.(6) and the PR objective in Eq.(1): the generative model p_\theta(x) in PR corresponds to the old (reference) policy, while the constraint f(x) corresponds to the reward R(x). The new policy q can be either a parametric distribution (Schulman et al., 2015) or a non-parametric distribution (Peters et al., 2010; Abdolmaleki et al., 2018). For the latter, the optimization of Eq.(6) precisely corresponds to the E-step of PR, yielding an optimal policy q^*(x) that takes the same form as Eq.(3), with the generative distribution and the constraint replaced by their respective counterparts, the old policy and the reward. A parametric policy is subsequently updated with samples from q^*(x), which is exactly equivalent to the M-step in PR (Eq.4).
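
For concreteness, solving Eq.(6) over a non-parametric q yields a closed form mirroring Eq.(3), with the reward R in place of the constraint f (Z denotes the corresponding normalizer):

q^*(x) = \frac{p_\theta(x)\,\exp\{\alpha R(x)\}}{Z}, \qquad Z = \int p_\theta(x)\,\exp\{\alpha R(x)\}\,dx.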

While the above policy optimization algorithms have assumed a reward function given by the external environment, just as the pre-defined constraint function in PR, the strong connections above inspire us to treat the PR constraint as an extrinsic reward, and utilize the rich tools in RL (especially the inverse RL) for learning the constraint.

3.2.2 Maximum entropy inverse reinforcement learning

Maximum entropy (MaxEnt) IRL (Ziebart et al., 2008) is among the most widely used methods for inducing the reward function from expert demonstrations x^* ~ p_d(x), where p_d is the empirical demonstration (data) distribution. MaxEnt IRL adopts the same principle as the above entropy-regularized RL (Eq.6), maximizing the expected reward regularized by the relative entropy (i.e., the KL), except that, in MaxEnt IRL, p_\theta(x) is replaced with a uniform distribution and the regularization reduces to the entropy of q. Therefore, as above, the optimal policy takes the form q_\phi(x) \propto \exp\{R_\phi(x)\}. MaxEnt IRL assumes the demonstrations are drawn from the optimal policy. Learning the reward function R_\phi(x) with unknown parameters \phi is then cast as maximizing the likelihood of the demonstrations under q_\phi:

\max_\phi\ \mathbb{E}_{x \sim p_d(x)}\big[\log q_\phi(x)\big], \qquad q_\phi(x) = \exp\{R_\phi(x)\}\,/\,Z_\phi.   (7)

Given the direct correspondence between the policy q_\phi in MaxEnt IRL and the policy optimization solution of Eq.(6), plus the connection between the regularized distribution q of PR (Eq.3) and that solution built in sec 3.2.1, we can readily link q_\phi and q. This motivates plugging q into the above maximum likelihood objective to learn the constraint f_\phi, which parallels the reward function R_\phi. We present the resulting full algorithm in the next section. Table 1 summarizes the correspondence between PR, entropy-regularized policy optimization, and maximum entropy IRL.

4 Algorithm

We have formally related PR to the RL methods. With the unified view of these approaches, we derive a practical algorithm for arbitrary learnable constraints on any deep generative model. The algorithm alternates between optimizing the constraint f_\phi and the generative model p_\theta.

4.1 Learning the Constraint

As motivated in sec 3.2, instead of directly optimizing \phi through the original PR objective (Eq.5), which can be problematic, we treat f_\phi as the reward function to be induced within the MaxEnt IRL framework. That is, we maximize the data likelihood of q(x) (Eq.3) w.r.t \phi, yielding the gradient:

\nabla_\phi\, \mathbb{E}_{x \sim p_d(x)}\big[\log q(x)\big] = \mathbb{E}_{p_d(x)}\big[\alpha\,\nabla_\phi f_\phi(x)\big] - \mathbb{E}_{q(x)}\big[\alpha\,\nabla_\phi f_\phi(x)\big].   (8)

The second term involves estimating an expectation w.r.t the energy-based distribution q(x), which is in general very challenging. However, we can exploit the special structure of q(x) for efficient approximation. Specifically, we use p_\theta(x) as the proposal distribution and obtain the importance sampling estimate of the second term as follows:

\mathbb{E}_{q(x)}\big[\alpha\,\nabla_\phi f_\phi(x)\big] = \tfrac{1}{Z}\,\mathbb{E}_{p_\theta(x)}\big[\exp\{\alpha f_\phi(x)\}\cdot \alpha\,\nabla_\phi f_\phi(x)\big] \approx \tfrac{1}{Z}\cdot\tfrac{1}{N}\textstyle\sum_{i=1}^{N} \exp\{\alpha f_\phi(x_i)\}\cdot \alpha\,\nabla_\phi f_\phi(x_i), \quad x_i \sim p_\theta(x).   (9)

Note that the normalization term can also be estimated efficiently with MC sampling: Z \approx \tfrac{1}{N}\sum_i \exp\{\alpha f_\phi(x_i)\}, where x_i \sim p_\theta(x). The base generative distribution p_\theta(x) is a natural choice for the proposal, as it is in general amenable to efficient sampling and is close to q(x), as forced by the KL divergence in Eq.(1). Our empirical study shows low variance of the learning process (sec 5). Moreover, using p_\theta as the proposal distribution allows p_\theta to be an implicit generative model, since no likelihood evaluation of p_\theta is needed. Note that the importance sampling estimate is consistent yet biased.
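
For illustration, a minimal PyTorch-style sketch of one constraint update following Eqs.(8)-(9). All interface names (`f_phi`, `sample_from_model`, `x_data`) are assumptions of this sketch, not the paper's released code.

```python
import torch

def constraint_update(f_phi, optimizer, x_data, sample_from_model, alpha, n_samples=64):
    """One gradient step on the constraint f_phi, following Eqs. (8)-(9).

    Assumed interfaces (illustrative only):
      f_phi:             a torch.nn.Module mapping a batch of samples x to scores f_phi(x)
      sample_from_model: a callable returning n_samples draws from the generator p_theta
      x_data:            a batch of real data (the demonstrations p_d)
    """
    # First term of Eq. (8): raise alpha * f_phi on real data.
    data_term = alpha * f_phi(x_data).mean()

    # Second term of Eq. (8), estimated by self-normalized importance sampling
    # with p_theta as the proposal (Eq. 9); the normalizer Z is estimated from
    # the same samples, so the weights reduce to softmax(alpha * f_phi(x_i)).
    x_model = sample_from_model(n_samples)            # x_i ~ p_theta(x)
    scores = alpha * f_phi(x_model)                   # alpha * f_phi(x_i)
    weights = torch.softmax(scores, dim=0).detach()   # treated as constants w.r.t. phi
    model_term = (weights * scores).sum()

    # Maximize (data_term - model_term), i.e., minimize its negation.
    loss = -(data_term - model_term)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```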

4.2 Learning the Generative Model

Given the current parameter state (\theta^{(t)}, \phi^{(t)}), and q(x) evaluated at these parameters, we continue to update the generative model. Recall that the optimization of the generative parameters \theta is performed by minimizing the KL divergence in Eq.(4), which we replicate here:

\min_\theta\ \mathrm{KL}\big(q(x)\,\|\,p_\theta(x)\big) = \min_\theta\ -\mathbb{E}_{q(x)}\big[\log p_\theta(x)\big] + \mathrm{const}.   (10)

The expectation w.r.t q(x) can be estimated as above (Eq.9). A drawback of this objective is the requirement of evaluating the generative density p_\theta(x), which is incompatible with the emerging implicit generative models (Mohamed and Lakshminarayanan, 2016) that only permit simulating samples but not evaluating densities.

To address this restriction when regularizing implicit models, we propose to instead minimize the reverse KL divergence:

\min_\theta\ \mathrm{KL}\big(p_\theta(x)\,\|\,q(x)\big).   (11)

By noting that q(x) \propto p_{\theta^{(t)}}(x)\exp\{\alpha f_\phi(x)\}, we obtain the gradient w.r.t \theta:

\nabla_\theta\, \mathrm{KL}\big(p_\theta(x)\,\|\,q(x)\big) = \nabla_\theta \Big[ -\mathbb{E}_{p_\theta(x)}\big[\alpha f_\phi(x)\big] + \mathrm{KL}\big(p_\theta(x)\,\|\,p_{\theta^{(t)}}(x)\big) \Big].   (12)

That is, the gradient of minimizing the reverse KL divergence equals the gradient of maximizing \mathbb{E}_{p_\theta(x)}[\alpha f_\phi(x)] while keeping p_\theta close to its previous state p_{\theta^{(t)}}. Intuitively, the objective encourages the generative model to produce samples to which the constraint function assigns high scores. Though the objective for implicit models deviates from the original PR framework, reversing the KL divergence for computational convenience has also been used previously, e.g., in the classic wake-sleep algorithm (Hinton et al., 1995). The resulting algorithm also resembles adversarial learning in GANs, as we discuss in the next section. Empirical results on implicit models show the effectiveness of the objective.
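
Analogously, a minimal sketch of the generator update for an implicit model under Eq.(12). Only the reward-like term is implemented; the proximity term KL(p_\theta || p_{\theta^{(t)}}) is omitted here for simplicity (an assumption of the sketch, e.g., handled implicitly by a small learning rate), and the constraint parameters are held fixed during this step.

```python
import torch

def generator_update(generator, f_phi, optimizer, alpha, batch_size=64, z_dim=128):
    """One gradient step on an implicit generator x = G_theta(z), sketching Eq. (12).

    Only the term -E_{p_theta}[alpha * f_phi(x)] is implemented; the proximity
    term KL(p_theta || p_theta_old) is omitted in this sketch (assumption).
    """
    z = torch.randn(batch_size, z_dim)        # noise input of the implicit model
    x_fake = generator(z)                     # gradients flow through the samples
    # Maximize the constraint score on generated samples; f_phi is held fixed here.
    loss = -alpha * f_phi(x_fake).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```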

The resulting algorithm is summarized in Alg.1.

0:  The base generative model p_\theta(x); the (set of) constraints f_\phi(x)
1:  Initialize generative parameters \theta and constraint parameters \phi
2:  repeat
3:     Optimize the constraints \phi with Eq.(8)
4:     if p_\theta(x) is an implicit model then
5:        Optimize the model \theta with Eq.(12), along with minimizing the original model objective \mathcal{L}(\theta)
6:     else
7:        Optimize the model \theta with Eq.(10), along with minimizing \mathcal{L}(\theta)
8:     end if
9:  until convergence
Output:  The jointly learned generative model p_\theta(x) and constraints f_\phi(x)
Algorithm 1 Joint Learning of Deep Generative Model and Constraints
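
To tie Algorithm 1 together, a high-level training-loop sketch in the same PyTorch-style pseudo-interface as above; `constraint_update` and `generator_update` refer to the earlier sketches, and `generator.log_prob` for the explicit branch is an illustrative assumption.

```python
import torch

def train(generator, f_phi, data_loader, sample_from_model, alpha,
          n_epochs=10, lr=1e-4, implicit=True):
    """Alternating optimization of the constraint and the generative model (Alg. 1)."""
    opt_phi = torch.optim.Adam(f_phi.parameters(), lr=lr)
    opt_theta = torch.optim.Adam(generator.parameters(), lr=lr)

    for _ in range(n_epochs):
        for x_data in data_loader:
            # Line 3: update the constraint f_phi with Eq. (8).
            constraint_update(f_phi, opt_phi, x_data, sample_from_model, alpha)

            if implicit:
                # Line 5: Eq. (12) for implicit models; the original model
                # objective (e.g., reconstruction/GAN losses) would be added here.
                generator_update(generator, f_phi, opt_theta, alpha)
            else:
                # Line 7: Eq. (10) for explicit models, with the expectation
                # under q estimated by importance sampling (Eq. 9).
                with torch.no_grad():
                    x_model = sample_from_model(64)
                    w = torch.softmax(alpha * f_phi(x_model), dim=0)
                loss = -(w * generator.log_prob(x_model)).sum()
                opt_theta.zero_grad()
                loss.backward()
                opt_theta.step()
```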
Connections to adversarial learning

For implicit generative models, the two objectives w.r.t \phi and \theta (Eq.8 and Eq.12) are conceptually similar to adversarial learning in GANs (Goodfellow et al., 2014) and variants such as energy-based GANs (Kim and Bengio, 2016; Zhao et al., 2016; Zhai et al., 2016; Wang and Liu, 2016). Specifically, the constraint f_\phi can be seen as being optimized to assign lower energy (under the energy-based distribution q) to real examples from the data distribution, and higher energy to fake samples from q, the regularized model of the generator p_\theta. In contrast, the generator is optimized to generate samples that confuse f_\phi and obtain lower energy. This adversarial relation links the PR constraint to the discriminator in GANs (Table 1). Note that here fake samples are generated from q and p_\theta in the two learning phases, respectively, which differs from previous adversarial methods for energy-based model estimation that simulate only from a generator. Besides, distinct from the discriminator-centric view of previous work (Kim and Bengio, 2016; Zhai et al., 2016; Wang and Liu, 2016), we primarily aim at improving the generative model by incorporating learned constraints. Last but not least, as discussed in sec 3.1, the proposed framework and algorithm are more generally and efficiently applicable, not only to implicit generative models as in GANs, but also to (non-)reparameterizable explicit generative models.

5 Experiments

We demonstrate the applications and effectiveness of the algorithm in two tasks related to image and text generation (Hu et al., 2018a), respectively.

Method                    | SSIM  | Human
1 Ma et al. (2018)        | 0.614 | --
2 Pumarola et al. (2018)  | 0.747 | --
3 Ma et al. (2017)        | 0.762 | --
4 Base model              | 0.676 | 0.03
5 With fixed constraint   | 0.679 | 0.12
6 With learned constraint | 0.727 | 0.77
Table 2: Results of image generation: Structural Similarity (SSIM) (Wang et al., 2004) between generated and true images, and a human survey in which the full model yields better generations than the base models (Rows 5-6) on 77% of test cases. See the text for more results and discussion.
Figure 2: Training losses of the three models. The model with the learned constraint converges as smoothly as the base models.
Figure 3: Samples generated by the models in Table 2. The model with the learned human part constraint generates correct poses and preserves the human body structure much better.

5.1 Pose Conditional Person Image Generation

Given a person image and a new body pose, the goal is to generate an image of the same person under the new pose (Figure 1, left). The task is challenging due to body self-occlusions and many clothing and shape ambiguities. Complete end-to-end generative networks have previously failed (Ma et al., 2017), and existing work has designed specialized generative processes or network architectures (Ma et al., 2017; Pumarola et al., 2018; Ma et al., 2018). We show that with an added body part consistency constraint, a plain end-to-end generative model can also be trained to produce highly competitive results, significantly improving over base models that do not incorporate the problem structure.

Setup. We follow the previous work (Ma et al., 2017) and obtain from DeepFashion (Liu et al., 2016) a set of triples (source image, pose keypoints, target image) as supervision data. The base generative model is an implicit model that transforms the input source and pose directly to the pixels of the generated image (and hence defines a Dirac-delta distribution). For the generative model we use the residual block architecture (Wang et al., 2017) widely used in image generation. The base model is trained to minimize the L1 distance between the real and generated pixel values, as well as to confuse a binary discriminator that distinguishes between the generation and the true target image.

Knowledge constraint. Neither the pixel-wise distance nor the binary discriminator loss encodes any task structure. We introduce a structured consistency constraint that encourages each body part (e.g., head, legs) of the generated image to match the respective part of the true image. Specifically, the constraint includes a human parsing module that classifies each pixel of a person image into possible body parts. The constraint then evaluates the cross entropies of the per-pixel part distributions between the generated and true images. The average negative cross entropy serves as the constraint score. The parsing module is parameterized as a neural network with parameters \phi, pre-trained on an external parsing dataset (Gong et al., 2017), and subsequently adapted within our algorithm jointly with the generative model.
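
As an illustration, a sketch of how such a structured consistency score could be computed; `parser` is a hypothetical human-parsing network (carrying the parameters \phi) returning per-pixel part logits, and the exact form in the paper's implementation may differ.

```python
import torch.nn.functional as F

def part_consistency_constraint(parser, x_gen, x_true):
    """Structured consistency score f_phi(x_gen) for the pose task (sketch).

    `parser` maps an image batch to per-pixel body-part logits of shape
    (B, num_parts, H, W). The score is the negative average cross entropy
    between the part distributions of the generated and true target images.
    """
    logits_gen = parser(x_gen)                          # (B, P, H, W)
    probs_true = F.softmax(parser(x_true), dim=1)       # soft part labels of the true image
    ce = -(probs_true * F.log_softmax(logits_gen, dim=1)).sum(dim=1).mean()
    return -ce                                          # higher means better part consistency
```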

Results. Table 2 compares the full model (with the learned constraint, Row 6) to the base model (Row 4) and the one regularized with the constraint that is fixed after pre-training (Row 5). The human survey is performed by asking annotators to rank the quality of the images generated by the three models on each of 200 test cases, and the percentages of being ranked as the best are reported (a tied ranking is treated as a negative result). We can see the great improvement achieved by the proposed algorithm. The model with the fixed constraint fails, partially because pre-training on external data does not necessarily fit the current problem domain. This highlights the necessity of constraint learning. Figure 3 shows examples further validating the effectiveness of the algorithm.

In sec 4, we discussed the close connection between the proposed algorithm and (energy-based) GANs. The conventional discriminator in GANs can be seen as a special type of constraint. With this connection, and given that the generator in this task is an implicit generative model, we can also apply and learn the structured consistency constraint using GANs, which is equivalent to replacing q(x) in Eq.(8) with p_\theta(x). Such a variant produces an SSIM score of 0.716, slightly inferior to the result of the full algorithm (Row 6). We suspect this is because fake samples from q (instead of p_\theta) lead to better constraint learning. It would be interesting to explore this in more applications.

To give a sense of the state of the task, Table 2 also lists the performance of previous work. It is worth noting that these results are not directly comparable, as discussed in Pumarola et al. (2018), due to different settings (e.g., test splits) between them. We mostly follow Ma et al. (2017, 2018), while our generative model is much simpler than these works with specialized, multi-stage architectures. The proposed algorithm learns constraints with moderate approximations. Figure 2 validates that training is stable and converges as smoothly as for the base models.

Model                                       | Perplexity | Human
1 Base model                                | 30.30      | 0.19
2 With binary D                             | 30.01      | 0.20
3 With constraint updated in M-step (Eq.5)  | 31.27      | 0.15
4 With learned constraint                   | 28.69      | 0.24
Table 3: Sentence generation results on test-set perplexity and human survey. Samples by the full model are considered of higher quality in 24% of cases.
Example 1. Template: "... acting ..." | Base model: "the acting is the acting ." | With learned constraint: "the acting is also very good ."
Example 2. Template: "... out of 10 ." | Base model: "10 out of 10 ." | With learned constraint: "I will give the movie 7 out of 10 ."
Table 4: Two test examples, each including the template, the sample by the base model, and the sample by the constrained model.

5.2 Template Guided Sentence Generation

The task is to generate a text sentence that follows a given template (Figure 1, right). Each missing part in the template can contain an arbitrary number of words. This differs from previous sentence completion tasks (Fedus et al., 2018; Zweig and Burges, 2011), which designate each masked position to hold a single word; directly applying these approaches to our task can thus be problematic.

Setup. We use an attentional sequence-to-sequence (seq2seq) (Bahdanau et al., 2014) model as the base generative model for the task. Paired (template, sentence) data is obtained by randomly masking out different parts of sentences from the IMDB corpus (Diao et al., 2014). The base model is trained in an end-to-end supervised manner, which allows it to memorize the words in the input template and repeat them almost precisely in the generation. However, the main challenge is to generate meaningful and coherent content to fill in the missing parts.

Knowledge constraint. To tackle the issue, we add a constraint that enforces matching between the generated sentence and the ground-truth text in the missing parts. Specifically, let t be the masked-out true text, i.e., plugging t into the template recovers the true complete sentence. The constraint is defined as f_\phi(x, t), which returns a high score if the generated sentence x matches t well. The actual implementation of the matching strategy can vary. Here we simply specify f_\phi as another seq2seq network that takes as input the generated sentence x and evaluates the likelihood of recovering t. This is all we have to specify, while the unknown parameters \phi are learned jointly with the generative model. Despite its simplicity, the empirical results show the usefulness of the constraint.
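
A sketch of how such a matching constraint could look; `matcher` is a hypothetical seq2seq scorer (carrying the parameters \phi) exposing a `log_prob(target, source)` method, which is an assumption of this sketch rather than the paper's implementation.

```python
def matching_constraint(matcher, x_generated, t_true):
    """Matching constraint f_phi(x, t) for template-guided generation (sketch).

    The score is the average log-likelihood of recovering the true infilling
    content t from the generated sentence x under the seq2seq matcher; a higher
    score means the generated infilling matches the ground truth better.
    """
    return matcher.log_prob(t_true, source=x_generated).mean()
```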

Results. Table 3 shows the results. Row 2 is the base model with an additional binary discriminator that adversarially distinguishes between the generated sentence and the ground truth (i.e., a GAN model). Row 3 is the base model with the constraint learned in the direct way through Eq.(5). We see that the improper learning method for the constraint harms model performance, partially because of the relatively low-quality model samples that the constraint is trained to fit. In contrast, the proposed algorithm effectively improves the model results. Its superiority over the binary discriminator (Row 2) shows the usefulness of incorporating problem structure. Table 4 demonstrates samples by the base and constrained models. Without the explicit constraint forcing in-filling content matching, the base model tends to generate less meaningful content (e.g., duplications, short and generic expressions).

6 Discussions: Combining Structured Knowledge with Black-box NNs

We revealed connections between posterior regularization and reinforcement learning, which motivate learning the knowledge constraints in PR in the same way as reward learning in RL. The resulting algorithm is generally applicable to any deep generative model, and is flexible enough to learn the constraints and the model jointly. Experiments on image and text generation showed the effectiveness of the algorithm.

The proposed algorithm, along with previous work (e.g., Hu et al. (2016a, b); Hinton et al. (2015); Lopez-Paz et al. (2015); Hu et al. (2017)), represents a general means of adding (structured) knowledge to black-box neural networks by devising knowledge-inspired losses/constraints that drive the model to learn the desired structures. This differs from the other popular approach of embedding domain knowledge into specifically designed neural architectures (e.g., the knowledge of translation invariance in image classification is hard-coded in the conv-pooling architecture of a ConvNet). While specialized neural architectures can be very effective at capturing the designated knowledge, incorporating knowledge via specialized losses enjoys the advantages of generality and flexibility:


  • Model-agnostic. The learning framework is applicable to neural models with any architecture, e.g., ConvNets, RNNs, and other specialized ones (Hu et al., 2016a).

  • Richer supervisions. Compared to conventional end-to-end maximum likelihood learning, which usually requires fully-annotated or paired data, the knowledge-aware losses provide additional supervision based on, e.g., structured rules (Hu et al., 2016a), other models (Hinton et al., 2015; Hu et al., 2016b; Yang et al., 2018; Holtzman et al., 2018), and datasets for other related tasks (e.g., the human image generation method in Figure 1, and (Hu et al., 2017)). In particular, Hu et al. (2017) leverage datasets of sentence sentiment and phrase tense to learn to control both attributes (sentiment and tense) when generating sentences.

  • Modularized design and learning. With these rich sources of supervision, the design and learning of the model can remain simple and efficient, because each supervision source can be formulated independently of the others and forms a separate loss term. For example, Hu et al. (2017) separately learn two classifiers, one for sentiment and the other for tense, on two separate datasets. The two classifiers carry the respective semantic knowledge, and are then jointly applied to a text generation model for attribute control. In comparison, mixing and hard-coding multiple pieces of knowledge in a single neural architecture can be difficult, and quickly becomes infeasible as the amount of knowledge increases.

  • Generation with discrimination knowledge. In generation tasks, it can sometimes be difficult to incorporate knowledge directly in the generative process (or model architecture), i.e., to define how to generate. In contrast, it is often easier to instead specify an evaluation metric that measures the quality of a given sample in terms of the knowledge, i.e., to define what a desired generation is. For example, in the human image generation task (Figure 1), evaluating the structured human part consistency is easier than designing a generator architecture that hard-codes the structured generation process for the human parts.

It is worth noting that the two paradigms are not mutually exclusive. A model with a knowledge-inspired specialized architecture can still be learned by optimizing knowledge-inspired losses, and different types of knowledge may be best suited to either architecture hard-coding or loss optimization. It would be interesting to explore the combination of both in the above tasks and others.

Acknowledgment

This material is based upon work supported by the National Science Foundation grant IIS1563887. Any opinions, findings and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation.

References