1 Introduction
Deep neural networks provide a powerful mechanism for learning patterns from massive data, achieving new levels of performance on image classification (Krizhevsky et al., 2012), speech recognition (Hinton et al., 2012), machine translation (Bahdanau et al., 2014), playing strategic board games (Silver et al., 2016), and so forth.
Despite the impressive advances, the widely-used DNN methods still have limitations. The high predictive accuracy has heavily relied on large amounts of labeled data, and the purely data-driven learning can lead to uninterpretable and sometimes counter-intuitive results (Szegedy et al., 2014; Nguyen et al., 2015). It is also difficult to encode human intention to guide the models to capture desired patterns, without expensive direct supervision or ad-hoc initialization.
On the other hand, the cognitive process of human beings indicates that people learn not only from concrete examples (as DNNs do) but also from different forms of general knowledge and rich experiences (Minsky, 1980; Lake et al., 2015). Logic rules provide a flexible declarative language for communicating high-level cognition and expressing structured knowledge. It is therefore desirable to integrate logic rules into DNNs, to transfer human intention and domain knowledge to neural models, and to regulate the learning process.
In this paper, we present a framework capable of enhancing general types of neural networks, such as convolutional networks (CNNs) and recurrent networks (RNNs), on various tasks, with logic rule knowledge. Combining symbolic representations with neural methods has been considered in different contexts. Neural-symbolic systems (Garcez et al., 2012) construct a network from a given rule set to execute reasoning. To exploit a priori knowledge in general neural architectures, recent work augments each raw data instance with useful features (Collobert et al., 2011); network training, however, is still limited to instance-label supervision and suffers from the same issues mentioned above. Besides, a large variety of structural knowledge cannot be naturally encoded in the feature-label form.
Our framework enables a neural network to learn simultaneously from labeled instances as well as logic rules, through an iterative rule knowledge distillation
procedure that transfers the structured information encoded in the logic rules into the network parameters. Since the general logic rules are complementary to the specific data labels, a natural "side-product" of the integration is support for semi-supervised learning, where unlabeled data is used to better absorb the logical knowledge. Methodologically, our approach can be seen as a combination of the knowledge distillation
(Hinton et al., 2015; Buciluǎ et al., 2006) and the posterior regularization (PR) method (Ganchev et al., 2010). In particular, at each iteration we adapt the posterior constraint principle from PR to construct a rule-regularized teacher, and train the student network of interest to imitate the predictions of the teacher network. We leverage soft logic to support flexible rule encoding. We apply the proposed framework to both a CNN and an RNN, and deploy them on the tasks of sentiment analysis (SA) and named entity recognition (NER), respectively. With only a few (one or two) very intuitive rules, both the distilled networks and the joint teacher networks strongly improve over their basic forms (without rules), and achieve better or comparable performance to state-of-the-art models which typically have more parameters and complicated architectures.
To the best of our knowledge, this is the first work to integrate logic rules with general workhorse types of deep neural networks in a principled framework. The encouraging results indicate our method can be potentially useful for incorporating richer types of human knowledge, and improving other application domains.
2 Related Work
Combination of logic rules and neural networks has been considered in different contexts. Neural-symbolic systems (Garcez et al., 2012), such as KBANN (Towell et al., 1990) and CILP++ (França et al., 2014), construct network architectures from given rules to perform reasoning and knowledge acquisition. A related line of research, such as Markov logic networks (Richardson and Domingos, 2006), derives probabilistic graphical models (rather than neural networks) from the rule set.
With the recent success of deep neural networks in a vast variety of application domains, it is increasingly desirable to incorporate structured logic knowledge into general types of networks to harness flexibility and reduce uninterpretability. Recent work that trains on extra features from domain knowledge (Collobert et al., 2011), while producing improved results, does not go beyond the data-label paradigm. Kulkarni et al. (2015) use a specialized training procedure with careful ordering of training instances to obtain an interpretable neural layer of an image network. Karaletsos et al. (2016) develop a generative model jointly over data-labels and similarity knowledge expressed in triplet format to learn improved disentangled representations.
Though there do exist general frameworks that allow encoding various structured constraints on latent variable models (Ganchev et al., 2010; Zhu et al., 2014; Liang et al., 2009), they are either not directly applicable to the NN case, or could yield inferior performance, as shown in our empirical study. Liang et al. (2008) transfer the predictive power of pre-trained structured models to unstructured ones in a pipelined fashion.
Our proposed approach is distinct in that we use an iterative rule distillation process to effectively transfer rich structured knowledge, expressed in the declarative first-order logic language, into the parameters of general neural networks. We show that the proposed approach strongly outperforms an extensive array of other integration methods, both ad-hoc and general.
3 Method
In this section we present our framework, which encapsulates structured logic knowledge in a neural network. This is achieved by forcing the network to emulate the predictions of a rule-regularized teacher, and evolving both models iteratively throughout training (section 3.2). The process is agnostic to the network architecture, and thus applicable to general types of neural models including CNNs and RNNs. We construct the teacher network in each iteration by adapting the posterior regularization principle to our logical constraint setting (section 3.3), where our formulation provides a closed-form solution. Figure 1 shows an overview of the proposed framework.
3.1 Learning Resources: Instances and Rules
Our approach allows neural networks to learn from both specific examples and general rules. Here we give the settings of these “learning resources”.
Assume we have input variable $x \in \mathcal{X}$ and target variable $y \in \mathcal{Y}$. For clarity, we focus on $K$-way classification, where $\mathcal{Y} = \Delta^K$ is the $K$-dimensional probability simplex and $y \in \{0,1\}^K \subset \mathcal{Y}$ is a one-hot encoding of the class label. However, our method specification can straightforwardly be applied to other contexts such as regression and sequence learning (e.g., NER tagging, which is a sequence of classification decisions). The training data $\mathcal{D} = \{(x_n, y_n)\}_{n=1}^N$ is a set of instantiations of $(x, y)$. Further consider a set of first-order logic (FOL) rules with confidences, denoted as $\mathcal{R} = \{(R_l, \lambda_l)\}_{l=1}^L$, where $R_l$ is the $l$-th rule over the input-target space $(\mathcal{X}, \mathcal{Y})$, and $\lambda_l \in [0, \infty]$ is the confidence level, with $\lambda_l = \infty$ indicating a hard rule, i.e., all groundings are required to be true (=1). Here a grounding is the logic expression with all variables being instantiated. Given a set of examples $(X, Y) \subset (\mathcal{X}, \mathcal{Y})$ (e.g., a minibatch from $\mathcal{D}$), the set of groundings of $R_l$ is denoted as $\{r_{lg}(X, Y)\}_{g=1}^{G_l}$. In practice a rule grounding is typically relevant to only a single or subset of examples, though here we give the most general form on the entire set.
We encode the FOL rules using soft logic (Bach et al., 2015) for flexible encoding and stable optimization. Specifically, soft logic allows continuous truth values from the interval $[0, 1]$ instead of $\{0, 1\}$, and the Boolean logic operators are reformulated as:
$$A \,\&\, B = \max\{A + B - 1, 0\}, \qquad A \vee B = \min\{A + B, 1\},$$
$$A_1 \wedge A_2 \wedge \cdots \wedge A_N = \textstyle\sum_i A_i / N, \qquad \neg A = 1 - A \tag{1}$$
Here $\&$ and $\wedge$ are two different approximations to logical conjunction (Foulds et al., 2015): $\&$ is useful as a selection operator (e.g., $A \,\&\, B = B$ when $A = 1$, and $A \,\&\, B = 0$ when $A = 0$), while $\wedge$ is an averaging operator.
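To make the reformulated operators concrete, here is a minimal sketch of Eq.(1) in Python (the function names are ours; truth values are assumed to lie in $[0, 1]$):

```python
def soft_and_sel(a, b):
    """Lukasiewicz conjunction A & B = max(A + B - 1, 0); acts as a selector."""
    return max(a + b - 1.0, 0.0)

def soft_or(a, b):
    """Disjunction A | B = min(A + B, 1)."""
    return min(a + b, 1.0)

def soft_and_avg(*args):
    """Averaging conjunction A1 ^ ... ^ AN = sum(Ai) / N."""
    return sum(args) / len(args)

def soft_not(a):
    """Negation: 1 - A."""
    return 1.0 - a

def soft_implies(a, b):
    """Implication A => B, encoded as (not A) or B = min(1 - A + B, 1)."""
    return soft_or(soft_not(a), b)
```

For instance, `soft_and_sel(1.0, b)` returns `b` while `soft_and_sel(0.0, b)` returns 0, matching the selection behavior described above.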
3.2 Rule Knowledge Distillation
A neural network defines a conditional probability $p_\theta(y|x)$ by using a softmax output layer that produces a $K$-dimensional soft prediction vector denoted as $\sigma_\theta(x)$. The network is parameterized by weights $\theta$. Standard neural network training iteratively updates $\theta$ to produce the correct labels of training instances. To integrate the information encoded in the rules, we propose to train the network to also imitate the outputs of a rule-regularized projection of $p_\theta(y|x)$, denoted as $q(y|x)$, which explicitly includes rule constraints as regularization terms. In each iteration $q$ is constructed by projecting $p_\theta$ into a subspace constrained by the rules, and thus has desirable properties. We present the construction in the next section. The prediction behavior of $q$ reveals the information of the regularized subspace and structured rules. Emulating the $q$ outputs serves to transfer this knowledge into $p_\theta$. The new objective is then formulated as a balance between imitating the soft predictions of $q$ and predicting the true hard labels:

$$\theta^{(t+1)} = \arg\min_{\theta \in \Theta} \frac{1}{N} \sum_{n=1}^{N} (1-\pi)\,\ell\big(y_n, \sigma_\theta(x_n)\big) + \pi\,\ell\big(s_n^{(t)}, \sigma_\theta(x_n)\big) \tag{2}$$
where $\ell$ denotes the loss function selected according to specific applications (e.g., the cross entropy loss for classification); $s_n^{(t)}$ is the soft prediction vector of $q$ on $x_n$ at iteration $t$; and $\pi$ is the imitation parameter calibrating the relative importance of the two objectives. A similar imitation procedure has been used in other settings such as model compression (Buciluǎ et al., 2006; Hinton et al., 2015), where the process is termed distillation. Following them we call $p_\theta(y|x)$ the "student" and $q(y|x)$ the "teacher", which can be intuitively explained by analogy to human education, where a teacher who is aware of systematic general rules instructs students by providing her solutions to particular questions (i.e., the soft predictions). An important difference from previous distillation work, where the teacher is obtained beforehand and the student is trained thereafter, is that our teacher and student are learned simultaneously during training.
Though it is possible to combine a neural network with rule constraints by projecting the network to the rule-regularized subspace after it is fully trained as usual with only data-label instances, or by optimizing the projected network directly, we found our iterative teacher-student distillation approach provides much superior performance, as shown in the experiments. Moreover, since $p_\theta$ distills the rule information into the weights $\theta$ instead of relying on explicit rule representations, we can use $p_\theta$ for predicting new examples at test time when the rule assessment is expensive or even unavailable (i.e., the privileged information setting (Lopez-Paz et al., 2016)) while still enjoying the benefit of integration. Besides, the second loss term in Eq.(2) can be augmented with rich unlabeled data in addition to the labeled examples, which enables semi-supervised learning for better absorbing the rule knowledge.
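As a concrete illustration of the objective in Eq.(2), the following sketch instantiates the loss $\ell$ as cross entropy for a single example (variable names are ours):

```python
import numpy as np

def distillation_loss(y_true, s_teacher, p_student, pi):
    """Balanced objective of Eq.(2) for one example: a weighted sum of
    (1 - pi) * CE(true hard label, student) and
    pi * CE(teacher soft prediction, student).

    y_true    : (K,) one-hot true label y_n.
    s_teacher : (K,) teacher soft prediction s_n.
    p_student : (K,) student softmax output sigma_theta(x_n).
    """
    eps = 1e-12  # numerical safety for the log
    ce_true = -np.sum(y_true * np.log(p_student + eps))
    ce_soft = -np.sum(s_teacher * np.log(p_student + eps))
    return (1.0 - pi) * ce_true + pi * ce_soft
```

With `pi = 0` this reduces to standard supervised training; with `pi = 1` the student purely imitates the teacher.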
3.3 Teacher Network Construction
We now proceed to construct the teacher network $q(y|x)$ at each iteration from $p_\theta(y|x)$. The iteration index $t$ is omitted for clarity. We adapt the posterior regularization principle to our logic constraint setting. Our formulation ensures a closed-form solution for $q$ and thus avoids any significant increase in computational overhead.
Recall the set of FOL rules $\mathcal{R} = \{(R_l, \lambda_l)\}_{l=1}^L$. Our goal is to find the optimal $q$ that fits the rules while at the same time staying close to $p_\theta$. For the first property, we apply a commonly-used strategy that imposes the rule constraints on $q$ through an expectation operator. That is, for each rule (indexed by $l$) and each of its groundings (indexed by $g$) on $(X, Y)$, we expect $\mathbb{E}_q[r_{lg}(X, Y)] = 1$, with confidence $\lambda_l$. The constraints define a rule-regularized space of all valid distributions. For the second property, we measure the closeness between $q$ and $p_\theta$ with KL-divergence, and wish to minimize it. Combining the two factors together and further allowing slackness for the constraints, we finally get the following optimization problem:
$$\min_{q, \xi \geq 0} \; \mathrm{KL}\big(q(Y|X)\,\big\|\,p_\theta(Y|X)\big) + C \sum_{l,g} \xi_{lg}$$
$$\text{s.t.} \quad \lambda_l \big(1 - \mathbb{E}_q[r_{lg}(X, Y)]\big) \leq \xi_{lg}, \qquad g = 1, \dots, G_l, \;\; l = 1, \dots, L \tag{3}$$
where $\xi_{lg} \geq 0$ is the slack variable for the respective logic constraint, and $C$ is the regularization parameter. The problem can be seen as projecting $p_\theta$ into the constrained subspace. The problem is convex and can be efficiently solved in its dual form with a closed-form solution. We provide the detailed derivation in the supplementary materials and directly give the solution here:
$$q^*(Y|X) \propto p_\theta(Y|X) \, \exp\Big\{ -\sum_{l=1}^{L} \sum_{g=1}^{G_l} C \lambda_l \big(1 - r_{lg}(X, Y)\big) \Big\} \tag{4}$$
Intuitively, a strong rule with large $\lambda_l$ will lead to low probabilities for predictions that fail to meet the constraints. We discuss the computation of the normalization factor in section 3.4.
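The closed-form solution in Eq.(4) is just a reweighting of the base distribution followed by renormalization. A minimal sketch for a single $K$-way instance (the names and default values are ours, not the paper's tuned settings):

```python
import numpy as np

def build_teacher(p, rule_values, C=1.0, lambdas=None):
    """Construct q*(y|x) of Eq.(4) by penalizing labels that violate rules.

    p           : (K,) base distribution p_theta(y|x).
    rule_values : list of (K,) arrays; rule_values[l][k] is the soft truth
                  value r_l(x, y=k) of rule l under candidate label k.
    C, lambdas  : regularization parameter and per-rule confidences.
    """
    if lambdas is None:
        lambdas = [1.0] * len(rule_values)
    # Exponent of Eq.(4): -sum_l C * lambda_l * (1 - r_l)
    penalty = sum(C * lam * (1.0 - r) for lam, r in zip(lambdas, rule_values))
    q = p * np.exp(-penalty)
    return q / q.sum()  # normalize over the label space
```

A label that fully satisfies every rule ($r = 1$) keeps its base probability up to normalization, while violating labels are exponentially down-weighted in proportion to $C\lambda_l$.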
Our framework is related to the posterior regularization (PR) method (Ganchev et al., 2010), which places constraints over the model posterior in an unsupervised setting. In classification, our optimization procedure is analogous to the modified EM algorithm for PR, when using the cross-entropy loss in Eq.(2) and evaluating the second loss term on unlabeled data differing from $\mathcal{D}$, so that Eq.(4) corresponds to the E-step and Eq.(2) to the M-step. This sheds light from another perspective on why our framework works. However, we found in our experiments (section 5) that, to produce strong performance, it is crucial to use the same labeled data in the two losses of Eq.(2) so as to form a direct trade-off between imitating soft predictions and predicting correct hard labels.
3.4 Implementations
The iterative distillation procedure of our framework is summarized in Algorithm 1.
During training we need to compute the soft predictions of $q$ at each iteration, which is straightforward through direct enumeration if the rule constraints in Eq.(4) are factored in the same way as the base neural model (e.g., the "but"-rule of sentiment classification in section 4.1). If the constraints introduce additional dependencies, e.g., the bigram dependency of the transition rule in the NER task (section 4.2), we can use dynamic programming for efficient computation. For higher-order constraints (e.g., the listing rule in NER), we approximate through Gibbs sampling that iteratively draws from $q(y_i | y_{-i})$ for each position $i$. If the constraints span multiple instances, we group the relevant instances in minibatches for joint inference (and randomly break some dependencies when a group is too large). Note that calculating the soft predictions is efficient, since only one NN forward pass is required to compute the base distribution (and a few more, if needed, for calculating the truth values of the relevant rules).
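Putting the pieces together, the following self-contained toy sketch runs the iterative distillation loop in the spirit of Algorithm 1, using a softmax-regression "network" and one illustrative rule (the data, rule, and hyperparameter values are ours, chosen only to make the example runnable):

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def teacher(p, r, C=6.0, lam=1.0):
    """Project p into the rule-regularized subspace via Eq.(4);
    r[k] is the rule's soft truth value under candidate label k."""
    q = p * np.exp(-C * lam * (1.0 - r))
    return q / q.sum()

# Toy 2-way task: the label is the sign of the first feature.
X = rng.normal(size=(50, 2))
Y = (X[:, 0] > 0).astype(int)
W = np.zeros((2, 2))  # student parameters theta

# Illustrative rule: when feature 1 is unusually large, prefer label 1;
# otherwise the rule is inert (truth value 1 for every label).
def rule_values(x):
    return np.array([0.0, 1.0]) if x[1] > 2 else np.array([1.0, 1.0])

for t in range(30):
    pi = min(0.95, 1.0 - 0.9 ** t)  # imitation weight grows over iterations
    for x, y in zip(X, Y):
        p = softmax(W @ x)                  # student soft prediction
        s = teacher(p, rule_values(x))      # teacher soft prediction
        target = (1 - pi) * np.eye(2)[y] + pi * s
        W -= 0.1 * np.outer(p - target, x)  # cross-entropy gradient step, Eq.(2)
```

In a real implementation the plain-gradient update would be replaced by the network's optimizer (e.g., Adadelta), and the teacher would be recomputed on each minibatch exactly as here.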
$p_\theta$ vs. $q$ at Test Time
At test time we can use either the distilled student network $p_\theta$, or the teacher network $q$ after a final projection. Our empirical results show that both models substantially improve over the base network trained with only data-label instances. In general $q$ performs better than $p_\theta$. In particular, $q$ is more suitable when the logic rules introduce additional dependencies (e.g., spanning over multiple examples) that require joint inference. In contrast, as mentioned above, $p_\theta$ is more lightweight and efficient, and useful when rule evaluation is expensive or impossible at prediction time. Our experiments compare the performance of $p_\theta$ and $q$ extensively.
Imitation Strength
The imitation parameter $\pi$ in Eq.(2) balances between emulating the teacher soft predictions and predicting the true hard labels. Since the teacher network is constructed from $p_\theta$, which at the beginning of training produces low-quality predictions, we favor predicting the true labels more at the initial stage. As training goes on, we gradually bias towards emulating the teacher predictions to effectively distill the structured knowledge. Specifically, we define $\pi^{(t)} = \min\{\pi_0, 1 - \alpha^t\}$ at iteration $t$, where $\alpha \leq 1$ specifies the speed of decay and $\pi_0$ caps the imitation weight (equivalently, it lower-bounds the weight $1 - \pi^{(t)}$ on the true labels).
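A direct transcription of this schedule (the parameter defaults are illustrative, not the paper's tuned values):

```python
def imitation_weight(t, pi0=0.95, alpha=0.9):
    """pi(t) = min(pi0, 1 - alpha**t): near 0 early (trust the true labels),
    then growing toward the cap pi0 (trust the teacher)."""
    return min(pi0, 1.0 - alpha ** t)
```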
4 Applications
We have presented our framework, which is general enough to improve various types of neural networks with rules, and easy to use in that users are allowed to impose their knowledge and intentions through declarative first-order logic. In this section we illustrate the versatility of our approach by applying it to two workhorse network architectures, i.e., convolutional networks and recurrent networks, on two representative applications, i.e., sentence-level sentiment analysis, which is a classification problem, and named entity recognition, which is a sequence learning problem.
For each task, we first briefly describe the base neural network. Since we are not focusing on tuning network architectures, we largely use the same or similar networks as previous successful neural models. We then design the linguistically-motivated rules to be integrated.
4.1 Sentiment Classification
Sentence-level sentiment analysis aims to identify the sentiment (e.g., positive or negative) underlying an individual sentence. The task is crucial for many opinion mining applications. One challenging aspect of the task is to capture contrastive sense (e.g., induced by the conjunction "but") within a sentence.
Base Network
We use the single-channel convolutional network proposed by Kim (2014). This simple model has achieved compelling performance on various sentiment classification benchmarks. The network contains a convolutional layer on top of the word vectors of a given sentence, followed by a max-over-time pooling layer and then a fully-connected layer with softmax output activation. A convolution operation applies a filter to word windows, and multiple filters with varying window sizes are used to obtain multiple features. Figure 2, left panel, shows the network architecture.
Logic Rules
One difficulty for the plain neural network is to identify contrastive sense in order to capture the dominant sentiment precisely. The conjunction "but" is one of the strong indicators of such sentiment changes in a sentence, where the sentiment of clauses following "but" generally dominates. We thus consider sentences $S$ with an "A-but-B" structure, and expect the sentiment of the whole sentence to be consistent with the sentiment of clause $B$. The logic rule is written as:
$$\text{has-`A-but-B'-structure}(S) \Rightarrow \Big( \mathbb{1}(y = +) \Rightarrow \sigma_\theta(B)_+ \;\wedge\; \sigma_\theta(B)_+ \Rightarrow \mathbb{1}(y = +) \Big) \tag{5}$$
where $\mathbb{1}(\cdot)$ is an indicator function that takes the value 1 when its argument is true, and 0 otherwise; class '+' represents 'positive'; and $\sigma_\theta(B)_+$ is the element of $\sigma_\theta(B)$ for class '+'. By Eq.(1), when $S$ has the 'A-but-B' structure, the truth value of the above logic rule equals $(1 + \sigma_\theta(B)_+)/2$ when $y = +$, and $(2 - \sigma_\theta(B)_+)/2$ otherwise.¹ Note that here we assume two-way classification (i.e., positive and negative), though it is straightforward to design rules for finer-grained sentiment classification.

¹ Replacing $\wedge$ with $\&$ in Eq.(5) leads to a probably more intuitive rule which takes the value $\sigma_\theta(B)_+$ when $y = +$, and $1 - \sigma_\theta(B)_+$ otherwise.
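Using the soft operators of Eq.(1), the truth value of the rule body can be computed directly. A sketch (`sigma_b_pos` stands for the network's predicted positive-class probability on clause B):

```python
def implies(a, b):
    """Soft implication A => B = (not A) or B = min(1 - A + B, 1)."""
    return min(1.0 - a + b, 1.0)

def but_rule_truth(y_is_positive, sigma_b_pos):
    """Averaging conjunction of the two implications inside Eq.(5)."""
    ind = 1.0 if y_is_positive else 0.0
    return (implies(ind, sigma_b_pos) + implies(sigma_b_pos, ind)) / 2.0
```

This reproduces the values stated above: $(1 + \sigma_\theta(B)_+)/2$ for $y = +$ and $(2 - \sigma_\theta(B)_+)/2$ otherwise.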
4.2 Named Entity Recognition
NER is to locate and classify elements of text into entity categories such as "persons" and "organizations". It is an essential first step for downstream language understanding applications. The task assigns to each word a named entity tag in an "X-Y" format, where X is one of BIEOS (Beginning, Inside, End, Outside, and Singleton) and Y is the entity category. A valid tag sequence has to follow certain constraints imposed by the definition of the tagging scheme. Besides, text with structures (e.g., lists) within or across sentences can usually expose consistency patterns.
Base Network
The base network has a similar architecture to the bidirectional LSTM recurrent network (called BLSTM-CNN) proposed by Chiu and Nichols (2015) for NER, which has outperformed most previous neural models. The model uses a CNN and pre-trained word vectors to capture character-level and word-level information, respectively. These features are then fed into a bidirectional RNN with LSTM units for sequence tagging. Compared to Chiu and Nichols (2015), we omit the character type and capitalization features, as well as the additive transition matrix in the output layer. Figure 2, right panel, shows the network architecture.
Logic Rules
The base network largely makes independent tagging decisions at each position, ignoring the constraints on successive labels for a valid tag sequence (e.g., I-ORG cannot follow B-PER). In contrast to recent work (Lample et al., 2016), which adds a conditional random field (CRF) to capture bigram dependencies between outputs, we instead apply logic rules, which do not introduce extra parameters to learn. An example rule is:
$$\text{equal}(y_{i-1}, \text{I-ORG}) \Rightarrow \neg\, \text{equal}(y_i, \text{B-PER}) \tag{6}$$
The confidence levels are set to $\infty$ to prevent any violation.
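The hard constraint generalizes beyond the single example in Eq.(6): in the BIOES scheme, an open entity (B-/I-) must be continued by an I-/E- tag of the same category, and I-/E- tags may not start fresh. A hedged sketch of this validity check (the helper function is ours):

```python
def valid_transition(prev_tag, cur_tag):
    """Return 1.0 (soft truth) if cur_tag may follow prev_tag under the
    BIOES scheme with "X-Y" tags, else 0.0."""
    # An open entity (B-/I-) must be continued by I-/E- of the same category.
    if prev_tag.startswith(("B-", "I-")):
        same_cat = cur_tag[2:] == prev_tag[2:]
        return 1.0 if cur_tag.startswith(("I-", "E-")) and same_cat else 0.0
    # Otherwise (O, E-, S-) no entity is open, so I-/E- may not appear next.
    return 0.0 if cur_tag.startswith(("I-", "E-")) else 1.0
```

Evaluated under each candidate tag for position $i$, these 0/1 truth values plug directly into Eq.(4) with $\lambda = \infty$, zeroing out the probability of invalid transitions.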
We further leverage the list structures within and across sentences of the same documents. Specifically, named entities at corresponding positions in a list are likely to be in the same categories. For instance, in “1. Juventus, 2. Barcelona, 3. …” we know “Barcelona” must be an organization rather than a location, since its counterpart entity “Juventus” is an organization. We describe our simple procedure for identifying lists and counterparts in the supplementary materials. The logic rule is encoded as:
$$\text{is-counterpart}(X, A) \Rightarrow 1 - \big\| c(e_y) - c(\sigma_\theta(A)) \big\|_2 \tag{7}$$
where $e_y$ is the one-hot encoding of $y$ (the class prediction of $X$); $c(\cdot)$ collapses the probability mass on the labels with the same categories into a single probability, yielding a vector with length equal to the number of categories. We use the $\ell_2$ distance as a measure of the closeness between the predictions of $X$ and its counterpart $A$. Note that the distance takes values in $[0, 1]$, which is a proper soft truth value. The list rule can span multiple sentences (within the same document). We found the teacher network $q$, which enables explicit joint inference, provides much better performance than the distilled student network $p_\theta$ (section 5).
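To illustrate, here is a small sketch of the collapse operation $c(\cdot)$ and the resulting soft truth value (the tag inventory and names are illustrative):

```python
import numpy as np

def collapse(probs, tags):
    """c(.): sum the probability mass over tags that share an entity category,
    returning one probability per category (categories sorted for stability)."""
    cats = sorted({t.split("-")[-1] for t in tags})
    out = np.zeros(len(cats))
    for p, t in zip(probs, tags):
        out[cats.index(t.split("-")[-1])] += p
    return out

def list_rule_truth(probs_x, probs_a, tags):
    """Soft truth value 1 - ||c(p_x) - c(p_a)||_2 for counterpart predictions."""
    d = np.linalg.norm(collapse(probs_x, tags) - collapse(probs_a, tags))
    return 1.0 - d
```

Identical category-level predictions give a truth value of 1; diverging counterparts are penalized in proportion to their $\ell_2$ distance.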
5 Experiments
We validate our framework by evaluating its applications to sentiment classification and named entity recognition on a variety of public benchmarks. By integrating the simple yet effective rules with the base networks, we obtain substantial improvements on both tasks and achieve state-of-the-art or comparable results relative to previous best-performing systems. Comparison with a diverse set of other rule integration methods demonstrates the unique effectiveness of our framework. Our approach also shows promising potential in the semi-supervised learning and sparse data contexts.
Throughout the experiments we hold the regularization parameter $C$ fixed. In sentiment classification the imitation parameter $\pi$ follows the decaying schedule of section 3.4, while in NER we use a smaller imitation weight to downplay the noisy listing rule. The confidence levels of soft rules are held fixed, except for hard constraints whose confidence is $\infty$. For the neural network configuration, we largely followed the reference work, as specified in the following respective sections. All experiments were performed on a Linux machine with eight 4.0GHz CPU cores, one Tesla K40c GPU, and 32GB RAM. We implemented the neural networks using Theano²,
² http://deeplearning.net/software/theano, a popular deep learning platform.
5.1 Sentiment Classification
5.1.1 Setup
We test our method on a number of commonly used benchmarks: 1) SST-2, the Stanford Sentiment Treebank (Socher et al., 2013), which contains 2 classes (negative and positive) and 6920/872/1821 sentences in the train/dev/test sets, respectively; following Kim (2014), we train models on both sentences and phrases since all labels are provided. 2) MR (Pang and Lee, 2005), a set of 10,662 one-sentence movie reviews with negative or positive sentiment. 3) CR (Hu and Liu, 2004), customer reviews of various products, containing 2 classes and 3,775 instances. For MR and CR, we use 10-fold cross validation as in previous work. In each of the three datasets, around 15% of sentences contain the word "but".
For the base neural network we use the "non-static" version of Kim (2014) with the exact same configuration. Specifically, word vectors are initialized using word2vec (Mikolov et al., 2013) and fine-tuned throughout training, and the neural parameters are trained using SGD with the Adadelta update rule (Zeiler, 2012).
Model | SST-2 | MR | CR
1 | CNN (Kim, 2014) | 87.2 | 81.3±0.1 | 84.3±0.2
2 | CNN-Rule-p | 88.8 | 81.6±0.1 | 85.0±0.3
3 | CNN-Rule-q | 89.3 | 81.7±0.1 | 85.3±0.3
4 | MGNC-CNN (Zhang et al., 2016) | 88.4 | – | –
5 | MVCNN (Yin and Schutze, 2015) | 89.4 | – | –
6 | CNN-multichannel (Kim, 2014) | 88.1 | 81.1 | 85.0
7 | Paragraph-Vec (Le and Mikolov, 2014) | 87.8 | – | –
8 | CRF-PR (Yang and Cardie, 2014) | – | – | 82.7
9 | RNTN (Socher et al., 2013) | 85.4 | – | –
10 | G-Dropout (Wang and Manning, 2013) | – | 79.0 | 82.1

Table 1: Accuracy (%) of sentiment classification. On MR and CR we report the mean ± one standard deviation using 10-fold cross validation.
5.1.2 Results
Table 1 shows the sentiment classification performance. Rows 1-3 compare the base neural model with the models enhanced by our framework with the "but"-rule (Eq.(5)). We see that our method provides a strong boost in accuracy over all three datasets. The teacher network $q$ further improves over the student network $p_\theta$, though the student network is more widely applicable in certain contexts, as discussed in sections 3.2 and 3.4. Rows 4-10 show the accuracy of recent top-performing methods. On the MR and CR datasets, our model outperforms all the baselines. On SST-2, MVCNN (Yin and Schutze, 2015) (Row 5) is the only system that shows a slightly better result than ours. Their neural network combines diverse sets of pre-trained word embeddings (while we use only word2vec) and contains more neural layers and parameters than our model.
To further investigate the effectiveness of our framework in integrating structured rule knowledge, we compare with an extensive array of other possible integration approaches. Table 2 lists these methods and their performance on the SST-2 task. We see that: 1) Although all methods lead to different degrees of improvement, our framework outperforms all other competitors by a large margin. 2) In particular, compared to the pipelined method in Row 6, which is analogous to the structure compilation work (Liang et al., 2008), our iterative distillation (section 3.2) provides better performance. Another advantage of our method is that we only train one set of neural parameters, as opposed to two separate sets as in the pipelined approach. 3) The distilled student network "-Rule-p" achieves much superior accuracy compared to the base CNN, as well as to "-project" and "-opt-project", which explicitly project the CNN to the rule-constrained subspace. This validates that our distillation procedure transfers the structured knowledge into the neural parameters effectively. The inferior accuracy of "-opt-project" can be partially attributed to the poor performance of its neural network part, which achieves only 85.1% accuracy and leads to inaccurate evaluation of the "but"-rule in Eq.(5).
Model | Accuracy (%)
1 | CNN (Kim, 2014) | 87.2
2 | -but-clause | 87.3
3 | -ℓ2-reg | 87.5
4 | -project | 87.9
5 | -opt-project | 88.3
6 | -pipeline | 87.9
7 | -Rule-p | 88.8
8 | -Rule-q | 89.3

Table 2: Performance of different rule integration methods on SST-2.
Data size | 5% | 10% | 30% | 100%
1 | CNN | 79.9 | 81.6 | 83.6 | 87.2
2 | -Rule-p | 81.5 | 83.2 | 84.5 | 88.8
3 | -Rule-q | 82.5 | 83.9 | 85.6 | 89.3
4 | -semi-PR | 81.5 | 83.1 | 84.6 | –
5 | -semi-Rule-p | 81.7 | 83.3 | 84.7 | –
6 | -semi-Rule-q | 82.7 | 84.2 | 85.7 | –

Table 3: Accuracy (%) on SST-2 with varying sizes of labeled data and semi-supervised learning.
We next explore the performance of our framework with varying numbers of labeled instances, as well as the effect of exploiting unlabeled data. Intuitively, with fewer labeled examples we expect the general rules to contribute more to performance, and unlabeled data should help the model better learn from the rules. This can be a useful property especially when data are sparse and labels are expensive to obtain. Table 3 shows the results. The subsampling is conducted on the sentence level; that is, for instance, in "5%" we first selected 5% of training sentences uniformly at random, then trained the models on these sentences as well as their phrases. The results verify our expectations. 1) Rows 1-3 give the accuracy of using only data-label subsets for training. In every setting our methods consistently outperform the base CNN. 2) "-Rule-q" provides a larger improvement on 5% data (with a margin of 2.6%) than on larger data (e.g., 2.3% on 10% data, and 2.0% on 30% data), showing promising potential in the sparse data context. 3) By adding unlabeled instances for semi-supervised learning as in Rows 5-6, we get further improved accuracy. 4) Row 4, "-semi-PR", is posterior regularization (Ganchev et al., 2010), which imposes the rule constraint through unlabeled data only during training. Our distillation framework consistently provides substantially better results.
5.2 Named Entity Recognition
5.2.1 Setup
We evaluate on the well-established CoNLL-2003 NER benchmark (Tjong Kim Sang and De Meulder, 2003), which contains 14,987/3,466/3,684 sentences and 204,567/51,578/46,666 tokens in the train/dev/test sets, respectively. The dataset includes 4 categories, i.e., person, location, organization, and misc. The BIOES tagging scheme is used. Around 1.7% of named entities occur in lists.
We use mostly the same configuration for the base BLSTM network as in (Chiu and Nichols, 2015), except that, besides the slight architecture difference (section 4.2), we apply Adadelta for parameter updating. GloVe (Pennington et al., 2014) word vectors are used to initialize word features.
Model | F1
1 | BLSTM | 89.55
2 | BLSTM-Rule-trans | p: 89.80, q: 91.11
3 | BLSTM-Rules | p: 89.93, q: 91.18
4 | NN-lex (Collobert et al., 2011) | 89.59
5 | S-LSTM (Lample et al., 2016) | 90.33
6 | BLSTM-lex (Chiu and Nichols, 2015) | 90.77
7 | BLSTM-CRF (Lample et al., 2016) | 90.94
8 | Joint-NER-EL (Luo et al., 2015) | 91.20
9 | BLSTM-CRF (Ma and Hovy, 2016) | 91.21

Table 4: F1 scores on the CoNLL-2003 NER test set.
5.2.2 Results
Table 4 presents the performance on the NER task. By incorporating the bigram transition rules (Row 2), the joint teacher model $q$ achieves a 1.56-point improvement in F1 score, outperforming most previous neural-based methods (Rows 4-7), including the BLSTM-CRF model (Lample et al., 2016), which applies a conditional random field (CRF) on top of a BLSTM in order to capture transition patterns and encourage valid sequences. In contrast, our method implements the desired constraints in a more straightforward way using the declarative logic rule language, and at the same time does not introduce extra model parameters to learn. Further integration of the list rule (Row 3) provides a second boost in performance, achieving an F1 score very close to the best-performing systems, including Joint-NER-EL (Luo et al., 2015) (Row 8), a probabilistic graphical model optimizing NER and entity linking jointly with massive external resources, and BLSTM-CRF (Ma and Hovy, 2016), a combination of BLSTM and CRF with more parameters than our rule-enhanced neural networks.
From the table we see that the accuracy gap between the joint teacher model $q$ and the distilled student $p_\theta$ is relatively larger than in the sentiment classification task (Table 1). This is because in the NER task we have used logic rules that introduce extra dependencies, between adjacent tag positions as well as across multiple instances, making the explicit joint inference of $q$ useful for fulfilling these structured constraints.
6 Discussion and Future Work
We have developed a framework that combines deep neural networks with first-order logic rules to allow integration of human knowledge and intentions into the neural models. In particular, we proposed an iterative distillation procedure that transfers the structured information of logic rules into the weights of neural networks. The transfer is done via a teacher network constructed using the posterior regularization principle. Our framework is general and applicable to various types of neural architectures. With a few intuitive rules, our framework significantly improves base networks on sentiment analysis and named entity recognition, demonstrating the practical significance of our approach.
Though we have focused on first-order logic rules, we leveraged a soft logic formulation that can be easily extended to general probabilistic models for expressing structured distributions and performing inference and reasoning (Lake et al., 2015). We plan to explore these diverse knowledge representations to guide DNN learning. The proposed iterative distillation procedure also reveals connections to recent neural auto-encoders (Kingma and Welling, 2014; Rezende et al., 2014), where generative models encode probabilistic structures and neural recognition models distill the information through iterative optimization (Rezende et al., 2016; Johnson et al., 2016; Karaletsos et al., 2016). The encouraging empirical results indicate a strong potential of our approach for improving other application domains such as vision tasks, which we plan to explore in the future. Finally, we would also like to generalize our framework to automatically learn the confidence of different rules, and to derive new rules from data.
Acknowledgments
We thank the anonymous reviewers for their valuable comments. This work is supported by NSF IIS-1218282, NSF IIS-1447676, Air Force FA8721-05-C-0003, and FA8750-12-2-0342.
References
Bach, S. H., Broecheler, M., Huang, B., and Getoor, L. (2015). Hinge-loss Markov random fields and probabilistic soft logic. arXiv preprint arXiv:1505.04406.
Bahdanau, D., Cho, K., and Bengio, Y. (2014). Neural machine translation by jointly learning to align and translate. Proc. of ICLR.
Buciluǎ, C., Caruana, R., and Niculescu-Mizil, A. (2006). Model compression. In Proc. of KDD, pages 535–541. ACM.
Chiu, J. P. and Nichols, E. (2015). Named entity recognition with bidirectional LSTM-CNNs. arXiv preprint arXiv:1511.08308.
Collobert, R., Weston, J., Bottou, L., Karlen, M., Kavukcuoglu, K., and Kuksa, P. (2011). Natural language processing (almost) from scratch. JMLR, 12:2493–2537.
Foulds, J., Kumar, S., and Getoor, L. (2015). Latent topic networks: A versatile probabilistic programming framework for topic models. In Proc. of ICML, pages 777–786.
França, M. V., Zaverucha, G., and Garcez, A. S. d. (2014). Fast relational learning using bottom clause propositionalization with artificial neural networks. Machine Learning, 94(1):81–104.
Ganchev, K., Graça, J., Gillenwater, J., and Taskar, B. (2010). Posterior regularization for structured latent variable models. JMLR, 11:2001–2049.
Garcez, A. S. d., Broda, K., and Gabbay, D. M. (2012). Neural-symbolic learning systems: foundations and applications. Springer Science & Business Media.
Hinton, G., Deng, L., Yu, D., Dahl, G. E., Mohamed, A.-r., Jaitly, N., Senior, A., Vanhoucke, V., Nguyen, P., Sainath, T. N., et al. (2012). Deep neural networks for acoustic modeling in speech recognition: The shared views of four research groups. Signal Processing Magazine, IEEE, 29(6):82–97.
Hinton, G., Vinyals, O., and Dean, J. (2015). Distilling the knowledge in a neural network. arXiv preprint arXiv:1503.02531.
Hu, M. and Liu, B. (2004). Mining and summarizing customer reviews. In Proc. of KDD, pages 168–177. ACM.
Johnson, M. J., Duvenaud, D., Wiltschko, A. B., Datta, S. R., and Adams, R. P. (2016). Structured VAEs: Composing probabilistic graphical models and variational autoencoders. arXiv preprint arXiv:1603.06277.
Karaletsos, T., Belongie, S., and Rätsch, G. (2016). Bayesian representation learning with oracle constraints. In Proc. of ICLR.
Kim, Y. (2014). Convolutional neural networks for sentence classification. Proc. of EMNLP.
Kingma, D. P. and Welling, M. (2014). Auto-encoding variational Bayes. In Proc. of ICLR.
Krizhevsky, A., Sutskever, I., and Hinton, G. E. (2012). ImageNet classification with deep convolutional neural networks. In Proc. of NIPS, pages 1097–1105.
Kulkarni, T. D., Whitney, W. F., Kohli, P., and Tenenbaum, J. (2015). Deep convolutional inverse graphics network. In Proc. of NIPS, pages 2530–2538.
Lake, B. M., Salakhutdinov, R., and Tenenbaum, J. B. (2015). Human-level concept learning through probabilistic program induction. Science, 350(6266):1332–1338.
Lample, G., Ballesteros, M., Subramanian, S., Kawakami, K., and Dyer, C. (2016). Neural architectures for named entity recognition. In Proc. of NAACL.
Le, Q. V. and Mikolov, T. (2014). Distributed representations of sentences and documents. Proc. of ICML.
Liang, P., Daumé III, H., and Klein, D. (2008). Structure compilation: trading structure for features. In Proc. of ICML, pages 592–599. ACM.
Liang, P., Jordan, M. I., and Klein, D. (2009). Learning from measurements in exponential families. In Proc. of ICML, pages 641–648. ACM.
Lopez-Paz, D., Bottou, L., Schölkopf, B., and Vapnik, V. (2016). Unifying distillation and privileged information. Proc. of ICLR.
Luo, G., Huang, X., Lin, C.-Y., and Nie, Z. (2015). Joint named entity recognition and disambiguation. In Proc. of EMNLP.
Ma, X. and Hovy, E. (2016). End-to-end sequence labeling via bidirectional LSTM-CNNs-CRF. In Proc. of ACL.
Mikolov, T., Sutskever, I., Chen, K., Corrado, G. S., and Dean, J. (2013). Distributed representations of words and phrases and their compositionality. In Proc. of NIPS, pages 3111–3119.
Minsky, M. (1980). Learning meaning. Technical Report AI Lab Memo, Project MAC, MIT.
Nguyen, A., Yosinski, J., and Clune, J. (2015). Deep neural networks are easily fooled: High confidence predictions for unrecognizable images. In Proc. of CVPR, pages 427–436. IEEE.
Pang, B. and Lee, L. (2005). Seeing stars: Exploiting class relationships for sentiment categorization with respect to rating scales. In Proc. of ACL, pages 115–124.
Pennington, J., Socher, R., and Manning, C. D. (2014). GloVe: Global vectors for word representation. In Proc. of EMNLP, volume 14, pages 1532–1543.
Rezende, D. J., Mohamed, S., Danihelka, I., Gregor, K., and Wierstra, D. (2016). One-shot generalization in deep generative models. arXiv preprint arXiv:1603.05106.
Rezende, D. J., Mohamed, S., and Wierstra, D. (2014). Stochastic backpropagation and approximate inference in deep generative models. Proc. of ICML.
Richardson, M. and Domingos, P. (2006). Markov logic networks. Machine Learning, 62(1-2):107–136.
Silver, D., Huang, A., Maddison, C. J., Guez, A., Sifre, L., van den Driessche, G., Schrittwieser, J., Antonoglou, I., Panneershelvam, V., Lanctot, M., et al. (2016). Mastering the game of Go with deep neural networks and tree search. Nature, 529(7587):484–489.
Socher, R., Perelygin, A., Wu, J. Y., Chuang, J., Manning, C. D., Ng, A. Y., and Potts, C. (2013). Recursive deep models for semantic compositionality over a sentiment treebank. In Proc. of EMNLP, volume 1631, page 1642. Citeseer.
Szegedy, C., Zaremba, W., Sutskever, I., Bruna, J., Erhan, D., Goodfellow, I., and Fergus, R. (2014). Intriguing properties of neural networks. Proc. of ICLR.
Tjong Kim Sang, E. F. and De Meulder, F. (2003). Introduction to the CoNLL-2003 shared task: Language-independent named entity recognition. In Proc. of CoNLL, pages 142–147. Association for Computational Linguistics.
Towell, G. G., Shavlik, J. W., and Noordewier, M. O. (1990). Refinement of approximate domain theories by knowledge-based neural networks. In Proceedings of the Eighth National Conference on Artificial Intelligence, pages 861–866. Boston, MA.
Wang, S. and Manning, C. (2013). Fast dropout training. In Proc. of ICML, pages 118–126.
Yang, B. and Cardie, C. (2014). Context-aware learning for sentence-level sentiment analysis with posterior regularization. In Proc. of ACL, pages 325–335.
Yin, W. and Schutze, H. (2015). Multichannel variable-size convolution for sentence classification. Proc. of CoNLL.
Zeiler, M. D. (2012). ADADELTA: an adaptive learning rate method. arXiv preprint arXiv:1212.5701.
Zhang, Y., Roller, S., and Wallace, B. (2016). MGNC-CNN: A simple approach to exploiting multiple word embeddings for sentence classification. Proc. of NAACL.
Zhu, J., Chen, N., and Xing, E. P. (2014). Bayesian inference with posterior regularization and applications to infinite latent SVMs. JMLR, 15(1):1799–1847.
Appendix A
A.1 Solving Problem Eq.(3), Section 3.3
We provide the detailed derivation for solving the problem in Eq.(3), Section 3.3, which we repeat here:

(A.1)  min_{q,ξ≥0} KL(q(Y|X) ∥ p_θ(Y|X)) + C ∑_{l,g_l} ξ_{l,g_l}
       s.t.  λ_l (1 − E_q[r_{l,g_l}(X,Y)]) ≤ ξ_{l,g_l},   g_l = 1,…,G_l,  l = 1,…,L.

The following derivation is largely adapted from Ganchev et al. (2010) for the logic rule constraint setting, with some reformulation that produces a closed-form solution. Introducing Lagrange multipliers μ_{l,g_l} ≥ 0 for the rule constraints and η_{l,g_l} ≥ 0 for the non-negativity of the slack variables, and setting the gradient of the Lagrangian w.r.t. q to zero, the optimal q for fixed multipliers is

(A.5)  q(Y|X) = p_θ(Y|X) exp{ −∑_{l,g_l} μ_{l,g_l} λ_l (1 − r_{l,g_l}(X,Y)) } / Z_μ.

Setting the derivative w.r.t. ξ_{l,g_l} to zero gives C − μ_{l,g_l} − η_{l,g_l} = 0, and since η_{l,g_l} ≥ 0,

(A.6)  μ_{l,g_l} ≤ C.

Let Z_μ = ∑_Y p_θ(Y|X) exp{ −∑_{l,g_l} μ_{l,g_l} λ_l (1 − r_{l,g_l}(X,Y)) }. Plugging Eq.(A.5) into the Lagrangian, the dual problem becomes

(A.7)  max_{μ≥0}  −log Z_μ.

Since Z_μ monotonically decreases as μ_{l,g_l} increases (each exponent λ_l (1 − r_{l,g_l}) is non-negative), the dual objective −log Z_μ is maximized at the largest feasible multiplier values, and from Eq.(A.6) we have μ_{l,g_l} ≤ C; therefore μ*_{l,g_l} = C, which yields the closed-form solution

(A.8)  q*(Y|X) ∝ p_θ(Y|X) exp{ −C ∑_{l,g_l} λ_l (1 − r_{l,g_l}(X,Y)) }.
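As a concrete numerical check of the closed-form solution in Eq.(A.8), the following sketch computes q* for a 3-label distribution and a single rule grounding; the numbers are hypothetical, chosen only to illustrate how the rule re-weights the student's predictions.

```python
import numpy as np

# Student predictive distribution over 3 candidate labels (hypothetical).
p_theta = np.array([0.2, 0.5, 0.3])
# Soft truth value of the rule grounding under each label (hypothetical).
r = np.array([1.0, 0.4, 0.9])
C, lam = 6.0, 1.0  # regularization strength and rule confidence

# Eq.(A.8): q*(Y|X) proportional to p_theta(Y|X) * exp(-C * lam * (1 - r(Y)))
q_unnorm = p_theta * np.exp(-C * lam * (1.0 - r))
q_star = q_unnorm / q_unnorm.sum()
```

Note that as C grows, q* concentrates its mass on label configurations that fully satisfy the rule (r = 1), while C = 0 recovers the unconstrained student distribution.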
A.2 Identifying Lists for NER
We design a simple pattern-matching based method to identify lists and their counterparts in the NER task. We aim for high precision and do not expect high recall. In particular, we only retrieve lists that follow the pattern "1. … 2. … 3. …" (i.e., items indexed by numbers), or whose items are each prefixed with a bullet marker. We require at least 3 items to form a list.
We further require that the text of each item follow certain patterns, to ensure the text is highly likely to be a named entity, and to rule out lists whose item text is largely free text. Specifically, we require that 1) all words of the item text start with capital letters; and 2) referring to the text between punctuation marks as a "block", each block include no more than 3 words.
We detect both intra-sentence and inter-sentence lists in documents. We found the above patterns effective at identifying true lists. A better list detection method is expected to further improve our NER results.
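The heuristic above can be sketched roughly as follows. The regexes, the "-" bullet marker, and the function names are illustrative reconstructions of the stated rules (numbered or bullet-marked items, at least 3 of them, capitalized words, at most 3 words per punctuation-separated block), not the exact patterns used in the paper.

```python
import re

def looks_like_entity(text):
    """Item filter: every word starts with a capital letter, and each
    punctuation-separated "block" contains at most 3 words."""
    words = text.split()
    if not words or not all(w[0].isupper() for w in words):
        return False
    blocks = re.split(r"[,;:/]", text)
    return all(len(b.split()) <= 3 for b in blocks)

def find_lists(lines):
    """Collect maximal runs of numbered ("1. ...") or bullet-marked
    ("- ...") items; keep only runs of at least 3 entity-like items."""
    lists, current = [], []
    for line in lines:
        m = re.match(r"\s*(?:\d+\.|-)\s+(.*)", line)
        if m and looks_like_entity(m.group(1)):
            current.append(m.group(1))
        else:
            if len(current) >= 3:
                lists.append(current)
            current = []
    if len(current) >= 3:
        lists.append(current)
    return lists
```

Running `find_lists` over a document's lines returns the candidate entity lists; runs shorter than 3 items or containing free-text items are discarded, which favors precision over recall as intended.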