Deriving Verb Predicates By Clustering Verbs with Arguments

08/01/2017 ∙ by João Sedoc, et al.

Hand-built verb clusters such as the widely used Levin classes (Levin, 1993) have proved useful, but have limited coverage. Verb classes automatically induced from corpus data such as those from VerbKB (Wijaya, 2016), on the other hand, can give clusters with much larger coverage, and can be adapted to specific corpora such as Twitter. We present a method for clustering the outputs of VerbKB: verbs with their multiple argument types, e.g., "marry(person, person)", "feel(person, emotion)". We make use of a novel low-dimensional embedding of verbs and their arguments to produce high quality clusters in which the same verb can be in different clusters depending on its argument type. The resulting verb clusters do a better job than hand-built clusters of predicting sarcasm, sentiment, and locus of control in tweets.


1 Introduction

English verbs are limited in number (Levin’s classes, for instance, include almost 3,100 verbs) and highly polysemous. Depending on its argument realization, a verb may have different semantics or senses (Rappaport Hovav and Levin, 1998). Therefore, including the verb arguments and their semantic types in the semantic analysis should help with sense disambiguation of verbs and their arguments, especially the subject and object. Indeed, verb selectional preferences (the tendencies of verbs to selectively co-occur with specific types of arguments; e.g., the verb “eat” usually takes a type of food as its object argument) have been shown to be strong indicators of verb diathesis alternations (McCarthy, 2001). Furthermore, these selectional preferences can be assigned to the majority of Levin verb classes in VerbNet (Schuler, 2005). In this paper we show that clustering verbs along with their subject and object types yields better verb clusters. Verbs are “disambiguated”, such that the same verb ends up in different clusters based on its argument types. Our verb clusters reflect the distribution of verb arguments in social media language, and provide useful features for modeling this language.

We propose a method for clustering governing verbs and their arguments, including the subject, object, and prepositional phrase. We use Levin’s verb classes as a baseline and propose new methods for distributional categorization of verbs and their arguments. Unlike Levin’s verb classes, our categorization is not limited to verbs: we generate a semantic categorization of verbs together with their arguments.

A wealth of studies have explored the relation between linguistic features in social media and human traits. However, most studies have used open-vocabulary or bag-of-words approaches, and few have taken the role of syntactic/semantic contexts and verb argument structure into account. In this study, we show that the verb predicates we derive improve performance when used as features in models predicting attributes of Facebook messages and tweets. Specifically, we look at predicting sarcasm, sentiment, and locus of control: whether the author feels in control or controlled by other people. While sarcasm and sentiment are more widely studied, locus of control is a relatively novel task. Our clustering method in effect disambiguates verbs (a highly ambiguous part of speech) and groups together similar verbs by making use of their argument structure. We show that our automatically derived verb clusters help more in these three prediction tasks than alternatives such as Levin’s classes.

In summary, our main contributions are:


  • we present a novel method for learning the low-dimensional embeddings of verbs and their arguments that takes into account the verb selectional preferences and distribution (section 5.3)

  • we present an algorithm for clustering verbs and their arguments based on the embeddings (section 3)

  • we show that our verb clusters outperform hand-built verb classes when used as features for predicting control, sarcasm, and sentiment in tweets (section 6)

2 Related Work

Our approach draws on two different strands of prior work: verb clustering and verb embedding.

Verb Clustering

Verb clusters have proved useful for a variety of NLP tasks and applications, including metaphor detection (Shutova et al., 2010), semantic role labeling (Palmer et al., 2010), language acquisition (Hartshorne et al., 2016), and information extraction (Nakashole and Mitchell, 2016). Verb classes are useful because they support generalization and abstraction. VerbNet (Schuler, 2005) is a widely used hand-built verb classification that lists over 6,000 verbs categorized into 280 classes. The classification is based on Levin’s verb classification (Levin, 1993), which is motivated by the hypothesis that verbs taking similar diathesis alternations tend to share the same meaning and can be organized into semantically coherent classes. Hand-crafted verb classifications, however, suffer from low coverage. This problem has been addressed by various methods that automatically induce verb clusters from corpus data (Sun and Korhonen, 2009; Nakashole et al., 2012; Kawahara et al., 2014; Fader et al., 2011). The most recent is VerbKB (Wijaya, 2016; Wijaya and Mitchell, 2016), which contains large-scale verb clusters automatically induced from ClueWeb (Callan et al., 2009). Unlike previous approaches, VerbKB induces clusters of typed verbs: verbs (+ prepositions) whose subjects and objects are semantically typed with categories in the NELL knowledge base (Carlson et al., 2010), e.g., “marry on(person, date)”, “marry(person, person)”.

VerbKB clusters 65,000 verbs (+ prepositions) and outperforms other large-scale verb clustering methods in terms of how well its clusters align with hand-built verb classes. Unlike these previous works, which evaluate the quality of verb clusters by their similarity to hand-built verb classes, we evaluate our verb clusters directly against hand-built verb classes (Levin, VerbNet) on their utility in building predictive models for assessing control, sarcasm, and sentiment.

Verb Embeddings

Word embeddings are vector space models that represent words as real-valued vectors in a low-dimensional semantic space based on their contexts in large corpora. Recent approaches for learning these vectors such as word2vec (Mikolov et al., 2013) and GloVe (Pennington et al., 2014) are widely used. However, these models represent each word with a single vector. Since verbs are highly polysemous, individual verb senses should potentially each have their own embeddings. Sense-aware word embeddings (Reisinger and Mooney, 2010; Huang et al., 2012; Neelakantan et al., 2014; Li and Jurafsky, 2015) can be useful here. However, they base their representations solely on distributional statistics obtained from corpora, ignoring the semantic roles or types of the verb arguments. A recent study by Schwartz et al. (2016) observed that verbs differ from other parts of speech in that their distributional representations can benefit from taking verb argument roles into account. These argument roles or types can be provided by existing semantic resources. However, learning sense-aware embeddings that take into account information from existing semantic resources (Iacobacci et al., 2015) requires large amounts of sense-annotated corpora. Since we have only data in the form of (subject, verb, object) triples extracted from ClueWeb, the limited context (a window size of 1), limited syntactic information, and lack of sentence or document context also mean that traditional word embedding models or word sense disambiguation systems may not learn well on the data (Melamud et al., 2016).

Motivated by previous work showing that verb selectional preferences are useful for verb clustering (Sun and Korhonen, 2009; Wijaya, 2016) and that verb distributional representations can benefit from taking verb argument roles into account (Schwartz et al., 2016), we cluster VerbKB typed verbs by first learning novel, low-dimensional representations of the typed verbs that encode information about the verb selectional preferences and their distribution in the data.

We learn embeddings of typed verbs (verbs plus the types of their subjects and objects) in VerbKB. Unlike traditional one-word-one-vector embeddings, we learn an embedding for each typed verb: e.g., the embedding for “abandon(person, person)” is separate from the embedding for “abandon(person, religion)”. Using only triples in the form of (subject, verb, object) extracted from ClueWeb, we learn verb embeddings by treating each verb as a relation between its subject and object (Bordes et al., 2013). Since verbs are predicates that express relations between the arguments and adjuncts in sentences, we believe this is a natural way of representing verbs.

We cluster typed verbs based on their embeddings. Then, at run time, given any text containing a verb and its arguments, we straightforwardly map the text to the verb clusters by assigning types to the verb arguments using NELL’s noun phrase to category mapping (publicly available at http://rtw.ml.cmu.edu/rtw/nps) to obtain the typed verb and hence its corresponding verb clusters. This differs from sense-aware embedding approaches, which require the text at run time to be sense-disambiguated with the learned senses, a difficult problem in itself.
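At run time, this lookup amounts to a pair of dictionary indexings: type the argument noun phrases, then index a map keyed by (verb, subject type, object type). The sketch below is illustrative only; the mapping entries and the helper name `cluster_of` are hypothetical, not the authors' code.

```python
# Hypothetical excerpt of NELL's noun-phrase -> category mapping.
np_to_category = {
    "barack obama": "person",
    "michelle obama": "person",
    "church": "religion",
}

# Hypothetical typed-verb -> cluster-id mapping produced by the clustering.
typed_verb_to_cluster = {
    ("marry", "person", "person"): 42,
    ("abandon", "person", "religion"): 7,
}

def cluster_of(verb, subject_np, object_np):
    """Map a (subject, verb, object) triple to its verb predicate cluster."""
    s_type = np_to_category.get(subject_np.lower())
    o_type = np_to_category.get(object_np.lower())
    if s_type is None or o_type is None:
        return None  # NP not covered by NELL (a known limitation, section 7)
    return typed_verb_to_cluster.get((verb, s_type, o_type))

print(cluster_of("marry", "Barack Obama", "Michelle Obama"))  # -> 42
```

No disambiguation model is needed at run time; coverage is bounded only by NELL's noun-phrase mapping.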

3 Method

Given the embeddings of the typed verbs, the main goal of our clustering is to create representations of verbs using their argument structure, similar in concept to the hand-curated Levin classes but with higher coverage and precision. Our method comprises four steps:


  • shallow parsing the sentence into subject, verb (+ preposition), and object

  • labeling the subject and object with their NELL categories

  • clustering the argument types within each verb (+ preposition), as in figure 1

  • clustering between the per-verb argument clusters using their representative embeddings, as shown in figure 2.

We use algorithm 1 to create argument clusters for each verb, and algorithm 2 to cluster between the per-verb argument clusters. This process results in verb predicate clusters which are conceptually similar to Levin classes, but which include prepositions as well as arguments and are in practice closer to VerbNet and FrameNet classes.

Step 1: Parsing and lemmatization

The first step in our pipeline for labeling the verb predicate is to parse the sentence or tweet (detailed in section 5.2). We then extract the words in the nominal subject and direct object positions and in the prepositional phrases, and reduce morphological variation by lemmatizing the verbs and their arguments. This process captures the sentence kernel.

Step 2: Subject and object NELL categorization

Subsequently, the subject and object noun phrases are mapped to NELL categories. This categorization creates an abstract view of the verbal arguments as types.

Step 3: Verb-specific verb argument clusters

To create verb (+ preposition) argument clusters for each verb, all typed embeddings for the verb are clustered using the spectral clustering method of Yu and Shi (2003) for multiclass normalized cuts. The number of clusters is limited to the number of WordNet 3.1 (Miller, 1995) senses for the verb. The centers of the clusters are the representative embeddings for the clusters. One can interpret these clusters as “synsets” of verbal arguments which are similar in embedding space. This creates a mapping from the verb with its preposition, the subject NELL category, and the object NELL category to the verb argument cluster and the cluster’s representative embedding.

1:  Input: Embeddings e(v, s, o) for a set of typed verbs, each consisting of the verb (+ preposition) v, its subject type s, and its object type o.
2:  for each verb (+ preposition) v, over all of its arguments, do
3:     Set k_v to the number of word senses for v in WordNet 3.1 (Miller, 1995), with a default of 2 for missing verbs.
4:     Calculate the affinity matrix A_v using the cosine similarity between each pair of embeddings e(v, ·, ·).
5:     Find k_v clusters from A_v.
6:     Keep a map M from the verb v, subject type s, and object type o to the cluster number c.
7:     Calculate the mean mu(v, c) of the embeddings in each cluster.
8:  end for
9:  Output: The verb sense embeddings mu for all verbs, and the mapping function M.
Algorithm 1 Verb-Specific Argument Clustering Algorithm

The main outputs of algorithm 1 are the verb argument clusters and their representative embeddings. These clusters can be considered verb “sense” clusters. Figure 1 plots the clusters for “stimulate” with respect to the first and second principal components of the verb sense embedding space; “stimulate.0” is further from the rest of the verb sense embeddings for “stimulate”.

Figure 1: After algorithm 1, the different argument types of each verb are clustered. For example, the verb “stimulate” here has 6 clusters (the number of clusters comes from the number of WordNet senses for “stimulate”).
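The per-verb clustering can be sketched as follows. This is a minimal illustration using scikit-learn's SpectralClustering in place of the Yu and Shi (2003) multiclass normalized cuts solver; the function name and the shift of cosine similarities into [0, 1] (to obtain a valid affinity matrix) are our own choices, not the authors' implementation.

```python
import numpy as np
from sklearn.cluster import SpectralClustering

def verb_argument_clusters(embeddings, n_senses):
    """Cluster one verb's typed embeddings (sketch of algorithm 1).

    embeddings: dict mapping (subject_type, object_type) -> vector
    n_senses:   number of WordNet senses for this verb (default 2 upstream)
    Returns a map from argument types to a cluster id, plus cluster centroids.
    """
    keys = list(embeddings)
    X = np.array([embeddings[k] for k in keys])
    # Cosine similarity, shifted into [0, 1] so it is a valid affinity.
    Xn = X / np.linalg.norm(X, axis=1, keepdims=True)
    affinity = (Xn @ Xn.T + 1.0) / 2.0
    n_clusters = min(n_senses, len(keys))
    labels = SpectralClustering(
        n_clusters=n_clusters, affinity="precomputed", random_state=0
    ).fit_predict(affinity)
    mapping = {k: int(c) for k, c in zip(keys, labels)}
    centroids = {c: X[labels == c].mean(axis=0) for c in set(labels)}
    return mapping, centroids
```

The returned centroids play the role of the clusters' representative embeddings that feed algorithm 2.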

Step 4: Clustering between verb argument clusters

The final step in the procedure is to cluster across the verb argument clusters, i.e., the “verb senses”, using the clusters’ representative embeddings. Here we also include side information from a thesaurus in order to maintain semantic similarity, in particular antonym information. We follow the procedure of Sedoc et al. (2016), which extends spectral clustering to account for negative edges.

1:  Input: The cluster embeddings mu from algorithm 1, the thesaurus T, and the number of clusters K.
2:  Calculate the verb sense affinity matrix A using the radial basis function of the Euclidean distance between each pair of cluster embeddings.
3:  Find K clusters using signed spectral clustering of A and T.
4:  Keep a function F from each cluster embedding to its cluster number.
5:  Output: The verb sense embeddings mu, the mapping function F, and the K clusters.
Algorithm 2 Verb Predicate Clustering Algorithm

The main result of running algorithm 2 is a set of verb predicate clusters of typed verbs.

Figure 2: The final output of clustering algorithm 2 is the set of clusters of verb senses. This example cluster shows one sense of the verb “stimulate” (“stimulate.0”), which is clustered with different senses of “move”. The small points represent additional word groups in the cluster which are not displayed.

Figure 2 corresponds to a verb predicate cluster which includes “stimulate.0” but not other senses of “stimulate”. Furthermore, “stimulate.0” is grouped with various senses of “move”. This shows how the two-step clustering algorithm is effective in creating clusters which are similar in purpose to Levin classes.
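The between-cluster step can be roughly approximated as below. This is a simplified sketch of signed spectral clustering, not the exact procedure of Sedoc et al. (2016): positive edges come from an RBF affinity over the sense centroids, antonym pairs contribute negative edges via a fixed penalty, and we embed with the signed Laplacian before k-means. All names and parameters are illustrative.

```python
import numpy as np
from sklearn.cluster import KMeans

def signed_spectral_clusters(centroids, antonym_pairs, n_clusters,
                             gamma=1.0, penalty=1.0):
    """Cluster verb-sense centroids with antonym edges (rough sketch)."""
    names = list(centroids)
    X = np.array([centroids[n] for n in names])
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    W = np.exp(-gamma * d2)                      # positive (similarity) edges
    for a, b in antonym_pairs:                   # negative (thesaurus) edges
        i, j = names.index(a), names.index(b)
        W[i, j] -= penalty
        W[j, i] -= penalty
    np.fill_diagonal(W, 0.0)
    D_bar = np.diag(np.abs(W).sum(1))            # signed degree matrix
    L_bar = D_bar - W                            # signed Laplacian
    _, vecs = np.linalg.eigh(L_bar)
    spectral = vecs[:, :n_clusters]              # smallest eigenvectors
    labels = KMeans(n_clusters=n_clusters, n_init=10,
                    random_state=0).fit_predict(spectral)
    return dict(zip(names, labels.tolist()))
```

The negative antonym edges push word senses apart in the spectral embedding even when their centroids lie close together.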

4 Prediction tasks

We use the verb predicate clusters as features in three prediction tasks: estimating locus of control, sarcasm, and sentiment from social media language. We now briefly describe these three tasks and the datasets we use for them.

4.1 Locus of control

Locus of control, or “control,” is defined as the degree to which a person is in control of others or of a situation, or is controlled by them. A large number of studies have explored the role of locus of control (LoC) in physical and mental health. They have found that a person’s perceived LoC can influence their health (Lachman and Weaver, 1998), well-being (Krause and Stryker, 1984), and career prospects (Judge et al., 2002). All of these studies are limited to small populations (mainly based on questionnaires), and none of them propose automated large-scale methods.

We deployed a survey on Qualtrics, comprising several demographic questions as well as a set of 128 items, and invited users to share access to their Facebook status updates. 2,465 subjects reported their age, gender, and items indicative of their general health and well-being. We split each Facebook status update into multiple sentences and asked three trained annotators to determine, for each sentence, whether the author is in control (internal control) or controlled by others or circumstances (external control). The inter-annotator agreement between the three annotators was around 76%. We took the majority vote of the annotators for each message and assigned binary labels for internal and external control.

4.2 Sarcasm

A number of studies have used surface linguistic features (Carvalho et al., 2009; Davidov et al., 2010), language patterns (Davidov et al., 2010), lexical features and emotions (González-Ibánez et al., 2011), and counter-factuals, unexpectedness, emotions, and n-grams (Reyes et al., 2013). Other works have explored the role of social context in detecting sarcasm (Rajadesingan et al., 2015; Bamman and Smith, 2015). Schifanella et al. (2016) worked on multimodal sarcasm analysis and detection. Our method extends prior work on predicting sarcasm with word embeddings (Ghosh et al., 2015; Joshi et al., 2016) to verb predicates.

Here we use the dataset from Bamman and Smith (2015), which includes 17,000 tweets. The tweets are semi-automatically annotated for sarcasm (e.g., using #sarcasm). The dataset contains 51% sarcastic and 49% non-sarcastic manually annotated tweets (not likely to reflect real-world rates of sarcastic tweets).

4.3 Sentiment

Sentiment has been very widely studied (Pang et al., 2008; Liu and Zhang, 2012). Both surface-level features and lexical structure have been shown to be useful for sentiment prediction (Neviarouskaya et al., 2009). Large corpora are available, at both the document level and the tweet level, where sentiment has been assessed. In our work, we use the sentiment prediction task to compare verb predicate clusters with hand-curated verb classes.

5 Data preprocessing

5.1 Social media text corpus

Our corpus for verb clustering consists of the status updates of 15,000 Facebook users, a subset of those who volunteered to share their posts in the “MyPersonality” application (Kosinski et al., 2013), between January 2009 and October 2011. The users had English as a primary language and were less than 65 years old (due to data sparsity beyond this age).

5.2 Data processing and extracting verb arguments

We first run a text normalization pipeline that cleans each tweet or Facebook status update (removing emoticons, URLs, email addresses, handles, hashtags, etc.), performs spelling correction and partial abbreviation expansion, and reduces the number of repeated characters. Then, we tokenize and split Facebook status updates into sentences (we keep tweets as single sentences). We tokenize the tweets using the CMU ARK Twitter Twokenize script (Owoputi et al., 2013; O’Connor et al., 2010). Next, we obtain dependency parses of our corpus using SyntaxNet with the Parsey McParseface model (https://github.com/tensorflow/models/tree/master/syntaxnet), which provides universal dependencies as (relation, head, dependent) triples. (In our in-house evaluation, SyntaxNet with the Parsey McParseface model outperformed the Stanford Parser (Socher et al., 2013) on the social media domain, and unlike the Tweebo Parser (Kong et al., 2014) it provides dependency relations.) We extract the subject, verb, object, preposition, and object of the preposition from the dependency trees, lemmatizing each word using the NLTK WordNet lemmatizer (Bird et al., 2009). Given the nature of Twitter data, the parses of the tweets are very noisy and produce errors, such as “rying(’t.t’, None)” from “I’ve planted my ca t.t rying to grow cat tails for Halloween.” Nonetheless, parsing Twitter is out of scope for this paper, and we used the same parses for all methods.
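Two of the normalization steps described above (removing handles, URLs, and hashtags, and reducing repeated characters) can be sketched with regular expressions. This fragment is illustrative only, not the pipeline actually used:

```python
import re

def normalize(text):
    """Minimal sketch of tweet normalization: strip URLs, handles, and
    hashtags, and squeeze character runs down to at most two."""
    text = re.sub(r"https?://\S+|www\.\S+", "", text)  # URLs
    text = re.sub(r"[@#]\w+", "", text)                # handles, hashtags
    text = re.sub(r"(.)\1{2,}", r"\1\1", text)         # "soooo" -> "soo"
    return re.sub(r"\s+", " ", text).strip()           # collapse whitespace

print(normalize("@john that was sooooo funny!!! http://t.co/x #lol"))
# -> "that was soo funny!!"
```

Squeezing repeats to two (rather than one) preserves legitimate double letters such as in "funny" while still normalizing elongated forms.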

5.3 Typed verb embeddings

Typed verbs in VerbKB (Wijaya, 2016) are created by extracting the subject, verb (lemmatized), object, preposition, and object of the preposition from the dependency trees in the ClueWeb corpus (Callan et al., 2009). Triples in the form (subject, verb (+ preposition), object) are extracted, and the subjects and objects are typed using NELL knowledge base categories (Carlson et al., 2010). The type signatures of verbs, e.g., (person, person) for “marry”, are then selected based on their frequencies of occurrence in the corpus using Resnik’s selectional association scores (Resnik, 1997). The result is a collection of triples of typed verbs with their subject and object noun phrases (NPs) in ClueWeb, e.g., (Barack_Obama, marry(person, person), Michelle_Obama), (Tom_Hanks, marry(person, person), Rita_Wilson).

Inspired by Bordes et al. (2013), who model relationships by interpreting them as translations operating on the low-dimensional embeddings of the entities, we learn low-dimensional representations of the typed verbs by interpreting them as translations operating on the low-dimensional embeddings of their subject and object noun phrases. Specifically, given a set of triples (s, v, o) composed of the subject and object NPs s, o ∈ N (the set of NPs) and the typed verb v, we want the embedding of the object NP to be a nearest neighbor of s + v, i.e., o ≈ s + v when (s, v, o) is observed in ClueWeb and far away otherwise. Using a distance measure d, following Bordes et al. (2013), we learn the embeddings by minimizing over the set S of triples observed in ClueWeb:

L = Σ_{(s,v,o) ∈ S} Σ_{(s',v,o') ∈ S'_{(s,v,o)}} [γ + d(s + v, o) − d(s' + v, o')]_+

where [x]_+ denotes the positive part of x, γ is a margin hyperparameter, and S'_{(s,v,o)} is the set of corrupted triples constructed as in Bordes et al. (2013).

For typed intransitives (e.g., “sleep(person)”), which have no object NPs, we learn embeddings by making use of their prepositions and preposition objects, e.g., “sleep in(person, location)”, whose triples are observed in ClueWeb. Specifically, given triples in the form (v, p, o) composed of the intransitive verb v, the preposition p, and the preposition object NP o, e.g., (sleep(person), in, adjacent_room), we want the embeddings to satisfy o ≈ v + p when (v, p, o) is observed in ClueWeb and to be far apart otherwise.

We use a fast implementation (Lin et al., 2015) of Bordes et al. (2013) to learn 300-dimensional embeddings for transitive and intransitive typed verbs, training for 100 epochs. We use the implementation’s default settings for the other parameters.
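The margin ranking objective can be illustrated with a toy computation. This sketches only the per-triple loss term (with d taken as squared Euclidean distance), not the Lin et al. (2015) training code:

```python
import numpy as np

def margin_loss(s, v, o, s_neg, o_neg, gamma=1.0):
    """One term of the ranking loss: [gamma + d(s + v, o) - d(s' + v, o')]_+ ."""
    d_pos = np.sum((s + v - o) ** 2)          # distance for the observed triple
    d_neg = np.sum((s_neg + v - o_neg) ** 2)  # distance for a corrupted triple
    return max(0.0, gamma + d_pos - d_neg)

# When o = s + v holds and the corrupted object is far away, the loss is zero.
s = np.array([1.0, 0.0])
v = np.array([0.0, 1.0])   # the typed verb acts as a translation
o = np.array([1.0, 1.0])
o_bad = np.array([5.0, 5.0])
print(margin_loss(s, v, o, s, o_bad))  # -> 0.0
```

Minimizing this loss pushes observed (subject, verb, object) triples to satisfy the translation while pushing corrupted triples at least a margin away.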

5.4 GloVe Embedding

As a baseline, we used the 200-dimensional GloVe word embeddings of Pennington et al. (2014) (http://nlp.stanford.edu/projects/glove/), trained on Wikipedia 2014 + Gigaword 5 (6B tokens). GloVe has been shown to correlate better with semantic relations than the word2vec skip-gram embeddings of Mikolov et al. (2013) (Pennington et al., 2014).

6 Clustering Results

Baselines

We used several baselines for clustering. Levin classes exist at several levels of granularity; we used the most fine-grained, which clusters verbs into 199 categories. GloVe clusters were created by averaging the subject, verb, and object vectors and applying K-means clustering.
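The S-V-O GloVe baseline can be sketched as below; the toy 2-dimensional vectors in the test stand in for real 200-dimensional GloVe vectors, and the function name is ours:

```python
import numpy as np
from sklearn.cluster import KMeans

def svo_clusters(triples, vectors, n_clusters):
    """Average the subject, verb, and object vectors of each triple,
    then cluster the averaged vectors with K-means."""
    X = np.array([(vectors[s] + vectors[v] + vectors[o]) / 3.0
                  for s, v, o in triples])
    labels = KMeans(n_clusters=n_clusters, n_init=10,
                    random_state=0).fit_predict(X)
    return dict(zip(triples, labels.tolist()))
```

Unlike the typed-verb clusters, this baseline cannot place the same verb into different clusters by argument type unless the averaged context vectors happen to differ enough.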

Verb Predicate Clusters

We took a subset of the VerbKB typed verb embeddings from the extracted vocabulary of the 15,000 parsed Facebook users’ posts as well as our control, sarcasm, and sentiment data. From the vocabulary of Levin verbs, verbs from Facebook status updates whose subject-verb-object triples occur more than twice, and verbs from the Twitter sentiment and control data, we obtain 6,747 verbs. This set is then intersected with the VerbKB typed verb vocabulary of 46,960 verbs with prepositions attached, which results in 3,791 verbs (+ prepositions). Finally, once arguments are added, the vocabulary expands to 322,564 typed verbs, which are clustered according to algorithm 1 and algorithm 2 to yield the final verb predicate clusters.

Table 1 shows an example of different verb senses that have the same object type, which are clustered in the same verb predicate cluster.

verb subject object
clarify jobposition emotion
erode event emotion
lose personcanada emotion
deny writer emotion
lament athlete emotion
exploit jobposition emotion
fidget person emotion
prove celebrity emotion
raise filmfestival emotion
make militaryconflict emotion
Table 1: A subset of a verb predicate cluster whose members take emotion as their object type.

Table 2 shows the different verb predicate clusters containing the verb “beat”, which is particularly interesting for predicting control. For example, “The Patriots beat the Falcons.”, “I beat John with a stick.”, and “My father beat me.” will all have different measures of control.

verb subject object cluster #
beat personus person 138
beat personasia person 138
beat personmexico person 138
beat personus athlete 195
beat personcanada athlete 195
beat coach organization 195
Table 2: The multiple senses of “beat” fall into different clusters. Cluster 138 includes “hit” and “crash”, whereas “block”, “run”, and “win” are members of cluster 195.

7 Results and Discussion

We perform a set of experiments to extrinsically evaluate the verb predicate clusters. As baselines we use Levin classes, VerbNet, and clusters of subject, verb, object GloVe embeddings. To evaluate the verb predicate clusters, we build clusters using both transitive and intransitive typed verbs.

The results in table 3 show that our verb predicate clusters outperform Levin classes, VerbNet categories, and clusters of GloVe vectors averaging the subject, verb, and object (S-V-O clusters). We also tried other baselines, including logistic regression over GloVe embeddings instead of clustering, which yielded F-scores of 0.657, 0.612, and 0.798 for control, sarcasm, and sentiment respectively. We also varied the number of clusters, e.g., setting it to 200 to match the fine-grained Levin classes.

control sarcasm sentiment
Levin 0.660 0.619 0.804
VerbNet 0.679 0.628 0.796
S-V-O clusters 0.685 0.621 0.795
Verb Predicate 0.721 0.637 0.807
Table 3: Comparison of the F-scores of Levin classes, VerbNet, GloVe embedding clusters, and our verb predicate clusters for predicting control, sarcasm, and sentiment in tweets. Ten-fold cross-validation was used on the datasets.
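The evaluation setup can be sketched as follows, with synthetic data standing in for the real message features: each message becomes a binary vector of verb predicate cluster indicators, scored with 10-fold cross-validated F1. The helper name and the toy data are ours, not the authors' experimental code.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def cluster_indicator_features(messages_clusters, n_clusters):
    """Turn per-message lists of cluster ids into a binary feature matrix."""
    X = np.zeros((len(messages_clusters), n_clusters))
    for i, clusters in enumerate(messages_clusters):
        X[i, clusters] = 1.0
    return X

# Synthetic data: label-1 messages contain cluster 0, label-0 messages
# contain cluster 1, plus a random distractor cluster each.
rng = np.random.default_rng(0)
msgs = [[0, int(rng.integers(2, 10))] for _ in range(50)] + \
       [[1, int(rng.integers(2, 10))] for _ in range(50)]
y = np.array([1] * 50 + [0] * 50)
X = cluster_indicator_features(msgs, 10)
scores = cross_val_score(LogisticRegression(), X, y, cv=10, scoring="f1")
print(round(scores.mean(), 3))
```

With perfectly predictive indicator features the cross-validated F1 is (by construction) essentially 1.0; the real features are far noisier.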

One shortfall of the typed verb embeddings is the poor coverage of common nouns in the NELL KB. To alleviate this issue we tried manually mapping the most frequent common nouns in our dataset to NELL categories. Unfortunately, the problem is systemic, and only a union with a resource akin to WordNet would suffice to solve it. For instance, one sense of “root” is categorized with “poke”, “forage”, “snoop”, “rummage”, and others; however, that sense, as well as all of the aforementioned words aside from “root”, are not covered by the typed verb embeddings. This is an avenue for improvement which should be explored in future work.

8 Conclusion

Verb predicates are empirically driven clusters which disambiguate both verb sense and synonym set. Verb predicates were shown to outperform Levin classes in predicting control, sarcasm, and sentiment. These verb predicates are similar in spirit to Levin classes while having increased precision and coverage.

For future work, we intend to integrate social media data to build better verb argument clusters, i.e., clusters that help with better prediction.

References

  • Bamman and Smith (2015) David Bamman and Noah A Smith. 2015. Contextualized sarcasm detection on twitter. In ICWSM. Citeseer, pages 574–577.
  • Bird et al. (2009) Steven Bird, Ewan Klein, and Edward Loper. 2009. Natural language processing with Python: analyzing text with the natural language toolkit. O’Reilly Media, Inc.
  • Bordes et al. (2013) Antoine Bordes, Nicolas Usunier, Alberto Garcia-Duran, Jason Weston, and Oksana Yakhnenko. 2013. Translating embeddings for modeling multi-relational data. In Advances in neural information processing systems. pages 2787–2795.
  • Callan et al. (2009) Jamie Callan, Mark Hoy, Changkuk Yoo, and Le Zhao. 2009. Clueweb09 data set.
  • Carlson et al. (2010) Andrew Carlson, Justin Betteridge, Bryan Kisiel, Burr Settles, Estevam R Hruschka Jr, and Tom M Mitchell. 2010. Toward an architecture for never-ending language learning. In AAAI. volume 5, page 3.
  • Carvalho et al. (2009) Paula Carvalho, Luís Sarmento, Mário J Silva, and Eugénio De Oliveira. 2009. Clues for detecting irony in user-generated contents: oh…!! it’s so easy;-). In Proceedings of the 1st international CIKM workshop on Topic-sentiment analysis for mass opinion. ACM, pages 53–56.
  • Davidov et al. (2010) Dmitry Davidov, Oren Tsur, and Ari Rappoport. 2010. Semi-supervised recognition of sarcastic sentences in twitter and amazon. In Proceedings of the fourteenth conference on computational natural language learning. Association for Computational Linguistics, pages 107–116.
  • Fader et al. (2011) Anthony Fader, Stephen Soderland, and Oren Etzioni. 2011. Identifying relations for open information extraction. In Proceedings of the Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, pages 1535–1545.
  • Ghosh et al. (2015) Debanjan Ghosh, Weiwei Guo, and Smaranda Muresan. 2015. Sarcastic or not: Word embeddings to predict the literal or sarcastic meaning of words. In EMNLP. pages 1003–1012.
  • González-Ibánez et al. (2011) Roberto González-Ibánez, Smaranda Muresan, and Nina Wacholder. 2011. Identifying sarcasm in twitter: a closer look. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies: Short Papers-Volume 2. Association for Computational Linguistics, pages 581–586.
  • Hartshorne et al. (2016) Joshua K Hartshorne, Timothy J O’Donnell, Yasutada Sudo, Miki Uruwashi, Miseon Lee, and Jesse Snedeker. 2016. Psych verbs, the linking problem, and the acquisition of language. Cognition 157:268–288.
  • Huang et al. (2012) Eric H Huang, Richard Socher, Christopher D Manning, and Andrew Y Ng. 2012. Improving word representations via global context and multiple word prototypes. In Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics: Long Papers-Volume 1. Association for Computational Linguistics, pages 873–882.
  • Iacobacci et al. (2015) Ignacio Iacobacci, Mohammad Taher Pilehvar, and Roberto Navigli. 2015. Sensembed: Learning sense embeddings for word and relational similarity. In Proceedings of the 53th Annual Meeting of the Association for Computational Linguistics. Association for Computational Linguistics, pages 95–105.
  • Joshi et al. (2016) Aditya Joshi, Pushpak Bhattacharyya, and Mark James Carman. 2016. Automatic sarcasm detection: A survey. arXiv preprint arXiv:1602.03426 .
  • Judge et al. (2002) Timothy A Judge, Amir Erez, Joyce E Bono, and Carl J Thoresen. 2002. Are measures of self-esteem, neuroticism, locus of control, and generalized self-efficacy indicators of a common core construct?
  • Kawahara et al. (2014) Daisuke Kawahara, Daniel Peterson, and Martha Palmer. 2014. A step-wise usage-based method for inducing polysemy-aware verb classes. In ACL (1). pages 1030–1040.
  • Kong et al. (2014) Lingpeng Kong, Nathan Schneider, Swabha Swayamdipta, Archna Bhatia, Chris Dyer, and Noah A Smith. 2014. A dependency parser for tweets .
  • Kosinski et al. (2013) Michal Kosinski, David Stillwell, and Thore Graepel. 2013. Private traits and attributes are predictable from digital records of human behavior. Proceedings of the National Academy of Sciences 110(15):5802–5805.
  • Krause and Stryker (1984) Neal Krause and Sheldon Stryker. 1984. Stress and well-being: The buffering role of locus of control beliefs. Social Science & Medicine 18(9):783–790.
  • Lachman and Weaver (1998) Margie E Lachman and Suzanne L Weaver. 1998. The sense of control as a moderator of social class differences in health and well-being. Journal of personality and social psychology 74(3):763.
  • Levin (1993) Beth Levin. 1993. English verb classes and alternations: A preliminary investigation. University of Chicago press.
  • Li and Jurafsky (2015) Jiwei Li and Dan Jurafsky. 2015. Do multi-sense embeddings improve natural language understanding? In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, pages 1722–1732.
  • Lin et al. (2015) Yankai Lin, Zhiyuan Liu, Maosong Sun, Yang Liu, and Xuan Zhu. 2015. Learning entity and relation embeddings for knowledge graph completion. In Twenty-Ninth AAAI Conference on Artificial Intelligence.
  • Liu and Zhang (2012) Bing Liu and Lei Zhang. 2012. A survey of opinion mining and sentiment analysis. In Mining text data, Springer, pages 415–463.
  • McCarthy (2001) Diana McCarthy. 2001. Lexical acquisition at the syntax-semantics interface: diathesis alternations, subcategorization frames and selectional preferences. Ph.D. thesis, University of Sussex.
  • Melamud et al. (2016) Oren Melamud, David McClosky, Siddharth Patwardhan, and Mohit Bansal. 2016. The role of context types and dimensionality in learning word embeddings. In Proceedings of NAACL-HLT 2016. Association for Computational Linguistics, pages 1030–1040.
  • Mikolov et al. (2013) Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. 2013. Distributed representations of words and phrases and their compositionality. In Advances in neural information processing systems. pages 3111–3119.
  • Miller (1995) George A Miller. 1995. WordNet: a lexical database for English. Communications of the ACM 38(11):39–41.
  • Nakashole and Mitchell (2016) Ndapandula Nakashole and Tom M Mitchell. 2016. Machine reading with background knowledge. arXiv preprint arXiv:1612.05348.
  • Nakashole et al. (2012) Ndapandula Nakashole, Gerhard Weikum, and Fabian Suchanek. 2012. Patty: a taxonomy of relational patterns with semantic types. In Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning. Association for Computational Linguistics, pages 1135–1145.
  • Neelakantan et al. (2014) Arvind Neelakantan, Jeevan Shankar, Alexandre Passos, and Andrew McCallum. 2014. Efficient non-parametric estimation of multiple embeddings per word in vector space. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP). Association for Computational Linguistics, pages 1059–1069.
  • Neviarouskaya et al. (2009) Alena Neviarouskaya, Helmut Prendinger, and Mitsuru Ishizuka. 2009. Semantically distinct verb classes involved in sentiment analysis. In IADIS AC (1). pages 27–35.
  • O’Connor et al. (2010) Brendan O’Connor, Michel Krieger, and David Ahn. 2010. Tweetmotif: Exploratory search and topic summarization for twitter. In ICWSM. pages 384–385.
  • Owoputi et al. (2013) Olutobi Owoputi, Brendan O’Connor, Chris Dyer, Kevin Gimpel, Nathan Schneider, and Noah A Smith. 2013. Improved part-of-speech tagging for online conversational text with word clusters. Association for Computational Linguistics.
  • Palmer et al. (2010) Martha Palmer, Daniel Gildea, and Nianwen Xue. 2010. Semantic role labeling. Synthesis Lectures on Human Language Technologies 3(1):1–103.
  • Pang et al. (2008) Bo Pang, Lillian Lee, et al. 2008. Opinion mining and sentiment analysis. Foundations and Trends® in Information Retrieval 2(1–2):1–135.
  • Pennington et al. (2014) Jeffrey Pennington, Richard Socher, and Christopher D Manning. 2014. Glove: Global vectors for word representation. In EMNLP. volume 14, pages 1532–1543.
  • Rajadesingan et al. (2015) Ashwin Rajadesingan, Reza Zafarani, and Huan Liu. 2015. Sarcasm detection on twitter: A behavioral modeling approach. In Proceedings of the Eighth ACM International Conference on Web Search and Data Mining. ACM, pages 97–106.
  • Rappaport Hovav and Levin (1998) Malka Rappaport Hovav and Beth Levin. 1998. Building verb meanings. The projection of arguments: Lexical and compositional factors pages 97–134.
  • Reisinger and Mooney (2010) Joseph Reisinger and Raymond J Mooney. 2010. Multi-prototype vector-space models of word meaning. In Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics. Association for Computational Linguistics, pages 109–117.
  • Resnik (1997) Philip Resnik. 1997. Selectional preference and sense disambiguation. In Proceedings of the ACL SIGLEX Workshop on Tagging Text with Lexical Semantics: Why, What, and How. Washington, DC, pages 52–57.
  • Reyes et al. (2013) Antonio Reyes, Paolo Rosso, and Tony Veale. 2013. A multidimensional approach for detecting irony in twitter. Language resources and evaluation 47(1):239–268.
  • Schifanella et al. (2016) Rossano Schifanella, Paloma de Juan, Joel Tetreault, and Liangliang Cao. 2016. Detecting sarcasm in multimodal social platforms. In Proceedings of the 2016 ACM on Multimedia Conference. ACM, pages 1136–1145.
  • Schuler (2005) Karin Kipper Schuler. 2005. VerbNet: A broad-coverage, comprehensive verb lexicon. Ph.D. thesis, University of Pennsylvania.
  • Schwartz et al. (2016) Roy Schwartz, Roi Reichart, and Ari Rappoport. 2016. Symmetric patterns and coordinations: Fast and enhanced representations of verbs and adjectives. In Proceedings of NAACL-HLT. pages 499–505.
  • Sedoc et al. (2016) João Sedoc, Jean Gallier, Lyle Ungar, and Dean Foster. 2016. Semantic word clusters using signed normalized graph cuts. arXiv preprint arXiv:1601.05403.
  • Shutova et al. (2010) Ekaterina Shutova, Lin Sun, and Anna Korhonen. 2010. Metaphor identification using verb and noun clustering. In Proceedings of the 23rd International Conference on Computational Linguistics. Association for Computational Linguistics, pages 1002–1010.
  • Socher et al. (2013) Richard Socher, John Bauer, Christopher D Manning, and Andrew Y Ng. 2013. Parsing with compositional vector grammars. In ACL (1). pages 455–465.
  • Sun and Korhonen (2009) Lin Sun and Anna Korhonen. 2009. Improving verb clustering with automatically acquired selectional preferences. In Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing: Volume 2-Volume 2. Association for Computational Linguistics, pages 638–647.
  • Wijaya (2016) Derry Tanti Wijaya. 2016. VerbKB: A Knowledge Base of Verbs for Natural Language Understanding. Ph.D. thesis, Carnegie Mellon University.
  • Wijaya and Mitchell (2016) Derry Tanti Wijaya and Tom M Mitchell. 2016. Mapping verbs in different languages to knowledge base relations using web text as interlingua. In Proceedings of NAACL-HLT. pages 818–827.
  • Yu and Shi (2003) Stella X Yu and Jianbo Shi. 2003. Multiclass spectral clustering. In Computer Vision, 2003. Proceedings. Ninth IEEE International Conference on. IEEE, pages 313–319.