
PyPlutchik: visualising and comparing emotion-annotated corpora

The increasing availability of textual corpora and data fetched from social networks is fuelling a huge production of works based on the model proposed by psychologist Robert Plutchik, often referred to simply as the “Plutchik wheel”. Related research ranges from the description of annotation tasks to emotion detection tools. Visualisation of such emotions is traditionally carried out using the most popular layouts, such as bar plots or tables, which are however sub-optimal. The classic representation of Plutchik's wheel follows the principles of proximity and opposition between pairs of emotions: spatial proximity in this model is also a semantic proximity, as adjacent emotions elicit a complex emotion (a primary dyad) when triggered together; spatial opposition is a semantic opposition as well, as positive emotions are opposite to negative ones. The most common layouts fail to preserve both features, not to mention the need to allow visual comparisons between different corpora at a glance, which is hard with basic design solutions. We introduce PyPlutchik, a Python library specifically designed for the visualisation of Plutchik's emotions in texts or in corpora. PyPlutchik draws Plutchik's flower with each emotion petal sized after how much that emotion is detected or annotated in the corpus, also representing three degrees of intensity for each of them. Notably, PyPlutchik also allows users to display primary, secondary, tertiary and opposite dyads in a compact, intuitive way. We substantiate our claim that PyPlutchik outperforms other classic visualisations when displaying Plutchik's emotions, and we showcase a few examples that display our library's most compelling features.



1 Introduction

The recent availability of massive textual corpora has fostered extensive research on the emotional dimension underlying human-produced texts. Sentences, conversations, posts, tweets and many other pieces of text can be labelled according to a variety of schemes, each referring to a different psychological theoretical framework. Such frameworks are commonly divided into categorical models [21][27][26][33], based on a finite set of labels, and dimensional models [53][69][39], which position data points as continuous values in an N-dimensional vector space of emotions.

Regardless of their categorical or dimensional nature, these models provide a complex and multifaceted characterisation of emotions, which often necessitates dedicated and innovative ways to visualise them. This is the case for Plutchik’s model of emotions [45], a categorical model based on 8 labels (Joy, Trust, Fear, Surprise, Sadness, Disgust, Anger and Anticipation). According to the model, emotions are displayed in a flower-shaped representation, famously known as Plutchik’s wheel, which has since become a classic reference in this domain. The model, described in detail in Section 2, leverages the disposition of the petals around the wheel to highlight the similar (or opposite) flavour of the emotions, as well as how similar emotions, placed in the same "hemisphere" of the wheel, can combine into primary, secondary and tertiary dyads, depending on how many petals apart they are located on the flower. It is clear that such an elaborate layout plays a central role in defining the model itself. Still, as detailed in Section 2, many studies that resort to Plutchik’s model display their results using standard data visualisation layouts, such as bar plots, tables, pie charts and scatter plots, most likely due to the lack of an easy, plug-and-play implementation of Plutchik’s wheel.

On these premises, we argue that the most common layouts fail to preserve the characterising features of Plutchik’s model, not to mention the need to allow visual comparisons between different corpora at a glance, which is hard with basic design solutions. We contribute to filling this gap in the data visualisation toolbox by introducing PyPlutchik, a Python library for visualising texts and corpora annotated according to Plutchik’s model of emotions. Given the preeminence of Python as a programming language in the field of data science and, particularly, in the area of Natural Language Processing (NLP), we believe that the scientific community will benefit from a ready-to-use Python tool that fulfils this particular need. Of course, other packages and libraries may be released for other languages in the future.

PyPlutchik provides an off-the-shelf Python implementation of Plutchik’s wheel. Each petal of the flower is sized after the amount of the corresponding emotion in the corpus: the more traces of an emotion are detected in a corpus, the bigger the petal is drawn. Along with the 8 basic emotions, PyPlutchik also displays three degrees of intensity for each emotion (see Table 1).

PyPlutchik is built on top of the Python data visualisation library matplotlib [25], and it is fully scriptable; hence it can be used for representing the emotion annotation of single texts (e.g. a single tweet), as well as of entire corpora (e.g. a collection of tweets), offering a tool for a proper representation of such annotated texts, which to the best of our knowledge was missing. The two-dimensional Plutchik’s wheel is immediately recognisable, but it is a mere qualitative illustration. PyPlutchik introduces a quantitative dimension to this representation, making it a tool suitable for representing how much an emotion is detected in a corpus. The library accepts as input a score for each of the 24 emotions in the model (8 basic emotions, with 3 degrees of intensity each). This score can be interpreted as a binary flag that indicates whether the emotion was detected, or as the fraction of texts in which the emotion was detected. Please note that, since the same text cannot express two different degrees of the same emotion, the scores of the emotions belonging to the same branch must sum to at most 1. Each emotion petal is then sized according to this score. Fig. 1 shows an example of the versatility of the PyPlutchik representation of annotated emotions: in (i) we see a pseudo-text in which only Joy, Trust and Sadness have been detected; in (ii), for each emotion, the percentage of pseudo-texts in a pseudo-corpus that show that emotion; finally, (iii) contains a detail of (ii), where the three degrees of intensity have been annotated separately.
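For instance, the accepted input shapes can be sketched as plain Python dictionaries; the scores below are invented for illustration, and the emotion names follow Table 1:

```python
# A single pseudo-text: binary flags, one per basic emotion.
single_text = {
    'joy': 1, 'trust': 1, 'fear': 0, 'surprise': 0,
    'sadness': 1, 'disgust': 0, 'anger': 0, 'anticipation': 0
}

# A pseudo-corpus: fraction of texts in which each emotion was detected.
corpus = {
    'joy': 0.72, 'trust': 0.55, 'fear': 0.08, 'surprise': 0.30,
    'sadness': 0.12, 'disgust': 0.05, 'anger': 0.06, 'anticipation': 0.41
}

# With intensity degrees: (lower, middle, higher) per branch.
# Since one text cannot express two degrees of the same emotion,
# each triple sums to at most 1.
corpus_by_intensity = {
    'joy':          (0.30, 0.35, 0.07),   # serenity, joy, ecstasy
    'trust':        (0.25, 0.22, 0.08),   # acceptance, trust, admiration
    'fear':         (0.05, 0.02, 0.01),   # apprehension, fear, terror
    'surprise':     (0.15, 0.10, 0.05),   # distraction, surprise, amazement
    'sadness':      (0.07, 0.04, 0.01),   # pensiveness, sadness, grief
    'disgust':      (0.03, 0.01, 0.01),   # boredom, disgust, loathing
    'anger':        (0.04, 0.01, 0.01),   # annoyance, anger, rage
    'anticipation': (0.20, 0.15, 0.06)    # interest, anticipation, vigilance
}
```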
Most importantly, PyPlutchik is respectful of the original spatial and aesthetic features of the wheel of emotions as intended by its author. The colour code is hard-coded in the library, as it is a distinctive feature of the wheel that belongs to collective imagination (for instance, it is respected also in user interfaces displaying Plutchik’s emotions). The spatial distribution of the emotions is also a standard, non-customisable feature, as the placement of each petal of the flower is non-arbitrary: it reflects the semantic proximity of close emotions and the semantic contrariety of opposite emotions (see Section 2).
Representing emotions detected in texts can be hard without a proper tool, yet it is a need for the many scientists who work on text and emotions. As of today, the query "Plutchik wheel" produces 3480 results on Google Scholar, of which 1620 were published after 2017. A great variety of newly available digital text has been explored in order to uncover emotional patterns; the need for a handy instrument to easily display such information is still unsatisfied.

In the following sections, after introducing the reader to the topic of emotion models and their applications in corpora annotation, we will focus on Plutchik’s emotion model and the current state of the art of its representations. A detailed technical explanation of the PyPlutchik library will follow, with several use cases on a wide range of datasets to help substantiate our claim that PyPlutchik outperforms other classic visualisations.

2 Related Work

Visualising textual data

Visualising quantitative information associated with textual data is not an easy task, due to "the categorical nature of text and its high dimensionality, that makes it very challenging to display graphically" [40]. Several scientific areas leverage visualisation techniques to extract meaning from texts, such as digital humanities [8] or social media analysis [71].

Textual data visualisations often provide tools for literacy and citation analysis; e.g., PhraseNet [64], Word Tree [70], Web Seer, and Themail [66] introduced many different ways to generate visual overviews of unstructured texts. Many of these projects were connected to ManyEyes, which was launched in 2007 by Viégas, Wattenberg et al. [67] at IBM, and closed in 2015 to be included in IBM Analytics. ManyEyes probably represented a step forward in the exploitation of relationships among different artworks: it was designed as a web site where people could upload their data, create interactive visualisations, and establish conversations with other authors. The ambitious goal was to create a social style of data analysis, in which visualisations serve as tools for collaboration and discussion.

Nevertheless, these classic visualisation tools did not allow the exploration of the more advanced textual semantic features that can be analysed nowadays thanks to the numerous developments in Natural Language Processing; indeed, text technologies have provided researchers and professional analysts with new tools to find more complex patterns in textual data. Algorithms for topic detection, sentiment analysis, stance detection and emotion detection allow us to convert very large amounts of textual data into actionable knowledge; still, the outputs of such algorithms can be too hard to consume without an appropriate data visualisation [20]. During the last decade, many works have been carried out to fill this gap in the areas of (hierarchical) topic visualisation [19, 36, 35, 18], sentiment visualisation (a comprehensive survey can be found in [31]), online hate speech detection [12], stance detection [14] and many more. Our work lies within the domain of visualisation of emotions in texts, as we propose a novel Python implementation of Plutchik’s wheel of emotions.

Understanding emotions from texts

In the last few years, many digital text sources such as social media, digital libraries or television transcripts have been exploited for emotion-based analyses. To mention just a few examples, researchers have studied the display of emotions in online social networks like Twitter [52][68][13][59] and Facebook [46][9][50], in literature corpora [30][37], in television conversations [3], in dialogues excerpts from call centres conversations [65], in human-human video conversations [2].
Among categorical emotion models, Plutchik’s wheel of emotions is one of the most popular. Categorical (or discrete) emotion models are rooted in the work of Paul Ekman [21], who first recognised six basic emotions universal to humankind (Anger, Disgust, Fear, Happiness, Sadness, Surprise). Although the basicality of emotions is debated [55], categorical emotions are very popular in natural language processing research because of their practicality in annotation. In recent years many other categorical emotion models have been proposed, each with a distinctive set of basic emotions: the model first proposed by James [27] presents 6 basic emotions, Plutchik’s model 8, Izard’s model [26] 12, Lazarus et al.’s model [33] 15, Ekman’s extended model [22] 18, and Cowen et al.’s [17] 27. Parrott [44] proposed a tree-structured model with 6 basic emotions on a first level, 25 on a second level and more than one hundred on a third level. Susanto et al. [60] propose a revisited version of the hourglass of emotions by Cambria et al. [11], an interesting model that departs from Plutchik’s by positioning emotions in an hourglass-shaped design.

However, the annotation of big corpora of texts is easier if the labels are few and clearly distinct from each other; on the other hand, a categorical classification of complex human emotions into a handful of basic labels may be limiting.

Lower intensity | Emotion | Higher intensity
Annoyance | Anger | Rage
Interest | Anticipation | Vigilance
Serenity | Joy | Ecstasy
Acceptance | Trust | Admiration
Apprehension | Fear | Terror
Distraction | Surprise | Amazement
Pensiveness | Sadness | Grief
Boredom | Disgust | Loathing
Table 1: Plutchik’s 8 basic emotions with 3 degrees of intensity each. Emotions are commonly referred to by their middle-intensity name.

Plutchik’s model’s popularity is probably due to a peculiar characteristic. In its wheel of emotions, there are 8 basic emotions (Joy, Trust, Fear, Surprise, Sadness, Disgust, Anger and Anticipation) with three intensity degrees each, as shown in Table 1. Even if each emotion is a category on its own, emotions are related to each other by their spatial placement. In fact, four emotions (Anger, Anticipation, Joy, Trust) are respectively opposed to the other four (Fear, Surprise, Sadness, Disgust); for instance, Joy is the opposite of Sadness, hence it is displayed symmetrically with respect to the centre of the wheel. When elicited together, two emotions give rise to a dyad, a complex emotion. Dyads are divided into primary (when triggered by two adjacent emotions), secondary (when triggered by two emotions that are 2 petals away), tertiary (when triggered by two emotions that are 3 petals away) and opposite (when triggered by opposite emotions). This mechanism allows annotating a basic set of only 8 emotions while potentially triggering up to 28 more complex nuances, which better map the complexity of human emotions. When representing corpora annotated following Plutchik’s model, it is therefore important to highlight the spatial adjacency or spatial opposition of emotions in a graphical way. We will refer to these features as the semantic proximity and semantic opposition of two emotions.
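The petal-distance rule above can be made concrete in a few lines of Python. The wheel ordering below is an assumption on our part, chosen so that it matches the oppositions listed in this section:

```python
# The 8 basic emotions in wheel order: adjacent entries are semantically close,
# entries 4 apart are opposites (Joy-Sadness, Trust-Disgust, ...).
WHEEL = ['joy', 'trust', 'fear', 'surprise',
         'sadness', 'disgust', 'anger', 'anticipation']

def dyad_type(a, b):
    """Classify the dyad raised by two co-occurring basic emotions
    by their circular distance in petals along the wheel."""
    i, j = WHEEL.index(a), WHEEL.index(b)
    d = min(abs(i - j), len(WHEEL) - abs(i - j))  # circular petal distance
    return {1: 'primary', 2: 'secondary', 3: 'tertiary', 4: 'opposite'}[d]
```

For example, `dyad_type('joy', 'trust')` yields `'primary'`, while `dyad_type('joy', 'sadness')` yields `'opposite'`; ranging over all unordered pairs of basic emotions gives exactly the 28 complex nuances mentioned above.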

From a data visualisation point of view, PyPlutchik’s closest relatives are bar plots, radar plots and Windrose diagrams. Bar plots correctly display the quantitative representation of categorical data, while radar plots (also known as spider plots) correctly place elements in a polar coordinate system, close to the original Plutchik’s one. Windrose diagrams combine both advantages, displaying categorical data on a polar coordinate system. PyPlutchik is inspired by this representation, and adapts the idea to the graphical picture of Plutchik’s wheel of emotions that belongs to collective imagination.

Representing Plutchik’s emotions wheel

If we skim through the first 100 publications after 2017 retrieved by the aforementioned query, we notice that 25 out of 100 papers needed to display the distribution of emotions in a corpus, and without a dedicated tool they all settled for a practical but sub-optimal solution. In some way, each of the following representations fails to respect the standard spatial or aesthetic features of Plutchik’s wheel of emotions:

  • Tables, as used in [34], [48], [57], [1], [29] and [43]. Tables are a practical way to communicate exact amounts in an unambiguous way. However, tables are not a proper graphical display, so they miss all the features of the original wheel of emotions: there is no proper colour code, and both semantic proximity and semantic opposition are dismantled. Compared with a plot, tables are harder to read: plots deliver the same information faster and more easily.

  • Bar plots, as used in [10], [42], [15], [6], [63], [7] and [51]. Bar plots are a traditional option for numerical comparisons across categories. In this domain, each bar would represent how many times an emotion appears in a given corpus. However, bar plots are sub-optimal for two reasons. Firstly, the spatial placement of the bars does not reflect the semantic opposition of emotions that are opposites in Plutchik’s wheel. Secondly, Plutchik’s wheel is circular, meaning that there is a semantic proximity between the first and the last of the 8 emotion branches, which is not represented in a bar plot. PyPlutchik preserves both semantic opposition and semantic proximity: the mass distribution of the ink in Fig. 12 (i and vi), for instance, immediately communicates a positive corpus, as positive emotions are expressed far more than their opposites.

  • Pie charts, as used in [49], [32], [74], [56] and [29]. Pie charts are a better approximation of Plutchik’s wheel, as they respect the colour code and almost respect the spatial placement of emotions. However, the actual placement may depend on the emotion distribution: with a distribution skewed toward one or two emotions, all the remaining sectors may be shrunk and translated to a different position. Pie charts do not guarantee a correct spatial positioning of each category. There is also an underlying conceptual flaw in pie charts: they do not handle well items annotated with more than one tag, in this case texts annotated with more than one emotion. In a pie chart, the sum of the sectors’ sizes must equal the number of all the items; each sector would count how many items fall into a category. If multiple annotations on the same item are allowed, the overall sum of the sectors’ sizes will exceed the number of actual items in the corpus. Null-annotated items, i.e. those without a noticeable emotion within, must be represented as a ninth, neutral sector. PyPlutchik handles multi-annotated and null-annotated items: for instance, Fig. 1 (ii) shows a pseudo-corpus where Anger and Disgust are both valued one, because they appear in 100% of the pseudo-texts within, while Fig. 1 (i) shows a text with several emotions missing.

  • Heatmaps, as used in [54], [61]. Both papers coded the intensity of the 8 basic emotions depending on a second variable, respectively time and the principal components of a vectorial representation of texts. Although heatmaps naturally fit the idea of an intensity score at the crossroads of two variables, the final displays are sub-optimal in both cases, because they preserve neither the Plutchik’s wheel’s colour code nor the spatial placement of emotions. As described in Sect. 3, PyPlutchik can be easily scripted to reproduce small multiples. In Sect. 5 we provide an example of a small multiple displaying the evolution of the distribution of emotions in a corpus over time.

  • Scatter plots, as used in [72]. Scatter plots are intended to display data points in a two- or three-dimensional space, where each axis maps a continuous variable. In [72], the x-axis represents the rank of each emotion in each of the three corpora they analyse, thus producing a descending sorting of emotion labels. This choice was probably made in order to have three descending, more readable series of scatters on the plot. However, this representation breaks both the colour code and the spatial placement of emotions. PyPlutchik can be easily scripted for a side-by-side comparison of more than one corpus (see Sect. 3), allowing readers to immediately grasp high-level discrepancies.

  • Line plots, as used in [62]. Like scatter plots, line plots are appropriate for displaying a trend in a two-dimensional space, where each dimension maps a continuous variable; this is not the case for discrete emotions. The authors plotted the distribution of each emotion over time as a separate line. They managed to colour each line with the corresponding colour in Plutchik’s wheel, reporting the colour code in a separate legend. As stated before in similar cases, this representation breaks the semantic proximity (opposition) of close (opposite) emotions. Again, in Sect. 3 we provide details about how to script PyPlutchik to produce a small-multiple plot, while in Section 5 we showcase the distribution of emotions over time on a real corpus.

  • Radar plots, as used in [4], [73] and [28]. Radar plots, a.k.a. Circular Column Graphs or Star Graphs, successfully preserve the spatial proximity of emotions. Especially when the radar area is filled with a non-transparent colour, radars correctly distribute more mass where emotions are more expressed, giving the reader an immediate sense of how shifted a corpus is with respect to a neutral one. However, on a minor note, the continuity of lines and shapes does not properly separate each emotion as a discrete object per se. Furthermore, radars do not naturally reproduce the right colour code. Lastly, radars are not practical for reproducing stacked values, like the three degrees of intensity in Fig. 1 (i). Of course, all of these minor issues can be solved with an extension of the basic layout, or by adopting a Nightingale Rose Chart (also referred to as a Polar Area Chart or Windrose diagram), as in [59, 58]. However, the main drawback of radar plots and derivatives is that semantic opposition is lost, and there is no direct way to represent dyads and their occurrences. PyPlutchik, conversely, has been tailored to the original wheel of emotions, and it naturally represents both semantic proximity and opposition, as well as the occurrences of dyads in a corpus (see Sect. 4).

3 Visualising Primary Emotions with PyPlutchik

PyPlutchik is designed to integrate with the Python data visualisation library matplotlib. It spans the printable area in a range of [-1.6, 1.6] inches on both axes, taking the space to represent a petal of maximum length 1, plus the outer labels and the inner white circle. Each petal lies on one of the 8 axes of the polar coordinate system. Four transversal minor grid lines cross each axis, spaced 0.2 inches apart, providing a visual reference for a quick evaluation of petal size and for comparisons between non-adjacent petals. Outside the 0-1 range of each petal, two labels report the emotion and the associated numerical score. The colour code is strictly hard-coded, following the classic representation of Plutchik’s wheel of emotions.
PyPlutchik can be used either to plot only the 8 basic emotions, or to show the full intensity spectrum of each emotion, assigning three scores to the three intensity levels. In the latter case, each petal is divided into three sections, with colour intensity decreasing from the centre. In both cases PyPlutchik accepts as input a dict data structure with exactly 8 items, whose keys must be the names of the 8 basic emotions. dict is the natural Python data structure for representing JSON files, making PyPlutchik an easy choice for displaying JSONs. In the case of basic emotions only, the values in the dict must be numeric, while in the case of intensity degrees they must be iterables of length three, whose entries must sum to at most 1. Fig. 2 and Fig. 3 show how straightforward it is to plug a dict into the library to obtain the visualisation. Furthermore, PyPlutchik can be used to display the occurrences of primary, secondary, and tertiary dyads in a corpus. This more advanced feature is described in Sect. 4.
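These constraints can be illustrated with a small validation sketch; this mirrors the checks described above but is not the library's actual code, and the assumption that scores lie in [0, 1] is ours:

```python
BASIC_EMOTIONS = {'joy', 'trust', 'fear', 'surprise',
                  'sadness', 'disgust', 'anger', 'anticipation'}

def validate_input(scores):
    """Check a PyPlutchik-style input dict: exactly the 8 basic-emotion keys,
    each mapped to one numeric score or to a length-3 iterable of intensity
    degrees summing to at most 1."""
    if set(scores) != BASIC_EMOTIONS:
        raise ValueError('keys must be exactly the 8 basic emotions')
    for emotion, value in scores.items():
        if isinstance(value, (int, float)):
            if not 0 <= value <= 1:
                raise ValueError(f'{emotion}: score must be in [0, 1]')
        else:
            degrees = list(value)
            if len(degrees) != 3:
                raise ValueError(f'{emotion}: expected 3 intensity degrees')
            if sum(degrees) > 1:
                raise ValueError(f'{emotion}: degrees must sum to at most 1')
    return True
```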

from pyplutchik import plutchik
import matplotlib.pyplot as plt
from random import uniform

fig, ax = plt.subplots(nrows = 5, ncols = 5, figsize = (8*5, 8*5))

emotions = ['anger', 'anticipation', 'joy', 'trust', 'fear', 'surprise', 'sadness', 'disgust']

i = 0
for row in range(5):
    for col in range(5):

        # get axes (i+1)
        plt.subplot(5, 5, i + 1)

        # generate random data
        emo = {key: uniform(0, 1) for key in emotions}

        # draw, hiding polar coordinates and labels
        plutchik(emo, ax = plt.gca(), show_coordinates = False)

        # update i
        i += 1

Listing 1: Code that produces the visualisation in Fig. 5. The data visualised is random.

Figure 2: Plutchik’s wheel generated by the code on the right. Each entry in the Python dict is a numeric value.
Figure 3: Plutchik’s wheel generated by the code on the right. Each entry in the Python dict is a three-sized array, whose entries must sum to at most 1.

Due to its easy integration with Python’s basic data structures and the matplotlib library, PyPlutchik is also completely scriptable to display several plots side by side as small multiples. The default font family is sans-serif, and text is printed with light weight and size 15 by default. However, it is possible to modify these features by means of the corresponding parameters fontsize, fontfamily and fontweight; they can also be changed with standard matplotlib syntax. The polar coordinates beneath the petals and the labels outside can be hidden by setting the corresponding parameter show_coordinates (default is True). This leaves only the flower on screen, improving the visibility of small flowers in small-multiple plots. The petals’ aspect can also be modified, making them thinner or thicker, by tuning the parameter height_width_ratio: the lower the ratio, the thicker the petal (default is 1). Fig. 5 shows a small multiple, with hidden polar coordinates and labels, computed on synthetic random data created only for illustrative purposes. The code for this representation is in Listing 1.
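Putting these styling parameters together, a single styled wheel can be drawn roughly as follows; the scores are random, and the call assumes the same plutchik entry point used in Listing 1:

```python
from random import uniform
import matplotlib.pyplot as plt
from pyplutchik import plutchik

emotions = ['anger', 'anticipation', 'joy', 'trust',
            'fear', 'surprise', 'sadness', 'disgust']
scores = {e: uniform(0, 1) for e in emotions}

# Thicker petals, smaller labels, no polar grid nor outer labels.
plutchik(scores,
         show_coordinates = False,   # hide grid and labels (default True)
         height_width_ratio = 0.8,   # lower ratio = thicker petals (default 1)
         fontsize = 12,              # default 15
         fontfamily = 'sans-serif',
         fontweight = 'light')
plt.show()
```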
As a further customisation option, we allow the user to select a set of petals to be highlighted. This selective presentation feature follows a focus-plus-context [16] approach to the need of emphasising those emotions that might be more distinctive in the case under consideration. We chose to apply focus-plus-context visualisation by filling the petals’ areas selectively, without adopting other common techniques such as fish-eye views [23], in order to avoid distortions and preserve the spatial relations between the petals. This option can be enabled through the parameter highlight_emotions (default is all), which takes as input a string or a list of main emotions to highlight, and show_intensity_labels (default is none), which also takes a string or a list of main emotions, and shows all three intensity scores for each emotion in the list, while for the others it displays the cumulative scores only. We showcase this feature in Fig. 4.
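In code, a selective presentation along the lines of Fig. 4 can be sketched as follows; the scores are synthetic, and the call assumes the same plutchik entry point as in Listing 1:

```python
import matplotlib.pyplot as plt
from pyplutchik import plutchik

# Three intensity degrees (lower, middle, higher) per emotion; synthetic data.
scores = {
    'joy':          (0.3, 0.2, 0.1),
    'trust':        (0.2, 0.2, 0.1),
    'fear':         (0.1, 0.1, 0.0),
    'surprise':     (0.1, 0.1, 0.1),
    'sadness':      (0.1, 0.0, 0.0),
    'disgust':      (0.1, 0.1, 0.0),
    'anger':        (0.1, 0.0, 0.0),
    'anticipation': (0.3, 0.2, 0.1)
}

# Fill only Anticipation and Joy and show their per-degree scores;
# the other petals keep their shape but lose their colour fill.
plutchik(scores,
         highlight_emotions = ['anticipation', 'joy'],
         show_intensity_labels = ['anticipation', 'joy'])
plt.show()
```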

Figure 4: A side-by-side comparison between the synthetic plot of Fig. 1 (iii) and the same plot with only two emotions highlighted. We highlighted and displayed the three intensity scores of Anticipation and Joy by means of the parameters highlight_emotions and show_intensity_labels.
Figure 5: Small-multiple of a series of Plutchik’s wheel built from synthetic data. Polar coordinates beneath the flowers and labels around have been hidden to improve the immediate readability of the flowers, resulting in a collection of emotional fingerprints of different corpora.

4 Showing Dyads with PyPlutchik

Dyads are a crucial feature in Plutchik’s model. As explained in Section 1, the high flexibility of the model derives also from the spatial disposition of the emotions. Primary emotions can combine with their direct neighbours, forming primary dyads, or with emotions that are two or three petals away, forming respectively secondary and tertiary dyads. Opposite dyads can be formed as well, by combining emotions belonging to opposite petals. This feature dramatically enriches the spectrum of emotions of the model, beyond the primary ones. Therefore, a comprehensive visualisation of Plutchik’s model must offer a way to visualise dyads.

The design of such a feature is non-trivial. Indeed, while the flower of primary emotions is inherent to the model itself, no standard design is provided for visualising dyads. For our implementation we decided to stick with the flower-shaped graphics, in order not to deviate too much from the original visualisation philosophy. Examples that show all levels of dyads can be seen in Figs. 12 and 13. While the core of the visual remains the same, a few modifications are introduced. In more detail:

  • the radial axes are progressively rotated by 45 degrees in each level, to enhance the spatial shift from primary emotions to dyads;

  • the petals are two-tone, according to the colours of the primary emotions that define each dyad;

  • a textual annotation in the centre indicates what kind of dyad is represented: "1" for primary dyads, "2" for secondary dyads, "3" for tertiary dyads, "opp." for opposite dyads;

  • while the dyads labels all come in the same colour (default is black), an additional circular layer has been added in order to visualise the labels and the colours of the primary emotions that define each dyad.

This last feature is particularly useful to give the user an immediate sense of the primary emotions involved in the formation of the dyad. Fig. 6 provides an example of the wheel produced when the user inputs a dict containing primary dyads instead of emotions. PyPlutchik automatically checks the kind of input and its coherence: specifically, the library raises an error if the input dictionary contains a mix of emotions from different kinds of dyads, as they cannot be displayed on the same plot. In Fig. 7 we show a representation of basic emotions, primary dyads, secondary dyads, tertiary dyads and opposite dyads, based on synthetic data. This representation easily conveys the full spectrum of emotions and their combinations according to Plutchik’s model, allowing for a quick but in-depth analysis of the emotions detected in a corpus.
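A primary-dyads input and the coherence check described above can be sketched as follows; the dyad names are the conventional ones associated with Plutchik's primary dyads (whether the library expects exactly these keys is our assumption), and the checking function is an illustration, not the library's actual code:

```python
# Conventional names of Plutchik's primary dyads (e.g. Joy + Trust = Love).
PRIMARY_DYADS = {'love', 'optimism', 'submission', 'awe',
                 'disapproval', 'remorse', 'contempt', 'aggressiveness'}
BASIC = {'joy', 'trust', 'fear', 'surprise',
         'sadness', 'disgust', 'anger', 'anticipation'}

def detect_kind(scores):
    """Return which wheel the input dict describes; a mix of levels is an error,
    since emotions and dyads cannot share one plot."""
    keys = set(scores)
    for kind, names in (('emotions', BASIC), ('primary dyads', PRIMARY_DYADS)):
        if keys <= names:
            return kind
    raise ValueError('input mixes emotions from different wheels')

# Synthetic primary-dyad scores, one per petal of the dyad wheel.
primary = {
    'love': 0.4, 'optimism': 0.3, 'submission': 0.1, 'awe': 0.1,
    'disapproval': 0.05, 'remorse': 0.05, 'contempt': 0.1, 'aggressiveness': 0.2
}
```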

Figure 6: Primary dyads’ wheel generated by the code on the right. Each entry in the Python dict is a numeric value.
Figure 7: Representation of emotions and primary, secondary, tertiary and opposite dyads. The data displayed is random.

5 Case Studies

We now showcase some useful examples of data visualisation using PyPlutchik. We argue that PyPlutchik is more suitable than any other graphical tool to narrate the stories of these examples, because it is the natural conversion of the original qualitative model into a quantitative counterpart, tailored to visually represent the occurrences of emotions and dyads in an annotated corpus.

5.1 Amazon office products reviews

As a first use case we exploit a dataset of product reviews on Amazon [38]. This dataset contains almost 142.8 million reviews spanning May 1996 - July 2014. Products are rated by customers on a 1-5 star scale, along with a textual review. Emotions in these textual reviews have been annotated using the Python library NRCLex [5], which checks the text against a lexicon of word-emotion associations; we make no claim of scientific accuracy for the results, as this example is meant only to showcase our visualisation layouts.

In Fig. 8 we plot the average emotion scores in a sample of reviews of office products, grouped by star rating. We can sense a trend: moving from left to right, i.e. from low-rated to high-rated products, we see the petals in the top half of the flower slowly growing in size at the expense of the bottom-half petals. The decreasing effect is particularly visible in Fear, Anger and Disgust.
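The grouping behind this kind of figure can be sketched end to end. Here a tiny hand-rolled word-emotion lexicon stands in for NRCLex, and the reviews are invented, so the numbers are purely illustrative:

```python
from collections import defaultdict

# Toy stand-in for the NRCLex word-emotion lexicon.
LEXICON = {
    'broken': 'anger', 'refund': 'anger', 'worried': 'fear',
    'useless': 'disgust', 'great': 'joy', 'love': 'joy',
    'reliable': 'trust', 'finally': 'anticipation',
}

# Invented (star rating, review text) pairs.
reviews = [
    (1, 'broken on arrival, useless, want a refund'),
    (2, 'worried it will break, refund requested'),
    (5, 'great product, love it, reliable'),
]

# Average, per star rating, the fraction of review words carrying each emotion.
totals = defaultdict(lambda: defaultdict(float))
counts = defaultdict(int)
for stars, text in reviews:
    words = text.replace(',', '').split()
    counts[stars] += 1
    for word in words:
        if word in LEXICON:
            totals[stars][LEXICON[word]] += 1 / len(words)

scores_by_rating = {
    stars: {emo: round(val / counts[stars], 3) for emo, val in emotions.items()}
    for stars, emotions in totals.items()
}
# Each per-rating dict (padded with zeros for absent emotions) can then be
# passed to PyPlutchik, one flower per rating, as a small multiple.
```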
This visualisation is effective in communicating the increasing satisfaction of the customers; nevertheless, the improvement is very gradual and can hardly be noticed by comparing adjacent ratings. As Fig. 9(a) shows, it is much more evident if we compare one-star-rated to five-star-rated product reviews. The selective presentation feature of our library (Fig. 9(b)) is a good way to enhance this result: it allows users to put emphasis on the desired emotions without losing sight of the others, which keep their size and shape but are overshadowed, deprived of their coloured fill.
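The aggregation behind Fig. 8 can be sketched in a few lines: per-emotion scores of reviews are grouped by star rating and averaged, producing one score dict per rating. The scores below are invented for illustration; in the paper they come from NRCLex annotations.

```python
# Sketch of the per-rating aggregation behind Fig. 8. Each resulting dict
# is the kind of input one would hand to PyPlutchik, one flower per rating.
from collections import defaultdict

reviews = [  # (stars, {emotion: score}) -- toy data
    (1, {"anger": 0.8, "joy": 0.1}),
    (1, {"anger": 0.6, "joy": 0.2}),
    (5, {"anger": 0.1, "joy": 0.9}),
    (5, {"anger": 0.1, "joy": 0.7}),
]

def average_by_rating(reviews):
    """Group review score dicts by star rating and average each emotion."""
    grouped = defaultdict(list)
    for stars, scores in reviews:
        grouped[stars].append(scores)
    return {
        stars: {emo: sum(s[emo] for s in group) / len(group)
                for emo in group[0]}
        for stars, group in grouped.items()
    }

flowers = average_by_rating(reviews)
print(flowers[1], flowers[5])
```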

Figure 8: Average emotion scores in a sample of textual reviews of office products on Amazon. Product ratings go from one star (worst) to five stars (best). On the left, emotions detected in negative reviews (one star); on the right, emotions detected in positive reviews (five stars). While positive emotions stay roughly the same, negative emotions such as Anger, Disgust and Fear substantially drop as the ratings get higher. Data from [38].
Figure 9: Focus-plus-context: the selective presentation feature of PyPlutchik allows users to put emphasis on particular emotions without losing sight of the others; we can compare different subgroups of the same Amazon corpus by placing our visualisations side by side and highlighting only the Anger, Disgust and Fear petals, to easily spot how these negative emotions are under-represented in 5-star reviews compared to 1-star reviews.

5.2 Emotions in IMDB movie synopses

Fig. 10 shows the emotions detected in the short synopses of the top 1000 movies on the popular website IMDB (Internet Movie Database). Data is an excerpt of only four genres (namely Romance, Biography, Mystery and Animation) taken from Kaggle [24], and emotions have again been annotated with the Python library NRCLex [5]. As in the previous case, both the dataset and the methodology are flawed for the task: for instance, a movie synopsis may summarise the main events or characters with detachment, and the library's lexicon may not suit the movie language domain. However, the data here is presented for visualisation purposes only, and is not intended as a contribution to the NLP area.
Romance shows a slight prominence of positive emotions over negative ones, especially over Disgust. Next to it, the Biography genre is immediately distinctive for its high Trust score, along with higher Fear, Sadness and Anger scores. While the high Trust reflects admiration for the subject of the biopic, the other scores are in line with Propp’s narration scheme [47], where the initial equilibrium is threatened by a menace the hero is called to resolve. A fortiori, the Mystery genre conveys even more Anger and Sadness than Biography, coupled with a higher sense of Anticipation and a very high score for Fear, as expected. Last, the Animation genre arouses many emotions, both positive and negative, with high levels of Joy, Fear, Anticipation and Surprise, as children’s animation is arguably meant to do. Plotted together, these four shapes are immediately distinct from each other, and they return an intuitive graphical representation of each genre’s peculiarities. Shapes are easily recognisable as positive or negative, bigger petals stand out, and petal sizes are easy to compare with the aid of the thin grid behind them.

Figure 10: Emotions in the synopses of the top 1000 movies in the IMDB database, divided by four genres. The shapes are immediately distinct from each other, and they return an intuitive graphical representation of each genre’s peculiarities.
Figure 11: Emotions in the synopses of the 20 most common movie genres in the IMDB database. Coordinates, grids and labels are not visible: this is an overall view of the corpus, meant to showcase general trends and to spot outliers that can be analysed at a later stage in a dedicated plot.

The data shown in Fig. 10 is an excerpt of the same IMDB dataset, which in full covers 21 genres. The whole dataset gives us the chance to show a small-multiple representation without visible coordinates, as described in Sect. 3: in Fig. 11 we plotted the 20 most common genres among the top 1000 movies, 5 per row. We hid the grid and the labels, leaving the flowers to speak for themselves. Data represented this way is not meant to be read with numerical exactness; instead, it serves as an overview of the corpus as a whole. Peculiarities, outliers and one-of-a-kind shapes catch the eye immediately, and they can be accurately scrutinised later with a dedicated plot that zooms into the details. For instance, the Film-Noir genre contains only a handful of movies, whose synopses are almost always annotated as emotion-heavy. The resulting shape is a clear outlier in this corpus, with extremely high scores on 5 of the 8 emotions. Thrillers and Action movies share a similar emotion distribution, while Music and Musical rank as the happiest.
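A small-multiples layout like Fig. 11 can be sketched with plain matplotlib: a 4x5 grid of subplots with axes hidden, one flower per genre. The commented-out `plutchik(..., ax=ax)` call and its `show_coordinates` parameter are assumed from the library's matplotlib-style API, not verified against it.

```python
# Sketch of a 4x5 small-multiples grid with coordinates and labels hidden,
# as in Fig. 11. Genre names are placeholders; the PyPlutchik call is
# left as a comment because its exact signature is an assumption here.
import matplotlib
matplotlib.use("Agg")  # headless backend, no display needed
import matplotlib.pyplot as plt

genres = [f"genre-{i}" for i in range(20)]  # placeholder names

fig, axes = plt.subplots(4, 5, figsize=(15, 12))
for ax, genre in zip(axes.flat, genres):
    ax.set_axis_off()              # hide coordinates, grid and labels
    ax.set_title(genre, fontsize=8)
    # plutchik(scores[genre], ax=ax, show_coordinates=False)  # assumed API

fig.savefig("imdb_small_multiples.png", bbox_inches="tight")
```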

5.3 Trump or Clinton?

In Fig. 12 and Fig. 13 we visualise the basic emotions and dyads found in tweets in favour of and against Donald Trump and Hillary Clinton, the principal candidates of the 2016 United States presidential election. Data is the training set released for a SemEval 2016 task, namely a corpus of tweets annotated with stances, sentiments and emotions [41]. Each candidate is represented in both plots on a different row, and each row displays five subplots: respectively basic emotions, primary dyads, secondary dyads, tertiary dyads and opposite dyads. Tweets supporting either Trump or Clinton present higher amounts of positive emotions (Fig. 12(i) and (vi)), namely Anticipation, Joy and Trust, and little to no amount of negative emotions, especially Sadness and Disgust. On the contrary, tweets critical of each candidate (Fig. 13(i) and (vi)) show high values of Anger, coupled with Disgust, probably in the form of disapproval.
There are also significant differences between the two candidates. Donald Trump collects higher levels of Trust and Anticipation from his supporters than Hillary Clinton does, possibly reflecting higher expectations from his electoral base. Users skeptical of Hillary Clinton show more Disgust towards her than Donald Trump’s opponents do towards him.

Figure 12: Tweets in favour of Donald Trump and Hillary Clinton from the 2016 Stance Detection task in SemEval. From left to right: basic emotions, primary dyads, secondary dyads, tertiary dyads and opposite dyads for both candidates (Donald Trump on the first row, Hillary Clinton on the second). Despite the high amounts of Anticipation, Joy and Trust for both candidates, which result in similar primary dyads, there is a significant spike in the secondary dyad Hope among Trump’s supporters that is not present among Clinton’s supporters.
Figure 13: Similarly to Figure 12, here we show the emotions captured in tweets against Donald Trump and Hillary Clinton from the 2016 Stance Detection task in SemEval. We see a clear prevalence of negative emotions, particularly Anger and Disgust. This combination is often expressed together, as can be seen from the primary dyads plots (ii and vii), where there is a spike in Contempt.

Besides basic emotions, PyPlutchik can display the distribution of dyads as well, as described in Section 4. Dyads allow for a deeper understanding of the data. We can see how the tweets against the presidential candidates in Fig. 13 are dominated by the negative basic emotion of Anger, with an important presence of Disgust and Anticipation (subplots (i) and (vi)); the dominant primary dyad is therefore the co-occurrence of Anger and Disgust (subplot (ii)), i.e. the primary dyad Contempt, but not Aggressiveness, the primary dyad formed by Anger and Anticipation: the latter rarely co-occurs with the other two, which means that expectations and contempt are two independent drives in such tweets. The other dyads grow relatively scarcer as we progress to the secondary and tertiary levels (subplots (iii)-(v)). The supporting tweets in Fig. 12 are characterised by positive emotions, both in the primary flower and in the dyads, the latter again being a reflection of the co-occurrence of the most popular primary emotions. Although Anticipation, Joy and Trust are present in different amounts, the primary dyads Optimism and Love occur in a comparable number of cases (subplot (ii)). Interestingly, the pro-Trump tweets show a remarkable quantity of Hope, the secondary dyad that combines Anticipation and Trust, suggesting that Trump’s supporters expressed towards him all three dominant basic emotions together more often than Clinton’s supporters did.

Generally speaking, we notice that not many dyads are expressed in the tweets. We can ascribe this to many factors: first and foremost, a dataset annotated only on primary emotions, and not also explicitly on the dyads, will naturally show fewer dyads, since their presence depends only on the casual co-occurrence of primary emotions. Still, this does not concern us, since our current purpose is to showcase the potential of PyPlutchik: in this regard, we note that we were immediately able to spot the unexpected presence of the secondary dyad Hope, which stood out among the others.
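The co-occurrence mechanism described above can be sketched directly: a dyad is counted whenever both of its component emotions appear in the same tweet's annotation. The tweet annotations and the (partial) dyad table below are toy data for illustration only.

```python
# Deriving dyad counts from per-tweet sets of annotated basic emotions:
# a dyad is present when both component emotions co-occur in one tweet.
DYADS = {  # a subset of Plutchik's dyads, for illustration
    "optimism": ("anticipation", "joy"),
    "love": ("joy", "trust"),
    "hope": ("anticipation", "trust"),     # secondary dyad, same mechanism
    "contempt": ("disgust", "anger"),
    "aggressiveness": ("anger", "anticipation"),
}

tweets = [  # invented annotations
    {"anticipation", "joy", "trust"},      # -> optimism, love, hope
    {"anger", "disgust"},                  # -> contempt
    {"anger"},                             # no dyad
]

def dyad_counts(tweets, dyads):
    """Count, for each dyad, the tweets where both components co-occur."""
    return {name: sum(1 for t in tweets if set(pair) <= t)
            for name, pair in dyads.items()}

print(dyad_counts(tweets, DYADS))
```

This also makes the point in the text concrete: Anger is the most frequent emotion in the toy data, yet Aggressiveness never fires, because Anger and Anticipation never co-occur.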

6 Conclusion

The increasing production of studies that explore the emotion patterns of human-produced texts often requires dedicated data visualisation techniques. This is particularly true for studies that label emotions according to a model such as Plutchik’s, which is heavily based on the principles of semantic proximity and opposition between pairs of emotions. Indeed, the so-called Plutchik’s wheel is inherent to the definition of the model itself, as it provides the perfect visual metaphor for this theoretical framework.
Nonetheless, a check of the most recent literature makes it evident that this aspect is too often neglected, with the visual representation of results entrusted to standard, sub-optimal solutions such as bar charts, tables and pie charts. We believe that this choice does justice neither to those studies nor to Plutchik’s model itself, and that it is mainly due to the lack of an easy, plug-and-play tool for creating adequate visuals.
With this in mind we introduced PyPlutchik, a Python library for the correct visualisation of Plutchik’s emotion traces in texts and corpora. PyPlutchik fills the gap left by the absence of a tool specifically designed for representing Plutchik’s model of emotions. Most importantly, it goes beyond the merely qualitative display of Plutchik’s wheel, which lacks a quantitative dimension, by also allowing the user to display the quantities of the emotions detected in a text or corpus. Moreover, PyPlutchik goes the extra mile by implementing a new way of visualising primary, secondary, tertiary and opposite dyads, whose presence is a distinctive and essential feature of Plutchik’s model of emotions.
This library is built on top of the popular Python library matplotlib, with its APIs written in a matplotlib style. PyPlutchik is designed for easy plug and play of JSON files, and it is entirely scriptable. It supports single plots, pair-wise and group-wise side-by-side comparisons, and small-multiple representations. The original Plutchik’s wheel of emotions arranges the 8 basic emotions according to the principles of proximity and opposition; the same principles are respected in PyPlutchik’s layout.
As we pointed out, there are currently thousands of empirical works on Plutchik’s model of emotions, and many of them need a correct representation of the emotions detected or annotated in their data. It is our hope that our library will help the scientific community by providing an alternative to the sub-optimal representations of Plutchik’s emotions currently found in the literature.


  • [1] M. Abdul-Mageed and L. Ungar (2017-07) EmoNet: fine-grained emotion detection with gated recurrent neural networks. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), Vancouver, Canada, pp. 718–728. External Links: Link, Document Cited by: 1st item.
  • [2] S. Abrilian, L. Devillers, and J. Martin (2006-01) Annotation of emotions in real-life video interviews: variability between coders. Cited by: §2.
  • [3] S. Abrilian, L. Devillers, S. Buisine, and J. Martin (2005-01) EmoTV1: annotation of real-life emotions for the specifications of multimodal affective interfaces. Cited by: §2.
  • [4] N. Bader, O. Mokryn, and J. Lanir (2017) Exploring emotions in online movie reviews for online browsing. In Proceedings of the 22nd International Conference on Intelligent User Interfaces Companion, IUI ’17 Companion, New York, NY, USA, pp. 35–38. External Links: ISBN 9781450348935, Link, Document Cited by: 7th item.
  • [5] M. M. Bailey (2019)(Website) External Links: Link Cited by: §5.1, §5.2.
  • [6] V. Balakrishnan and W. Kaur (2019) String-based multinomial naïve bayes for emotion detection among facebook diabetes community. Procedia Computer Science 159, pp. 30 – 37. Note: Knowledge-Based and Intelligent Information & Engineering Systems: Proceedings of the 23rd International Conference KES2019 External Links: ISSN 1877-0509, Document, Link Cited by: 2nd item.
  • [7] P. Balouchian and H. Foroosh (2018-10) Context-sensitive single-modality image emotion analysis: a unified architecture from dataset construction to cnn classification. pp. 1932–1936. External Links: Document Cited by: 2nd item.
  • [8] A. J. Bradley, M. El-Assady, K. Coles, E. Alexander, M. Chen, C. Collins, S. Jänicke, and D. J. Wrisley (2018) Visualization and the digital humanities. IEEE computer graphics and applications 38 (6), pp. 26–38. Cited by: §2.
  • [9] M. Burke and M. Develin (2016) Once more with feeling: supportive responses to social sharing on facebook. In Proceedings of the 19th ACM Conference on Computer-Supported Cooperative Work & Social Computing, CSCW ’16, New York, NY, USA, pp. 1462–1474. External Links: ISBN 9781450335928, Link, Document Cited by: §2.
  • [10] L. J. Caluza (2017-12) Deciphering west philippine sea: a plutchik and vader algorithm sentiment analysis. Indian Journal of Science and Technology 11, pp. 1–12. External Links: Document Cited by: 2nd item.
  • [11] E. Cambria, A. Livingstone, and A. Hussain (2012) The hourglass of emotions. In Cognitive behavioural systems, pp. 144–157. Cited by: §2.
  • [12] A. T. Capozzi, V. Patti, G. Ruffo, and C. Bosco (2018) A data viz platform as a support to study, analyze and understand the hate speech phenomenon. In Proceedings of the 2nd International Conference on Web Studies, pp. 28–35. Cited by: §2.
  • [13] R. C. Balabantaray, M. Mohd, and N. Sharma (2012-09) Multi-class twitter emotion classification: a new approach. International Journal of Applied Information Systems 4, pp. 48–53. External Links: Document Cited by: §2.
  • [14] J. Chamberlain, U. Kruschwitz, and O. Hoeber (2018) Scalable visualisation of sentiment and stance. In Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018), Cited by: §2.
  • [15] P. Chesi, M. Marini, G. Mancardi, F. Patti, L. Alivernini, A. Bisecco, G. Borriello, S. Bucello, F. Caleri, P. Cavalla, E. Cocco, C. Cordioli, M. Giuseppe, R. Fantozzi, M. Gattuso, F. Granella, M. Liguori, L. Locatelli, A. Lugaresi, and P. Valentino (2020-03) Listening to the neurological teams for multiple sclerosis: the smart project. Neurological Sciences 41. External Links: Document Cited by: 2nd item.
  • [16] A. Cockburn, A. Karlson, and B. B. Bederson (2009) A review of overview+ detail, zooming, and focus+ context interfaces. ACM Computing Surveys (CSUR) 41 (1), pp. 1–31. Cited by: §3.
  • [17] A. S. Cowen and D. Keltner (2017) Self-report captures 27 distinct categories of emotion bridged by continuous gradients. Proceedings of the National Academy of Sciences 114 (38), pp. E7900–E7909. Cited by: §2.
  • [18] W. Cui, S. Liu, L. Tan, C. Shi, Y. Song, Z. Gao, H. Qu, and X. Tong (2011) Textflow: towards better understanding of evolving topics in text. IEEE transactions on visualization and computer graphics 17 (12), pp. 2412–2421. Cited by: §2.
  • [19] W. Cui, S. Liu, Z. Wu, and H. Wei (2014) How hierarchical topics evolve in large text corpora. IEEE transactions on visualization and computer graphics 20 (12), pp. 2281–2290. Cited by: §2.
  • [20] W. Dou and S. Liu (2016) Topic-and time-oriented visual text analysis. IEEE computer graphics and applications 36 (4), pp. 8–13. Cited by: §2.
  • [21] P. Ekman (1992) An argument for basic emotions. Cognition & emotion 6 (3-4), pp. 169–200. Cited by: §1, §2.
  • [22] P. Ekman (1999) Basic emotions. Handbook of cognition and emotion 98 (45-60), pp. 16. Cited by: §2.
  • [23] G. W. Furnas (1986) Generalized fisheye views. Acm Sigchi Bulletin 17 (4), pp. 16–23. Cited by: §3.
  • [24] O. Hany (2021)(Website) External Links: Link Cited by: §5.2.
  • [25] J. D. Hunter (2007) Matplotlib: a 2d graphics environment. Computing in Science & Engineering 9 (3), pp. 90–95. External Links: Document Cited by: §1.
  • [26] C. E. Izard, D. Z. Libero, P. Putnam, and O. M. Haynes (1993) Stability of emotion experiences and their relations to traits of personality.. Journal of personality and social psychology 64 (5), pp. 847. Cited by: §1, §2.
  • [27] W. James (2007) The principles of psychology. Vol. 1, Cosimo, Inc.. Cited by: §1, §2.
  • [28] J. Jenkins (2020-05) Detecting emotional ambiguity in text. 4, pp. 55–57. External Links: Document Cited by: 7th item.
  • [29] N. Kagita (2018) Role of emotions in the fmcg branding and their purchase intentions. Vidwat 11 (1), pp. 24–28. Cited by: 1st item, 3rd item.
  • [30] E. Kim and R. Klinger (2018-08) Who feels what and why? annotation of a literature corpus with semantic roles of emotions. In Proceedings of the 27th International Conference on Computational Linguistics, Santa Fe, New Mexico, USA, pp. 1345–1359. External Links: Link Cited by: §2.
  • [31] K. Kucher, C. Paradis, and A. Kerren (2018) The state of the art in sentiment visualization. In Computer Graphics Forum, Vol. 37, pp. 71–96. Cited by: §2.
  • [32] K. Kukk (2019) Correlation between emotional tweets and stock prices. Cited by: 3rd item.
  • [33] R. S. Lazarus and B. N. Lazarus (1994) Passion and reason: making sense of our emotions. Oxford University Press, USA. Cited by: §1, §2.
  • [34] C. Liu, M. Osama, and A. de Andrade (2019) DENS: A dataset for multi-class emotion analysis. CoRR abs/1910.11769. External Links: Link, 1910.11769 Cited by: 1st item.
  • [35] S. Liu, J. Yin, X. Wang, W. Cui, K. Cao, and J. Pei (2015) Online visual analytics of text streams. IEEE transactions on visualization and computer graphics 22 (11), pp. 2451–2466. Cited by: §2.
  • [36] S. Liu, M. X. Zhou, S. Pan, Y. Song, W. Qian, W. Cai, and X. Lian (2012) Tiara: interactive, topic-based visual text summarization and analysis. ACM Transactions on Intelligent Systems and Technology (TIST) 3 (2), pp. 1–28. Cited by: §2.
  • [37] V. Lombardo, R. Damiano, C. Battaglino, and A. Pizzo (2015-11) Automatic annotation of characters’ emotions in stories. pp. 117–129. External Links: ISBN 978-3-319-27035-7, Document Cited by: §2.
  • [38] J. McAuley, C. Targett, Q. Shi, and A. Van Den Hengel (2015) Image-based recommendations on styles and substitutes. In Proceedings of the 38th international ACM SIGIR conference on research and development in information retrieval, pp. 43–52. Cited by: Figure 8, §5.1.
  • [39] A. Mehrabian (1980) Basic dimensions for a general psychological theory: implications for personality, social, environmental, and developmental studies. Vol. 2, Oelgeschlager, Gunn & Hain Cambridge, MA. Cited by: §1.
  • [40] I. Meirelles (2013) Design for information: an introduction to the histories, theories, and best practices behind effective information visualizations. Rockport publishers. Cited by: §2.
  • [41] S. M. Mohammad, S. Kiritchenko, P. Sobhani, X. Zhu, and C. Cherry (2016-06) Semeval-2016 task 6: detecting stance in tweets. In Proceedings of the International Workshop on Semantic Evaluation, SemEval ’16, San Diego, California. Cited by: §5.3.
  • [42] M. A. Mohsin and A. Beltiukov (2019) Summarizing emotions from text using plutchik’s wheel of emotions. Cited by: 2nd item.
  • [43] E. Öhman, M. Pàmies, K. Kajava, and J. Tiedemann (2020) XED: a multilingual dataset for sentiment analysis and emotion detection. External Links: 2011.01612 Cited by: 1st item.
  • [44] W. G. Parrott (2001) Emotions in social psychology: essential readings. psychology press. Cited by: §2.
  • [45] R. Plutchik (2001) The nature of emotions: human emotions have deep evolutionary roots, a fact that may explain their complexity and provide tools for clinical practice. American scientist 89 (4), pp. 344–350. Cited by: §1.
  • [46] D. Preoţiuc-Pietro, H. Schwartz, G. Park, J. Eichstaedt, M. Kern, L. Ungar, and E. Shulman (2016-01) Modelling valence and arousal in facebook posts. pp. 9–15. External Links: Document Cited by: §2.
  • [47] V. Propp (2010) Morphology of the folktale. Vol. 9, University of Texas Press. Cited by: §5.2.
  • [48] A. Rakhmetullina, D. Trautmann, and G. Groh (2018) Distant supervision for emotion classification task using emoji 2 emotion. Cited by: 1st item.
  • [49] G. Ranco, D. Aleksovski, G. Caldarelli, M. Grčar, and I. Mozetič (2015-09) The effects of twitter sentiment on stock price returns. PloS one 10, pp. e0138441. External Links: Document Cited by: 3rd item.
  • [50] F. Rangel, D. I. H. Farías, P. Rosso, and A. Reyes (2014) Emotions and irony per gender in facebook. Cited by: §2.
  • [51] H. Rashkin, A. Bosselut, M. Sap, K. Knight, and Y. Choi (2018-07) Modeling naive psychology of characters in simple commonsense stories. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), Melbourne, Australia, pp. 2289–2299. External Links: Link, Document Cited by: 2nd item.
  • [52] K. Roberts, M. Roach, J. Johnson, J. Guthrie, and S. Harabagiu (2012-01) EmpaTweet: annotating and detecting emotions on twitter. Proc. Language Resources and Evaluation Conf. Cited by: §2.
  • [53] J. A. Russell (1980) A circumplex model of affect.. Journal of personality and social psychology 39 (6), pp. 1161. Cited by: §1.
  • [54] R. Sawhney, H. Joshi, S. Gandhi, and R. R. Shah (2020-11) A time-aware transformer based model for suicide ideation detection on social media. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), Online, pp. 7685–7697. External Links: Link, Document Cited by: 4th item.
  • [55] A. Scarantino and P. Griffiths (2011-09) Don’t give up on basic emotions. Emotion Review 3, pp. 444–454. External Links: Document Cited by: §2.
  • [56] R. Sharma, D. Pandey, S. Zith, and S. Babu (2020-08) Sentiment analysis of facebook & twitter using soft computing. pp. 2457–1016. Cited by: 3rd item.
  • [57] R. Sprugnoli (2020) MultiEmotions-it: a new dataset for opinion polarity and emotion analysis for italian. In 7th Italian Conference on Computational Linguistics, CLiC-it 2020, pp. 402–408. Cited by: 1st item.
  • [58] M. Stella, V. Restocchi, and S. De Deyne (2020) #lockdown: network-enhanced emotional profiling in the time of covid-19. Big Data and Cognitive Computing 4 (2). External Links: Link, ISSN 2504-2289, Document Cited by: 7th item.
  • [59] M. Stella, M. S. Vitevitch, and F. Botta (2021) Cognitive networks identify the content of english and italian popular posts about covid-19 vaccines: anticipation, logistics, conspiracy and loss of trust. External Links: 2103.15909 Cited by: 7th item, §2.
  • [60] Y. Susanto, A. G. Livingstone, B. C. Ng, and E. Cambria (2020) The hourglass model revisited. IEEE Intelligent Systems 35 (5), pp. 96–102. Cited by: §2.
  • [61] H. Tanabe, T. Ogawa, T. Kobayashi, and Y. Hayashi (2020-12) Exploiting narrative context and a priori knowledge of categories in textual emotion classification. In Proceedings of the 28th International Conference on Computational Linguistics, Barcelona, Spain (Online), pp. 5535–5540. External Links: Link, Document Cited by: 4th item.
  • [62] J. K. D. Treceñe (2019) Delving the sentiments to track emotions in gender issues: a plutchik-based sentiment analysis in students’ learning diaries. International Journal of Scientific & Technology Research 8, pp. 1134–1139. Cited by: 6th item.
  • [63] T. Ulusoy, K. T. Danyluk, and W. J. Willett (2018) Beyond the physical: examining scale and annotation in virtual reality visualizations. Technical report Department of Computer Science, University of Calgary. Cited by: 2nd item.
  • [64] F. van Ham, M. Wattenberg, and F. B. Viégas (2009) Mapping text with phrase nets. IEEE Trans. Vis. Comput. Graph. 15 (6), pp. 1169–1176. External Links: Link, Document Cited by: §2.
  • [65] L. Vidrascu and L. Devillers (2005-01) Annotation and detection of blended emotions in real human-human dialogs recorded in a call center. Vol. 0, pp. 944–947. External Links: Document Cited by: §2.
  • [66] F. B. Viégas, S. Golder, and J. Donath (2006) Visualizing email content: portraying relationships from conversational histories. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, CHI ’06, New York, NY, USA, pp. 979–988. External Links: ISBN 1-59593-372-7, Link, Document Cited by: §2.
  • [67] F. B. Viegas, M. Wattenberg, F. van Ham, J. Kriss, and M. McKeon (2007-11) ManyEyes: a site for visualization at internet scale. IEEE Transactions on Visualization and Computer Graphics 13 (6), pp. 1121–1128. External Links: ISSN 1077-2626, Link, Document Cited by: §2.
  • [68] W. Wang, L. Chen, K. Thirunarayan, and A. P. Sheth (2012) Harnessing twitter "big data" for automatic emotion identification. In 2012 International Conference on Privacy, Security, Risk and Trust and 2012 International Confernece on Social Computing, Vol. , pp. 587–592. External Links: Document Cited by: §2.
  • [69] D. Watson and A. Tellegen (1985) Toward a consensual structure of mood.. Psychological bulletin 98 (2), pp. 219. Cited by: §1.
  • [70] M. Wattenberg and F. B. Viégas (2008-11) The word tree, an interactive visual concordance. IEEE Transactions on Visualization and Computer Graphics 14 (6), pp. 1221–1228. External Links: ISSN 1077-2626, Link, Document Cited by: §2.
  • [71] Y. Wu, N. Cao, D. Gotz, Y. Tan, and D. A. Keim (2016) A survey on visual analytics of social media data. IEEE Transactions on Multimedia 18 (11), pp. 2135–2148. Cited by: §2.
  • [72] S. F. Yilmaz, E. B. Kaynak, A. Koç, H. Dibeklioğlu, and S. S. Kozat (2020) Multi-label sentiment analysis on 100 languages with dynamic weighting for label imbalance. External Links: 2008.11573 Cited by: 5th item.
  • [73] H. Yu and B. Bae (2018) Emotion and sentiment analysis from a film script: a case study. Cited by: 7th item.
  • [74] O. Zhurakovskaya, L. Steinkamp, K. M. Tymann, and C. Gips An emotion detection tool composed of established techniques. Cited by: 3rd item.