Extracting human emotions at different places based on facial expressions and spatial clustering analysis

05/06/2019 ∙ by Yuhao Kang, et al.

The emergence of big data enables us to evaluate the various human emotions at places from a statistical perspective by applying affective computing. In this study, a novel framework for extracting human emotions from large-scale georeferenced photos at different places is proposed. After the construction of places based on spatial clustering of user-generated footprints collected from social media websites, online cognitive services are utilized to extract human emotions from facial expressions using state-of-the-art computer vision techniques. Two happiness metrics are then defined for measuring human emotions at different places. To validate the feasibility of the framework, we take 80 tourist attractions around the world as an example and generate a happiness ranking list of places based on human emotions calculated over 2 million faces detected in over 6 million photos. Different kinds of geographical contexts are taken into consideration to examine the relationship between human emotions and environmental factors. Results show that much of the emotional variation at different places can be explained by a few factors such as openness. The research may offer insights on integrating human emotions to enrich the understanding of sense of place in geography and in place-based GIS.



1 Introduction

Place, which plays a central role in daily life not only as a location reference but also in reflecting the way humans perceive, experience, and understand the environment, is a key issue in geography and GIScience (Tuan, 1977; Goodchild, 2011; Winter and Freksa, 2012; Scheider and Janowicz, 2014; Goodchild, 2015; McKenzie et al., 2015; Gao et al., 2017a, b; Blaschke et al., 2018; Zhang et al., 2018; Purves et al., 2019; Wu et al., 2019). Agnew (2011) proposed three aspects of place: location, locale, and the sense of place, which refers to the experiences of people and their perceptions and conceptualizations of a place. Place has also been comprehensively depicted as the context and affordance of various human activities, which are linked to the memories and emotions of individuals (Jordan et al., 1998; Kabachnik, 2012; Scheider and Janowicz, 2014; Merschdorf and Blaschke, 2018). Human emotions, which are innately stored in human neural systems (Wierzbicka, 1986; Izard, 2013), provide bridges linking the surrounding environments and human perceptions. On one hand, emotions tint human experiences (Tuan, 1977) and show how places are psychologically felt by people (Davidson and Milligan, 2004). On the other hand, emotions have been shown to be connected with surrounding entities, including living organisms (Wilson, 1984), the natural environment (Capaldi et al., 2014), and the cultural environment (Mesquita and Markus, 2004). Therefore, understanding human emotional responses to the environment is important for human behavior analysis towards the sense of place (Grossman, 1977; Rentfrow and Jokela, 2016; Smith and Bondi, 2016).

Many early studies used questionnaires to investigate the emotions of people in different environmental contexts, which cost substantial human resources and lack timeliness (Golder and Macy, 2011). The emergence of big data and the development of information and communication technology (ICT) and artificial intelligence (AI) provide advanced methodologies and opportunities to solve the aforementioned problems in social sensing (Liu et al., 2015; Ye et al., 2016; Janowicz et al., 2019). Affective computing, as an interdisciplinary domain spanning computer science, psychology, and cognitive science, was proposed by Picard et al. (1995) with a focus on investigating the interactions between computer sensors and human emotions. Every day, large volumes of geo-tagged user-generated content (UGC) are uploaded to social networking websites such as Facebook and Twitter, photo-sharing sites such as Flickr and Instagram, and the video-sharing platform YouTube (O’Connor, 2008), which can reflect human perceptions of environments as sensors and contribute to volunteered geographic information (VGI) (Goodchild, 2007). Affective memories are produced and archived in these technology-mediated platforms (Elwood and Mitchell, 2015). In such UGC, people express their emotions actively through tones of voice (Schuller et al., 2009), facial expressions (Ekman, 1993), body gestures, and written forms (Bollen et al., 2011), or their emotions are captured passively by various types of sensors. State-of-the-art AI technologies also make it possible to collect human emotions from massive data sources and have revolutionized the research of human emotions. Several existing studies have tried to connect geography and collective emotions from such UGC using advanced technologies and obtained promising results (Mitchell et al., 2013). However, attention to the role of place as locale in human emotions is still absent (Smith and Bondi, 2016). In addition, most existing research used natural language processing (NLP) to extract human emotions from textual corpora (Strapparava et al., 2004; Cambria et al., 2012). Such methods may face challenges such as multi-cultural differences in language, which may make them unsuitable for global-scale research (more discussions in Section 2 and Section 5). In comparison, the facial expression of emotions is said to be universal across countries and periods, and can capture human emotions in real time, which may make it suitable for a place-based emotion extraction framework at a global scale.

In this research, our goal is to investigate human emotions in places and explore potentially influential environmental factors. We term the studied phenomenon Place Emotion, a special case of general affective computing in geography, i.e., examining human emotions at different places with different affordances (including the environment and human activities). The research questions are as follows: (1) How can human emotion scores be extracted and computed from large numbers of georeferenced photos taken in different places? (2) What is the relationship between human emotions and environmental factors at places? To answer these questions, a general framework utilizing UGC to compute human emotion scores at places based on facial-expression recognition and spatial clustering techniques is proposed. However, since there are many types of places with a variety of environmental factors, we select only one specific type of place (i.e., tourist attraction sites) as a case study to test the feasibility of our proposed workflow.

Tourist attractions, which attract “non-local” travelers for sightseeing, activities, and experiences (Leiper, 1990; Lew, 1987), are a popular type of place (Jones et al., 2008) and are located across the world, which makes them suitable as a case study for global-scale research. In the past decades, with the growth of the economy and the development of modern transportation, tourism has experienced continued growth and deepening diversification to become one of the fastest growing economic sectors in the world (Ashley et al., 2007). For a tourist, the choice of places to visit is the first step in planning a trip (Bieger and Laesser, 2004; Sun et al., 2018), while the options are often numerous. When retrieving information on tourist sites, a fair and comprehensive ranking list of tourist attractions is often useful. However, existing ranking lists rely on the environmental (Amelung et al., 2007) and socio-economic (Bojic et al., 2016; Chon, 1991) aspects of the tourist sites. These factors indeed influence travel flows, but in an objective way; the perceptions and feelings of tourists are often ignored. A ranking list based on human emotions might provide different insights reflecting human-oriented preferences. Additionally, happiness is one of the most common basic emotions (Ekman and Davidson, 1994; Eimer et al., 2003; Izard, 2007). Therefore, a ranking list of the happiest tourist sites in the world is created as an outcome of the affective computing at each site.

To this end, this study presents a novel framework to measure human emotions at places from facial expressions and to explore factors that influence the degree of happiness at different places. Tourist sites are taken as a specific type of place for the experiment. The contributions of the study are three-fold. (1) We propose a novel approach for extracting and characterizing the average happiness score at each place using computer vision and spatial analysis techniques. (2) We explore the relationship between different kinds of environmental contexts and the degree of happiness extracted from human facial expressions. (3) We create a ranking list of the happiest tourist sites based on crowdsourced human emotions rather than objective indices, and provide new insights on integrating human emotions to enrich the understanding of sense of place in geography and in place-based GIS.

The remainder of this paper is organized as follows. First, in Section 2 “Related Work”, we review the literature on place-emotion studies. In Section 3 “Methodology”, we present the methodological framework and explain our computational procedures. Then, in Section 4 “Experiments and Results”, we test the framework with a case study of human emotions at 80 worldwide tourist attractions. We discuss the implications of our image-based method and compare it with text-based studies in Section 5 “Discussion”. Finally, we conclude this work and present our vision for future research in Section 6 “Conclusion and Future Work”.

2 Related Work

There are two categories of affective computing. One concerns several instinctive basic emotions such as happiness, sadness, and anger (Ekman and Davidson, 1994). The other detects the polarity of sentiments (positive, neutral, and negative expressions), which are organized feelings and mental attitudes (Pang et al., 2008). Unless specifically clarified, we use the general term “emotion” to represent both categories interchangeably in this paper. Both emotion and sentiment studies enable us to understand human perceptions of the society and the environment (Zeng et al., 2009). The exploration and understanding of human emotions and sentiments have attracted considerable interest from psychology (Ekman, 1993; Berman et al., 2012; Svoray et al., 2018), biology (Darwin and Prodger, 1998), computer science (Lisetti, 1998), geography (Davidson and Milligan, 2004; Mitchell et al., 2013; Svoray et al., 2018; Hu et al., 2019), and public health (Zheng et al., 2019), to name a few.

Emotion collection methods have evolved over time. Traditionally, scholars from the social sciences often use questionnaires and self-reports to investigate the emotions of people in different environmental contexts (Niedenthal et al., 2018). Several rankings of human happiness have been published in recent years, including the World Happiness Report released by the United Nations Sustainable Development Solutions Network (http://worldhappiness.report), which ranks the happiness of countries’ citizens by investigating socio-economic indices. The Measuring National Well-being Programme, released by the Office for National Statistics, UK, monitors the well-being of citizens by producing assessment measures of the nation (https://www.ons.gov.uk/peoplepopulationandcommunity/wellbeing/articles/measuringnationalwellbeing/qualityoflifeintheuk2018). The Gross National Happiness index is used in guiding the government of Bhutan with aspects of living standards, health, education, etc. And the Satisfaction With Life Scale measures the life satisfaction component of subjective well-being (http://www.midss.org/content/satisfaction-life-scale-swl). However, these methods encounter some challenges despite their widespread usage in psychological science. For example, they cost substantial human resources and lack timeliness (Golder and Macy, 2011), and results relying on questionnaires may be constrained by the limits of self-knowledge and the psychological influence of informed consent (Baumeister et al., 2007).

With the emergence of affective computing technologies, more efficient ways of detecting human emotions are used. Numerous studies on affective computing have been conducted with great success, especially using NLP methods to extract emotions from texts and explain them from a geographic perspective. For example, Mitchell et al. (2013) estimated human happiness at the state level in the United States and explored the impact of socioeconomic attributes on human moods. Ballatore and Adams (2015) utilized a corpus of about 100,000 travel blogs to extract the emotional structure (including joy, anger, fear, sadness, etc.) of different place types. Bertrand et al. (2013) generated a sentiment map of New York City via extraction of emotions from tweet data. Zhen et al. (2018) calculated human emotion scores using Weibo tweet data and explored the spatial distribution of sentiments in Nanjing. Zheng et al. (2019) demonstrated that high levels of air pollution (e.g., PM 2.5) may contribute to the urban population’s reported low level of happiness in social media, based on analytics of over 210 million geotagged tweets on Weibo. Hu et al. (2019) presented a semantic-specific sentiment analysis of online neighborhood textual reviews for understanding the perceptions of people toward their living environments.

Aside from these successes, text-based measurements of emotions may encounter some challenges. One problem is that texts are often recorded after events, meaning the emotions expressed are not captured in real time but after a period of transition. This buffering time period may benefit the user who expresses emotions, because during a calm-down period the user may adopt more dispassionate linguistic expressions to maintain a stable social identity (Coleman and Williams, 2013). Another challenge in extracting emotions from texts is the multi-lingual environment. Different languages vary in the words and syntax used to express emotions. Most emotion extraction models are based on the words or the syntactic and semantic structures of sentences, which are unique to each language (Shaheen et al., 2014). So far, no existing method standardizes the emotional scores computed from different language models. Therefore, affective computing based on texts has been limited to analyzing materials in one language at a time, and text-based affective computing faces difficulties with multi-lingual problems.

In comparison, image-based approaches (Zhang et al., 2018), especially facial-expression-based emotion extraction methods, have improved greatly in recent years owing to the emergence of deep convolutional neural networks (Yu and Zhang, 2015), which even perform better than humans in face-recognition benchmark testing (Wang and Deng, 2018). Svoray et al. (2018) analyzed Flickr photos and found a positive relationship between human facial expressions of happiness and exposure to nature, as characterized by urban density, green vegetation, and proximity to water bodies in the city of Boston. By extracting and identifying key points from face images based on facial activities and muscles, machine learning models can learn the visual patterns of faces according to emotional labels (Calvo and D’Mello, 2010), so the emotions of faces can be extracted. Each culture has its own verbal language, and emotion has its own language of facial expressions. The relationship between emotions and facial expressions has been extensively explored. Levenson et al. (1990) pointed out that subjective emotions have significant connections to facial activities, which provides the foundational theory for facial-expression-based affective computing. Facial-expression-based emotion recognition methods have several advantages. First, facial expressions are both universal and culturally specific (Matsumoto, 1991). Though connections between emotions and cultures vary (Cohn, 2007), strong evidence shows that there is a pan-cultural element in facial expressions of emotion (Ekman and Keltner, 1970). People from ancient times to the present, from all over the world, and even our primate relatives hold similar basic facial expressions, especially smiling and laughter (Preuschoft, 2000; Parr and Waller, 2006). This indicates that basic emotional expressions are universal among humans, and facial-expression-based emotion extraction methods are thus suitable for global-scale issues, especially for solving the multi-lingual problem. In fact, some existing research has explored the worldwide expression of emotions based on facial expressions in photos (Kang et al., 2018), which shows the universal compatibility of such methods. In addition, facial expressions are produced spontaneously when emotions are elicited (Berenbaum and Rotter, 1992). By recording and analyzing facial expressions, researchers can track emotions as they are formed. As advanced computer-vision systems and algorithms mature, facial expressions as well as facial muscle actions can be recognized and computed with quantitative scores (probabilities) of recognized emotions (Ding et al., 2017; Kim et al., 2016; Zeng et al., 2009).
As a result, facial-expression-based research is spreading in affective computing. For instance, Kang et al. (2017) examined the emotions expressed by users in Manhattan, New York City, and compared human emotion fluctuations with stock market movements to find the relationship between the two. Abdullah et al. (2015) used images from Twitter to calculate emotions from facial expressions and compared them with socio-economic attributes. Singh et al. (2017) analyzed smiles and diversity via social media photos and pointed out that people smile more in diverse company.

In sum, considering that emotions can be recorded in real time and are universal in multi-lingual environments, facial expressions might be more suitable for place-based emotion extraction at a global scale, as places are located around the world, affording different groups of people and various kinds of activities. To the best of our knowledge, our research is among the pioneering studies that utilize state-of-the-art facial expression recognition techniques and large-scale georeferenced photos to explore human emotions at different places at the global scale.

Figure 1: The workflow of this research.

3 Methodology

3.1 Framework

As shown in Figure 1, the framework for extracting and measuring human emotions at different places consists of four steps. First, large-scale georeferenced crowdsourced photos from social media are collected and stored on the data server. Several geographical and environmental attributes (e.g., proximity to water bodies, openness, landscape type) of each place are also retrieved and recorded. Second, the footprints of “places” in our study are generated using an area of interest (AOI) extraction approach based on the spatial density of photos (Li and Goodchild, 2012; Hu et al., 2015). Then, with state-of-the-art cognitive recognition methods based on computer vision technologies (e.g., object detection and localization), human emotions are extracted and measured via facial expressions detected in the social media photos. To examine whether the results of affective computing are robust, we also implement sensitivity tests to check the concordance of results under varying algorithm parameter settings.

After the calculation of human emotions at different places, it is necessary to explore which environmental factors influence the expression of human emotions. Correlation analysis and multiple linear regression models are utilized to explore the relationship between human emotions and environmental factors.

3.2 Data Preparation

Two datasets are used in this research. One comprises the places and their geographic attributes, used for exploring the relationship between human emotions and environments. The other comprises georeferenced social media photos for affective computing, collected from the Flickr website based on the coordinates of place names.

In many geographic information systems and digital gazetteers, places are often represented as points of interest (POIs), although places have footprints that vary by type (e.g., points, lines, or polygons) (Goodchild and Hill, 2008). Based on the place names, the coordinates of the place centers are harvested from the Google Maps Places Application Programming Interface (API) (https://developers.google.com/places/web-service/intro). A list of geographic attributes and environmental factors is recorded at each place (see Section 4.1 for more details).

Photos taken at different places are obtained from the Yahoo Flickr platform. Flickr is a publicly available social media platform where users can upload and share their photos, and it is one of the most commonly cited websites of the Web 2.0 era (Cox, 2008). Options for geo-tagging photos are also provided on the website, as more and more GPS chips are embedded in smartphones and cameras. Time and geographical information are recorded automatically when photos are saved from location-aware devices. In addition, users can drag their photos onto a map and input their locations for geo-tagging upon uploading. Therefore, each photo can be positioned on the map. For most photos, the locations are labeled correctly, and data uncertainty (e.g., incorrect photo locations) can be reduced by the place-construction process introduced in Section 3.3.

Flickr’s API (https://www.flickr.com/services/api/) allows developers and researchers to collect a huge amount of data from the platform. Public geo-tagged photos, with information including user ID, photo ID, latitude, longitude, tag text, and time stamp, are retrieved and recorded within a certain distance of the center point of each place, where the center-point coordinates are retrieved from the Google Places API. Each photo is saved at its original resolution along with a link to its original URL. All the information is stored in a database for data manipulation.
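As a sketch of this retrieval step (assuming a valid Flickr API key; the key and coordinates below are illustrative placeholders), the query parameters for the flickr.photos.search method can be assembled as follows:

```python
# Sketch of harvesting geo-tagged photo metadata via the Flickr REST API
# (flickr.photos.search). The API key and place coordinates are hypothetical
# placeholders; the network request itself is left commented out.

FLICKR_REST = "https://www.flickr.com/services/rest/"

def build_search_params(api_key, lat, lon, radius_km, page=1, per_page=250):
    """Assemble query parameters for flickr.photos.search around a place center."""
    return {
        "method": "flickr.photos.search",
        "api_key": api_key,
        "lat": lat,                       # place-center latitude (e.g., from Google Places)
        "lon": lon,                       # place-center longitude
        "radius": radius_km,              # search radius in kilometers
        "has_geo": 1,                     # only geo-tagged photos
        "extras": "geo,date_taken,url_o", # coordinates, timestamp, original-photo URL
        "format": "json",
        "nojsoncallback": 1,
        "per_page": per_page,
        "page": page,
    }

params = build_search_params("YOUR_API_KEY", 48.8584, 2.2945, 1.0)
# import requests
# photos = requests.get(FLICKR_REST, params=params).json()["photos"]["photo"]
```

Paging through the result set with increasing `page` values and storing each record in a database would complete the collection step described above.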

3.3 Construction of Places

As place is a product of human conceptualization, derived from human experience to describe a specific space (Tuan, 1977; Couclelis, 1992; Curry, 1996; Merschdorf and Blaschke, 2018), one main challenge of modeling places in GIS is their vague boundaries (Burrough and Frank, 1996; Montello et al., 2014; Gao et al., 2017b). The boundary is often generated from density estimation and spatial clustering of georeferenced photos (Feick and Robertson, 2015; Hu et al., 2015). In this research, places are constructed from user-generated photo footprints by the following steps: (1) utilizing density-based spatial clustering of applications with noise (DBSCAN) to extract the hotspot zones of human activities; (2) using the convex hull to find the minimum bounding geometry of the set of points remaining after spatial clustering.

DBSCAN, a point-based spatial clustering algorithm (Ester et al., 1996), is used to identify clusters of geo-tagged photos. Compared with the K-means clustering algorithm, DBSCAN can find arbitrarily shaped clusters and does not require the number of clusters to be predefined. In addition, it is relatively stable and robust to noisy data. Some geo-tagged photos are manually uploaded by users without specific criteria and may generate noise; for example, a user may drag photos to wrong locations. By applying the DBSCAN algorithm, such noisy data can be removed, and the core areas of each place, in other words, the hotspots where users most likely stay and take photos, will remain.

The DBSCAN algorithm requires two parameters, namely eps and MinPts. The eps parameter is the search radius, representing the maximum distance of the search neighborhood to the center point, and MinPts indicates the minimum number of points a cluster should have. Different settings of the two parameters will influence the result, and proper values should be selected according to the characteristics of places. As suggested by several previous studies (Hu et al., 2015; Mai et al., 2018; Liu et al., 2019), an eps value between 40 m and 300 m is suggested for clustering human activities. As the number of photos and users may vary across places, it is not suitable to use a universal absolute number as MinPts; therefore, a percentage of the number of photos at a place is used as MinPts. Consequently, combinations of different parameter settings should be tested to find the best parameter combination.
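A minimal sketch of this clustering step, using scikit-learn's DBSCAN with a haversine metric so that eps can be given in meters and MinPts as a percentage of the photos at a place (the coordinates and parameter values below are illustrative):

```python
# Clustering photo locations with DBSCAN; eps in meters, MinPts as a ratio.
import numpy as np
from sklearn.cluster import DBSCAN

def cluster_photo_points(coords_deg, eps_m=100.0, min_pts_ratio=0.05):
    """Cluster (lat, lon) photo points; returns a label per point, -1 = noise."""
    coords_rad = np.radians(coords_deg)          # haversine metric expects radians
    earth_radius_m = 6_371_000.0
    min_samples = max(2, int(min_pts_ratio * len(coords_deg)))
    labels = DBSCAN(
        eps=eps_m / earth_radius_m,              # convert meters to radians
        min_samples=min_samples,
        metric="haversine",
    ).fit_predict(coords_rad)
    return labels

# Two dense photo hotspots plus one stray point dragged to a wrong location.
pts = np.array([[48.8584, 2.2945]] * 10 + [[48.8606, 2.3376]] * 10 + [[49.5, 3.0]])
labels = cluster_photo_points(pts, eps_m=100.0, min_pts_ratio=0.2)
```

Here the stray point receives the noise label -1 and is excluded from the place footprint, while the two hotspots form separate clusters.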

After the spatial clustering of photo locations, the next step is to derive the core areas of places from the clustered points. The convex hull is a high-quality geometric approximation method for efficiently delineating clustered geographical features (Graham, 1972; Barber et al., 1996; Liu et al., 2019; Yu et al., 2014). A convex hull is the minimum bounding convex polygon containing a set of points, and it has been utilized in a number of studies to find the minimum bounding shape of clustered points (Liu et al., 2019).
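The footprint derivation can be sketched with scipy.spatial.ConvexHull (the sample points are illustrative):

```python
# Deriving a place footprint as the convex hull of its clustered photo points.
import numpy as np
from scipy.spatial import ConvexHull

def place_footprint(clustered_points):
    """Return the convex-hull polygon (vertex coordinates) of a point cluster."""
    hull = ConvexHull(clustered_points)
    return clustered_points[hull.vertices]   # vertices in counterclockwise order

# Interior points are dropped; only the bounding polygon's corners remain.
pts = np.array([[0, 0], [2, 0], [2, 2], [0, 2], [1, 1], [0.5, 1.5]])
polygon = place_footprint(pts)
```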

Figure 2 shows the process of construction of places that are represented as polygons generated from the aforementioned steps.

Figure 2: Construction of a place based on spatial clustering and the convex hull approach.

3.4 Measurement of Human Emotion

One main research question in this work is how to extract basic human emotions and to quantify the degree of happiness expressed by users at different places. The state-of-the-art computer vision and cognitive recognition technologies make it possible to extract and quantify human emotions from facial expressions. In this study, we propose two indices, namely the “Joy Index” and the “Average Happiness Index”, to measure the degree of “happiness atmosphere” at each place.

3.4.1 Affective Computing

We used the Face++ Emotion API (https://www.faceplusplus.com/emotion-recognition/) to detect human faces in photos and to extract human emotions based on their facial expressions. The Face++ platform is a mature commercial cloud-computing-enabled AI technology provider with a large number of customers and developers using its products, and it has performed well in several facial recognition competitions (https://www.faceplusplus.com/blog/article/coco-mapillary-eccv-2018/), which demonstrates the reliability of the system; hence it is selected for extracting emotions from human faces. A set of computer-vision-based services is provided for human facial recognition and analysis. The attributes of all faces in a photo are extracted, including the face position and extent, human emotion, age, ethnicity, gender, and even beauty. The Face++ API produces two measurements for evaluating the emotion of human faces. One is the smile attribute, which describes the smile intensity (Whitehill et al., 2009) and includes two elements, value and threshold. The value is a numeric score (from 0 to 100) indicating the degree of smiling, while the threshold is provided by the cloud AI system to judge whether the detected face is smiling or not. Generally, if the value is larger than the threshold, the face is judged as a smiling face. Therefore, based on the smile attribute, each face in a photo is classified as either smiling or not smiling. The other measurement is the emotion structure, which is a vector of scores (from 0 to 100) describing seven basic emotion fields: anger, disgust, fear, happiness, neutral, sadness, and surprise. All scores of one face sum to 100; the higher a score, the more confident the system is in that emotion. Hence, the emotion fields illustrate the intensity of particular emotions along different dimensions.

It is worth noting that not all emotion fields are used in this study. Happiness is often recognized as one of the most common basic emotions (Izard, 2007). Although some arguments exist (Frank and Ekman, 1993), a smile can represent happiness in general. In addition, as happiness is the clearest emotional domain compared with other dimensions of emotion (Wilhelm et al., 2014), we use only the happiness value from the emotion structure. Figure 3 shows the happiness scores extracted from different human faces in photos. Note that only actual human faces, rather than those in paintings, are detected and analyzed in the experiments.
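The per-face classification described above can be sketched as follows; the dictionary layout of the face records is a simplified assumption for illustration, not the exact Face++ response schema:

```python
# Classifying detected faces as smiling / non-smiling and collecting the
# happiness scores, following the value > threshold rule described above.
def classify_faces(faces):
    """Return (number smiling, number not smiling, list of happiness scores)."""
    smiling, non_smiling, happiness = 0, 0, []
    for face in faces:
        smile = face["smile"]
        if smile["value"] > smile["threshold"]:   # judged as a smiling face
            smiling += 1
        else:
            non_smiling += 1
        happiness.append(face["emotion"]["happiness"])
    return smiling, non_smiling, happiness

# Two illustrative face records: one smiling, one not.
faces = [
    {"smile": {"value": 95.0, "threshold": 50.0}, "emotion": {"happiness": 98.0}},
    {"smile": {"value": 10.0, "threshold": 50.0}, "emotion": {"happiness": 5.0}},
]
result = classify_faces(faces)   # (1, 1, [98.0, 5.0])
```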

Figure 3: Emotional indices calculated for faces. (Source: Face++)

3.4.2 Emotional Indices for Places

Two place-based human emotion measurement indices are proposed to evaluate the degree of happiness at different places in this study: the “Joy Index”, based on the smiling score, and the “Average Happiness Index”, based on the happiness score.

The “Joy Index” is calculated as the normalized difference between the number of smiling faces and the number of non-smiling faces, using geo-tagged photos within the spatial footprint of each place, as follows.

$Joy_i = \frac{N_i^{smile} - N_i^{non\text{-}smile}}{N_i^{smile} + N_i^{non\text{-}smile}}$ (1)

where $Joy_i$ is the joy index calculated at place $i$, $N_i^{smile}$ is the number of smiling faces in the photos within this place, and $N_i^{non\text{-}smile}$ is the number of non-smiling faces. The range of this index is between -1 and 1, a symmetric closed interval. A positive value indicates that more people are smiling at a place, suggesting positive emotional conditions, while a negative value indicates that more people are not smiling, which may suggest a serious atmosphere at that place.

In comparison, the “Average Happiness Index (AHI)” calculates the average of happiness values for all detected faces in those geo-tagged photos at a place.

$AHI_i = \frac{1}{N_i}\sum_{j=1}^{N_i} happiness_{i,j}$ (2)

where $happiness_{i,j}$ is the happiness value of human face $j$ at place $i$ and $N_i$ is the total number of detected faces at that place. The AHI illustrates the average degree of happiness of people at each place.
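Both indices can be computed directly from the per-face results; a minimal sketch following Equations (1) and (2):

```python
# Computing the two place-level emotional indices.
def joy_index(n_smile, n_non_smile):
    """Normalized difference between smiling and non-smiling face counts, in [-1, 1]."""
    total = n_smile + n_non_smile
    return (n_smile - n_non_smile) / total if total else 0.0

def average_happiness_index(happiness_scores):
    """Mean happiness score (0-100) over all detected faces at a place."""
    return sum(happiness_scores) / len(happiness_scores) if happiness_scores else 0.0

joy = joy_index(300, 100)                            # 0.5: mostly smiling faces
ahi = average_happiness_index([90.0, 70.0, 80.0])    # 80.0
```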

3.5 Sensitivity Tests

3.5.1 Test for Place Construction

During the construction of places described in Section 3.3, a set of combinations of the parameters eps and MinPts is used. Although the shapes of place boundaries may vary with different parameters, the derived place emotion results should have a similar distribution and trend if the proposed approach is stable. To check this, human emotion scores are calculated at each place under different parameter settings. Then, Kendall’s coefficient of concordance (W) is utilized to measure the agreement among the different human emotion detection results.

To do so, a ranking of places based on the detected average happiness score is created for each pair of eps and MinPts. Assume there are $m$ combinations of parameters for $n$ places. Summing up the ranks $r_{i,j}$ over all scenarios, each place $i$ gets a total rank $R_i$ via Equation 3. Then, $\bar{R}$ is calculated as the average of the total ranks across all places via Equation 4, and the sum of squared deviations $S$ is calculated via Equation 5. Kendall’s W can then be calculated by Equation 6.

$R_i = \sum_{j=1}^{m} r_{i,j}$ (3)
$\bar{R} = \frac{1}{n}\sum_{i=1}^{n} R_i$ (4)
$S = \sum_{i=1}^{n} (R_i - \bar{R})^2$ (5)
$W = \frac{12S}{m^2(n^3 - n)}$ (6)

In general, if the test statistic W is 1, all judges (i.e., different parameter settings) assign the same order to the places, while W = 0 indicates that there is no agreement among the judges and the ranks are random. If the values of W show that the emotion score rankings are similar under different parameters of place construction, the influence of footprint shape during the place construction process is limited, and the emotion scores calculated at each place are solid.
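Kendall's W can be sketched as follows (ties among ranks are ignored in this simplified version):

```python
# Kendall's coefficient of concordance W for m parameter settings ("judges")
# ranking n places, following Equations (3)-(6).
import numpy as np

def kendalls_w(ranks):
    """ranks: (m, n) array where row j is the ranking of the n places by judge j."""
    ranks = np.asarray(ranks, dtype=float)
    m, n = ranks.shape
    R = ranks.sum(axis=0)                        # Eq. (3): total rank per place
    S = ((R - R.mean()) ** 2).sum()              # Eq. (5), using the mean from Eq. (4)
    return 12.0 * S / (m ** 2 * (n ** 3 - n))    # Eq. (6)

# Three parameter settings that agree perfectly on five places give W = 1.
perfect = [[1, 2, 3, 4, 5]] * 3
w = kendalls_w(perfect)
```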

3.5.2 Test for Affective Computing

As the number of photos varies across places, it is necessary to know whether the data collected are sufficient for human emotion calculation. To test the reliability and stability of the facial-expression-based emotion recognition results, a bootstrapping strategy was applied to assess the robustness of the emotional indices calculated in section 3.4.

Bootstrapping is a resampling approach proposed by Efron (Efron, 1992). It is often used to approximate the distribution of test samples, from which a confidence interval can be derived to show the plausible range of emotion scores at each place. The step-by-step details are described as follows:

  1. Assume that n_p faces are collected at place p as a sample set F_p. Perform n_p random draws with replacement to form a new sample set F'_p with the same size as F_p. Note that the same face may appear more than once in F'_p.

  2. Then, the emotion indices of the new sample set F'_p are calculated.

  3. Repeat the two steps above k times to generate k bootstrap emotion results.

  4. Rank the k affective computing results and discard the lowest 2.5% and the highest 2.5%; the remaining results form the 95% confidence interval of the emotional indices at place p. The average value of the emotional indices is taken as the final output for the place.
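The procedure above can be sketched directly; the helper below (an illustrative sketch with our own parameter names) resamples the per-face values at a place and returns the empirical 95% confidence interval of any emotion metric:

```python
import random

def bootstrap_ci(values, metric, k=1000, alpha=0.05, seed=42):
    """Approximate a (1 - alpha) confidence interval for a place-level
    emotion metric by resampling the faces with replacement k times."""
    rng = random.Random(seed)
    n = len(values)
    # k bootstrap replicates of the metric, each on a resample of size n
    stats = sorted(metric([values[rng.randrange(n)] for _ in range(n)])
                   for _ in range(k))
    lo = stats[int(k * alpha / 2)]            # drop lowest 2.5%
    hi = stats[int(k * (1 - alpha / 2)) - 1]  # drop highest 2.5%
    return lo, hi
```

For example, `bootstrap_ci(happiness_scores, lambda xs: sum(xs) / len(xs))` would bound the AHI at one place.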

The results of bootstrapping show the confidence intervals of the possible emotion scores at each place, which helps evaluate the stability of the emotion calculation results. Although the true confidence interval cannot be known, as the photos are collected with bias anyway, the derived results asymptotically approach the truth (DiCiccio and Efron, 1996). Further analyses are conducted on the emotional results after the bootstrapping process.

3.6 Influence of Environment Factors

As suggested by environmental psychology studies (Capaldi et al., 2014; Svoray et al., 2018), human emotions can be affected by the surrounding environment. Therefore, exploring the potentially influential geographical and environmental factors and their importance is of great significance for understanding human emotions at different places. To do so, Pearson's correlation analysis (Benesty et al., 2009) and multiple linear regression (MLR) were employed in this study.

As mentioned in section 3.2, a group of social and physical geographic attributes are collected when retrieving the information for each place; those factors serve as the candidate explanatory variables for each place. Note that this paper aims at proposing a general computational framework for extracting place emotions, and the environmental factors may vary across different types of places; therefore, we do not define a complete set of factors in this research, and further research is needed to enumerate a complete list of variables relevant to a specific type of place. As a case study, by referring to several existing works and our geographical knowledge, several environmental factors were chosen, as described in Section 4.1.

The Pearson's correlation coefficient r is employed to explore the direction (positive or negative) and the strength of the linear relationship between an environmental factor and the emotion score at each place. As correlation analysis is only suitable for numeric values, categorical variables (e.g., continents) are first converted to dummy variables (0, 1), and the correlation coefficient between the emotion index and each category is then calculated.

For each influential factor a, the correlation with an emotion index e is computed via Equation 7:

r_{e,a} = E[(e − μ_e)(a − μ_a)] / (σ_e σ_a)    (7)

where μ_e and μ_a are the mean values and σ_e and σ_a the standard deviations of the two variables e and a. A positive value indicates that the factor is positively associated with the emotion index e, and vice versa.
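A minimal sketch of this step, with illustrative variable names of our own: the correlation of Equation 7 plus the dummy-variable conversion for a categorical factor such as continent:

```python
import numpy as np

def pearson_r(e, a):
    """Pearson correlation: E[(e - mu_e)(a - mu_a)] / (sigma_e * sigma_a)."""
    e, a = np.asarray(e, dtype=float), np.asarray(a, dtype=float)
    return ((e - e.mean()) * (a - a.mean())).mean() / (e.std() * a.std())

# Categorical variables are converted to 0/1 dummies before being
# correlated with an emotion index (one dummy per category).
continents = ["Asia", "Europe", "Asia", "North America"]
is_asia = [1 if c == "Asia" else 0 for c in continents]
```

Each dummy column is then correlated with the emotion index exactly like a numeric factor.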

And the MLR uses all geographical and environmental variables a_1, ..., a_k to predict the emotion index value e at each place as:

e = β_0 + β_1 a_1 + β_2 a_2 + ... + β_k a_k + ε    (8)

where ε is an unobserved error term. The impact of each attribute can be measured through the coefficient of the corresponding independent variable. The R² is calculated as a goodness-of-fit statistic to determine how well the MLR model fits the observed place emotion data.
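The regression in Equation 8 can be sketched with ordinary least squares; the helper below (our own illustrative code, using plain NumPy rather than any particular statistics package) returns the coefficient vector and the R² goodness of fit:

```python
import numpy as np

def fit_mlr(A, e):
    """Ordinary least squares for e = beta_0 + A @ beta + eps.
    Returns the coefficients (intercept first) and R^2."""
    A = np.asarray(A, dtype=float)
    e = np.asarray(e, dtype=float)
    X = np.column_stack([np.ones(len(A)), A])    # prepend intercept column
    beta, *_ = np.linalg.lstsq(X, e, rcond=None)
    resid = e - X @ beta
    r2 = 1.0 - resid.var() / e.var()             # goodness of fit
    return beta, r2
```

A production analysis would also report per-coefficient p-values, which a library such as statsmodels provides.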

4 Experiments and Results

As the experience of travel and tourism is deeply connected with place (Wearing et al., 2009), we take tourist attractions as the case-study place type to examine the feasibility of our Place Emotion sensing framework.

4.1 Input Datasets

Tourist sites selected in this study are located around the world. The selected sites had to be famous in terms of annual visitor numbers, comprehensive in terms of cultural representativeness, and diverse in terms of site types. In addition, to obtain reliable emotion detection results, a site had to have a large number of photos taken and uploaded by tourists. To find them, several official resources (https://whc.unesco.org/en/map/) and open statistics (https://www.lovehomeswap.com/blog/latest-news/the-50-most-visited-tourist-attractions-in-the-world; https://www.travelandleisure.com/slideshows/worlds-most-visited-tourist-attractions) were checked. In total, there are 80 sites from 22 countries: 24 located in Asia, 25 in Europe, 29 in North America, and 1 each in Africa and Oceania. The spatial distribution of all these tourist sites can be found in Figure 5.

A group of geographical attributes and environmental factors that, to the best of our knowledge, may influence the tourists' degree of happiness at each site were searched and recorded. As human emotions are complex and influenced by multiple individual and environmental variables, we only selected a small group of variables according to existing studies from environmental psychology (White et al., 2010; Capaldi et al., 2014; Svoray et al., 2018). Other socio-economic and environmental factors, as well as individual differences, may be explored in future work. The selected variables are:

  1. The coordinates of the site location, retrieved via the Google Maps Place API.

  2. The continent where the site is located.

  3. The country where the site is located.

  4. The existence of water bodies. As suggested by several related studies from psychology, landscapes containing water bodies can influence human activities and consequently affect moods (White et al., 2010); taking water bodies into consideration is therefore necessary. Water bodies are considered present in two circumstances: either the water body exists within the tourist place, or it is near the tourist site and can be directly viewed by people. Otherwise, we deem that no water body exists at that place.

  5. The distance to the nearest water bodies, which calculates the shortest distance from the nearest water bodies (lakes, oceans, etc.) to the place. If the water body exists within the site, the distance is 0.

  6. Whether the main part of the site is an open or closed space. Previous studies have shown that activities in an outdoor environment have a positive effect on happiness (Thompson Coon et al., 2011). Hence, the tourist sites are classified as open or closed spaces. Parks, squares, lakes, etc., which are open to the air, are defined as open spaces, while sites like museums, stations, and cathedrals, whose main contents are indoors, are considered closed spaces.

  7. The green vegetation coverage of each place. Several studies suggest that green space can reduce pressure and has a positive impact on mental health (Maas et al., 2009; Thompson et al., 2012). To measure green space and its impact on human emotions, the Normalized Difference Vegetation Index (NDVI), which is widely used in remote sensing of vegetation (Goward et al., 1991), was harvested from NASA Earth Observations (https://neo.sci.gsfc.nasa.gov/view.php?datasetId=MOD_NDVI_M). The NDVI product for June 2017 was downloaded and values were spatially joined to each site.

  8. Whether the place is located in an urban or rural environment. Similar to the open/closed space distinction, urban areas, which have higher building density, and rural areas, which have a more natural environment, can influence human emotions differently (Wooller et al., 2018).

  9. The type of a tourist site. Different types of tourist sites may attract different groups of visitors and mobility patterns, and the type of a tourist site may be associated with mental conditions (Leiper, 1990). Based on the site types defined by the Google Maps Place API as well as several tourism-related studies (Lew, 1987), six types of tourist attractions are defined in this research: natural (like waterfalls, places with few human-made objects), amusement (like the Disneyland theme parks, which tourists visit to enjoy games and other activities for fun), religious (like cathedrals, which people visit mostly for religious activities), museum (like the Metropolitan Museum of Art, where historical, scientific, and artistic objects are kept), palace (like the Forbidden City, where old palaces and castles are located), and other cultural categories (like the Grand Bazaar, places with cultural and historical value that do not belong to the categories above). Note that the six types are based only on the attributes of the 80 tourist attractions studied; more types could be defined for other datasets.

After the selection of tourist attractions, all photos taken between Jan. 2012 and Jun. 2017 within 1 km of the center of each attraction site were downloaded from the Flickr website. The search radius is larger than the spatial footprint of a place in most cases, ensuring that a sufficient number of photos is harvested. In total, 6,199,615 photos were collected.

4.2 Construction of Place and Affective Computing

Following the steps in sections 3.3 and 3.4, each tourist site is constructed from the user-generated footprints with the DBSCAN spatial clustering and convex-hull minimum bounding geometry algorithms, and the place emotions are calculated. In total, 2,416,191 faces are detected and evaluated; the ratio of the number of faces to all photos is about 38.97%, while the proportion of pictures containing faces is about 20%. For each site, the two emotional indices, namely the Joy Index and the Average Happiness Index (AHI), are calculated from the faces remaining within the site. However, since different DBSCAN parameter settings in the place construction process may affect the generated sites, a set of parameter combinations was tested. In the experiment, we iteratively chose ε as 50 m, 100 m, 200 m, and 300 m, and MinPts as 0.5%, 1%, and 2%, according to the recommendations of previous studies (Hu et al., 2015; Gao et al., 2017b). In sum, 12 combinations of parameter settings were tested individually and applied in the Kendall's concordance test.
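As an illustrative sketch of the place construction step: the DBSCAN clustering itself is typically taken from a library (e.g., scikit-learn's sklearn.cluster.DBSCAN), so the code below, under that assumption, shows only the subsequent minimum bounding geometry step, computing the convex hull of a photo cluster with Andrew's monotone chain algorithm (function name ours):

```python
def convex_hull(points):
    """Andrew's monotone chain convex hull of 2D points; returns the hull
    vertices in counter-clockwise order. Serves as the minimum bounding
    geometry of a DBSCAN photo cluster."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts

    def cross(o, a, b):
        # z-component of (a - o) x (b - o); > 0 means a left turn
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

    lower, upper = [], []
    for p in pts:                       # build lower hull
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):             # build upper hull
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]      # endpoints shared, drop duplicates
```

Photo points falling outside the resulting polygon are then discarded before computing the place-level emotion indices.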

For each pair of parameters, a ranking of sites is returned based on each emotional index. The output of Kendall's W is 0.99 for the 12 rankings based on the normalized Joy Index, the same value of 0.99 for all rankings based on the AHI, and 0.98 for all 24 ranking lists including both indices. The results illustrate that all pairs of parameters yield a very similar ranking order of the happiest places, which means that the proposed method is stable and the selected DBSCAN parameters have limited impact on the overall place emotion ranking. The happiness indices calculated at each place are almost invariant across the experiments. Accordingly, we chose a single parameter setting, ε = 100 m and MinPts = 1%, for further analyses.

For exploratory analysis, four famous tourist attractions of interest, the Great Wall, the Amiens Cathedral, the Magic Kingdom Park at Disneyland, and the Universal Studio Hollywood, are selected as individual examples to demonstrate specific place emotion distributions (Figure 4). The first column of figures shows the spatial distribution of photos with and without smiling faces inside the constructed place, while the second column shows the most frequent word tags shared by Flickr users across those sites. The constructed places are multi-part polygons, and photos outside the polygons are removed to reduce data noise. Red points show smiling faces while blue points indicate non-smiling faces. According to the figure, the Great Wall, the Magic Kingdom Park at Disneyland, and the Universal Studio Hollywood have more smiling tourists, while tourists at the Amiens Cathedral show fewer smiling faces, as people in a religious site may be less inclined to smile. Moreover, the semantics of the photos are also explored. The word cloud visualization shows the top 100 tags of the social media photos at each place. In addition to country, department (in France), and city names, a list of tourist site names including Mutianyu, Great Wall, Cathedrale, Disney World, and Universal Studios is identified from those geotags, which indicates that the photos can represent place information, although not all words and topics are necessarily indicative of a specific place (Adams and Janowicz, 2012; Adams and McKenzie, 2013). These examples show that the computational framework for facial-expression-based emotion extraction at places is generally effective.

Figure 4: The spatial distribution of Flickr photos with smiling faces and without smiling faces and their most frequent word tags across four sample tourist sites: the Great Wall, the Amiens Cathedral, the Magic Kingdom Park at Disneyland, and the Universal Studio Hollywood.

4.3 The World Ranking List of Happiest Tourist Attractions

Figure 5 shows the spatial distribution of the tourist sites as well as their emotional indices. The circles represent the Joy Index while the diamonds represent the AHI of each site. A deeper red color indicates more happiness at a site while a deeper blue indicates less. For each site on the map, its name can be found via the index. Based on the emotional indices, two ranking lists of tourist attractions were generated, shown in Figure 6 (Joy Index) and Figure 7 (AHI). After applying the bootstrapping strategy, the 95% confidence interval of the emotional indices at each site is characterized by the blue bars, and the circles at the center of the lines indicate the averaged values of the emotional indices. For the Joy Index, a positive value represents more enjoyment smiles, indicating a happy atmosphere, while a negative value indicates that happiness cannot be clearly deduced from facial expressions at a site. The average Joy Index across all sites is about -0.115, slightly below 0, while the average of all AHI values is about 38.04. The correlation analysis shows that the Pearson's correlation coefficient between the two rankings is 0.97, which means the two rankings are similar. Interestingly, the official slogan of Disneyland is “The Happiest Place On Earth”. However, according to the ranking lists derived from user-generated crowdsourcing data, the site with the highest happiness indices in the world under both measurements is the Great Wall, China (Joy Index: 0.429, AHI: 63.72). Several amusement parks, such as the Disneyland Parks, the Everland in South Korea, and the Ocean Park in Hong Kong, do appear with high rankings, which is in accordance with public opinion. Meanwhile, at the bottom of the ranking list is the Amiens Cathedral, with only -0.489 for the Joy Index and 20.79 for the AHI.
It is worth noting that low happiness scores do not necessarily mean that people at those sites (e.g., religious places) are less happy than people at other types of places; it may simply mean that people are less inclined to smile there. However, since only tourist sites are considered in our case study, most smiling faces are enjoyment smiles and appear to be associated with positive emotion and happiness at the top-ranked sites.

Figure 5: The spatial distribution of all tourist sites and their associated happiness indices.
Figure 6: The ranking list of tourist sites based on the Joy Index. The 95% confidence interval of the emotional index at each site is characterized by the blue bars, and the circles indicate the averaged values of the emotional index.
Figure 7: The ranking list of tourist sites based on the Average Happiness Index. The 95% confidence interval of the emotional index at each site is characterized by the blue bars, and the circles indicate the averaged values of the emotional index.

4.4 Relationships between Human Emotions and Environmental Factors

Each tourist site listed in Figure 5 was assigned a set of the aforementioned attributes, namely the continent, open or closed space, urban or rural area, attraction type, vegetation coverage, water body existence, and the distance to the nearest water body. Figure 8, Table 1, and Table 2 show the results of the correlation analysis and the multiple linear regression of the emotional indices against those attributes.

Results of the correlation analysis show that amusement parks have a significant positive association (0.41 in Joy Index and 0.46 in AHI) with tourists' smiles and happiness, which is in accordance with common knowledge: as tourists often go to amusement parks to relax and enjoy holidays, they may be happier there than at other places. Natural landscapes (0.27 in Joy Index and 0.28 in AHI), open space (0.25 and 0.28), existence of a water body (0.21 and 0.25), North America (0.19 and 0.23), rural areas (0.31 and 0.22), and vegetation coverage by NDVI (0.18 and 0.2) all show positive associations. Except for the continent variable, the coefficients of these variables hint that, to some degree, places with a more open environment can increase tourists' degree of happiness, with more enjoyment smiles. On the contrary, compared with sites on other continents, people staying at the European sites (for the sites selected in our case study only) may not express as much happiness through smiles. Moreover, religious sites (-0.31 in Joy Index and -0.34 in AHI), closed space (-0.25 and -0.28), nonexistence of a water body (-0.21 and -0.26), palaces (-0.16 and -0.23), and urban areas (-0.31 and -0.22) show negative associations with the average happiness score of tourists.

According to the MLR results (Table 1 and Table 2), the impact of most variables is similar to the correlation analysis. The impact of Europe on happiness conditions is negative and statistically significant in the regression model. Sites with water bodies have a positive impact on human happiness, but this effect is not significant in our samples. Conversely, for sites located in urban areas, the effect on the emotional indices is negative and significant. Natural landscape has positive but not statistically significant impacts on the happiness indices. The goodness of fit R² is about 0.57 and statistically significant with a p-value of 0.001 for both indices, showing that the variation of human emotions at different places can be explained to a certain degree by these geographical and environmental factors.

In addition, as the type of tourist site has an impact on human emotions in the statistical analyses, we further explore one specific type, amusement parks, to illustrate the results. As shown in Table 3, there are 17 amusement parks in this study, and they generally have higher AHI (average 45.72) and Joy Index scores (average 0.52) compared with other types of tourist sites. Most amusement parks are located in urban areas, have open space, and contain water bodies (e.g., lakes) inside the park. The average NDVI value at amusement parks is about 0.52, similar to the value across all sites (about 0.54) and thus not type-biased. A more specific exploration can be conducted in the future to investigate more factors that may impact human emotions at amusement parks.



Figure 8: The Pearson’s correlation coefficients of the geographical and environmental attributes to the human emotions: (a) Joy Index; (b) Average Happiness Index
Attributes Regression Coefficient
Constant 0.486**
Continent
Asia -0.396*
Europe -0.458**
North America -0.353*
Oceania -0.235
Africa N/A
Open/Closed Space
Open Space -0.0044
Closed Space N/A
Urban/Rural
Urban -0.1458*
Rural N/A
Type
Cultural Landscape -0.1923**
Museum -0.254*
Natural Landscape 0.002
Palace -0.203*
Religious Site -0.319*
Amusement Park N/A
Water Body
Existence of the Water Body 0.021
Inexistence of the Water Body N/A
Distance to the Nearest Water Body 0.0004
NDVI 0.0004

* p < 0.05; ** p < 0.001

Table 1: The coefficients of the multiple linear regression based on the Joy Index and the geographical and environmental factors.
Attributes Regression Coefficient
Constant 60.649**
Continent
Asia -13.805*
Europe -16.647*
North America -12.196
Oceania -5.058
Africa N/A
Open/Closed Space
Open Space -0.015
Closed Space N/A
Urban/Rural
Urban -5.083*
Rural N/A
Type
Cultural Landscape -9.264**
Museum -10.645*
Natural Landscape 0.81
Palace -10.889**
Religious Site -15.547**
Amusement Park N/A
Water Body
Existence of the Water Body 0.7484
Inexistence of the Water Body N/A
Distance to the Nearest Water Body 0.0172
NDVI 0.018

* p < 0.05; ** p < 0.001

Table 2: The coefficients of the multiple linear regression based on the Average Happiness Index (AHI) and the geographical and environmental factors.
Tourist site AHI Joy Index
Epcot, USA 53.86 0.60
Disney Animal Kingdom, USA 53.07 0.60
Disney World's Magic Kingdom, USA 52.91 0.60
Disney Hollywood Studios, USA 50.37 0.56
Universal Studios, Hollywood, USA 48.05 0.54
Everland, Gyeonggi-Do, South Korea 47.34 0.50
Disneyland Park, France 46.34 0.51
Disney California Adventure, USA 46.14 0.52
Ocean Park, Hong Kong 45.89 0.53
Disneyland Hong Kong, Hong Kong 45.76 0.51
Universal Studios, Florida, USA 45.55 0.53
Islands of Adventure, USA 45.34 0.52
Tokyo Disneyland, Japan 44.68 0.51
Universal Studios, Japan 41.57 0.47
Disneyland Park, USA 40.86 0.47
Balboa Park, USA 38.6 0.45
Lotte World, South Korea 30.95 0.35
Table 3: The list of amusement parks with their average happiness index (AHI) and joy index scores.

5 Discussion

5.1 Human-environment perspective of results

Scholars in environmental psychology have shown that the surrounding environment has an impact on human emotions. Results of this study demonstrate a similar conclusion from a big-data-driven perspective. Combining the results of the correlation analysis and the multiple linear regression, amusement parks are the places that most positively affect individuals' expressions of happiness. Open spaces, places with water bodies, places with denser green vegetation, and rural areas can be summarized as one kind of place, and all of these variables show positive impacts on the degree of human happiness. Therefore, it can be concluded that people who stay in such areas may tend to feel happier. Our findings are consistent with existing theories in psychology (Kaplan, 1995) that exposure to nature has a positive impact on human moods (Bowler et al., 2010), which also supports the theoretical foundation of the framework and validates this study to some extent.

However, some limitations should be pointed out. As expressions of human emotion are quite complex and influenced by multiple variables, both internal and external, the results of this study may not hold for individuals (Junot et al., 2017) nor for all tourist attractions around the world. Some cultural environments, such as religious sites and museums, may suppress people's positive emotional expressions. It is worth noting that suppression does not mean that people are unhappy at those places, only that they express fewer enjoyment smiles explicitly. In addition, although the semantics of the geotags show that most photos are related to the places, tourists' emotions may not be directly driven by the views of the surrounding environment but could be affected by the activities they are doing or the events they are participating in at that place. A deeper exploration should be conducted to identify other factors affecting human emotional expression.

5.2 Uncertainty of the Data

Social media data are uploaded by volunteers based on their experiences and opinions, constituting “ambient geographic information” (Degrossi et al., 2018). As user-generated photos are used in this study, the uncertainty and quality of the data should be tested (Goodchild and Li, 2012). Three types of data uncertainty are addressed: the vagueness of place, the varying number of faces, and the different groups of people.

As the size and boundary of a place might be vague, it is not suitable to use a fixed distance for data analysis. Place boundaries are instead constructed based on the density distribution of photos, and georeferenced photos outside the place boundary are removed to minimize error. In addition, the DBSCAN algorithm used for place construction is robust to noisy data, which further reduces the vagueness of the results. Besides, a combination of parameter settings together with the Kendall's W test was used to ensure the consistency of the results. Therefore, the uncertainty of the result is minimized.

Since the number of faces varies across tourist sites, a key issue is to examine whether the number of photos collected at a site is sufficient to extract human emotion and whether the calculated emotional condition is stable. Using the bootstrapping strategy, a 95% confidence interval of emotion scores at each site is generated, and the variability is derived by subtracting the lower bound of the confidence interval from the upper bound. To explore the relationship between the uncertainty of the emotional indices and the number of faces analyzed at each site, linear regression analysis was employed. Figure LABEL:F:bootstrap_joy_happiness illustrates that the relation between the variability of the emotional indices at each site and the number of faces identified from photos taken there fits a power model very well (with a goodness of fit of 0.99). In general, the more faces detected at a site, the more stable the calculated emotion measurement. For most sites, the variability of the 95% confidence interval is less than 0.05 for the Joy Index and less than 3 for the AHI, which has limited influence on the ranking lists. Therefore, the emotional conditions calculated at each site are reliable.
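Fitting a power model of the form variability ≈ a · n^b amounts to a linear regression in log-log space; a minimal sketch (the function name and data format are our own illustration):

```python
import numpy as np

def fit_power_model(n_faces, variability):
    """Fit variability ~ a * n^b by least squares on log-transformed data.
    Returns the estimated (a, b)."""
    logn = np.log(np.asarray(n_faces, dtype=float))
    logv = np.log(np.asarray(variability, dtype=float))
    b, loga = np.polyfit(logn, logv, 1)   # slope = b, intercept = log(a)
    return np.exp(loga), b
```

A negative exponent b recovers the observed pattern: confidence intervals shrink as more faces are detected at a site.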

In addition, as different groups of people, with various cultural backgrounds and being either locals or visitors, may express different degrees of excitement, enjoyment, and emotion at the same place, the results might be influenced by the proportions of different types of tourists. To distinguish tourists from local people, we follow the criteria used in previous studies: if the time span over which a user takes multiple photos at one place is longer than one month, the user is labeled as a local; otherwise, as a visitor (García-Palomares et al., 2015). Results show that for most tourist attractions (more than 90%), the majority of Flickr photos (more than 80%) are uploaded by tourists. The average difference in AHI scores between tourists and locals at those tourist sites is just 3, showing that this influence is minimal and does not change the ranking list.
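The locals-versus-visitors rule can be sketched directly from the criterion above (the function name and the 30-day encoding of "one month" are our own illustration):

```python
from datetime import timedelta

def classify_user(timestamps, threshold_days=30):
    """Label a user at a place as 'local' if the span between their first
    and last photo there exceeds ~one month, else 'visitor' (criterion
    following Garcia-Palomares et al., 2015)."""
    if len(timestamps) < 2:
        return "visitor"
    span = max(timestamps) - min(timestamps)
    return "local" if span > timedelta(days=threshold_days) else "visitor"
```

The per-place emotion indices can then be recomputed separately for the two groups to quantify the difference.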

Although we tried our best to reduce the uncertainty, some limitations still exist. Data bias commonly exists in VGI data (Senaratne et al., 2017). As suggested previously (Gao et al., 2017a; Jolivet and Olteanu-Raimond, 2017), one bias issue of VGI is that the contributions of volunteers often follow a power-law or exponential-law frequency distribution with a long tail, indicating that most photos are posted by a small proportion of users while a large number of users contribute only a few (Goodchild and Li, 2012). In this study, a large number of the detected faces might belong to a small group of users, and the information provided by social media users may not always comply with quality standards. However, the emotions derived from facial expressions do reflect active users' experiences, opinions, interests, and feelings at those places, and can provide new insights for place-based information research (Blaschke et al., 2018).

5.3 Comparison between text-based and facial-expression-based methods

Though facial-expression-based emotion detection methods have become more mature and a few studies have applied them in research, a key issue is whether the methodology is reliable. Therefore, we conducted a comparison between our methodology and a text-based framework, referring to Mitchell's research (Mitchell et al., 2013). In that study, following Dodds' method (Dodds et al., 2011), a daily happiness score was calculated from Twitter with state-of-the-art NLP technologies, summarizing a range of human emotions in the United States at the state level and examining connections to socioeconomic attributes. For comparison, the YFCC100M dataset (Thomee et al., 2015), a large collection of Flickr photos, was used, and we evaluated emotions with our framework over all photos in each state of the United States. Since Mitchell's research was conducted for the year 2011, we only retrieved photos taken in that year within the United States to ensure a consistent time period. Then, both the Joy Index and the Average Happiness Index were applied to the photo data to calculate the happiness score for each state. The results of our metrics and those of Mitchell's research across the 50 states were analyzed via Spearman's correlation analysis (Fieller et al., 1957), which compares the ranks of the values in the data series. As shown in Table 4, the results of the two studies have a positive correlation: 0.28 for the Joy Index and 0.30 for the Average Happiness Index, which shows some degree of similarity between the two technologies.

It should be noted that the focus of our research is not to contrast existing text-based emotion extraction technologies with facial expression approaches; different methods have their own pros and cons. As mentioned above, text-based approaches cannot record real-time emotions and might not be suitable for global-scale research due to the multi-lingual environment, but they typically offer larger data volumes and richer semantics (Hu et al., 2019). Combining the two approaches for affective computing could help improve the holistic understanding of human emotions from different aspects and enrich the understanding of this innate neural program (Abdullah et al., 2015).

Our approach also has some limitations. First, as mentioned above, the results might be biased toward certain groups of people (visitors vs. local citizens, and different ethnicities or cultural backgrounds) and affected by the diversity of faces in the training data sets. Second, people's emotions may not be directly related to the views of the surrounding environment. Moreover, people may not always express emotions explicitly through either facial expressions or texts. Further exploration should be conducted to reveal the collective connections between human emotions and facial expressions on technology-mediated platforms (Elwood and Mitchell, 2015).

Emotion Index              Correlation Coefficient    p-value
Joy Index                  0.28                       0.0472
Average Happiness Index    0.30                       0.0314
Table 4: The Spearman's correlation coefficients between the text-based method and the facial-expression-based method.

6 Conclusion and Future Work

In this research, to understand the interaction between human emotions and the environment, we propose a data-driven framework for measuring human emotions at different places using large-scale user-generated photos from social media. We utilize state-of-the-art cognitive computing tools to detect and measure human happiness from facial expressions in photos. Tourist attractions, as a specific type of place, serve as the example for deriving place-based human emotion indices. A ranking list of 80 tourist sites across the world is created not from statistics of tourist flow, but from the degree of happiness expressed and shared by millions of tourists, which also demonstrates that our framework scales to global issues. In addition, we explore the impacts of several geographical and environmental factors on human happiness. The results are consistent with common sense and with existing studies in psychology: people in environments with more openness and more opportunities for exposure to nature express more happiness and smile more. Overall, this research advances our knowledge of human emotions at tourist attractions. Our study connects crowdsourced human emotions to the geographic attributes of the environment using advanced AI techniques and spatial analytics, and provides a new paradigm for research in geography and GIScience. The proposed framework and findings could also offer practical guidance for environmental psychology, human geography, tourism management, and urban planning.

In the future, several directions will be pursued. The first is data fusion. Although only Flickr photos are employed in this experiment, the study can be further improved with diverse data sources such as surveys. A data-synthesis-driven method might provide varied perspectives on human emotions, and mixing text-based and facial-expression-based emotion extraction methods may enhance the confidence of the final output. Another direction is to explore the fundamental factors affecting human emotions. As we propose this framework for place-emotion research, we will focus more on spatial analysis of emotion patterns. Human emotions at different scales will be compared to revisit the scale effect in geography, and different groups of people, as suggested by existing studies (Niedenthal et al., 2018; Kang et al., 2018), will be examined to gain deeper insight into the factors that influence human emotions. Moreover, different place types and spatial units at different scales, including points of interest, census blocks, neighborhoods, and communities, will be combined to examine the geographic patterns and socioeconomic linkages of human emotions. A focused study on a limited number of places, but with more environmental and socioeconomic factors, could also be conducted to enrich the understanding of place-based emotions.

Acknowledgement

The authors would like to thank Wanjuan Bie, Shan Lu, and Dan'nan Shen at Wuhan University for their contributions to the figures; Timothy Prestby at UW-Madison for his help with language edits of the manuscript; and Jialin Wang, Zimo Zhang, Wenyuan Kong, and Zijun Xu of the Place&Emotion Group, Urban Playground Lab, Wuhan University, for their helpful discussions. Funding support for this research was provided by the Office of the Vice Chancellor for Research and Graduate Education at the University of Wisconsin-Madison with funding from the Wisconsin Alumni Research Foundation, and by the Fund for National College Students Innovations Special Project of China (Grant No. 201810486033).

References

  • Tuan (1977) Y.-F. Tuan, Space and place: The perspective of experience, U of Minnesota Press, 1977.
  • Goodchild (2011) M. F. Goodchild, Formalizing place in geographic information systems, in: Communities, neighborhoods, and health, Springer, 2011, pp. 21–33.
  • Winter and Freksa (2012) S. Winter, C. Freksa, Approaching the notion of place by contrast, Journal of Spatial Information Science 2012 (2012) 31–50.
  • Scheider and Janowicz (2014) S. Scheider, K. Janowicz, Place reference systems, Applied Ontology 9 (2014) 97–127.
  • Goodchild (2015) M. F. Goodchild, Space, place and health, Annals of GIS 21 (2015) 97–100.
  • McKenzie et al. (2015) G. McKenzie, K. Janowicz, S. Gao, J.-A. Yang, Y. Hu, POI pulse: A multi-granular, semantic signature–based information observatory for the interactive visualization of big geosocial data, Cartographica: The International Journal for Geographic Information and Geovisualization 50 (2015) 71–85.
  • Gao et al. (2017a) S. Gao, L. Li, W. Li, K. Janowicz, Y. Zhang, Constructing gazetteers from volunteered big geo-data based on hadoop, Computers, Environment and Urban Systems 61 (2017a) 172–186.
  • Gao et al. (2017b) S. Gao, K. Janowicz, D. R. Montello, Y. Hu, J.-A. Yang, G. McKenzie, Y. Ju, L. Gong, B. Adams, B. Yan, A data-synthesis-driven method for detecting and extracting vague cognitive regions, International Journal of Geographical Information Science 31 (2017b) 1245–1271.
  • Blaschke et al. (2018) T. Blaschke, H. Merschdorf, P. Cabrera-Barona, S. Gao, E. Papadakis, A. Kovacs-Györi, Place versus space: From points, lines and polygons in gis to place-based representations reflecting language and culture, ISPRS International Journal of Geo-Information 7 (2018) 452.
  • Zhang et al. (2018) F. Zhang, D. Zhang, Y. Liu, H. Lin, Representing place locales using scene elements, Computers, Environment and Urban Systems 71 (2018) 153 – 164.
  • Purves et al. (2019) R. S. Purves, S. Winter, W. Kuhn, Places in information science, Journal of the Association for Information Science and Technology (2019).
  • Wu et al. (2019) X. Wu, J. Wang, L. Shi, Y. Gao, Y. Liu, A fuzzy formal concept analysis-based approach to uncovering spatial hierarchies among vague places extracted from user-generated data, International Journal of Geographical Information Science 33 (2019) 991–1016.
  • Agnew (2011) J. Agnew, Space and place, Handbook of geographical knowledge 2011 (2011) 316–331.
  • Jordan et al. (1998) T. Jordan, M. Raubal, B. Gartrell, M. Egenhofer, An affordance-based model of place in gis, in: 8th Int. Symposium on Spatial Data Handling, SDH, volume 98, pp. 98–109.
  • Kabachnik (2012) P. Kabachnik, Nomads and mobile places: Disentangling place, space and mobility, Identities 19 (2012) 210–228.
  • Merschdorf and Blaschke (2018) H. Merschdorf, T. Blaschke, Revisiting the role of place in geographic information science, ISPRS International Journal of Geo-Information 7 (2018) 364.
  • Wierzbicka (1986) A. Wierzbicka, Human emotions: universal or culture-specific?, American anthropologist 88 (1986) 584–594.
  • Izard (2013) C. E. Izard, Human emotions, Springer Science & Business Media, 2013.
  • Davidson and Milligan (2004) J. Davidson, C. Milligan, Embodying emotion sensing space: introducing emotional geographies, 2004.
  • Wilson (1984) E. O. Wilson, Sociobiology (1980) and biophilia: The human bond to other species, 1984.
  • Capaldi et al. (2014) C. A. Capaldi, R. L. Dopko, J. M. Zelenski, The relationship between nature connectedness and happiness: a meta-analysis, Frontiers in psychology 5 (2014) 976.
  • Mesquita and Markus (2004) B. Mesquita, H. R. Markus, Culture and emotion, in: Feelings and emotions: The Amsterdam symposium, Cambridge University Press, p. 341.
  • Grossman (1977) L. Grossman, Man-environment relationships in anthropology and geography, Annals of the Association of American Geographers 67 (1977) 126–144.
  • Rentfrow and Jokela (2016) P. J. Rentfrow, M. Jokela, Geographical psychology: The spatial organization of psychological phenomena, Current Directions in Psychological Science 25 (2016) 393–398.
  • Smith and Bondi (2016) M. Smith, L. Bondi, Emotion, place and culture, Routledge, 2016.
  • Golder and Macy (2011) S. A. Golder, M. W. Macy, Diurnal and seasonal mood vary with work, sleep, and daylength across diverse cultures, Science 333 (2011) 1878–1881.
  • Liu et al. (2015) Y. Liu, X. Liu, S. Gao, L. Gong, C. Kang, Y. Zhi, G. Chi, L. Shi, Social sensing: A new approach to understanding our socioeconomic environments, Annals of the Association of American Geographers 105 (2015) 512–530.
  • Ye et al. (2016) X. Ye, Q. Huang, W. Li, Integrating big social data, computing and modeling for spatial social science, Cartography and Geographic Information Science 43 (2016) 377–378.
  • Janowicz et al. (2019) K. Janowicz, G. McKenzie, Y. Hu, R. Zhu, S. Gao, Using semantic signatures for social sensing in urban environments, in: Mobility Patterns, Big Data and Transport Analytics, Elsevier, 2019, pp. 31–54.
  • Picard et al. (1995) R. W. Picard, et al., Affective computing (1995).
  • O’Connor (2008) P. O’Connor, User-generated content and travel: A case study on tripadvisor.com, Information and communication technologies in tourism 2008 (2008) 47–58.
  • Goodchild (2007) M. F. Goodchild, Citizens as sensors: the world of volunteered geography, GeoJournal 69 (2007) 211–221.
  • Elwood and Mitchell (2015) S. Elwood, K. Mitchell, Technology, memory, and collective knowing, 2015.
  • Schuller et al. (2009) B. Schuller, B. Vlasenko, F. Eyben, G. Rigoll, A. Wendemuth, Acoustic emotion recognition: A benchmark comparison of performances, in: Automatic Speech Recognition & Understanding, 2009. ASRU 2009. IEEE Workshop on, IEEE, pp. 552–557.
  • Ekman (1993) P. Ekman, Facial expression and emotion., American psychologist 48 (1993) 384.
  • Bollen et al. (2011) J. Bollen, H. Mao, A. Pepe, Modeling public mood and emotion: Twitter sentiment and socio-economic phenomena., Icwsm 11 (2011) 450–453.
  • Mitchell et al. (2013) L. Mitchell, M. R. Frank, K. D. Harris, P. S. Dodds, C. M. Danforth, The geography of happiness: Connecting twitter sentiment and expression, demographics, and objective characteristics of place, PloS one 8 (2013) e64417.
  • Strapparava et al. (2004) C. Strapparava, A. Valitutti, et al., Wordnet affect: an affective extension of wordnet., in: Lrec, volume 4, Citeseer, pp. 1083–1086.
  • Cambria et al. (2012) E. Cambria, C. Havasi, A. Hussain, Senticnet 2: A semantic and affective resource for opinion mining and sentiment analysis., in: FLAIRS conference, pp. 202–207.
  • Leiper (1990) N. Leiper, Tourist attraction systems, Annals of tourism research 17 (1990) 367–384.
  • Lew (1987) A. A. Lew, A framework of tourist attraction research, Annals of tourism research 14 (1987) 553–575.
  • Jones et al. (2008) C. B. Jones, R. S. Purves, P. D. Clough, H. Joho, Modelling vague places with knowledge from the web, International Journal of Geographical Information Science 22 (2008) 1045–1065.
  • Ashley et al. (2007) C. Ashley, P. De Brine, A. Lehr, H. Wilde, The role of the tourism sector in expanding economic opportunity, John F. Kennedy School of Government, Harvard University Cambridge, 2007.
  • Bieger and Laesser (2004) T. Bieger, C. Laesser, Information sources for travel decisions: Toward a source process model, Journal of Travel Research 42 (2004) 357–371.
  • Sun et al. (2018) X. Sun, Z. Huang, X. Peng, Y. Chen, Y. Liu, Building a model-based personalised recommendation approach for tourist attractions from geotagged social media data, International Journal of Digital Earth (2018) 1–18.
  • Amelung et al. (2007) B. Amelung, S. Nicholls, D. Viner, Implications of global climate change for tourism flows and seasonality, Journal of Travel research 45 (2007) 285–296.
  • Bojic et al. (2016) I. Bojic, A. Belyi, C. Ratti, S. Sobolevsky, Scaling of foreign attractiveness for countries and states, Applied Geography 73 (2016) 47–52.
  • Chon (1991) K.-S. Chon, Tourism destination image modification process: Marketing implications, Tourism management 12 (1991) 68–72.
  • Ekman and Davidson (1994) P. E. Ekman, R. J. Davidson, The nature of emotion: Fundamental questions., Oxford University Press, 1994.
  • Eimer et al. (2003) M. Eimer, A. Holmes, F. P. McGlone, The role of spatial attention in the processing of facial expression: an erp study of rapid brain responses to six basic emotions, Cognitive, Affective, & Behavioral Neuroscience 3 (2003) 97–110.
  • Izard (2007) C. E. Izard, Basic emotions, natural kinds, emotion schemas, and a new paradigm, Perspectives on psychological science 2 (2007) 260–280.
  • Pang et al. (2008) B. Pang, L. Lee, et al., Opinion mining and sentiment analysis, Foundations and Trends® in Information Retrieval 2 (2008) 1–135.
  • Zeng et al. (2009) Z. Zeng, M. Pantic, G. I. Roisman, T. S. Huang, A survey of affect recognition methods: Audio, visual, and spontaneous expressions, IEEE transactions on pattern analysis and machine intelligence 31 (2009) 39–58.
  • Berman et al. (2012) M. G. Berman, E. Kross, K. M. Krpan, M. K. Askren, A. Burson, P. J. Deldin, S. Kaplan, L. Sherdell, I. H. Gotlib, J. Jonides, Interacting with nature improves cognition and affect for individuals with depression, Journal of affective disorders 140 (2012) 300–305.
  • Svoray et al. (2018) T. Svoray, M. Dorman, G. Shahar, I. Kloog, Demonstrating the effect of exposure to nature on happy facial expressions via flickr data: Advantages of non-intrusive social network data analyses and geoinformatics methodologies, Journal of Environmental Psychology 58 (2018) 93–100.
  • Darwin and Prodger (1998) C. Darwin, P. Prodger, The expression of the emotions in man and animals, Oxford University Press, USA, 1998.
  • Lisetti (1998) C. Lisetti, Affective computing, Pattern Analysis & Applications 1 (1998) 71–73.
  • Hu et al. (2019) Y. Hu, C. Deng, Z. Zhou, A semantic and sentiment analysis on online neighborhood reviews for understanding the perceptions of people toward their living environment, Annals of the Association of American Geographers (2019).
  • Zheng et al. (2019) S. Zheng, J. Wang, C. Sun, X. Zhang, M. E. Kahn, Air pollution lowers chinese urbanites’ expressed happiness on social media, Nature Human Behaviour (2019) 1.
  • Niedenthal et al. (2018) P. M. Niedenthal, M. Rychlowska, A. Wood, F. Zhao, Heterogeneity of long-history migration predicts smiling, laughter and positive emotion across the globe and within the united states, PloS one 13 (2018) e0197651.
  • Baumeister et al. (2007) R. F. Baumeister, K. D. Vohs, D. C. Funder, Psychology as the science of self-reports and finger movements: Whatever happened to actual behavior?, Perspectives on Psychological Science 2 (2007) 396–403.
  • Ballatore and Adams (2015) A. Ballatore, B. Adams, Extracting place emotions from travel blogs, in: Proceedings of AGILE, volume 2015, pp. 1–5.
  • Bertrand et al. (2013) K. Z. Bertrand, M. Bialik, K. Virdee, A. Gros, Y. Bar-Yam, Sentiment in new york city: A high resolution spatial and temporal view, arXiv preprint arXiv:1308.5010 (2013).
  • Zhen et al. (2018) F. Zhen, J. Tang, Y. Chen, Spatial distribution characteristics of residents’ emotions based on sina weibo big data: A case study of nanjing, in: Big Data Support of Urban Planning and Management, Springer, 2018, pp. 43–62.
  • Coleman and Williams (2013) N. V. Coleman, P. Williams, Feeling like my self: Emotion profiles and social identity, Journal of Consumer Research 40 (2013) 203–222.
  • Shaheen et al. (2014) S. Shaheen, W. El-Hajj, H. Hajj, S. Elbassuoni, Emotion recognition from text based on automatically generated rules, in: 2014 IEEE International Conference on Data Mining Workshop, IEEE, pp. 383–392.
  • Zhang et al. (2018) F. Zhang, B. Zhou, L. Liu, Y. Liu, H. H. Fung, H. Lin, C. Ratti, Measuring human perceptions of a large-scale urban region using machine learning, Landscape and Urban Planning 180 (2018) 148–160.
  • Yu and Zhang (2015) Z. Yu, C. Zhang, Image based static facial expression recognition with multiple deep network learning, in: Proceedings of the 2015 ACM on International Conference on Multimodal Interaction, ACM, pp. 435–442.
  • Wang and Deng (2018) M. Wang, W. Deng, Deep face recognition: a survey, arXiv preprint arXiv:1804.06655 (2018).
  • Calvo and D’Mello (2010) R. A. Calvo, S. D’Mello, Affect detection: An interdisciplinary review of models, methods, and their applications, IEEE Transactions on affective computing 1 (2010) 18–37.
  • Levenson et al. (1990) R. W. Levenson, P. Ekman, W. V. Friesen, Voluntary facial action generates emotion-specific autonomic nervous system activity, Psychophysiology 27 (1990) 363–384.
  • Matsumoto (1991) D. Matsumoto, Cultural influences on facial expressions of emotion, Southern Journal of Communication 56 (1991) 128–137.
  • Cohn (2007) J. F. Cohn, Foundations of human computing: Facial expression and emotion, in: Artifical Intelligence for Human Computing, Springer, 2007, pp. 1–16.
  • Ekman and Keltner (1970) P. Ekman, D. Keltner, Universal facial expressions of emotion, California mental health research digest 8 (1970) 151–158.
  • Preuschoft (2000) S. Preuschoft, Primate faces and facial expressions, Social Research (2000) 245–271.
  • Parr and Waller (2006) L. A. Parr, B. M. Waller, Understanding chimpanzee facial expression: insights into the evolution of communication, Social Cognitive and Affective Neuroscience 1 (2006) 221–228.
  • Kang et al. (2018) Y. Kang, X. Zeng, Z. Zhang, Y. Wang, T. Fei, Who are happier? spatio-temporal analysis of worldwide human emotion based on geo-crowdsourcing faces, in: 2018 Ubiquitous Positioning, Indoor Navigation and Location-Based Services (UPINLBS), IEEE, pp. 1–8.
  • Berenbaum and Rotter (1992) H. Berenbaum, A. Rotter, The relationship between spontaneous facial expressions of emotion and voluntary control of facial muscles, Journal of Nonverbal Behavior 16 (1992) 179–190.
  • Ding et al. (2017) H. Ding, S. K. Zhou, R. Chellappa, Facenet2expnet: Regularizing a deep face recognition net for expression recognition, in: Automatic Face & Gesture Recognition (FG 2017), 2017 12th IEEE International Conference on, IEEE, pp. 118–126.
  • Kim et al. (2016) B.-K. Kim, J. Roh, S.-Y. Dong, S.-Y. Lee, Hierarchical committee of deep convolutional neural networks for robust facial expression recognition, Journal on Multimodal User Interfaces 10 (2016) 173–189.
  • Kang et al. (2017) Y. Kang, J. Wang, Y. Wang, S. Angsuesser, T. Fei, Mapping the sensitivity of the public emotion to the movement of stock market value: A case study of manhattan., International Archives of the Photogrammetry, Remote Sensing & Spatial Information Sciences 42 (2017).
  • Abdullah et al. (2015) S. Abdullah, E. L. Murnane, J. M. Costa, T. Choudhury, Collective smile: Measuring societal happiness from geolocated images, in: Proceedings of the 18th ACM Conference on Computer Supported Cooperative Work & Social Computing, ACM, pp. 361–374.
  • Singh et al. (2017) V. K. Singh, A. Atrey, S. Hegde, Do individuals smile more in diverse social company?: Studying smiles and diversity via social media photos, in: Proceedings of the 2017 ACM on Multimedia Conference, ACM, pp. 1818–1827.
  • Li and Goodchild (2012) L. Li, M. F. Goodchild, Constructing places from spatial footprints, in: Proceedings of the 1st ACM SIGSPATIAL international workshop on crowdsourced and volunteered geographic information, ACM, pp. 15–21.
  • Hu et al. (2015) Y. Hu, S. Gao, K. Janowicz, B. Yu, W. Li, S. Prasad, Extracting and understanding urban areas of interest using geotagged photos, Computers, Environment and Urban Systems 54 (2015) 240–254.
  • Goodchild and Hill (2008) M. F. Goodchild, L. L. Hill, Introduction to digital gazetteer research, International Journal of Geographical Information Science 22 (2008) 1039–1044.
  • Cox (2008) A. M. Cox, Flickr: a case study of web 2.0, in: Aslib Proceedings, volume 60, Emerald Group Publishing Limited, pp. 493–516.
  • Couclelis (1992) H. Couclelis, Location, place, region, and space, Geography’s inner worlds 2 (1992) 15–233.
  • Curry (1996) M. R. Curry, The work in the world: geographical practice and the written word, U of Minnesota Press, 1996.
  • Burrough and Frank (1996) P. A. Burrough, A. Frank, Geographic objects with indeterminate boundaries, volume 2, CRC Press, 1996.
  • Montello et al. (2014) D. R. Montello, A. Friedman, D. W. Phillips, Vague cognitive regions in geography and geographic information science, International Journal of Geographical Information Science 28 (2014) 1802–1820.
  • Feick and Robertson (2015) R. Feick, C. Robertson, A multi-scale approach to exploring urban places in geotagged photographs, Computers, Environment and Urban Systems 53 (2015) 96–109.
  • Ester et al. (1996) M. Ester, H.-P. Kriegel, J. Sander, X. Xu, et al., A density-based algorithm for discovering clusters in large spatial databases with noise., in: Kdd, volume 96, pp. 226–231.
  • Mai et al. (2018) G. Mai, K. Janowicz, Y. Hu, S. Gao, Adcn: An anisotropic density-based clustering algorithm for discovering spatial point patterns with noise, Transactions in GIS 22 (2018) 348–369.
  • Liu et al. (2019) X. Liu, Q. Huang, S. Gao, Exploring the uncertainty of activity zone detection using digital footprints with multi-scaled dbscan, International Journal of Geographical Information Science (2019) 1–28.
  • Graham (1972) R. L. Graham, An efficient algorithm for determining the convex hull of a finite planar set, Information Processing Letters 1 (1972) 132–133.
  • Barber et al. (1996) C. B. Barber, D. P. Dobkin, H. Huhdanpaa, The quickhull algorithm for convex hulls, ACM Transactions on Mathematical Software (TOMS) 22 (1996) 469–483.
  • Yu et al. (2014) B. Yu, S. Shu, H. Liu, W. Song, J. Wu, L. Wang, Z. Chen, Object-based spatial cluster analysis of urban landscape pattern using nighttime light satellite images: A case study of china, International Journal of Geographical Information Science 28 (2014) 2328–2355.
  • Whitehill et al. (2009) J. Whitehill, G. Littlewort, I. Fasel, M. Bartlett, J. Movellan, Toward practical smile detection, IEEE transactions on pattern analysis and machine intelligence 31 (2009) 2106–2111.
  • Frank and Ekman (1993) M. G. Frank, P. Ekman, Not all smiles are created equal: The differences between enjoyment and nonenjoyment smiles, Humor-International Journal of Humor Research 6 (1993) 9–26.
  • Wilhelm et al. (2014) O. Wilhelm, A. Hildebrandt, K. Manske, A. Schacht, W. Sommer, Test battery for measuring the perception and recognition of facial expressions of emotion, Frontiers in psychology 5 (2014) 404.
  • Efron (1992) B. Efron, Bootstrap methods: another look at the jackknife, in: Breakthroughs in statistics, Springer, 1992, pp. 569–593.
  • DiCiccio and Efron (1996) T. J. DiCiccio, B. Efron, Bootstrap confidence intervals, Statistical science (1996) 189–212.
  • Benesty et al. (2009) J. Benesty, J. Chen, Y. Huang, I. Cohen, Pearson correlation coefficient, in: Noise reduction in speech processing, Springer, 2009, pp. 1–4.
  • Wearing et al. (2009) S. Wearing, D. Stevenson, T. Young, Tourist cultures: Identity, place and the traveller, Sage, 2009.
  • White et al. (2010) M. White, A. Smith, K. Humphryes, S. Pahl, D. Snelling, M. Depledge, Blue space: The importance of water for preference, affect, and restorativeness ratings of natural and built scenes, Journal of Environmental Psychology 30 (2010) 482–493.
  • Thompson Coon et al. (2011) J. Thompson Coon, K. Boddy, K. Stein, R. Whear, J. Barton, M. H. Depledge, Does participating in physical activity in outdoor natural environments have a greater effect on physical and mental wellbeing than physical activity indoors? a systematic review, Environmental science & technology 45 (2011) 1761–1772.
  • Maas et al. (2009) J. Maas, R. A. Verheij, S. de Vries, P. Spreeuwenberg, F. G. Schellevis, P. P. Groenewegen, Morbidity is related to a green living environment, Journal of Epidemiology & Community Health (2009) jech–2008.
  • Thompson et al. (2012) C. W. Thompson, J. Roe, P. Aspinall, R. Mitchell, A. Clow, D. Miller, More green space is linked to less stress in deprived communities: Evidence from salivary cortisol patterns, Landscape and urban planning 105 (2012) 221–229.
  • Goward et al. (1991) S. N. Goward, B. Markham, D. G. Dye, W. Dulaney, J. Yang, Normalized difference vegetation index measurements from the advanced very high resolution radiometer, Remote sensing of environment 35 (1991) 257–277.
  • Wooller et al. (2018) J. J. Wooller, M. Rogerson, J. Barton, D. Micklewright, V. Gladwell, Can simulated green exercise improve recovery from acute mental stress?, Frontiers in Psychology 9 (2018).
  • Adams and Janowicz (2012) B. Adams, K. Janowicz, On the geo-indicativeness of non-georeferenced text, in: Sixth International AAAI Conference on Weblogs and Social Media, pp. 375–378.
  • Adams and McKenzie (2013) B. Adams, G. McKenzie, Inferring thematic places from spatially referenced natural language descriptions, in: Crowdsourcing geographic knowledge, Springer, 2013, pp. 201–221.
  • Kaplan (1995) S. Kaplan, The restorative benefits of nature: Toward an integrative framework, Journal of environmental psychology 15 (1995) 169–182.
  • Bowler et al. (2010) D. E. Bowler, L. M. Buyung-Ali, T. M. Knight, A. S. Pullin, A systematic review of evidence for the added benefits to health of exposure to natural environments, BMC public health 10 (2010) 456.
  • Junot et al. (2017) A. Junot, Y. Paquet, C. Martin-Krumm, Passion for outdoor activities and environmental behaviors: A look at emotions related to passionate activities, Journal of Environmental Psychology 53 (2017) 177–184.
  • Degrossi et al. (2018) L. C. Degrossi, J. Porto de Albuquerque, R. d. Santos Rocha, A. Zipf, A taxonomy of quality assessment methods for volunteered and crowdsourced geographic information, Transactions in GIS 22 (2018) 542–560.
  • Goodchild and Li (2012) M. F. Goodchild, L. Li, Assuring the quality of volunteered geographic information, Spatial statistics 1 (2012) 110–120.
  • García-Palomares et al. (2015) J. C. García-Palomares, J. Gutiérrez, C. Mínguez, Identification of tourist hot spots based on social networks: A comparative analysis of european metropolises using photo-sharing services and GIS, Applied Geography 63 (2015) 408–417.
  • Senaratne et al. (2017) H. Senaratne, A. Mobasheri, A. L. Ali, C. Capineri, M. Haklay, A review of volunteered geographic information quality assessment methods, International Journal of Geographical Information Science 31 (2017) 139–167.
  • Jolivet and Olteanu-Raimond (2017) L. Jolivet, A.-M. Olteanu-Raimond, Crowd and community sourced data quality assessment, in: International Cartographic Conference, Springer, pp. 47–60.
  • Dodds et al. (2011) P. S. Dodds, K. D. Harris, I. M. Kloumann, C. A. Bliss, C. M. Danforth, Temporal patterns of happiness and information in a global social network: Hedonometrics and twitter, PloS one 6 (2011) e26752.
  • Thomee et al. (2015) B. Thomee, D. A. Shamma, G. Friedland, B. Elizalde, K. Ni, D. Poland, D. Borth, L.-J. Li, YFCC100M: The new data in multimedia research, arXiv preprint arXiv:1503.01817 (2015).
  • Fieller et al. (1957) E. C. Fieller, H. O. Hartley, E. S. Pearson, Tests for rank correlation coefficients. i, Biometrika 44 (1957) 470–481.