Disaster prediction and early warnings are crucial for mitigating the impact of disasters and the consequent damage. Despite significant improvements in forecasting and warning systems, many factors still limit the accuracy of current prediction algorithms, such as the lack of complete data on natural hazards, limitations of monitoring instruments, and the highly dynamic nature of natural hazards. Interestingly, social media plays an increasingly significant role in disaster management and communication: people use social media during disasters to share their feelings, ask for help and support disaster relief efforts.
A significant body of research has hence leveraged disaster-related information shared on social media to reduce the impact of disasters and deliver faster responses [12, 10]. For instance, one study analyzed user tweets during 25 different earthquakes in Japan and demonstrated that social media users can act as a reliable source of real-time situational updates during disasters. Decision makers, in turn, use social media to engage with the public quickly and widely. For example, during typhoon Pablo in 2012, local authorities in the Philippines asked people to use the hashtag #pabloph to get or share on-site updates about the typhoon.
Such correlations are valuable for supporting decision makers in emergency response processes. Previous works used data mining techniques to extract such correlations. For instance, one work applied wavelet analysis to track disaster progression from social media data; its results showed that wavelet-based features can preserve text semantics and predict the total duration of localized small-scale disasters.
In this work, we propose an end-to-end learning model to classify the intensity of typhoons (also called a typhoon's category or class) by learning from environmental data and social media (tweets). We were inspired by previous works (see, e.g., [27, 18]) which suggest that the joint learning of multiple models can significantly outperform standalone models. Our proposed approach consists of two jointly-trained models. The first model (dubbed Feature Extractor) analyzes typhoon-related tweets and computes statistical features (i.e., tweet volume and sentiment variances). To capture tweet sentiments, we employ a semantics-enriched word embedding in which entities are recognized and represented as semantic vectors. The second model (dubbed Typhoon Classifier) takes as input the features extracted by the first model combined with the environmental data. Both models are trained jointly through a shared loss function, and their learning parameters are optimized using the same gradient descent.
We evaluate our joint model in experiments on two real sources of data. First, we use environmental data of typhoons tracked by the Joint Typhoon Warning Center (JTWC). The dataset contains meteorological measurements (e.g., wind speed and sea-level pressure) before, during and after typhoon landfall. As a second dataset, we rely on typhoon-related tweets collected using keyword-based queries executed during typhoon periods between 2006-2018. We employ different architectures based on Deep Neural Networks (DNN), Convolutional Neural Networks (CNN) and Recurrent Neural Networks (e.g., RNN, LSTM and BiLSTM) as our baseline approaches. Our results suggest that our jointly-trained models outperform these baseline solutions in disaster prediction. We summarize the main contributions of this paper as follows:
We propose a generic end-to-end framework that improves the overall system performance via joint learning.
We conduct several experiments on a real disaster dataset to evaluate the performance of our proposed approaches. Our results clearly show that our proposed framework outperforms the state-of-the-art standalone baselines significantly.
We study the impact of incorporating semantic embeddings from knowledge graphs to enrich tweet representations. Our experiments show that feeding our model with semantic representations of the entities mentioned in tweets improves the overall system performance.
We provide an updated version of our disaster dataset (TED), which includes typhoon environmental data and the corresponding tweets up to 2018 (the last date archived by JTWC).
All our implementations are open-source and available from the project website: https://github.com/dice-group/joint-model-disaster-prediction.
2 Data and Preliminaries
In this section, we discuss our social media analysis during different typhoons and the dataset collection and preprocessing.
2.1 Social Media Content Analysis During Typhoons
Typhoon environmental data is tracked periodically (i.e., at regular time intervals) before, during and after a typhoon strikes. The goal of our work is to detect the intensity of typhoons not only based on such environmental data but also based on data collected by humans in the form of social media posts during each of the typhoons under study.
To pair social media with the environmental data of typhoons, we collect all tweets posted within the time slot of the respective environmental data into one batch. Inspired by the work of [11, 9], we analyzed the volume of tweets as well as their sentiments during different time slots of typhoons. The upper row of plots in Figure 1 depicts our content analysis of tweets during four different typhoons, where we explored how typhoon intensities vary over typhoon days. The middle row of the same figure presents the count of tweets within the same time slots as the intensities in the upper row. Finally, the lower four plots present the counts of tweets with positive (in blue) and negative (in yellow) sentiment. Comparing the respective plots of intensity, tweet count and sentiment for each of the four typhoons, we can see the correlation between a typhoon's intensity and its tweet count and sentiment. Consequently, we use the count of tweets and their sentiments as additional indicators for predicting typhoon intensity.
2.2 Datasets
We used three datasets to evaluate the performance of our proposed models (details of each dataset are presented in Table 1).
JTWC best-track (typhoon environmental data). This dataset consists of tracked data points for 70 typhoons between 2006-2016. Each data track is labelled with one of four classes (TD: tropical depression, TS: tropical storm, TY: typhoon and ST: super typhoon). The typhoon data also includes a time-stamp, location, maximum wind speed (VMAX), wind intensity (RAD) and sea-level pressure (MSLP). Note that we removed noisy and corrupted data (e.g., records with missing values). We then handled the class-imbalance problem by over-sampling minority classes with the SMOTE technique.
Typhoon Tweets. To search and retrieve typhoon-related tweets, we executed keyword queries based on related typhoon terminology, such as typhoon and typhoon names (e.g., Haiyan). The official Twitter streaming API limits free access (https://developer.twitter.com/en/docs/tweets/search/overview) to tweets from only the past 7 days. However, we were able to retrieve older tweets using the open-source library GetOldTweets-python (https://github.com/Jefferson-Henrique/GetOldTweets-python).
Stanford NLP Sentiment140. We used the Stanford Sentiment140 dataset to improve our model's performance in sentiment detection. This dataset contains 1.6 million tweets labeled with binary sentiments (positive/negative).
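The SMOTE over-sampling applied to the JTWC data above works by interpolating between minority-class samples and their nearest minority-class neighbours. A minimal numpy sketch of that core idea (the function name and parameters are ours; in practice a library implementation such as imbalanced-learn would be used):

```python
import numpy as np

def smote_oversample(X_min, n_new, k=5, rng=None):
    """Generate n_new synthetic minority samples by interpolating between
    each minority sample and one of its k nearest minority neighbours."""
    rng = np.random.default_rng(rng)
    n, d = X_min.shape
    k = min(k, n - 1)
    # pairwise distances within the minority class
    dists = np.linalg.norm(X_min[:, None, :] - X_min[None, :, :], axis=-1)
    np.fill_diagonal(dists, np.inf)
    neighbours = np.argsort(dists, axis=1)[:, :k]    # k nearest per sample
    synthetic = np.empty((n_new, d))
    for i in range(n_new):
        j = rng.integers(n)                          # pick a minority sample
        nb = X_min[rng.choice(neighbours[j])]        # and one of its neighbours
        gap = rng.random()                           # interpolation factor in [0, 1]
        synthetic[i] = X_min[j] + gap * (nb - X_min[j])
    return synthetic
```

Each synthetic point lies on the segment between two real minority samples, so the oversampled class stays within its original convex hull.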
Tweets are commonly informal and often contain noisy and incomplete text. Hence, the preprocessing of tweets often involves varied techniques to achieve high-quality analysis in data mining applications. In particular, we carry out the following preprocessing steps:
Entity recognition. There are several tools for entity extraction and semantic reasoning. We use the Spacy API (https://spacy.io/) to annotate entities in tweets. Spacy is an open-source Natural Language Processing (NLP) library that is widely used due to its availability and strong accuracy in different linguistic tasks.
Tokenization. Our final preprocessing step is to tokenize tweets into words and convert them to lower case.
Example 1. The preprocessing of the tweet "My heart goes out to all those affected by Typhoon Haiyan. You can help by donating to the Philippine RED CROSS here (link: http://www.redcross.org) redcross.org" will remove the URLs at the end of the tweet and generate the following tokenized word vector: [my, heart, goes, out, to, all, those, affected, by, typhoon, haiyan, you, can, help, by, donating, to, the, philippine, red, cross, here].
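The URL removal, lower-casing and tokenization steps above can be sketched in a few lines (the function name and regular expressions are our own simplified stand-ins for the actual pipeline):

```python
import re

def preprocess_tweet(text):
    """Strip URLs, lowercase, and tokenize into word tokens
    (a simplified version of the preprocessing described above)."""
    text = re.sub(r"\(link:\s*[^)]*\)", " ", text)                      # parenthesised links
    text = re.sub(r"(https?://\S+|\w+\.(org|com|net)\S*)", " ", text)   # bare URLs/domains
    return re.findall(r"[a-z']+", text.lower())                         # lowercase word tokens

tweet = ("My heart goes out to all those affected by Typhoon Haiyan. "
         "You can help by donating to the Philippine RED CROSS here "
         "(link: http://www.redcross.org) redcross.org")
print(preprocess_tweet(tweet))
```

On the tweet from Example 1, this reproduces the tokenized word vector shown above.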
3 The Approach
In this section, we first formulate the problem of jointly training from social media and environmental data to predict typhoon intensities. Our approach takes two inputs: typhoon environmental data and tweet batches. Figure 2 shows the architecture (BiLSTM+CNN) of our joint models (Feature Extractor and Typhoon Classifier). In the rest of this section, we discuss our semantics-enriched word embedding for representing tweets and the joint model architectures.
3.1 Problem Formulation
Let $D = (X, Y)$ denote typhoon environmental data, where $X$ is a set of typhoon observations and $Y$ is a set of typhoon categories (i.e., labels, classes). Each $x_i \in X$ represents an instance of typhoon data with features (e.g., time-stamp, wind speed, sea-level pressure and gust) and each $y_i \in Y$ represents the respective typhoon's category (i.e., either tropical depression, tropical storm, typhoon or super typhoon).
For each typhoon with environmental data $x_i$, we collect all related tweets posted within the time slot of $x_i$; we dub these tweets $T_i$. Further, we analyse all tweets in $T_i$ to extract statistical features (i.e., tweet-based features) such as tweet volume and the variances of tweet sentiments. Finally, we combine these features with the typhoon's environmental data into one input vector.
(Task Description.) Our goal is to design a classification model $M$ able to learn features from $X$ and $T$ in order to predict the typhoon category $y$. We build our classification model as a joint model of two cascaded models $M_1$ and $M_2$, dubbed the feature extractor and typhoon classifier models respectively (i.e., $M = M_2 \circ M_1$). To ensure the joint training of $M_1$ and $M_2$, we combine the loss functions of both models ($L_1$, $L_2$) into one shared loss function as follows:

$$L = \lambda_1 L_1 + \lambda_2 L_2$$

where the parameters $\lambda_1, \lambda_2$ balance the individual loss functions ($L_1$, $L_2$). In this paper, we set all $\lambda$ parameters to 1. To compute the training losses, we used cross-entropy as the loss function:

$$L_m = -\sum_{i} y_i \log(\hat{y}_i)$$

where $y_i$ and $\hat{y}_i$ denote the target and predicted typhoon categories respectively for typhoon instance $i$.
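The shared loss can be sketched in a few lines of numpy (function names and the one-hot/probability conventions here are ours):

```python
import numpy as np

def cross_entropy(y_true, y_pred, eps=1e-12):
    """Mean categorical cross-entropy; y_true is one-hot, y_pred are probabilities."""
    return -np.mean(np.sum(y_true * np.log(y_pred + eps), axis=1))

def shared_loss(y1_true, y1_pred, y2_true, y2_pred, lam1=1.0, lam2=1.0):
    """Combined loss of the feature extractor (L1) and the typhoon classifier (L2),
    weighted by the balancing parameters (set to 1 in the paper)."""
    return lam1 * cross_entropy(y1_true, y1_pred) + lam2 * cross_entropy(y2_true, y2_pred)
```

With both weights at 1, the shared loss is simply the sum of the two models' cross-entropy losses.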
(Joint Learning.) Let $\theta_1$ and $\theta_2$ be the learning parameters of the feature extractor and typhoon classifier models. Both $\theta_1$ and $\theta_2$ are optimized concurrently as follows. Assume two consecutive batches of training data $b_t$ and $b_{t+1}$. In batch $b_t$, the learning parameters are optimized using the same gradient descent procedure (e.g., the ADAM optimizer) by backpropagating the gradients of the shared loss to both models. In the following batch ($b_{t+1}$), the computation of statistical features by the feature extractor model is hence adapted not only to the losses in its own outputs, but also to the losses in the final output of the typhoon classifier model. The feature extractor model therefore feeds adaptive features from social media to the typhoon classifier model.
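To illustrate how a single gradient step updates both parameter sets through the shared loss, consider a deliberately tiny scalar example (the functions f, L1 and L2 are toy stand-ins, not the paper's models): the classifier's loss depends on the extractor's output f(t1), so its gradient flows back into t1 via the chain rule.

```python
def f(t1):            # "feature" produced by the extractor
    return t1 ** 2

def L1(t1):           # extractor's own loss (wants t1 near 1)
    return (t1 - 1.0) ** 2

def L2(t1, t2):       # classifier loss, depends on the extracted feature
    return (f(t1) + t2 - 3.0) ** 2

def shared(t1, t2):   # shared loss L = L1 + L2
    return L1(t1) + L2(t1, t2)

t1, t2, lr = 0.5, 0.0, 0.05
for _ in range(500):
    # analytic gradients of the shared loss w.r.t. both parameter sets;
    # the classifier's residual reaches t1 through the chain rule over f
    residual = f(t1) + t2 - 3.0
    g1 = 2 * (t1 - 1.0) + 2 * residual * 2 * t1
    g2 = 2 * residual
    t1, t2 = t1 - lr * g1, t2 - lr * g2

print(round(shared(t1, t2), 4))
```

After training, both parameters settle near the joint optimum (t1 = 1, t2 = 2), where the shared loss vanishes.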
3.2 Semantics-enriched Word Embedding
Our analysis of tweets during different typhoons (see Section 2.1) suggests that we can use tweet volume and sentiments as additional features in our model. To perform sentiment analysis on crisis tweets, we used the continuous skip-gram approach to train our word embedding model on typhoon tweets and the Sentiment140 dataset. While generic word embeddings trained on large-scale corpora (e.g., Wikipedia and Google News) could have been used here, they often fail to capture domain-specific knowledge and semantic nuances. In contrast, domain-adapted word embeddings are effective within the context in which they are trained, as they capture domain-specific knowledge.
Now, given the preprocessed list of words of a tweet $t$, we map each word $w$ to its embedding vector $v_w \in \mathbb{R}^d$, where $d$ is the embedding dimension. Unlike classical word embeddings, we represent entities with their corresponding vectors from the semantic knowledge base ConceptNet (http://conceptnet.io/), where entities and relationships are projected into the same embedding space. The motivation for combining semantic embeddings with classical word embeddings is the superior performance of semantic embeddings in modern data mining applications.
The semantics-enriched word embedding vector of the tweet from Example 1 would be [my, heart, goes, out, to, all, those, affected, by, typhoon, haiyan, you, can, help, by, donating, to, the, philippine, red, cross, here]. Note that tokens recognized as semantic entities (e.g., haiyan) are represented by their semantic vectors.
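This two-table lookup, consulting the concept embedding for recognized entities and the word embedding otherwise, can be sketched as follows (all names are illustrative; real entity vectors would come from a ConceptNet-style embedding such as Numberbatch):

```python
import numpy as np

def enriched_lookup(tokens, entities, word_emb, concept_emb, d):
    """Map each token to a d-dimensional vector: tokens recognised as entities
    are looked up in the concept embedding, all others in the word embedding
    (both spaces are assumed to share dimension d)."""
    vecs = []
    for tok in tokens:
        table = concept_emb if tok in entities else word_emb
        vecs.append(table.get(tok, np.zeros(d)))   # unknown tokens -> zero vector
    return np.stack(vecs)
```

Because entities and words are projected into the same embedding space, the resulting matrix can be fed to the BiLSTM unchanged.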
For each input tweet, we build an embedding matrix $E \in \mathbb{R}^{n \times d}$, where $n$ is the number of words per tweet and $d$ the embedding dimension. Each row of $E$ represents the word2vec embedding of the word at the corresponding position in the tweet. Our word2vec model has dimension $d$ and a vocabulary of words and recognized entities. Due to the variable lengths of tweets, we fixed $n$ to the average number of words per tweet to maintain a regular embedding matrix; we therefore truncated longer tweets and padded shorter tweets with zeros.
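The truncation and zero-padding step can be sketched as follows (function and parameter names are ours):

```python
import numpy as np

def build_embedding_matrix(tokens, emb, n, d):
    """Map tokens to vectors, truncating to n rows and zero-padding shorter
    tweets. `emb` is any dict-like token -> d-dimensional vector lookup."""
    E = np.zeros((n, d))                    # rows beyond the tweet stay zero (padding)
    for i, tok in enumerate(tokens[:n]):    # slice truncates longer tweets
        E[i] = emb.get(tok, np.zeros(d))    # unknown words map to the zero vector
    return E
```

Every tweet thus yields a regular n-by-d matrix regardless of its original length.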
3.3 Feature Extractor
The feature extractor is the first model in our approach; it aims to extract statistical features from tweets. To model the word sequences in tweets, we employ a bidirectional LSTM (BiLSTM) model. First, we use an embedding look-up layer to map words to their corresponding vectors in the semantics-enriched word embedding model. Then, we employ one BiLSTM layer with a fixed number of units and a dropout rate, followed by one dense layer with softmax output.
Given a sequence of input words $(w_1, \ldots, w_n)$, the BiLSTM sees the context of each word in both directions (left-to-right and right-to-left) and summarizes this information into a concatenated output vector $h_t = [\overrightarrow{h_t}; \overleftarrow{h_t}]$. The BiLSTM layer associates each time step $t$ with an input gate $i_t$, memory cell $c_t$, forget gate $f_t$ and an output gate $o_t$. The output vector $h_t$ is computed by iterating the following equations:

$$i_t = \sigma(W_i x_t + U_i h_{t-1} + b_i)$$
$$f_t = \sigma(W_f x_t + U_f h_{t-1} + b_f)$$
$$\tilde{c}_t = \tanh(W_c x_t + U_c h_{t-1} + b_c)$$
$$c_t = f_t \odot c_{t-1} + i_t \odot \tilde{c}_t$$
$$o_t = \sigma(W_o x_t + U_o h_{t-1} + b_o)$$
$$h_t = o_t \odot \tanh(c_t)$$

where $W_i, W_f, W_c, W_o$ are the weight matrices of the input, forget, memory and output gates applied to the input vector $x_t$; respectively, $U_i, U_f, U_c, U_o$ are the weight matrices applied to the previous hidden vector $h_{t-1}$; $b_i, b_f, b_c, b_o$ are the bias terms of the four gates; and $\sigma$ and $\odot$ denote the sigmoid function and element-wise multiplication respectively.
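The LSTM gate equations described above can be implemented directly; here is a minimal numpy sketch of a single (unidirectional) LSTM step, with dictionary-keyed parameters of our own choosing:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x_t, h_prev, c_prev, W, U, b):
    """One LSTM time step. W, U, b hold the parameters of the input (i),
    forget (f), memory (g) and output (o) gates."""
    i = sigmoid(W["i"] @ x_t + U["i"] @ h_prev + b["i"])   # input gate
    f = sigmoid(W["f"] @ x_t + U["f"] @ h_prev + b["f"])   # forget gate
    g = np.tanh(W["g"] @ x_t + U["g"] @ h_prev + b["g"])   # candidate memory
    o = sigmoid(W["o"] @ x_t + U["o"] @ h_prev + b["o"])   # output gate
    c_t = f * c_prev + i * g        # new memory cell
    h_t = o * np.tanh(c_t)          # new hidden state
    return h_t, c_t
```

A BiLSTM runs this recurrence once left-to-right and once right-to-left and concatenates the two hidden states per time step.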
The outputs of the BiLSTM are the probabilities of positive and negative sentiment computed by the softmax function in Equation 5. Subsequently, we extract statistical features from the BiLSTM outputs and combine them with the typhoon data into a matrix $X' \in \mathbb{R}^{m \times (k+3)}$ with $m$ typhoon instances and $k$ environmental features. The three additional columns represent the statistical features: tweet count, variance of negative sentiments and variance of positive sentiments. The variances of sentiments are computed as follows:

$$\sigma^2 = \frac{1}{N} \sum_{j=1}^{N} (s_j - \bar{s})^2$$

where $s_j$ denotes the predicted sentiment of tweet $j$, $\bar{s}$ is the average sentiment and $N$ is the tweet count. Given an input vector $z$ and $K$ the number of typhoon categories, the softmax output

$$p_k = \frac{e^{z_k}}{\sum_{j=1}^{K} e^{z_j}} \quad (5)$$

denotes the probability of typhoon category $k$.
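The tweet-based statistical features and the softmax described above can be sketched as follows (function names and the column convention are ours):

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax over a vector of scores."""
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def tweet_batch_features(sentiments):
    """Statistical features of one tweet batch: tweet count and the variances
    of the negative and positive sentiment probabilities."""
    s = np.asarray(sentiments)      # s[:, 0] = P(negative), s[:, 1] = P(positive)
    count = len(s)
    var_neg = np.mean((s[:, 0] - s[:, 0].mean()) ** 2)
    var_pos = np.mean((s[:, 1] - s[:, 1].mean()) ** 2)
    return np.array([count, var_neg, var_pos])
```

These three values form the extra columns that are concatenated with the environmental features per typhoon instance.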
3.4 Typhoon Classifier
The Typhoon Classifier takes the input features from the Feature Extractor (see the previous section) and predicts the typhoon intensity as the final output. We explored different deep architectures (i.e., DNN, CNN and RNN models) as baselines to benchmark the performance of our proposed joint-training model for typhoon prediction. In the following, we discuss the architecture of the Typhoon Classifier used in our joint approach in Figure 2.
After each convolutional operation, we subsample the output with a max-pooling layer (also called a pooling operation). Max-pooling eliminates non-maximal values and reduces computation in later layers. We also employed dropout in the convolutional layers (after the first and second layers) to avoid overfitting and increase the model's robustness. Finally, we used a fully connected layer with a softmax activation function to compute the output probabilities for all typhoon categories as in Equation 5. At the end, the class with the highest probability is returned as the model output.
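The max-pooling subsampling described above can be sketched for the one-dimensional case (function name and window convention are ours):

```python
import numpy as np

def max_pool_1d(x, pool=2):
    """Non-overlapping 1-D max pooling: keep the maximum of each window,
    discarding non-maximal values and shrinking the feature map."""
    n = (len(x) // pool) * pool              # drop the ragged tail
    return x[:n].reshape(-1, pool).max(axis=1)
```

For example, pooling [1, 3, 2, 5] with a window of 2 yields [3, 5], halving the feature map while keeping the strongest activations.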
4 Evaluation
We conducted several sets of experiments to benchmark the performance of the baseline models as well as our proposed approach for predicting the intensity of typhoons. The aim of our evaluation is to answer the following two research questions:
To what extent can social media improve the performance of state-of-the-art disaster prediction approaches?
What is the impact of semantic embeddings in the tweet representation on the performance of our proposed approach?
In the rest of this section, we begin by describing the baseline approaches and evaluation metrics. Thereafter, we analyze our results and answer each of our research questions in detail.
4.1 Baselines
We benchmarked our approach against different baselines, including traditional ML and deep neural models. We selected the SVM classifier as our traditional baseline classifier as it outperforms other traditional classifiers. For our experiment, we implemented an RBF-kernel SVM classifier trained on typhoon environmental data. We also used four benchmarks from the deep neural models (i.e., DNN, RNN, CNN and BiLSTM), all of which have achieved good performance in disaster-related research [7, 29]. In particular, a CNN-based model with semantics has been employed to classify disaster-related social media data. Its authors suggested enriching the data representation by recognizing entities in tweets as binary indicators (0 or 1; also called a bag of concepts) and adding them as additional features. However, their representation does not capture the context in which an entity occurs.
In our approach, we recognize entities and extract their semantic representations from an external knowledge graph embedding. The semantic embedding considers not only the existence of an entity, but also the context in which it occurs. We specify our proposed approaches used in the experiments as follows:
LSTM+DNN (word embedding): Our first approach, in which two deep neural models (LSTM and DNN) are jointly trained with combined features from environmental data and word embeddings.
LSTM+DNN (semantic embedding): The same as LSTM+DNN (word embedding), but using the semantics-enriched word embedding for data representation. See Section 3.2 for more details.
LSTM+RNN (word embedding): Our second approach, in which two deep neural models (LSTM and RNN) are jointly trained with combined features from environmental data and word embeddings.
LSTM+RNN (semantic embedding): The same as LSTM+RNN (word embedding), but using the semantics-enriched word embedding.
BiLSTM+CNN (word embedding): Our third approach, in which two deep neural models (BiLSTM and CNN) are jointly trained with combined features from environmental data and word embeddings.
BiLSTM+CNN (semantic embedding): The same as BiLSTM+CNN (word embedding), but using the semantics-enriched word embedding.
To ensure a fair performance evaluation, we evaluated our proposed models with the same architectures and hyper-parameter settings used to configure the baselines. Moreover, we evaluated the impact of incorporating semantic embeddings from external knowledge graphs in comparison with the traditional word embedding of our input tweet dataset.
4.2 Evaluation Metrics
We considered standard evaluation metrics to assess the models' performance on the task of typhoon prediction. We divided the dataset (formally described in Section 2.2) into train-test splits of 80%-20% respectively. We trained each model for a fixed number of epochs, then evaluated the overall performance metrics (accuracy, precision, recall and F1-score) on the test dataset, as depicted in Table 2.
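The micro-averaged metrics used here can be computed as follows (a self-contained sketch; note that for single-label multi-class data, micro-averaged precision, recall and F1 all coincide with accuracy):

```python
import numpy as np

def classification_metrics(y_true, y_pred, n_classes):
    """Accuracy plus micro-averaged precision, recall and F1 for a multi-class
    prediction (micro averaging pools true/false positives over all classes)."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    acc = np.mean(y_true == y_pred)
    tp = fp = fn = 0
    for c in range(n_classes):
        tp += np.sum((y_pred == c) & (y_true == c))
        fp += np.sum((y_pred == c) & (y_true != c))
        fn += np.sum((y_pred != c) & (y_true == c))
    prec = tp / (tp + fp)
    rec = tp / (tp + fn)
    f1 = 2 * prec * rec / (prec + rec)
    return acc, prec, rec, f1
```

In practice a library implementation (e.g., scikit-learn's metrics module) would typically be used instead.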
4.3 Discussion and Result Analysis
To answer the first research question, we evaluated the baseline models with features extracted from the environmental data (see Section 2.2). Further, we used the same features, together with the features extracted from relevant tweets (i.e., tweet-based features), to train our proposed models. In particular, we computed the additional features of tweet count and the variances of positive and negative tweet sentiment.
Table 2 shows how challenging the task of typhoon intensity prediction is: even the best baseline model (i.e., the BiLSTM model) produces only moderate accuracy. All the baseline models were trained on environmental data, which is captured by sensor devices and usually includes noisy and incomplete data. In contrast, our proposed models demonstrate significantly improved performance when tweet-based features are incorporated with the environmental data. In particular, the best accuracy is achieved by our proposed BiLSTM+CNN model, which outperforms its respective baselines (CNN and BiLSTM) on micro-average F1-score. The other proposed models (LSTM+DNN and LSTM+RNN) also outperform their respective baselines (DNN and RNN) on average.
To summarize our answer: training our proposed model with relevant features from social media in addition to environmental data improves on both the accuracy and the F1-measure of the baseline approaches. To understand the superior performance of our models, we evaluated the importance of the training features using the Random Forests algorithm (we used the implementation from scikit-learn: http://scikit-learn.org/stable/modules/generated/sklearn.ensemble.RandomForestClassifier.html). As shown in Figure 4, tweet-based features were found to be more important than environmental features. As discussed in Section 3.1, tweet-based features adapt to the prediction losses in our joint model, which helps to fit the features during joint training and improve the accuracy of the final prediction.
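A feature-importance analysis of this kind can be sketched with scikit-learn as follows (the data here is a synthetic stand-in and the feature names are illustrative, not the paper's actual dataset, in which the label is constructed to depend mostly on the first, tweet-based feature):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Synthetic stand-in data: 3 tweet-based features + 3 environmental features.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 6))
y = (X[:, 0] > 0).astype(int)   # label driven by the first (tweet-based) feature

names = ["tweet_count", "var_pos_sent", "var_neg_sent", "vmax", "mslp", "rad"]
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Rank features by their Gini importance, highest first.
ranking = sorted(zip(names, model.feature_importances_), key=lambda p: -p[1])
print(ranking[0][0])
```

The `feature_importances_` attribute sums to 1 over all features, so the ranking can be read directly as relative importance shares.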
To answer the second research question, we investigated the impact of incorporating semantic embeddings on the performance of our proposed system. Our experiments showed improved performance in terms of accuracy, precision, recall and F1-measure for the semantic-embedding-based models over those based only on word embeddings. In particular, the accuracy of our proposed joint-model architectures improved in both the LSTM+DNN and LSTM+RNN models compared to their word-embedding-based variants. As discussed in Section 3.2, the semantic embeddings from the knowledge graph enrich the representation of entities and their relationships as semantic vectors.
We also validated the training robustness of each model and checked for over-fitting. As shown in Figure 3, all models achieve robust performance in the testing phase as well as in training; over-fitting is alleviated by properly tuning the hyper-parameters.
5 Related Works
Our work is related to social media analysis for disaster management, jointly-trained models and semantics-enrichment data mining. In the following, we briefly present the related state of the art in each of these areas in turn.
5.1 Social media analysis for disaster management
One work proposed a social media-based framework to estimate damage from social media (i.e., tweets and images). The system analyzed user activities on Twitter before, during and after hurricane Sandy. The results showed a strong correlation between activities on social media and the hurricane path. Similarly, in this paper we study social media behavior during different typhoons; our analysis shows an implicit correlation based on tweet volume and sentiment.
Social media has also been shown to act as a rapid event detector, so-called social sensing from the crowd. For example, Takeshi et al. proposed a probabilistic spatio-temporal approach to predict the center and path of natural hazards based on geo-tagged tweets. The authors analyzed the behavior of social media users during different earthquakes and typhoons. Their experimental results demonstrated a strong correlation between users' behavior on Twitter and natural hazards, which the authors used as an event detector for earthquakes.
Although previous works have explored the role of social media (e.g., tweets) in detecting crisis events and enhancing situational awareness, few studies have shown how tweet sentiments can be used as discriminative features to improve event prediction. For example, Jingrui et al. proposed an optimization framework to improve traffic prediction: their system extracted traffic indicators from tweet semantics to improve traffic-jam prediction.
5.2 Semantics-enriched data mining
Researchers have leveraged semantic nuances to boost performance in machine learning (ML) and data mining tasks. The authors of [4, 30] extended traditional bag-of-words models with a bag-of-concepts model extracted from semantic knowledge graphs (e.g., WordNet, DBpedia). However, they represented only the presence of concepts as a vector of indices within a concept space. On the other hand, Jin Wang et al. leveraged information from a knowledge graph for short-text classification: their approach associated each short text with relevant concepts, then combined words and concepts to generate embeddings from a pre-trained word embedding model.
In contrast, our approach leverages semantic embeddings from a knowledge graph that projects concepts (i.e., entities) and their relationships into a conceptualized data representation.
5.3 Joint-Learning models
Recently, joint learning models have achieved superior performance on complex tasks. For instance, Jishnu et al. proposed jointly-trained RNN-based models to extract keyphrases from disaster-related tweets; through the interaction of two RNN models via joint learning, their experimental results clearly demonstrated significant performance gains over existing baseline approaches. In the related task of image classification, Wanli et al. proposed a unified deep model that jointly learns from different components of the classification task. Similarly, Zheng et al. demonstrated that the joint learning of two deep models could not only separately learn user and item latent factors from review text, but also let the models cooperate to boost the performance of rating prediction.
6 Conclusion
In this paper, we proposed an end-to-end training framework that learns from social media and environmental data to improve disaster prediction. In particular, we analyzed typhoon-related tweets to capture additional indicators of typhoon events. Unlike previous works, we extract adaptive features via jointly-trained models (e.g., BiLSTM+CNN): the first model (BiLSTM) acts as a feature extractor from social media and feeds the second model (CNN) with combined features from tweets and environmental data. Furthermore, we studied the impact of applying a semantically-enriched data representation on the performance of our system, employing semantic embeddings from the external knowledge graph ConceptNet. We conducted several experiments to benchmark the prediction performance of our approach and several baselines. Our evaluation showed significantly improved accuracies for our approaches (LSTM+DNN: 87.3%, LSTM+RNN: 86.0% and BiLSTM+CNN: 90%) compared to state-of-the-art baselines (DNN: 75.6%, RNN: 80.2%, CNN: 70% and BiLSTM: 84%). Moreover, feeding our proposed joint models with the semantic representations of entities improved the F1-score even further (by up to 3% for LSTM+DNN, up to 4% for LSTM+RNN and up to 2.7% for BiLSTM+CNN).
In future work, we aim to construct a domain-specific knowledge graph from disaster-related tweets for disaster relief tasks (e.g., event summarization, identifying actionable information). In addition, we will carry out a comprehensive study of the effect of generic word embedding models (e.g., Word2vec, GloVe) in comparison with embeddings from domain-specific corpora, knowledge graphs and contextualized embeddings (e.g., BERT).
-  (2011) Twitter for crisis communication: lessons learned from japan’s tsunami disaster. International Journal of Web Based Communities 7 (3), pp. 392–402. Cited by: §5.1.
-  (2017) Choosing an NLP library for analyzing software documentation: a systematic literature review and a series of experiments. In Proceedings of the 14th International Conference on Mining Software Repositories, Cited by: item -.
-  (2018) Evaluating disaster time-line from social media with wavelet analysis. In 2018 IEEE International Conference on Smart Computing (SMARTCOMP), pp. 41–48. Cited by: §1.
-  On semantics and deep learning for event detection in crisis situations. In Proceedings of the International Workshop on Semantic Deep Learning (SemDeep), Cited by: §4.1, §5.2, §5.3.
-  (2002) SMOTE: synthetic minority over-sampling technique. Journal of Artificial Intelligence Research 16, pp. 321–357. Cited by: item -.
-  (2012) Upper ocean responses to category 5 typhoon megi in the western north pacific. Acta Oceanologica Sinica 31 (1), pp. 51–58. Cited by: §1.
-  (2019) Detecting climate change deniers on twitter using a deep neural network. In Proceedings of the 2019 11th International Conference on Machine Learning and Computing, pp. 204–210. Cited by: §4.1.
-  (2014) Early warning systems for natural hazards and risks. Springer. Cited by: §1.
-  (2013) Improving traffic prediction with tweet semantics. In Proceedings of the Twenty-Third International Joint Conference on Artificial Intelligence, IJCAI ’13, pp. 1387–1393. External Links: Cited by: §2.1, §5.1.
-  (2015) Social media and disasters: a functional framework for social media use in disaster planning, response, and research. Disasters 39 (1), pp. 1–22. Cited by: §1.
-  (2016) Rapid assessment of disaster damage using social media activity. Science advances 2 (3), pp. e1500779. Cited by: §2.1, §5.1.
-  (2014) Social media in disaster relief. In Data mining and knowledge discovery for big data, pp. 225–257. Cited by: §1.
-  (2013) Distributed representations of words and phrases and their compositionality. In Advances in neural information processing systems, pp. 3111–3119. Cited by: §3.2.
-  (2011) Challenges in disaster data collection during recent disasters. Prehospital and disaster medicine 26 (3), pp. 196–201. Cited by: §4.3.
-  (2013) Joint deep learning for pedestrian detection. In Proceedings of the IEEE International Conference on Computer Vision, Cited by: §5.3.
-  (2018) Make embeddings semantic again!. In International Semantic Web Conference (P&D/Industry/BlueSky), Cited by: §3.2.
-  (2016) Semantic event detection using ensemble deep learning. In Multimedia (ISM), 2016 IEEE International Symposium on, pp. 203–208. Cited by: §5.3.
-  Joint training of cascaded CNN for face detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 3456–3465. Cited by: §1.
-  (2011) Microblogging after a major disaster in china: a case study of the 2010 yushu earthquake. In Proceedings of the ACM 2011 conference on Computer supported cooperative work, Cited by: §5.1.
-  (2019) Keyphrase extraction from disaster-related tweets. In The World Wide Web Conference, pp. 1555–1566. Cited by: §5.3.
-  (2016) How we’ll predict the next natural disaster: advances in natural hazard forecasting could help keep more people out of harm’s way. Discover Magazine, Sep. Cited by: §1.
-  (2018) Fifteen years of social media in emergencies: a retrospective review and future directions for crisis informatics. Journal of Contingencies and Crisis Management 26 (1), pp. 41–57. Cited by: §1.
-  (2012) Semantic sentiment analysis of twitter. In International semantic web conference, pp. 508–524. Cited by: item -.
-  (2010) Earthquake shakes twitter users: real-time event detection by social sensors. In Proceedings of the 19th international conference on World wide web, pp. 851–860. Cited by: §1, §5.1, §5.1.
-  (2018) Domain adapted word embeddings for improved sentiment classification. arXiv preprint arXiv:1805.04576. Cited by: §3.2.
-  (2013-06) To tweet or not to tweet during a disaster?. External Links: Cited by: §1.
-  (2014) Joint training of a convolutional network and a graphical model for human pose estimation. In Advances in Neural Information Processing Systems, pp. 1799–1807. Cited by: §1.
-  (2013) Aid is out there: looking for help from tweets during a large scale disaster. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics, Cited by: §5.1.
-  (2017) Using a recurrent neural network model for classification of tweets conveyed influenza-related information. In Proceedings of the International Workshop on Digital Disease Detection using Social Media 2017 (DDDSM-2017), pp. 33–38. Cited by: §4.1.
-  (2014) Concept-based short text classification and ranking. In Proceedings of the 23rd ACM International Conference on Conference on Information and Knowledge Management, pp. 1069–1078. Cited by: §5.2.
Combining knowledge with deep convolutional neural networks for short text classification.. In IJCAI, pp. 2915–2921. Cited by: §5.2.
-  (2019) Jointly learning from social media and environmental data for typhoon intensity prediction. In Proceedings of the 10th International Conference on Knowledge Capture, pp. 231–234. Cited by: Semantic-based End-to-End Learning for Typhoon Intensity Prediction.
-  (2017) Joint deep modeling of users and items using reviews for recommendation. In Proceedings of the Tenth ACM International Conference on Web Search and Data Mining, pp. 425–434. Cited by: §5.3.