
Multimodal Emoji Prediction

by Francesco Barbieri et al.

Emojis are small images commonly included in social media text messages. The combination of visual and textual content in the same message constitutes a modern form of communication that automatic systems are not accustomed to dealing with. In this paper we extend recent advances in emoji prediction by putting forward a multimodal approach that predicts emojis in Instagram posts. Instagram posts consist of pictures together with texts, which sometimes include emojis. We show that these emojis can be predicted not only from the text but also from the picture. Our main finding is that incorporating the two synergistic modalities in a combined model improves accuracy on the emoji prediction task. This result demonstrates that the two modalities (text and images) encode different information about the use of emojis and can therefore complement each other.
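The combined model described above can be illustrated with a minimal late-fusion sketch: concatenate a text embedding and an image embedding, then score emoji classes with a linear softmax layer. All specifics here are assumptions for illustration (the feature dimensions, the number of emoji classes, and the random stand-in features and weights are hypothetical; in practice the features would come from trained text and visual encoders, and the weights would be learned).

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical pre-extracted features: random stand-ins for what a text
# encoder (over the caption) and a visual encoder (over the picture)
# would produce in a real system.
text_feat = rng.standard_normal(300)   # assumed 300-d caption embedding
img_feat = rng.standard_normal(2048)   # assumed 2048-d picture embedding

n_emojis = 20  # e.g. predict one of the 20 most frequent emojis

def softmax(z):
    # Numerically stable softmax over class scores.
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

# Late fusion: concatenate the two modalities into one vector, then
# score each emoji class with a single linear layer.
fused = np.concatenate([text_feat, img_feat])
W = rng.standard_normal((n_emojis, fused.size)) * 0.01  # untrained weights
b = np.zeros(n_emojis)

probs = softmax(W @ fused + b)           # distribution over emoji classes
predicted_emoji = int(np.argmax(probs))  # index of the most likely emoji
```

Because the fused vector carries both modalities, the classifier can exploit cases where the picture and the caption give complementary cues, which is the effect the paper reports.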


Multimodal Emotion Classification

Most NLP and Computer Vision tasks are limited by the scarcity of labelled d...

Detecting Sarcasm in Multimodal Social Platforms

Sarcasm is a peculiar form of sentiment expression, where the surface se...

Which Emoji Talks Best for My Picture?

Emojis have evolved as complementary sources for expressing emotion in s...

Do Images really do the Talking? Analysing the significance of Images in Tamil Troll meme classification

A meme is a piece of media created to share an opinion or emotion across...

Point-of-Interest Type Prediction using Text and Images

Point-of-interest (POI) type prediction is the task of inferring the typ...

Story-oriented Image Selection and Placement

Multimodal contents have become commonplace on the Internet today, manif...

Automatic Location Type Classification From Social-Media Posts

We introduce the problem of Automatic Location Type Classification from ...