"Factual" or "Emotional": Stylized Image Captioning with Adaptive Learning and Attention

07/10/2018
by Tianlang Chen, et al.

Generating stylized captions for an image is an emerging topic in image captioning. Given an image as input, the system is required to generate a caption that has a specific style (e.g., humorous, romantic, positive, or negative) while describing the image content semantically accurately. In this paper, we propose a novel stylized image captioning model that effectively takes both requirements into consideration. To this end, we first devise a new variant of LSTM, named style-factual LSTM, as the building block of our model. It uses two groups of matrices to capture factual and stylized knowledge, respectively, and automatically learns word-level weights for the two groups based on the previous context. In addition, to train the model to capture stylized elements, we propose an adaptive learning approach based on a reference factual model: it provides factual knowledge to the model as the model learns from stylized caption labels, and adaptively computes how much information to supply at each time step. We evaluate our model on two stylized image captioning datasets, which contain humorous/romantic captions and positive/negative captions, respectively. Experiments show that our proposed model outperforms state-of-the-art approaches without using extra ground-truth supervision.
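As a rough illustration of the mechanism described in the abstract, the sketch below implements a style-factual LSTM cell in PyTorch: two parallel groups of LSTM weights (one factual, one stylized) whose hidden and cell states are mixed by a gate computed from the previous hidden state, i.e., the previous context. The class and parameter names (StyleFactualLSTMCell, blend_gate, and the dimensions in the usage example) are illustrative assumptions based only on the abstract, not the authors' released code, and the adaptive learning scheme with the reference factual model is not shown.

```python
# Minimal sketch of a "style-factual" LSTM cell, assuming two weight groups
# blended per time step by a gate conditioned on the previous hidden state.
import torch
import torch.nn as nn


class StyleFactualLSTMCell(nn.Module):
    def __init__(self, input_size: int, hidden_size: int):
        super().__init__()
        # Group 1: matrices intended to capture factual knowledge.
        self.factual_cell = nn.LSTMCell(input_size, hidden_size)
        # Group 2: matrices intended to capture stylized knowledge.
        self.style_cell = nn.LSTMCell(input_size, hidden_size)
        # Gate predicting a word-level mixing weight from the previous
        # hidden state (the "previous context" in the abstract).
        self.blend_gate = nn.Linear(hidden_size, hidden_size)

    def forward(self, x, state):
        h_prev, c_prev = state
        h_fact, c_fact = self.factual_cell(x, (h_prev, c_prev))
        h_style, c_style = self.style_cell(x, (h_prev, c_prev))
        # Per-dimension weight in [0, 1] decides how much stylized
        # knowledge to use at this time step.
        g = torch.sigmoid(self.blend_gate(h_prev))
        h = g * h_style + (1.0 - g) * h_fact
        c = g * c_style + (1.0 - g) * c_fact
        return h, c


# Usage: one decoding step over a batch of word embeddings.
if __name__ == "__main__":
    batch, embed_dim, hidden_dim = 4, 300, 512
    cell = StyleFactualLSTMCell(embed_dim, hidden_dim)
    x = torch.randn(batch, embed_dim)
    h0 = torch.zeros(batch, hidden_dim)
    c0 = torch.zeros(batch, hidden_dim)
    h1, c1 = cell(x, (h0, c0))
```

A gated blend like this lets the decoder fall back on factual weights for content words and lean on stylized weights where the style is expressed, which matches the word-level weighting the abstract describes.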


Related research:

11/24/2018  Senti-Attend: Image Captioning using Sentiment and Attention
There has been much recent work on image captioning models that describe...

08/08/2019  Towards Generating Stylized Image Captions via Adversarial Training
While most image captioning aims to generate objective descriptions of i...

06/15/2020  SD-RSIC: Summarization Driven Deep Remote Sensing Image Captioning
Deep neural networks (DNNs) have been recently found popular for image c...

09/15/2017  Self-Guiding Multimodal LSTM - when we do not have a perfect training dataset for image captioning
In this paper, a self-guiding multimodal LSTM (sg-LSTM) image captioning...

08/26/2020  Attr2Style: A Transfer Learning Approach for Inferring Fashion Styles via Apparel Attributes
Popular fashion e-commerce platforms mostly provide details about low-le...

08/26/2021  Similar Scenes arouse Similar Emotions: Parallel Data Augmentation for Stylized Image Captioning
Stylized image captioning systems aim to generate a caption not only sem...

02/06/2018  Multimodal Image Captioning for Marketing Analysis
Automatically captioning images with natural language sentences is an im...
