Enhanced Movie Content Similarity Based on Textual, Auditory and Visual Information

In this paper we examine the ability of low-level multimodal features to capture movie similarity, in the context of a content-based movie recommendation approach. In particular, we demonstrate the extraction of multimodal representation models of movies, based on textual information from subtitles as well as cues from the audio and visual channels. In the textual domain, we focus on topic modeling of movies based on their subtitles, in order to extract topics that discriminate between movies. In the visual domain, we extract semantically useful features that model camera movements, colors and faces, while in the audio domain we adopt simple classification aggregates based on pretrained models. The three domains are combined with static metadata (e.g. directors, actors) to show that content-based movie similarity estimation can be enhanced with low-level multimodal information. To demonstrate the proposed content representation approach, we have built a small dataset of 160 widely known movies. We assess movie similarities, as propagated by the individual modalities and by fusion models, in the form of recommendation rankings. Extensive experimentation shows that all three low-level modalities (text, audio and visual) boost the performance of a content-based recommendation system compared to a typical metadata-based content representation, yielding a relative improvement of more than 50%. To our knowledge, this is the first approach that utilizes such a wide range of features from all involved modalities to enhance content similarity estimation, compared to metadata-based approaches.
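The fusion of per-modality similarities into a single recommendation ranking can be illustrated with a minimal sketch. Note this is an assumption-laden toy example, not the paper's actual pipeline: the feature extractors, the modality names (`text`, `audio`, `visual`), and the weighted late-fusion scheme shown here are hypothetical stand-ins for the models described above.

```python
import numpy as np

def cosine_sim(a, b):
    """Cosine similarity between two feature vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def fused_similarity(feats_a, feats_b, weights):
    """Late fusion: weighted mean of per-modality cosine similarities."""
    num = sum(w * cosine_sim(feats_a[m], feats_b[m]) for m, w in weights.items())
    return num / sum(weights.values())

def recommend(query_feats, catalog, weights, top_k=3):
    """Rank catalog movies by fused similarity to the query movie."""
    ranked = sorted(catalog.items(),
                    key=lambda kv: fused_similarity(query_feats, kv[1], weights),
                    reverse=True)
    return [title for title, _ in ranked[:top_k]]

# Toy per-modality feature vectors (in practice: topic distributions,
# visual descriptors, audio classification aggregates).
query = {"text": np.array([1.0, 0.0]),
         "audio": np.array([1.0, 0.0]),
         "visual": np.array([1.0, 0.0])}
catalog = {
    "Movie A": {"text": np.array([0.9, 0.1]),
                "audio": np.array([1.0, 0.0]),
                "visual": np.array([0.8, 0.2])},
    "Movie B": {"text": np.array([0.0, 1.0]),
                "audio": np.array([0.1, 0.9]),
                "visual": np.array([0.0, 1.0])},
}
weights = {"text": 0.5, "audio": 0.25, "visual": 0.25}
top = recommend(query, catalog, weights, top_k=1)
```

A weighted late-fusion of cosine similarities is one simple way to combine modalities; the weights could be tuned on held-out ranking data.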


Related research

Moviescope: Large-scale Analysis of Movies using Multiple Modalities (08/08/2019)
Film media is a rich form of artistic expression. Unlike photography, an...

Cosine Similarity of Multimodal Content Vectors for TV Programmes (09/23/2020)
Multimodal information originates from a variety of sources: audiovisual...

Multilevel profiling of situation and dialogue-based deep networks for movie genre classification using movie trailers (09/14/2021)
Automated movie genre classification has emerged as an active and essent...

Movie Recommendation System using Composite Ranking (11/30/2022)
In today's world, abundant digital content like e-books, movies, videos ...

Movie Question Answering: Remembering the Textual Cues for Layered Visual Contents (04/25/2018)
Movies provide us with a mass of visual content as well as attracting st...

Feature extraction using Latent Dirichlet Allocation and Neural Networks: A case study on movie synopses (04/05/2016)
Feature extraction has gained increasing attention in the field of machi...

Who is the director of this movie? Automatic style recognition based on shot features (07/25/2018)
We show how low-level formal features, such as shot duration, meant as l...
