
InverseMV: Composing Piano Scores with a Convolutional Video-Music Transformer

by Chin-Tung Lin, et al.

Many social media users prefer consuming content in the form of videos rather than text. However, for content creators to produce videos with a high click-through rate, substantial editing is needed to match the footage to the music. This poses additional challenges for amateur video makers. We therefore propose a novel attention-based model, VMT (Video-Music Transformer), that automatically generates piano scores from video frames. Using model-generated music also avoids the potential copyright infringements that often come with using existing music. To the best of our knowledge, no work besides the proposed VMT aims to compose music for video. Additionally, no existing dataset aligns video with symbolic music. We release a new dataset comprising over 7 hours of piano scores with fine-grained alignment between pop music videos and MIDI files. We conduct experiments with human evaluation on VMT, a Seq2Seq model (our baseline), and the original piano-version soundtrack. VMT achieves consistent improvements over the baseline on music smoothness and video relevance. In particular, the relevance scores and our case study show that the model exploits multimodal cues, responding to frame-level actor movements when generating music. Our VMT model, along with the new dataset, presents a promising research direction toward composing matching soundtracks for videos. We have released our code at
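The cross-modal core of an attention-based model like VMT can be illustrated with scaled dot-product attention: at each music-generation step, a decoder query attends over encoded video-frame features to build a context vector that conditions the next note. The sketch below is purely illustrative under these assumptions, not the authors' implementation; all names and the toy feature vectors are hypothetical.

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def attend(query, keys, values):
    """Scaled dot-product attention for one decoder query.

    keys/values: encoded video-frame features (one vector per frame).
    Returns the attention-weighted context vector.
    """
    d = len(query)
    scores = [sum(q * k for q, k in zip(query, key)) / math.sqrt(d)
              for key in keys]
    weights = softmax(scores)
    dim = len(values[0])
    return [sum(w * v[i] for w, v in zip(weights, values))
            for i in range(dim)]

# Toy example: two "video frame" feature vectors (used as both keys and
# values) and one decoder query representing the current generation step.
frames = [[1.0, 0.0], [0.0, 1.0]]
query = [2.0, 0.0]  # query most similar to the first frame
context = attend(query, frames, frames)
```

Because the query is aligned with the first frame, the context vector is dominated by that frame's features; in a full model this context would be fed to the decoder to predict the next MIDI event.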

