Exploring the Efficacy of Pre-trained Checkpoints in Text-to-Music Generation Task

11/21/2022
by Shangda Wu et al.

Benefiting from large-scale datasets and pre-trained models, the field of generative models has recently gained significant momentum. However, most datasets for symbolic music are very small, which potentially limits the performance of data-driven multimodal models. An intuitive solution to this problem is to leverage pre-trained models from other modalities (e.g., natural language) to improve the performance of symbolic music-related multimodal tasks. In this paper, we carry out the first study of generating complete and semantically consistent symbolic music scores from text descriptions, and explore the efficacy of using publicly available checkpoints (i.e., BERT, GPT-2, and BART) for natural language processing in the task of text-to-music generation. Our experimental results show that the improvement from using pre-trained checkpoints is statistically significant in terms of BLEU score and edit distance similarity. We analyse the capabilities and limitations of our model to better understand the potential of language-music models.
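The edit distance similarity mentioned in the abstract can be illustrated as a normalized Levenshtein similarity between a generated symbolic score (e.g., an ABC-notation string) and its reference. This is a minimal sketch of the general idea; the paper's exact normalization and tokenization may differ:

```python
def edit_distance_similarity(reference: str, candidate: str) -> float:
    """Return 1 - levenshtein(reference, candidate) / max(len) in [0, 1].

    1.0 means the strings are identical; 0.0 means every character
    would need to change. Illustrative only -- the paper may compute
    the metric over tokens rather than raw characters.
    """
    m, n = len(reference), len(candidate)
    if max(m, n) == 0:
        return 1.0  # two empty strings are trivially identical
    # Standard dynamic-programming Levenshtein distance, row by row.
    prev = list(range(n + 1))
    for i in range(1, m + 1):
        curr = [i] + [0] * n
        for j in range(1, n + 1):
            cost = 0 if reference[i - 1] == candidate[j - 1] else 1
            curr[j] = min(prev[j] + 1,        # deletion
                          curr[j - 1] + 1,    # insertion
                          prev[j - 1] + cost) # substitution
        prev = curr
    return 1.0 - prev[n] / max(m, n)
```

For example, comparing two short melody fragments in ABC notation, `edit_distance_similarity("CDEF GABc", "CDEF GABd")` yields a high similarity because only the final note differs.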

Related research

- Language-Guided Music Recommendation for Video via Prompt Analogies (06/15/2023)
- Is BERT a Cross-Disciplinary Knowledge Learner? A Surprising Finding of Pre-trained Models' Transferability (03/12/2021)
- Learning Hierarchical Metrical Structure Beyond Measures (09/21/2022)
- MARBLE: Music Audio Representation Benchmark for Universal Evaluation (06/18/2023)
- Rock Guitar Tablature Generation via Natural Language Processing (01/12/2023)
- Can Pre-Trained Text-to-Image Models Generate Visual Goals for Reinforcement Learning? (07/15/2023)
- LakhNES: Improving multi-instrumental music generation with cross-domain pre-training (07/10/2019)
