Dynamic-SUPERB: Towards A Dynamic, Collaborative, and Comprehensive Instruction-Tuning Benchmark for Speech

09/18/2023
by   Chien-yu Huang, et al.

Text language models have shown remarkable zero-shot capability in generalizing to unseen tasks when provided with well-formulated instructions. However, existing studies in speech processing focus primarily on a limited set of specific tasks, and the lack of a standardized benchmark hinders fair comparison across approaches. We therefore present Dynamic-SUPERB, a benchmark for building universal speech models that leverage instruction tuning to perform multiple tasks in a zero-shot fashion. To achieve comprehensive coverage of diverse speech tasks, we invite the community to collaborate and contribute, enabling the benchmark to grow dynamically. At launch, Dynamic-SUPERB features 55 evaluation instances formed by combining 33 tasks with 22 datasets, spanning a broad range of speech dimensions (e.g., content, speaker, and semantics) and providing a comprehensive platform for evaluation. We additionally propose several approaches to establish benchmark baselines, including speech models, text language models, and a multimodal encoder. Evaluation results indicate that while these baselines perform reasonably well on seen tasks, they struggle with unseen ones. We also conduct an ablation study to assess robustness and explore potential performance improvements. We release all materials to the public and welcome researchers to collaborate on the project, advancing technologies in the field together.
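To make the evaluation setup concrete, the sketch below shows what an instruction-based zero-shot evaluation loop over such instances might look like in Python. The field names, the example task wording, the file path, and the exact-match scoring are illustrative assumptions for exposition, not the actual Dynamic-SUPERB schema or metrics.

    # Minimal sketch, assuming a hypothetical instance schema:
    # each evaluation instance pairs a natural-language instruction
    # with a speech clip and a ground-truth text label.
    from dataclasses import dataclass

    @dataclass
    class EvaluationInstance:
        instruction: str   # natural-language description of the task
        audio_path: str    # path to the input speech recording
        label: str         # ground-truth answer used for scoring

    def evaluate(model, instances):
        """Score a model by exact match between its text output and the label."""
        correct = 0
        for inst in instances:
            prediction = model(inst.instruction, inst.audio_path)
            correct += int(prediction.strip().lower() == inst.label.lower())
        return correct / len(instances)

    # One hypothetical instance: a task phrased as an instruction, paired
    # with a dataset clip (path is a placeholder).
    instances = [
        EvaluationInstance(
            instruction="How many speakers are in this recording? Answer with a number.",
            audio_path="clips/sample_0001.wav",
            label="2",
        ),
    ]

    # Trivial stand-in model; a real baseline would be a speech model,
    # a text language model over transcripts, or a multimodal encoder.
    dummy_model = lambda instruction, audio_path: "2"
    print(f"accuracy = {evaluate(dummy_model, instances):.2f}")

Because the model never sees task-specific training for an unseen instance, generalization rests entirely on how well it interprets the instruction, which is the zero-shot ability the benchmark is designed to measure.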
