Wespeaker baselines for VoxSRC2023

06/27/2023
by Shuai Wang, et al.

This report showcases the results achieved with the wespeaker toolkit for the VoxSRC2023 Challenge. Our aim is to provide participants, especially those with limited experience, with clear and straightforward guidelines for developing their initial systems. Through well-structured recipes and strong results, we hope to offer an accessible and competitive starting point for all interested individuals. In this report, we describe the results achieved on the VoxSRC2023 dev set using the pretrained models; results on the evaluation set can be found on the CodaLab evaluation server.
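Results on speaker-verification challenges such as VoxSRC are conventionally reported as equal error rate (EER), the operating point where the false-accept and false-reject rates coincide. As a minimal sketch (not from the wespeaker codebase; function name and toy trial list are illustrative), EER can be computed from a list of trial scores and target/non-target labels like this:

```python
import numpy as np

def compute_eer(scores, labels):
    """Equal error rate over verification trials.

    scores: similarity scores, higher means more likely same speaker.
    labels: 1 for target (same-speaker) trials, 0 for non-target.
    """
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels, dtype=int)
    # Sort trials by score and sweep the threshold over them.
    order = np.argsort(scores)
    scores, labels = scores[order], labels[order]
    n_target = labels.sum()
    n_nontarget = len(labels) - n_target
    # fr[i]: fraction of targets with score <= scores[i] (rejected above this threshold).
    fr = np.cumsum(labels) / n_target
    # fa[i]: fraction of non-targets with score > scores[i] (accepted above this threshold).
    fa = (n_nontarget - np.cumsum(1 - labels)) / n_nontarget
    # EER is where the two rates cross.
    idx = np.argmin(np.abs(fr - fa))
    return (fr[idx] + fa[idx]) / 2

# Toy trial list with perfect separation: EER is 0.
print(compute_eer([0.9, 0.8, 0.2, 0.1], [1, 1, 0, 0]))
```

In practice the trial scores would come from cosine similarity between speaker embeddings extracted by a pretrained model; challenge leaderboards typically report minDCF alongside EER.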
