LEGOEval: An Open-Source Toolkit for Dialogue System Evaluation via Crowdsourcing

05/05/2021 ∙ by Yu Li, et al.

We present LEGOEval, an open-source toolkit that enables researchers to evaluate dialogue systems in only a few lines of code using the online crowdsourcing platform Amazon Mechanical Turk. Compared to existing toolkits, LEGOEval features a flexible task design by providing a Python API that maps to commonly used React.js interface components. Researchers can easily personalize their evaluation procedures with our built-in pages, as if assembling LEGO blocks. LEGOEval thus provides a fast, consistent method for reproducing human evaluation results. Beyond flexible task design, LEGOEval also offers a simple API for reviewing collected data.
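To illustrate the "LEGO block" idea of composing an evaluation task from reusable pages, here is a minimal Python sketch. The class and method names below (Component, Page, Task, launch) are hypothetical and not taken from the LEGOEval codebase; they only convey how a page-composition API that maps Python objects to front-end components might be used.

```python
# Hypothetical sketch only: the names below are illustrative, not the
# actual LEGOEval API. The idea is that each Python object corresponds
# to a React.js interface component rendered on the worker's page.

class Component:
    """A single UI element shown to a crowd worker (e.g., a chat window)."""
    def __init__(self, name, **props):
        self.name, self.props = name, props

class Page:
    """An ordered collection of components forming one screen of the task."""
    def __init__(self, *components):
        self.components = list(components)

class Task:
    """A sequence of pages making up one crowdsourced evaluation HIT."""
    def __init__(self, pages):
        self.pages = pages

    def launch(self):
        # A real toolkit would serialize these pages for the front end and
        # post the HIT to Amazon Mechanical Turk; here we just print them.
        for i, page in enumerate(self.pages, 1):
            names = ", ".join(c.name for c in page.components)
            print(f"Page {i}: {names}")

# Compose an evaluation task out of built-in "blocks".
consent = Page(Component("ConsentForm", text="You are about to chat with a bot."))
chat = Page(Component("ChatWindow", bot_endpoint="https://example.com/bot"))
rating = Page(Component("LikertScale",
                        question="How coherent was the conversation?",
                        points=5))

Task([consent, chat, rating]).launch()
```

Under this reading, swapping, adding, or reordering pages changes the worker-facing interface without touching any front-end code, which is what makes the evaluation procedure easy to personalize and to reproduce.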

Code Repositories

LEGOEval: a toolkit for dialogue system evaluation via crowdsourcing