LEGOEval: An Open-Source Toolkit for Dialogue System Evaluation via Crowdsourcing

05/05/2021
by Yu Li, et al.

We present LEGOEval, an open-source toolkit that enables researchers to easily evaluate dialogue systems in a few lines of code using Amazon Mechanical Turk, an online crowdsourcing platform. Compared to existing toolkits, LEGOEval features flexible task design by providing a Python API that maps to commonly used React.js interface components. Researchers can easily personalize their evaluation procedures with our built-in pages, as if assembling LEGO blocks. Thus, LEGOEval offers a fast, consistent method for reproducing human evaluation results. Beyond flexible task design, LEGOEval also provides a simple API for reviewing collected data.
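This page shows only the abstract, so the snippet below is a minimal, hypothetical sketch of what a block-style evaluation API of this kind might look like in Python. All names here (Page, Task, add, and the commented launch/load_results calls) are illustrative assumptions, not LEGOEval's documented interface.

    # Hypothetical sketch of a "LEGO block" task-composition API.
    # None of these names are confirmed as LEGOEval's actual interface;
    # they only illustrate the composition idea described in the abstract.

    class Page:
        """One interface component, rendered as a React.js page for a worker."""
        def __init__(self, name, **props):
            self.name = name
            self.props = props

    class Task:
        """An ordered sequence of pages shown to each crowd worker."""
        def __init__(self, title):
            self.title = title
            self.pages = []

        def add(self, page):
            self.pages.append(page)
            return self  # allow chaining, like snapping blocks together

    # Compose an evaluation procedure from built-in page types.
    task = Task(title="Dialogue quality evaluation")
    task.add(Page("instructions", text="Chat with the bot, then rate it."))
    task.add(Page("chat", bot_endpoint="https://example.com/bot"))  # placeholder URL
    task.add(Page("likert_survey",
                  question="How coherent was the conversation?",
                  scale=5))

    # Publishing to Amazon Mechanical Turk and reviewing collected data
    # would plausibly be one-line calls (hypothetical signatures):
    # task.launch(platform="mturk", num_workers=50)
    # results = task.load_results()  # ratings and chat transcripts

The point of such a design is that each page is a self-contained block, so swapping a survey question or adding a chat step changes one line of the task definition rather than the interface code.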
