LEGOEval: An Open-Source Toolkit for Dialogue System Evaluation via Crowdsourcing.

2021
We present LEGOEval, an open-source toolkit that enables researchers to evaluate dialogue systems with a few lines of code on the online crowdsourcing platform Amazon Mechanical Turk. Compared to existing toolkits, LEGOEval features a flexible task design by providing a Python API that maps to commonly used React.js interface components. With our built-in pages, researchers can easily personalize their evaluation procedures, as if playing with LEGO blocks. LEGOEval thus provides a fast, consistent method for reproducing human evaluation results. Beyond flexible task design, LEGOEval also offers a simple API for reviewing collected data.