Automating Human Evaluation of Dialogue Systems

Date

2022

Publisher

Association for Computational Linguistics (ACL)

Abstract

Automated metrics for evaluating dialogue systems, such as BLEU and METEOR, correlate only weakly with human judgments, so human evaluation is often used to supplement them. Human evaluation, however, is both time-consuming and expensive. This paper provides an alternative to human evaluation along three aspects of dialogue system output: naturalness, informativeness, and quality. I propose an approach that fine-tunes a BERT model with three prediction heads to predict whether a system-generated response is natural, informative, and of good quality. The proposed model achieves an average accuracy of around 77% across these three labels. I also design a baseline that uses three separate BERT models, one per label. Based on experimental analysis, I find that a single shared model computing all three labels performs better than three separate models.
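The shared-model design described in the abstract is straightforward to sketch. Below is a minimal illustration, assuming PyTorch and the HuggingFace transformers library, of a single BERT encoder feeding three binary classification heads, one per evaluation aspect; the class name, label set, and checkpoint are illustrative and not taken from the paper.

    # Hypothetical sketch: one shared BERT encoder with three binary
    # prediction heads (naturalness, informativeness, quality).
    import torch
    import torch.nn as nn
    from transformers import BertModel, BertTokenizer

    LABELS = ["natural", "informative", "quality"]  # assumed label names

    class MultiHeadBertEvaluator(nn.Module):
        def __init__(self, model_name: str = "bert-base-uncased"):
            super().__init__()
            self.encoder = BertModel.from_pretrained(model_name)
            hidden = self.encoder.config.hidden_size
            # One independent binary head per evaluation aspect,
            # all sharing the same encoder.
            self.heads = nn.ModuleDict(
                {label: nn.Linear(hidden, 2) for label in LABELS}
            )

        def forward(self, input_ids, attention_mask):
            # Use the pooled [CLS] representation as a summary of the response.
            pooled = self.encoder(
                input_ids=input_ids, attention_mask=attention_mask
            ).pooler_output
            return {label: head(pooled) for label, head in self.heads.items()}

    tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
    model = MultiHeadBertEvaluator()

    batch = tokenizer(
        ["The weather in Paris is sunny today."],
        return_tensors="pt", padding=True, truncation=True,
    )
    with torch.no_grad():
        logits = model(batch["input_ids"], batch["attention_mask"])
    predictions = {label: bool(out.argmax(dim=-1)) for label, out in logits.items()}
    print(predictions)

The contrast with the paper's baseline is that the baseline would instantiate three such models, each with a single head, whereas here the three heads share one encoder and are trained jointly.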

Citation

NAACL 2022 - 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Proceedings of the Student Research Workshop, 2022, pp. 229-234.
