AskQE: Question Answering as Automatic Evaluation for Machine Translation
Published Apr 15, 2025 · Dayeon Ki, Kevin Duh, Marine Carpuat
ArXiv · 2 citations · 0 influential citations
Abstract
How can a monolingual English speaker determine whether an automatic translation in French is good enough to be shared? Existing MT error detection and quality estimation (QE) techniques do not address this practical scenario. We introduce AskQE, a question generation and answering framework designed to detect critical MT errors and provide actionable feedback, helping users decide whether to accept or reject MT outputs even without knowledge of the target language. Using ContraTICO, a dataset of contrastive synthetic MT errors in the COVID-19 domain, we explore design choices for AskQE and develop an optimized version relying on LLaMA-3 70B and entailed facts to guide question generation. We evaluate the resulting system on the BioMQM dataset of naturally occurring MT errors, where AskQE achieves higher Kendall's Tau correlation and decision accuracy with human ratings than other QE metrics.
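The abstract outlines the core loop: generate questions about the source, answer them against both the source and the MT output, and flag disagreements as potential critical errors. Below is a minimal sketch of that loop, not the paper's implementation. The `llm(prompt)` callable is a hypothetical stand-in for a model such as LLaMA-3 70B; answering questions against a back-translation of the MT output, and comparing answers with token-level F1, are both assumptions made here for illustration.

```python
def token_f1(pred: str, gold: str) -> float:
    """Token-overlap F1 between two answer strings (assumed comparison metric)."""
    p, g = pred.lower().split(), gold.lower().split()
    common = sum(min(p.count(t), g.count(t)) for t in set(p))
    if not common:
        return 0.0
    precision, recall = common / len(p), common / len(g)
    return 2 * precision * recall / (precision + recall)

def askqe_score(source: str, backtranslated_mt: str, llm) -> float:
    """QG/QA-style QE sketch: higher score = answers agree = fewer critical errors."""
    # Generate questions about key facts in the source (the paper guides this
    # step with entailed facts; that guidance is omitted in this sketch).
    questions = [
        q for q in llm(f"Generate questions about key facts in: {source}").splitlines()
        if q.strip()
    ]
    scores = []
    for q in questions:
        # Answer each question from the source and from the (back-translated) MT,
        # then compare: mismatched answers signal meaning-changing MT errors.
        a_src = llm(f"Answer from the passage.\nPassage: {source}\nQ: {q}")
        a_mt = llm(f"Answer from the passage.\nPassage: {backtranslated_mt}\nQ: {q}")
        scores.append(token_f1(a_src, a_mt))
    return sum(scores) / len(scores) if scores else 0.0
```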
TLDR: AskQE is a question generation and answering framework that helps users determine whether to accept or reject machine translation outputs without knowledge of the target language, improving quality estimation.
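The paper's meta-evaluation compares QE metrics by how well their segment scores track human ratings. A toy version of that comparison, using hypothetical score lists rather than the paper's BioMQM data, can be computed with `scipy.stats.kendalltau`:

```python
from scipy.stats import kendalltau

# Hypothetical per-segment scores, for illustration only.
human_ratings = [0.9, 0.4, 0.7, 0.2, 0.8]   # human MT quality judgments
askqe_scores = [0.85, 0.5, 0.6, 0.1, 0.9]   # QE metric scores for the same segments

# Kendall's Tau measures rank agreement between the metric and human ratings.
tau, p_value = kendalltau(human_ratings, askqe_scores)
print(f"Kendall's Tau: {tau:.3f} (p={p_value:.3f})")
```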