What's Wrong With This Translation? Simplifying Error Annotation For Crowd Evaluation

dc.contributor.author: Debess, Iben Nyholm
dc.contributor.author: Karakanta, Alina
dc.contributor.author: Scalvini, Barbara
dc.contributor.editor: Einarsson, Hafsteinn
dc.contributor.editor: Simonsen, Annika
dc.contributor.editor: Nielsen, Dan Saattrup
dc.coverage.spatial: Tallinn, Estonia
dc.date.accessioned: 2025-02-17T09:51:19Z
dc.date.available: 2025-02-17T09:51:19Z
dc.date.issued: 2025-03
dc.description.abstract: Machine translation (MT) for Faroese faces challenges due to limited expert annotators and a lack of robust evaluation metrics. This study addresses these challenges by developing an MQM-inspired expert annotation framework to identify key error types and a simplified crowd evaluation scheme to enable broader participation. Our findings, based on an analysis of 200 sentences translated by three models, demonstrate that simplified crowd evaluations align with expert assessments, paving the way for improved accessibility and democratization of MT evaluation.
dc.identifier.isbn: 978-9908-53-116-8
dc.identifier.uri: https://hdl.handle.net/10062/107160
dc.language.iso: en
dc.publisher: University of Tartu Library
dc.rights: Attribution-NonCommercial-NoDerivatives 4.0 International
dc.rights.uri: https://creativecommons.org/licenses/by-nc-nd/4.0/
dc.title: What's Wrong With This Translation? Simplifying Error Annotation For Crowd Evaluation
dc.type: Article
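
The abstract reports that simplified crowd evaluations align with expert assessments but does not specify how that alignment was quantified. As a minimal sketch only, assuming hypothetical per-sentence scores and using a standard rank correlation (not necessarily the measure used in the paper), such agreement could be checked like this:

```python
# Illustrative only: one way to quantify how well simplified crowd ratings
# track expert MQM-style scores. All numbers and variable names are invented;
# the paper's actual data and agreement measure may differ.
from scipy.stats import spearmanr

# Hypothetical per-sentence quality scores for the same set of translations.
expert_scores = [0.92, 0.75, 0.40, 0.88, 0.55, 0.63]  # expert (MQM-derived)
crowd_scores  = [0.90, 0.70, 0.35, 0.85, 0.60, 0.58]  # simplified crowd scheme

rho, p = spearmanr(expert_scores, crowd_scores)
print(f"Spearman rho = {rho:.2f} (p = {p:.3f})")  # rho near 1 => strong alignment
```

Spearman's rho is a common choice when ratings are ordinal, as crowd judgments typically are; the paper itself may report a different statistic.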

Files

Original bundle

Name: 2025_nbreal_1_3.pdf
Size: 325.74 KB
Format: Adobe Portable Document Format