Tõlkekvaliteedi hindamine (Evaluation of Translation Quality)
Date
2011
Publisher
Tartu Ülikool
Abstract
Most quality metrics return only a numeric value from a predefined range, giving machine translation system developers only a general assessment of system quality without specifying which errors were encountered during the analysis. This thesis attempted to address that question: what types of errors does a system make while translating? In solving the problem, language independence was preserved, which in turn limited the program's ability to detect certain error types. For example, without language-specific data it is not possible to align words of the human translation and the machine translation that are not similar. Several approaches were tried in the course of the work. In the end, the requirement of full language independence prevailed: the use of morphological information was abandoned, and language-independent estimates of the frequencies of different error types were computed instead. The program was able to fulfil the goals set for it and provides fairly adequate statistics about a system's errors.
Machine translation is very important to many people: with its help, a user can grasp the content of a text through an approximate translation that would otherwise require a linguist. Since machine translation is important, its development is important as well, and for that the system's output must be evaluated against some criteria. This can always be done manually, but it is time- and resource-consuming, so many automatic evaluation metrics have been developed. The best known of these is BLEU, which measures how closely the system's translation correlates with a reference translation. Error analysis, however, is still a rather unexplored area, and the author hopes to have contributed to it.
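To make the BLEU metric mentioned above concrete, the following is a minimal sentence-level sketch (not the thesis's actual implementation): the geometric mean of clipped n-gram precisions, scaled by a brevity penalty for candidates shorter than the reference.

```python
import math
from collections import Counter

def clipped_precision(candidate, reference, n):
    """Modified n-gram precision: each candidate n-gram count is
    capped ("clipped") at its count in the reference."""
    cand = Counter(tuple(candidate[i:i + n]) for i in range(len(candidate) - n + 1))
    ref = Counter(tuple(reference[i:i + n]) for i in range(len(reference) - n + 1))
    matched = sum(min(c, ref[g]) for g, c in cand.items())
    total = sum(cand.values())
    return matched / total if total else 0.0

def bleu(candidate, reference, max_n=4):
    """Sentence-level BLEU against a single reference: geometric mean
    of 1..max_n clipped precisions times a brevity penalty."""
    precisions = [clipped_precision(candidate, reference, n)
                  for n in range(1, max_n + 1)]
    if min(precisions) == 0.0:
        return 0.0  # geometric mean collapses if any precision is zero
    geo_mean = math.exp(sum(math.log(p) for p in precisions) / max_n)
    # Brevity penalty: punish candidates shorter than the reference.
    bp = 1.0 if len(candidate) >= len(reference) else \
        math.exp(1 - len(reference) / len(candidate))
    return bp * geo_mean
```

A perfect match scores 1.0, while a candidate sharing no 4-gram (or shorter n-gram) with the reference scores 0.0; real BLEU implementations additionally support multiple references and corpus-level aggregation.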
The task of this thesis was to design a system that helps machine translation system developers identify what kinds of errors their system makes while translating. To do this, the analyzer generates a detailed summary of the system's output. The analyzer is able to identify missing and extra words/phrases, some differences in word inflections, and differences in word/phrase order. It also calculates the value of one of the most popular metrics, BLEU, to help developers judge how well their system's output correlates with a human translation. The analyzer was tested on two languages, English and Estonian. For each language, two translation systems were chosen, and their translations of the same input were compared to a human translation. Examples of their output evaluation are presented in this thesis. They showed that the overall quality of the systems was similar; some differences occurred in the number of words with similar stems that appeared in the translations. Error analysis is still a rather unexplored area, and the author hopes to have made a contribution to it.
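The missing/extra word detection described above can be illustrated (this is a simplified sketch, not the analyzer's actual algorithm) as a multiset comparison between the system output and the reference translation: words occurring more often in the reference are counted as missing, words occurring more often in the system output as extra.

```python
from collections import Counter

def word_diff(system_tokens, reference_tokens):
    """Compare two token lists as multisets.

    Returns (missing, extra): words the reference contains more often
    than the system output, and vice versa. Counter subtraction keeps
    duplicates and discards non-positive counts.
    """
    sys_counts = Counter(system_tokens)
    ref_counts = Counter(reference_tokens)
    missing = ref_counts - sys_counts
    extra = sys_counts - ref_counts
    return missing, extra
```

For example, comparing the system output "the cat sat" with the reference "the black cat sat down" reports "black" and "down" as missing and nothing as extra; a language-independent analyzer like the one described would still need separate handling for inflection and word-order differences, which a pure bag-of-words comparison cannot see.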