Anthology ID: W18-6478
Original: W18-6478v1
Version 2: W18-6478v2
Volume: Proceedings of the Third Conference on Machine Translation: Shared Task Papers
Month: October
Year: 2018
Address: Brussels, Belgium
Venues: EMNLP

Abstract
In this work we introduce dual conditional cross-entropy filtering for noisy parallel data. For each sentence pair of the noisy parallel corpus we compute cross-entropy scores according to two inverse translation models trained on clean data. We penalize divergent cross-entropies and weigh the penalty by the cross-entropy average of both models. Sorting or thresholding according to these scores results in better subsets of parallel data. We further evaluate our method in the context of the WMT2018 shared task on parallel corpus filtering and achieve the overall highest ranking scores of the shared task, scoring top in three out of four subtasks. We achieve higher BLEU scores with models trained on parallel data filtered only from Paracrawl than with models trained on clean WMT data.
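The scoring described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's exact implementation: it assumes each sentence pair has already been scored with length-normalized conditional cross-entropies `h_fwd` (target given source) and `h_rev` (source given target) from the two inverse translation models, penalizes their absolute divergence, and combines it with their average; the exponential mapping to (0, 1] is an assumed normalization.

```python
import math

def dual_xent_score(h_fwd: float, h_rev: float) -> float:
    """Score a sentence pair from two conditional cross-entropies.

    h_fwd: per-word cross-entropy of target given source (model A)
    h_rev: per-word cross-entropy of source given target (model B)

    Pairs whose two entropies agree and are both low score highest.
    """
    avg = (h_fwd + h_rev) / 2.0          # weigh by the average of both models
    penalty = abs(h_fwd - h_rev)         # penalize divergent cross-entropies
    # Map to (0, 1]: a perfect, zero-entropy agreement scores 1.0.
    return math.exp(-(avg + penalty))
```

Filtering then amounts to sorting the noisy corpus by this score in descending order and keeping the top-scoring subset, or applying a fixed threshold.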