Federico Garcea


2023

!Translate: When You Cannot Cook Up a Translation, Explain
Federico Garcea | Margherita Martinelli | Maja Milicević Petrović | Alberto Barrón-Cedeño
Proceedings of the 14th International Conference on Recent Advances in Natural Language Processing

In the domain of cuisine, both dishes and ingredients tend to be heavily rooted in the local context they belong to. As a result, the associated terms are often realia tied to specific cultures and languages. This causes difficulties for non-speakers of the local language and machine translation (MT) systems alike, as it implies a lack of the concept and/or of a plausible translation. MT typically opts for one of two alternatives: keeping the source language terms untranslated or relying on a hyperonym/near-synonym in the target language, provided one exists. !Translate proposes a better alternative: explaining. Given a cuisine entry such as a restaurant menu item, we identify culture-specific terms and enrich the output of the MT system with automatically retrieved definitions of the non-translatable terms in the target language, making the translation more actionable for the final user.
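The enrichment step described in the abstract can be illustrated with a minimal sketch. The glossary, term set, and function names below are purely hypothetical stand-ins for the paper's automatic term identification and definition retrieval components:

```python
# Hypothetical sketch of the "explain instead of translate" idea:
# culture-specific terms left untranslated in the MT output are looked
# up in a definition source and appended as explanations.
# GLOSSARY is an illustrative stand-in for automatically retrieved
# definitions; in the paper these are retrieved, not hand-written.

GLOSSARY = {
    "carbonara": "pasta dish with egg, cured pork, and pecorino cheese",
}

def enrich(mt_output: str, culture_specific_terms: set) -> str:
    """Append target-language definitions for untranslated realia."""
    explanations = []
    for term in sorted(culture_specific_terms):
        # The term survived translation verbatim and has a definition.
        if term in mt_output.lower() and term in GLOSSARY:
            explanations.append(f"{term}: {GLOSSARY[term]}")
    if explanations:
        return mt_output + " (" + "; ".join(explanations) + ")"
    return mt_output
```

For example, `enrich("Spaghetti carbonara", {"carbonara"})` would return the menu item with the definition appended in parentheses, rather than leaving the reader to guess at the dish.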

Return to the Source: Assessing Machine Translation Suitability
Francesco Fernicola | Silvia Bernardini | Federico Garcea | Adriano Ferraresi | Alberto Barrón-Cedeño
Proceedings of the 24th Annual Conference of the European Association for Machine Translation

We approach the task of assessing the suitability of a source text for translation by transferring the knowledge from established MT evaluation metrics to a model able to predict MT quality a priori from the source text alone. To open the door to experiments in this regard, we depart from reference English-German parallel corpora to build a corpus of 14,253 source text-quality score tuples. The tuples include scores from four state-of-the-art metrics: cushLEPOR, BERTScore, COMET, and TransQuest. With this new resource at hand, we fine-tune XLM-RoBERTa, both in a single-task and a multi-task setting, to predict these evaluation scores from the source text alone. Results for this methodology are promising: the single-task model approximates well-established MT evaluation and quality estimation metrics without looking at the actual machine translations, achieving low RMSE values in the [0.1, 0.2] range and Pearson correlation scores up to 0.688.
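The two figures reported in the abstract, RMSE and Pearson correlation between predicted and gold metric scores, can be computed as follows. This is a generic sketch of the standard definitions, not code from the paper:

```python
# Standard definitions of the two evaluation figures reported above:
# RMSE measures average prediction error, Pearson's r measures how
# well the predicted scores track the gold metric scores.
import math

def rmse(pred, gold):
    """Root mean squared error between predictions and gold scores."""
    n = len(pred)
    return math.sqrt(sum((p - g) ** 2 for p, g in zip(pred, gold)) / n)

def pearson(pred, gold):
    """Pearson correlation coefficient between two score vectors."""
    n = len(pred)
    mp, mg = sum(pred) / n, sum(gold) / n
    cov = sum((p - mp) * (g - mg) for p, g in zip(pred, gold))
    sp = math.sqrt(sum((p - mp) ** 2 for p in pred))
    sg = math.sqrt(sum((g - mg) ** 2 for g in gold))
    return cov / (sp * sg)
```

A model whose predictions rise and fall with the gold scores gets a Pearson correlation near 1 even if the absolute values are offset, which is why the paper reports both figures together.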

2013

USE CASE: Customization and Collaboration to Enhance MT for a Knowledge Base Online Portal
Chris Wendt | Federico Garcea
Proceedings of Machine Translation Summit XIV: User track