Miguel Rios


2019

Latent Variable Model for Multi-modal Translation
Iacer Calixto | Miguel Rios | Wilker Aziz
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics

In this work, we propose to model the interaction between visual and textual features for multi-modal neural machine translation (MMT) through a latent variable model. This latent variable can be seen as a multi-modal stochastic embedding of an image and its description in a foreign language. It is used in a target-language decoder and also to predict image features. Importantly, our model formulation utilises visual and textual inputs during training but does not require that images be available at test time. We show that our latent variable MMT formulation improves considerably over strong baselines, including a multi-task learning approach (Elliott and Kádár, 2017) and a conditional variational auto-encoder approach (Toyama et al., 2016). Finally, we show improvements due to (i) predicting image features in addition to conditioning on them, (ii) imposing a constraint on the KL term to promote models with non-negligible mutual information between inputs and latent variable, and (iii) training on additional target-language image descriptions (i.e. synthetic data).
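Point (ii) above, constraining the KL term so the latent variable keeps non-negligible mutual information with the inputs, is commonly realised as a "free bits" floor on the per-dimension KL. A minimal sketch of that heuristic follows; the function name and threshold are illustrative, and the paper's exact constraint may differ:

```python
import numpy as np

def free_bits_kl(kl_per_dim, floor=0.5):
    """Clamp each latent dimension's KL contribution at a floor.

    Once a dimension's KL drops to the floor, the optimiser gains
    nothing from pushing its posterior further toward the prior,
    which discourages posterior collapse.
    """
    return float(np.sum(np.maximum(kl_per_dim, floor)))
```

With a floor of 0.5 nats, a dimension whose KL has collapsed to 0.1 still contributes 0.5 to the objective, so there is no gradient incentive to collapse it further.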

2018

Deep Generative Model for Joint Alignment and Word Representation
Miguel Rios | Wilker Aziz | Khalil Sima’an
Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers)

This work exploits translation data as a source of semantically relevant learning signal for models of word representation. In particular, we exploit equivalence through translation as a form of distributional context and jointly learn how to embed and align with a deep generative model. Our EmbedAlign model embeds words in their complete observed context and learns by marginalisation of latent lexical alignments. Moreover, it embeds words as posterior probability densities, rather than point estimates, which allows us to compare words in context using a measure of overlap between distributions (e.g. KL divergence). We investigate our model’s performance on a range of lexical semantics tasks, achieving competitive results on several standard benchmarks including natural language inference, paraphrasing, and text similarity.
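Because EmbedAlign represents each word as a posterior density rather than a point vector, two words in context can be compared with the closed-form KL divergence between diagonal-covariance Gaussians. A minimal sketch of that comparison, assuming Gaussian posteriors with diagonal covariance (variable names are illustrative, not from the paper):

```python
import numpy as np

def gaussian_kl(mu_p, var_p, mu_q, var_q):
    """KL( N(mu_p, diag(var_p)) || N(mu_q, diag(var_q)) ).

    Closed form for diagonal-covariance Gaussians, summed over
    dimensions; inputs are mean and variance vectors.
    """
    return float(0.5 * np.sum(
        np.log(var_q / var_p)
        + (var_p + (mu_p - mu_q) ** 2) / var_q
        - 1.0
    ))
```

Note that KL divergence is asymmetric, so `gaussian_kl(p, q)` and `gaussian_kl(q, p)` generally differ; a symmetrised variant (e.g. averaging the two directions) can be used when a proper distance-like score is needed.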

2017

The QT21 Combined Machine Translation System for English to Latvian
Jan-Thorsten Peter | Hermann Ney | Ondřej Bojar | Ngoc-Quan Pham | Jan Niehues | Alex Waibel | Franck Burlot | François Yvon | Mārcis Pinnis | Valters Šics | Jasmijn Bastings | Miguel Rios | Wilker Aziz | Philip Williams | Frédéric Blain | Lucia Specia
Proceedings of the Second Conference on Machine Translation

2015

Obtaining SMT dictionaries for related languages
Miguel Rios | Serge Sharoff
Proceedings of the Eighth Workshop on Building and Using Comparable Corpora

2014

UoW: Multi-task Learning Gaussian Process for Semantic Textual Similarity
Miguel Rios
Proceedings of the 8th International Workshop on Semantic Evaluation (SemEval 2014)

2012

UOW: Semantically Informed Text Similarity
Miguel Rios | Wilker Aziz | Lucia Specia
*SEM 2012: The First Joint Conference on Lexical and Computational Semantics – Volume 1: Proceedings of the main conference and the shared task, and Volume 2: Proceedings of the Sixth International Workshop on Semantic Evaluation (SemEval 2012)

2011

TINE: A Metric to Assess MT Adequacy
Miguel Rios | Wilker Aziz | Lucia Specia
Proceedings of the Sixth Workshop on Statistical Machine Translation

Shallow Semantic Trees for SMT
Wilker Aziz | Miguel Rios | Lucia Specia
Proceedings of the Sixth Workshop on Statistical Machine Translation

Improving Chunk-based Semantic Role Labeling with Lexical Features
Wilker Aziz | Miguel Rios | Lucia Specia
Proceedings of the International Conference Recent Advances in Natural Language Processing 2011