Hai Wan


2022

Enhancing Cross-lingual Natural Language Inference by Prompt-learning from Cross-lingual Templates
Kunxun Qi | Hai Wan | Jianfeng Du | Haolan Chen
Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)

Cross-lingual natural language inference (XNLI) is a fundamental task in cross-lingual natural language understanding. Recently, this task has commonly been addressed with pre-trained cross-lingual language models. Existing methods usually enhance these models with additional data, such as annotated parallel corpora, yet such data are scarce in practice, especially for low-resource languages. Inspired by recent promising results achieved by prompt-learning, this paper proposes a novel prompt-learning based framework for enhancing XNLI. It reformulates the XNLI problem as a masked language modeling problem by constructing cloze-style questions through cross-lingual templates. To enforce correspondence between languages, the framework augments every question with a new question built from a sampled template in another language, and then introduces a consistency loss that pushes the answer probability distribution obtained from the new question to be as similar as possible to the distribution obtained from the original question. Experimental results on two benchmark datasets demonstrate that XNLI models enhanced by the proposed framework significantly outperform the original ones in both the full-shot and few-shot cross-lingual transfer settings.
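The consistency term described in the abstract can be pictured, for instance, as a symmetric KL divergence between the two answer distributions at the [MASK] position. A minimal PyTorch sketch follows; the template strings, the symmetric-KL form, and the function names are illustrative assumptions, not the paper's exact choices:

```python
import torch
import torch.nn.functional as F

# Hypothetical cross-lingual cloze templates; the paper's actual
# templates are its own design and may differ.
TEMPLATES = {
    "en": "{premise}? [MASK], {hypothesis}",
    "fr": "{premise} ? [MASK], {hypothesis}",
}

def build_cloze(premise: str, hypothesis: str, lang: str) -> str:
    """Reformulate an NLI pair as a cloze-style question via a template."""
    return TEMPLATES[lang].format(premise=premise, hypothesis=hypothesis)

def consistency_loss(logits_orig: torch.Tensor,
                     logits_aug: torch.Tensor) -> torch.Tensor:
    """Symmetric KL divergence between the [MASK]-position answer
    distributions of the original question and its cross-lingual
    augmentation (one illustrative choice of consistency term)."""
    log_p = F.log_softmax(logits_orig, dim=-1)
    log_q = F.log_softmax(logits_aug, dim=-1)
    kl_pq = F.kl_div(log_q, log_p, reduction="batchmean", log_target=True)
    kl_qp = F.kl_div(log_p, log_q, reduction="batchmean", log_target=True)
    return 0.5 * (kl_pq + kl_qp)
```

In a training loop, this term would be added to the usual masked-language-modeling loss over the verbalizer tokens, so that the two cloze questions are encouraged to agree on the answer regardless of the template language.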

2021

Enhancing Metaphor Detection by Gloss-based Interpretations
Hai Wan | Jinxia Lin | Jianfeng Du | Dawei Shen | Manrong Zhang
Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021

A DQN-based Approach to Finding Precise Evidences for Fact Verification
Hai Wan | Haicheng Chen | Jianfeng Du | Weilin Luo | Rongzhen Ye
Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)

Computing precise evidences, namely minimal sets of sentences that support or refute a given claim, rather than larger evidence sets is crucial in fact verification (FV): a larger set may contain conflicting pieces, some supporting the claim while others refute it, thereby misleading FV. Despite their importance, precise evidences are rarely studied by existing FV methods. Finding them is challenging because the search space is large and riddled with local optima. Inspired by the strong exploration ability of the deep Q-learning network (DQN), we propose a DQN-based approach to retrieving precise evidences. In addition, to tackle the label bias in the Q-values computed by the DQN, we design a post-processing strategy that seeks the best thresholds for determining the true labels of the computed evidences. Experimental results confirm the effectiveness of DQN in computing precise evidences and demonstrate improvements in achieving accurate claim verification.
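The threshold-seeking post-processing can be pictured as a grid search over dev-set Q-values. A minimal NumPy sketch under assumed FEVER-style labels follows; the decision rule, label names, and grid are hypothetical and may differ from the paper's actual strategy:

```python
import numpy as np

def calibrate_threshold(q_sup: np.ndarray, q_ref: np.ndarray,
                        gold: np.ndarray,
                        grid=np.linspace(-1.0, 1.0, 201)):
    """Grid-search a threshold t on dev-set Q-values: predict
    NOT ENOUGH INFO unless max(q_sup, q_ref) exceeds t, otherwise
    take the higher-scoring label. Returns the threshold with the
    best dev accuracy (an illustrative post-processing sketch)."""
    best_t, best_acc = grid[0], -1.0
    for t in grid:
        pred = np.where(np.maximum(q_sup, q_ref) < t, "NOT ENOUGH INFO",
                        np.where(q_sup >= q_ref, "SUPPORTS", "REFUTES"))
        acc = float((pred == gold).mean())
        if acc > best_acc:
            best_t, best_acc = t, acc
    return best_t, best_acc
```

Calibrating the threshold on held-out data, rather than taking a raw argmax over Q-values, is what lets this step compensate for the label bias mentioned in the abstract.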

2015

Aligning Knowledge and Text Embeddings by Entity Descriptions
Huaping Zhong | Jianwen Zhang | Zhen Wang | Hai Wan | Zheng Chen
Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing