Xiangyang Xue


2023

Improving Empathetic Dialogue Generation by Dynamically Infusing Commonsense Knowledge
Hua Cai | Xuli Shen | Qing Xu | Weilin Shen | Xiaomei Wang | Weifeng Ge | Xiaoqing Zheng | Xiangyang Xue
Findings of the Association for Computational Linguistics: ACL 2023

In empathetic conversations, individuals express empathy towards others. Previous work has mainly focused on generating empathetic responses by utilizing the speaker's emotion, and external commonsense knowledge has also been applied to enhance the system's understanding of the speaker's situation. However, for a given event, a commonsense knowledge base contains various relations, which can confuse the dialogue system and lead to inconsistencies among the emotion, the generated response, and the speaker's contextual information. To this end, we propose a novel approach for empathetic response generation that incorporates an adaptive module for commonsense knowledge selection to ensure consistency between the generated empathetic responses and the speaker's situation. The selected knowledge is used to refine the commonsense cognition and empathy expression of the generated responses. Experimental results show that our approach significantly outperforms baseline models in both automatic and human evaluations, generating more coherent and empathetic responses. Moreover, case studies highlight the interpretability of knowledge selection in the responses and the effectiveness of the adaptive module in our model. Code: https://github.com/Hanscal/DCKS.
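As a rough illustration of the knowledge-selection idea described in this abstract (not the authors' released code), the sketch below scores candidate commonsense inferences against a pooled dialogue-context embedding and fuses them with attention-style weights; all tensor shapes, relation names, and function names are illustrative assumptions.

```python
# Minimal sketch of adaptive commonsense-knowledge selection (assumed, not
# the DCKS implementation): score candidate COMET-style inferences against
# the dialogue context and keep the most consistent one.
import torch
import torch.nn.functional as F

def select_knowledge(context_emb: torch.Tensor,
                     knowledge_embs: torch.Tensor,
                     temperature: float = 1.0):
    """context_emb: (d,) pooled context representation.
    knowledge_embs: (k, d) one embedding per candidate relation."""
    scores = F.cosine_similarity(knowledge_embs, context_emb.unsqueeze(0), dim=-1)
    weights = F.softmax(scores / temperature, dim=-1)         # soft, differentiable selection
    best = int(weights.argmax())                              # hard choice for inspection
    fused = weights.unsqueeze(-1).mul(knowledge_embs).sum(0)  # knowledge-infused vector
    return best, fused

# toy usage with random embeddings standing in for encoder outputs
ctx = torch.randn(768)
cands = torch.randn(5, 768)   # e.g. xReact, xWant, xIntent, xNeed, xEffect
idx, fused = select_knowledge(ctx, cands)
print("selected relation index:", idx)
```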

2020

𝒫2: A Plan-and-Pretrain Approach for Knowledge Graph-to-Text Generation
Qipeng Guo | Zhijing Jin | Ning Dai | Xipeng Qiu | Xiangyang Xue | David Wipf | Zheng Zhang
Proceedings of the 3rd International Workshop on Natural Language Generation from the Semantic Web (WebNLG+)

Text verbalization of knowledge graphs is an important problem with wide application to natural language generation (NLG) systems. It is challenging because the generated text not only needs to be grammatically correct (fluency), but also has to contain the given structured knowledge input (relevance) and meet other criteria. We develop a plan-and-pretrain approach, 𝒫2, which consists of a relational graph convolutional network (R-GCN) planner and the pretrained sequence-to-sequence (Seq2Seq) model T5. Specifically, the R-GCN planner first generates an ordering of the knowledge graph triplets, corresponding to the order in which they will be mentioned in the text, and T5 then produces the surface realization of the given plan. In the WebNLG+ 2020 Challenge, our submission ranked 1st on all automatic and human evaluation criteria of the English RDF-to-text generation task.
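A hedged sketch of the plan-then-generate pipeline described above, assuming a Hugging Face T5 checkpoint and an assumed <H>/<R>/<T> linearization; the R-GCN planner is stubbed out with the identity ordering, so this is not the 𝒫2 system itself.

```python
# Sketch of a plan-and-pretrain pipeline: a (stubbed) planner orders the
# input triples, then an off-the-shelf T5 verbalizes the linearized plan.
from transformers import T5ForConditionalGeneration, T5Tokenizer

def plan(triples):
    # Placeholder for the R-GCN planner: in P2 the planner predicts the
    # order in which triples will be mentioned; here we keep input order.
    return list(triples)

def verbalize(triples, model_name="t5-base"):
    tok = T5Tokenizer.from_pretrained(model_name)
    model = T5ForConditionalGeneration.from_pretrained(model_name)
    ordered = plan(triples)
    # assumed linearization scheme; the real system's format may differ
    src = " ".join(f"<H> {h} <R> {r} <T> {t}" for h, r, t in ordered)
    ids = tok(src, return_tensors="pt").input_ids
    out = model.generate(ids, max_length=64, num_beams=4)
    return tok.decode(out[0], skip_special_tokens=True)

print(verbalize([("Alan_Bean", "occupation", "Test_pilot"),
                 ("Alan_Bean", "birthPlace", "Wheeler,_Texas")]))
```

Without task-specific fine-tuning the base checkpoint will not produce faithful verbalizations; the sketch only shows how the plan feeds the Seq2Seq stage.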

BERT-ATTACK: Adversarial Attack Against BERT Using BERT
Linyang Li | Ruotian Ma | Qipeng Guo | Xiangyang Xue | Xipeng Qiu
Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)

Adversarial attacks for discrete data (such as text) have proved significantly more challenging than for continuous data (such as images), since it is difficult to generate adversarial samples with gradient-based methods. Current successful attack methods for text usually adopt heuristic replacement strategies at the character or word level, which makes it challenging to find an optimal solution in the massive space of possible replacement combinations while preserving semantic consistency and language fluency. In this paper, we propose BERT-Attack, a high-quality and effective method for generating adversarial samples using pre-trained masked language models, exemplified by BERT. We turn BERT against its fine-tuned models and other deep neural models on downstream tasks so that we can successfully mislead the target models into predicting incorrectly. Our method outperforms state-of-the-art attack strategies in both success rate and perturbation percentage, while the generated adversarial samples remain fluent and semantically preserved. Moreover, the computational cost is low, making large-scale generation feasible. The code is available at https://github.com/LinyangLee/BERT-Attack.
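The following is a minimal illustration of the masked-LM substitution idea behind this kind of attack, not the released implementation; the victim classifier, sub-word handling, and the importance ranking of positions are omitted, and the model name and helper are assumptions.

```python
# Sketch of masked-LM-based word substitution: mask one word at a time,
# let BERT propose fluent substitutes, and emit the resulting candidate
# sentences. A real attack would keep only candidates that flip the
# victim model's prediction.
from transformers import pipeline

fill = pipeline("fill-mask", model="bert-base-uncased")

def candidate_swaps(sentence: str, top_k: int = 5):
    words = sentence.split()
    for i, w in enumerate(words):
        masked = " ".join(words[:i] + [fill.tokenizer.mask_token] + words[i + 1:])
        for pred in fill(masked, top_k=top_k):
            sub = pred["token_str"].strip()
            if sub.lower() != w.lower():
                yield " ".join(words[:i] + [sub] + words[i + 1:])

# print a few candidate perturbations for a toy input
for adv in list(candidate_swaps("the movie was surprisingly good"))[:5]:
    print(adv)
```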

2019

Star-Transformer
Qipeng Guo | Xipeng Qiu | Pengfei Liu | Yunfan Shao | Xiangyang Xue | Zheng Zhang
Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers)

Although the Transformer has achieved great success on many NLP tasks, its heavy structure with fully-connected attention leads to a dependence on large amounts of training data. In this paper, we present Star-Transformer, a lightweight alternative obtained by careful sparsification. To reduce model complexity, we replace the fully-connected structure with a star-shaped topology, in which every pair of non-adjacent nodes is connected through a shared relay node. Complexity is thus reduced from quadratic to linear, while preserving the capacity to capture both local composition and long-range dependencies. Experiments on four tasks (22 datasets) show that Star-Transformer achieves significant improvements over the standard Transformer on modestly sized datasets.
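A simplified, single-head sketch of one star-topology update (ring neighbours plus a shared relay node); the actual model uses multi-head attention, layer normalization, and non-circular boundary handling, so this only approximates the connectivity pattern that gives linear complexity.

```python
# Simplified Star-Transformer-style update: each satellite node attends
# only to its ring neighbours and the relay node; the relay node then
# attends to all satellites, collecting global information.
import torch
import torch.nn.functional as F

def star_update(h: torch.Tensor, relay: torch.Tensor):
    """h: (n, d) satellite states; relay: (d,) shared relay state."""
    n, d = h.shape
    new_h = torch.empty_like(h)
    for i in range(n):
        # local ring + relay context (circular indexing is a simplification)
        ctx = torch.stack([h[(i - 1) % n], h[i], h[(i + 1) % n], relay])
        attn = F.softmax(ctx @ h[i] / d ** 0.5, dim=0)
        new_h[i] = attn @ ctx
    # relay node gathers global information from every satellite
    attn_r = F.softmax(new_h @ relay / d ** 0.5, dim=0)
    new_relay = attn_r @ new_h
    return new_h, new_relay

tokens = torch.randn(6, 16)
relay = tokens.mean(0)
tokens, relay = star_update(tokens, relay)
print(tokens.shape, relay.shape)
```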

2010

Fusion of Multiple Features and Ranking SVM for Web-based English-Chinese OOV Term Translation
Yuejie Zhang | Yang Wang | Lei Cen | Yanxia Su | Cheng Jin | Xiangyang Xue | Jianping Fan
Coling 2010: Posters

2009

English-Chinese Bi-Directional OOV Translation based on Web Mining and Supervised Learning
Yuejie Zhang | Yang Wang | Xiangyang Xue
Proceedings of the ACL-IJCNLP 2009 Conference Short Papers