Jinpeng Wang


2023

Evaluating Object Hallucination in Large Vision-Language Models
Yifan Li | Yifan Du | Kun Zhou | Jinpeng Wang | Xin Zhao | Ji-Rong Wen
Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing

Inspired by the superior language abilities of large language models (LLMs), large vision-language models (LVLMs) have recently been proposed that integrate powerful LLMs to improve performance on complex multimodal tasks. Despite the promising progress of LVLMs, we find that they suffer from object hallucination, i.e., they tend to generate objects in their descriptions that are inconsistent with the target images. To investigate this, we present the first systematic study of object hallucination in LVLMs. We conduct evaluation experiments on several representative LVLMs and show that they mostly suffer from severe object hallucination. We further discuss how the visual instructions may influence hallucination, and find that objects that frequently appear in the visual instructions, or that frequently co-occur with the objects in the image, are clearly more prone to being hallucinated by LVLMs. In addition, we design a polling-based query method called POPE for better evaluation of object hallucination. Experimental results show that POPE can evaluate object hallucination in a more stable and flexible way.
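A minimal sketch of how a POPE-style polling evaluation could be set up: build balanced yes/no questions about object presence (sampling absent objects as negatives) and score the model's answers. The question template, the negative-sampling choice, and the assumed `ask_lvlm(image, question)` caller that would supply the answers are illustrative assumptions, not the released implementation.

```python
import random

def build_pope_queries(present_objects, candidate_pool, num_negatives=3):
    """Build balanced yes/no polling questions for one image.

    present_objects: objects annotated as present in the image (ground-truth "yes").
    candidate_pool:  objects not in the image, e.g. drawn at random, by corpus
                     frequency, or by co-occurrence statistics (ground-truth "no").
    """
    queries = [(f"Is there a {obj} in the image?", "yes") for obj in present_objects]
    negatives = random.sample(candidate_pool, min(num_negatives, len(candidate_pool)))
    queries += [(f"Is there a {obj} in the image?", "no") for obj in negatives]
    return queries

def evaluate(answers, labels):
    """Accuracy / precision / recall / F1 over the collected yes-no answers."""
    tp = sum(a == "yes" and l == "yes" for a, l in zip(answers, labels))
    fp = sum(a == "yes" and l == "no" for a, l in zip(answers, labels))
    fn = sum(a == "no" and l == "yes" for a, l in zip(answers, labels))
    acc = sum(a == l for a, l in zip(answers, labels)) / len(labels)
    prec = tp / (tp + fp) if tp + fp else 0.0
    rec = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
    return {"accuracy": acc, "precision": prec, "recall": rec, "f1": f1}
```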

2020

Learning Semantic Correspondences from Noisy Data-text Pairs by Local-to-Global Alignments
Feng Nie | Jinpeng Wang | Chin-Yew Lin
Proceedings of the 28th International Conference on Computational Linguistics

Learning semantic correspondences between structured input data (e.g., slot-value pairs) and associated texts is a core problem for many downstream NLP applications, e.g., data-to-text generation. Large-scale datasets recently proposed for generation contain loosely corresponding data-text pairs, where some spans in the text cannot be aligned to the incomplete paired input. To learn semantic correspondences from such datasets, we propose a two-stage local-to-global alignment (L2GA) framework. First, a local model based on multi-instance learning builds alignments for text spans that can be directly grounded to the paired structured input. Then, a novel global model built upon a memory-guided conditional random field (CRF) layer infers the missing alignments for text spans that are not supported by the incomplete paired input, where the memory is designed to leverage alignment clues provided by the local model to strengthen the global model. In this way, the local and global models work jointly to learn semantic correspondences in the same framework. Experimental results show that our proposed method generalizes to both restaurant and computer domains and improves alignment accuracy.
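As a rough illustration of the two-stage flow, the sketch below grounds spans by exact string match in the local stage and reuses the collected clues (the memory) for the remaining spans. The paper's actual local model is trained with multi-instance learning and its global stage is a memory-guided CRF, so this is only a simplified stand-in.

```python
def local_alignment(slot_values, spans):
    """Local stage: ground text spans that literally match a slot value and
    record the grounded surface forms as alignment clues (the memory)."""
    alignments, memory = {}, {}
    for span in spans:
        for slot, value in slot_values.items():
            if span.lower() == str(value).lower():
                alignments[span] = slot
                memory.setdefault(slot, set()).add(span.lower())
    return alignments, memory

def global_alignment(spans, local_alignments, memory):
    """Global stage: label the remaining spans using clues from the memory.
    The paper performs this inference with a memory-guided CRF layer; the
    word-overlap lookup here only gestures at that step."""
    labels = dict(local_alignments)
    for span in spans:
        if span in labels:
            continue
        words = set(span.lower().split())
        scored = [(len(words & {w for v in memory[s] for w in v.split()}), s)
                  for s in memory]
        overlap, slot = max(scored, default=(0, None))
        labels[span] = slot if overlap > 0 else "unaligned"
    return labels
```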

2019

A Simple Recipe towards Reducing Hallucination in Neural Surface Realisation
Feng Nie | Jin-Ge Yao | Jinpeng Wang | Rong Pan | Chin-Yew Lin
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics

Recent neural language generation systems often hallucinate content (i.e., produce irrelevant or contradictory facts), especially when trained on loosely corresponding pairs of input structure and text. To mitigate this issue, we propose to integrate a language understanding module for data refinement with self-training iterations to effectively induce strong equivalence between the input data and the paired text. Experiments on the E2E challenge dataset show that our proposed framework can remove more than 50% of the unaligned noise (relative) from the original data-text pairs. A vanilla sequence-to-sequence neural NLG model trained on the refined data improves on content correctness compared with the current state-of-the-art ensemble generator.
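A schematic of such a refinement loop, assuming a hypothetical `parse_text_to_mr` language-understanding module that maps a text back to the slot-value pairs it actually expresses; the loop structure is an illustration of the idea, not the paper's exact procedure.

```python
def refine_dataset(pairs, parse_text_to_mr, num_iterations=3):
    """Iteratively refine loosely aligned (meaning representation, text) pairs
    by dropping slots that the text does not actually express."""
    refined = list(pairs)
    for _ in range(num_iterations):
        next_round = []
        for mr, text in refined:
            predicted_mr = parse_text_to_mr(text)
            # keep only slots supported by the text; drop unaligned noise
            supported = {k: v for k, v in mr.items() if predicted_mr.get(k) == v}
            next_round.append((supported, text))
        refined = next_round
        # in a self-training setup, the understanding module itself would be
        # re-trained on the refined pairs between iterations
    return refined
```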

Enhancing Neural Data-To-Text Generation Models with External Background Knowledge
Shuang Chen | Jinpeng Wang | Xiaocheng Feng | Feng Jiang | Bing Qin | Chin-Yew Lin
Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)

Recent neural models for data-to-text generation rely on massive parallel pairs of data and text to learn the writing knowledge. They often assume that writing knowledge can be acquired from the training data alone. However, when people write, they not only rely on the data but also consider related knowledge. In this paper, we enhance neural data-to-text models with external knowledge in a simple but effective way to improve the fidelity of the generated text. Besides relying on parallel data and text as in previous work, our model attends to relevant external knowledge, encoded as a temporary memory, and combines this knowledge with the context representation of the data before generating words. This allows the model to infer relevant facts from an external knowledge source that are not explicitly stated in the data table. Experimental results on twenty-one Wikipedia infobox-to-text datasets show that our model, KBAtt, consistently improves a state-of-the-art model on most of the datasets. In addition, to quantify when and why external knowledge is effective, we design a metric, KBGain, which shows a strong correlation with the observed performance boost. This result demonstrates that the relevance of the external knowledge and the sparseness of the original data are the main factors affecting system performance.
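An illustrative sketch of attending over an external-knowledge memory and fusing the result with the current decoder state; the dimensions and wiring are placeholders for the idea, not KBAtt's released architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class KnowledgeAttention(nn.Module):
    """Attend over encoded external-knowledge facts (a temporary memory)
    and combine the attended knowledge with the decoder state."""
    def __init__(self, hidden_dim):
        super().__init__()
        self.query_proj = nn.Linear(hidden_dim, hidden_dim)
        self.fuse = nn.Linear(2 * hidden_dim, hidden_dim)

    def forward(self, decoder_state, kb_memory):
        # decoder_state: (batch, hidden); kb_memory: (batch, num_facts, hidden)
        query = self.query_proj(decoder_state).unsqueeze(1)        # (batch, 1, hidden)
        scores = torch.bmm(query, kb_memory.transpose(1, 2))       # (batch, 1, num_facts)
        weights = F.softmax(scores, dim=-1)
        kb_context = torch.bmm(weights, kb_memory).squeeze(1)      # (batch, hidden)
        fused = torch.tanh(self.fuse(torch.cat([decoder_state, kb_context], dim=-1)))
        return fused, weights.squeeze(1)
```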

An Encoder with non-Sequential Dependency for Neural Data-to-Text Generation
Feng Nie | Jinpeng Wang | Rong Pan | Chin-Yew Lin
Proceedings of the 12th International Conference on Natural Language Generation

Data-to-text generation aims to generate descriptions from structured input data (i.e., a table with multiple records). Existing neural methods for encoding the input data fall into two categories: a) pooling-based encoders, which ignore dependencies between input records, and b) recurrent encoders, which model only sequential dependencies between input records. In our investigation, we find that although the recurrent encoder generally outperforms the pooling-based encoder by learning sequential dependencies, it is sensitive to the order of the input records (i.e., performance degrades when random shuffling noise is injected into the input data). To overcome this problem, we propose to adopt the self-attention mechanism to learn dependencies between arbitrary input records. Experimental results show that the proposed method achieves comparable results and remains stable under random shuffling of the input data.
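A minimal sketch of an order-insensitive record encoder built on self-attention; layer sizes, head counts, and the residual/normalization choices are placeholders, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

class RecordSelfAttentionEncoder(nn.Module):
    """Encode a set of input records with self-attention instead of an RNN,
    so the learned dependencies do not rely on record order.
    record_dim must be divisible by num_heads."""
    def __init__(self, record_dim, num_heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(record_dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(record_dim)

    def forward(self, records):
        # records: (batch, num_records, record_dim); no positional encoding is
        # added, so shuffling the records permutes but does not change the output set
        attended, _ = self.attn(records, records, records)
        return self.norm(records + attended)
```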

2018

Aggregated Semantic Matching for Short Text Entity Linking
Feng Nie | Shuyan Zhou | Jing Liu | Jinpeng Wang | Chin-Yew Lin | Rong Pan
Proceedings of the 22nd Conference on Computational Natural Language Learning

The task of entity linking aims to identify concepts mentioned in a text fragment and link them to a reference knowledge base. Entity linking in long text has been well studied in previous work. However, short-text entity linking is more challenging since the texts are noisy and less coherent. To better utilize the local information provided in short texts, we propose a novel neural network framework, Aggregated Semantic Matching (ASM), in which two different aspects of semantic information between the local context and the candidate entity are captured via representation-based and interaction-based neural semantic matching models, and the two matching signals then work jointly for disambiguation via a rank aggregation mechanism. Our evaluation shows that the proposed model outperforms the state of the art on public tweet datasets.
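A toy sketch of the two matching signals and a simple rank aggregation over candidate entities, assuming pre-computed word vectors for the mention context and each candidate; the paper's matchers are learned neural models and its aggregation mechanism is more elaborate than the rank sum used here.

```python
import numpy as np

def representation_score(context_vecs, entity_vecs):
    """Representation-based matching: cosine similarity of mean-pooled vectors."""
    c, e = context_vecs.mean(axis=0), entity_vecs.mean(axis=0)
    return float(c @ e / (np.linalg.norm(c) * np.linalg.norm(e) + 1e-8))

def interaction_score(context_vecs, entity_vecs):
    """Interaction-based matching: aggregate the word-by-word similarity matrix."""
    sim = context_vecs @ entity_vecs.T            # (context_words, entity_words)
    return float(sim.max(axis=1).mean())          # best match per context word, averaged

def aggregate_ranks(candidates, context_vecs):
    """Rank candidates under each matcher, then combine the two rank lists."""
    rep = {n: representation_score(context_vecs, v) for n, v in candidates.items()}
    inter = {n: interaction_score(context_vecs, v) for n, v in candidates.items()}
    def ranks(scores):
        order = sorted(scores, key=scores.get, reverse=True)
        return {name: i for i, name in enumerate(order)}
    rep_rank, inter_rank = ranks(rep), ranks(inter)
    return sorted(candidates, key=lambda n: rep_rank[n] + inter_rank[n])
```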

Learning Latent Semantic Annotations for Grounding Natural Language to Structured Data
Guanghui Qin | Jin-Ge Yao | Xuening Wang | Jinpeng Wang | Chin-Yew Lin
Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing

Previous work on grounded language learning did not fully capture the semantics underlying the correspondences between structured world-state representations and texts, especially those between numerical values and lexical terms. In this paper, we attempt to learn explicit latent semantic annotations from paired structured tables and texts, establishing correspondences between various types of values and texts. We model the joint probability of data fields, texts, phrasal spans, and latent annotations with an adapted semi-hidden Markov model, and impose a soft statistical constraint to further improve performance. As a by-product, we leverage the induced annotations to extract templates for language generation. Experimental results suggest the feasibility of the setting in this study, as well as the effectiveness of our proposed framework.
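The template-extraction by-product can be illustrated as below, assuming the induced annotations are available as (start, end, field) token spans over the text; that span format is an assumption for illustration, not the paper's representation.

```python
def extract_template(text_tokens, annotations):
    """Turn a text with induced span-to-field annotations into a template by
    replacing each aligned span with a field placeholder.

    annotations: list of (start, end, field) spans over text_tokens,
    with end exclusive and spans non-overlapping."""
    template, i = [], 0
    for start, end, field in sorted(annotations):
        template.extend(text_tokens[i:start])
        template.append(f"<{field}>")
        i = end
    template.extend(text_tokens[i:])
    return " ".join(template)

# e.g. extract_template("the lakers won by 12 points".split(), [(1, 2, "team"), (4, 5, "point_diff")])
# -> "the <team> won by <point_diff> points"
```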

Operation-guided Neural Networks for High Fidelity Data-To-Text Generation
Feng Nie | Jinpeng Wang | Jin-Ge Yao | Rong Pan | Chin-Yew Lin
Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing

Recent neural models for data-to-text generation are mostly based on data-driven end-to-end training over encoder-decoder networks. Even though the generated texts are mostly fluent and informative, these models often generate descriptions that are not consistent with the input structured data. This is a critical issue, especially in domains that require inference or calculations over raw data. In this paper, we attempt to improve the fidelity of neural data-to-text generation by utilizing pre-executed symbolic operations. We propose a framework called Operation-guided Attention-based sequence-to-sequence network (OpAtt), with a specifically designed gating mechanism as well as a quantization module for operation results, to utilize information from pre-executed operations. Experiments on two sports datasets show that our proposed method clearly improves the fidelity of the generated texts to the input structured data.
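A sketch of the pre-execution and quantization steps whose results a gating mechanism like OpAtt's would then consume; the operation set and the bucket boundaries are illustrative assumptions rather than the paper's configuration.

```python
def pre_execute_operations(records):
    """Pre-execute simple symbolic operations over numeric records
    (e.g., team scores). records: dict of name -> number."""
    results = {}
    names = list(records)
    for i, a in enumerate(names):
        for b in names[i + 1:]:
            results[f"diff({a},{b})"] = records[a] - records[b]
            results[f"greater({a},{b})"] = records[a] > records[b]
    results["argmax"] = max(records, key=records.get)
    return results

def quantize(value, bins=(1, 5, 10, 20)):
    """Map a numeric operation result to a coarse bucket, since the rough
    magnitude (a blowout vs. a narrow win) matters more to wording than
    the exact number."""
    for i, b in enumerate(bins):
        if abs(value) <= b:
            return i
    return len(bins)
```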

Data2Text Studio: Automated Text Generation from Structured Data
Longxu Dou | Guanghui Qin | Jinpeng Wang | Jin-Ge Yao | Chin-Yew Lin
Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing: System Demonstrations

Data2Text Studio is a platform for automated text generation from structured data. It is equipped with a Semi-HMMs model to extract high-quality templates and corresponding trigger conditions from parallel data automatically, which improves the interactivity and interpretability of the generated text. In addition, several easy-to-use tools are provided for developers to edit templates of pre-trained models, and APIs are released for developers to call the pre-trained model to generate texts in third-party applications. We conduct experiments on RotoWire datasets for template extraction and text generation. The results show that our model achieves improvements on both tasks.
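A toy illustration of applying extracted templates gated by trigger conditions; the (condition, template) pair format and the `realize` helper are assumptions for illustration, not the Studio's actual API.

```python
def realize(record, templates):
    """Pick the first template whose trigger condition holds for the record
    and fill its slots from the record's fields."""
    for condition, template in templates:
        if condition(record):
            return template.format(**record)
    return None

# e.g. realize({"team": "Lakers", "points": 120},
#              [(lambda r: r["points"] >= 110, "{team} erupted for {points} points."),
#               (lambda r: True, "{team} scored {points} points.")])
```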

2017

A Statistical Framework for Product Description Generation
Jinpeng Wang | Yutai Hou | Jing Liu | Yunbo Cao | Chin-Yew Lin
Proceedings of the Eighth International Joint Conference on Natural Language Processing (Volume 2: Short Papers)

We present a statistical framework that generates accurate and fluent product descriptions from product attributes. Specifically, after extracting templates and learning writing knowledge from attribute-description parallel data, we use the learned knowledge to decide what to say and how to say it when generating product descriptions. To evaluate the accuracy and fluency of the generated descriptions, in addition to BLEU and Recall, we propose to measure what to say (in terms of attribute coverage) and how to say it (by attribute-specified generation) separately. Experimental results show that our framework is effective.
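A simple sketch of the what-to-say side of such an evaluation, measuring attribute coverage by surface-string containment; the containment criterion is a simplifying assumption and the paper's metric may be defined differently.

```python
def attribute_coverage(description, attributes):
    """Fraction of input attribute values that appear verbatim in the
    generated description. attributes: dict of attribute -> value."""
    text = description.lower()
    covered = [v for v in attributes.values() if str(v).lower() in text]
    return len(covered) / len(attributes) if attributes else 0.0

# e.g. attribute_coverage("A lightweight 14-inch laptop with 16GB RAM.",
#                         {"screen": "14-inch", "memory": "16GB", "color": "silver"})  # -> 0.66...
```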

2015

User Based Aggregation for Biterm Topic Model
Weizheng Chen | Jinpeng Wang | Yan Zhang | Hongfei Yan | Xiaoming Li
Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 2: Short Papers)

2013

Mining New Business Opportunities: Identifying Trend related Products by Leveraging Commercial Intents from Microblogs
Jinpeng Wang | Wayne Xin Zhao | Haitian Wei | Hongfei Yan | Xiaoming Li
Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing