Jian-Yun Nie

Also published as: Jian-yun Nie


2023

The Web Can Be Your Oyster for Improving Language Models
Junyi Li | Tianyi Tang | Wayne Xin Zhao | Jingyuan Wang | Jian-Yun Nie | Ji-Rong Wen
Findings of the Association for Computational Linguistics: ACL 2023

Pretrained language models (PLMs) encode a large amount of world knowledge. However, as such knowledge is frozen at training time, the models become static and limited by the training data available then. To further improve the capacity of PLMs on knowledge-intensive tasks, we consider augmenting PLMs with the large-scale web using a search engine. Unlike previous augmentation sources (e.g., a Wikipedia data dump), the web provides broader, more comprehensive, and constantly updated information. In this paper, we present a web-augmented PLM, UniWeb, which is trained over 16 knowledge-intensive tasks in a unified text-to-text format. Instead of simply using the content retrieved from the web, our approach makes two major improvements. First, we propose an adaptive search-engine-assisted learning method that self-evaluates the confidence of the PLM's predictions and adaptively determines when to refer to the web for more data, which avoids useless or noisy augmentation from the web. Second, we design a pretraining task, continual knowledge learning, based on salient span prediction, to reduce the discrepancy between the encoded and retrieved knowledge. Experiments on a wide range of knowledge-intensive tasks show that our model significantly outperforms previous retrieval-augmented methods.
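The adaptive "retrieve only when unsure" decision can be sketched in a few lines; the confidence measure (mean token log-probability), the threshold, and the two helper callables are illustrative assumptions, not the paper's exact formulation:

```python
def sequence_confidence(token_logprobs):
    """Mean log-probability of the generated tokens; higher = more confident."""
    return sum(token_logprobs) / max(len(token_logprobs), 1)

def answer(query, plm_generate, web_search, threshold=-1.0):
    """plm_generate: fn(prompt) -> (text, token_logprobs); web_search: fn(query) -> list of passages."""
    text, logprobs = plm_generate(query)            # closed-book attempt first
    if sequence_confidence(logprobs) >= threshold:
        return text                                 # confident: skip retrieval
    evidence = web_search(query)                    # unsure: consult the web
    augmented_text, _ = plm_generate(query + "\n" + "\n".join(evidence))
    return augmented_text
```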

A Customized Text Sanitization Mechanism with Differential Privacy
Sai Chen | Fengran Mo | Yanhao Wang | Cen Chen | Jian-Yun Nie | Chengyu Wang | Jamie Cui
Findings of the Association for Computational Linguistics: ACL 2023

As privacy issues are receiving increasing attention within the Natural Language Processing (NLP) community, numerous methods have been proposed to sanitize texts subject to differential privacy. However, the state-of-the-art text sanitization mechanisms based on a relaxed notion of metric local differential privacy (MLDP) do not apply to non-metric semantic similarity measures and cannot achieve good privacy-utility trade-offs. To address these limitations, we propose a novel Customized Text sanitization (CusText) mechanism based on the original 𝜖-differential privacy (DP) definition, which is compatible with any similarity measure. Moreover, CusText assigns each input token a customized output set to provide more advanced privacy protection at the token level. Extensive experiments on several benchmark datasets show that CusText achieves a better trade-off between privacy and utility than existing mechanisms. The code is available at https://github.com/sai4july/CusText.
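To make the token-level mechanism concrete, here is a minimal sketch of 𝜖-DP replacement sampling with the exponential mechanism, which this family of sanitizers builds on; the similarity function, output sets, and 𝜖 value are toy assumptions, not the paper's configuration:

```python
import math
import random

def sanitize_token(token, output_set, similarity, epsilon=2.0):
    """Sample a replacement from the token's customized output set with
    P(y) proportional to exp(epsilon * sim(token, y) / 2), i.e. the exponential
    mechanism with sensitivity 1 (similarity assumed to lie in [0, 1])."""
    weights = [math.exp(epsilon * similarity(token, y) / 2) for y in output_set]
    return random.choices(output_set, weights=weights, k=1)[0]
```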

MoqaGPT: Zero-Shot Multi-modal Open-domain Question Answering with Large Language Model
Le Zhang | Yihong Wu | Fengran Mo | Jian-Yun Nie | Aishwarya Agrawal
Findings of the Association for Computational Linguistics: EMNLP 2023

Multi-modal open-domain question answering typically requires evidence retrieval from databases across diverse modalities, such as images, tables, passages, etc. Even Large Language Models (LLMs) like GPT-4 fall short in this task. To enable LLMs to tackle the task in a zero-shot manner, we introduce MoqaGPT, a straightforward and flexible framework. Using a divide-and-conquer strategy that bypasses intricate multi-modality ranking, our framework can accommodate new modalities and seamlessly transition to new models for the task. Built upon LLMs, MoqaGPT retrieves and extracts answers from each modality separately, then fuses this multi-modal information using LLMs to produce a final answer. Our methodology boosts performance on the MMCoQA dataset, improving F1 by +37.91 points and EM by +34.07 points over the supervised baseline. On the MultiModalQA dataset, MoqaGPT surpasses the zero-shot baseline, improving F1 by 9.5 points and EM by 10.1 points, and significantly closes the gap with supervised methods. Our codebase is available at https://github.com/lezhang7/MOQAGPT.
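The divide-and-conquer skeleton described above can be sketched as follows; `modality_qa` and `llm` are placeholder callables you would back with your own per-modality retrievers and readers and an LLM:

```python
def moqa(question, modality_qa, llm):
    """modality_qa: dict mapping modality name -> fn(question) -> candidate answer.
    llm: fn(prompt) -> text."""
    candidates = {m: qa(question) for m, qa in modality_qa.items()}   # divide
    prompt = (
        f"Question: {question}\n"
        + "\n".join(f"Answer from {m}: {a}" for m, a in candidates.items())
        + "\nFuse these candidates into one final answer:"
    )
    return llm(prompt)                                                # conquer
```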

HaluEval: A Large-Scale Hallucination Evaluation Benchmark for Large Language Models
Junyi Li | Xiaoxue Cheng | Xin Zhao | Jian-Yun Nie | Ji-Rong Wen
Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing

Large language models (LLMs), such as ChatGPT, are prone to generating hallucinations, i.e., content that conflicts with the source or cannot be verified against factual knowledge. To understand what types of content LLMs tend to hallucinate, and to what extent, we introduce the Hallucination Evaluation for Large Language Models (HaluEval) benchmark, a large collection of generated and human-annotated hallucinated samples for evaluating the performance of LLMs in recognizing hallucination. To generate these samples, we propose a ChatGPT-based two-step framework, i.e., sampling-then-filtering. In addition, we hire human labelers to annotate the hallucinations in ChatGPT responses. The empirical results suggest that ChatGPT is likely to generate hallucinated content on specific topics by fabricating unverifiable information (in about 19.5% of user queries). Moreover, existing LLMs face great challenges in recognizing hallucinations in text. Nevertheless, our experiments also show that hallucination recognition can be improved by providing external knowledge or adding reasoning steps.
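A rough sketch of sampling-then-filtering, where the prompts and the filtering criterion are assumptions for illustration rather than the paper's templates:

```python
def build_hallucinated_sample(question, correct_answer, llm, n=4):
    """llm: fn(prompt) -> text, a placeholder for a ChatGPT-style model."""
    # Step 1 (sampling): draw several plausible-but-wrong candidate answers.
    samples = [llm(f"Give a plausible but WRONG answer to: {question}")
               for _ in range(n)]
    # Step 2 (filtering): keep the candidate hardest to tell apart from truth.
    listing = "\n".join(f"{i}: {s}" for i, s in enumerate(samples))
    idx = int(llm(
        f"Question: {question}\nCorrect answer: {correct_answer}\n"
        f"Candidates:\n{listing}\n"
        "Reply with only the index of the most deceptive candidate:"
    ))
    return samples[idx]
```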

Augmentation de jeux de données RI pour la recherche conversationnelle à initiative mixte
Pierre Erbacher | Philippe Preux | Jian-Yun Nie | Laure Soulier
Actes de CORIA-TALN 2023. Actes de la 18e Conférence en Recherche d'Information et Applications (CORIA)

One particularity of conversational search systems is that they involve mixed initiatives, such as system-generated query clarification questions intended to better understand the user's need. Evaluating such systems at large scale on the end IR task is very difficult and requires adequate datasets containing such interactions. However, current datasets focus only on traditional ad-hoc IR tasks or on query clarification tasks. To fill this gap, we propose a methodology to automatically build large-scale conversational IR datasets from ad-hoc IR datasets, in order to facilitate explorations in conversational IR. We perform a thorough evaluation showing the quality and relevance of the generated interactions for each initial query. This paper shows the feasibility and utility of augmenting ad-hoc IR datasets for conversational IR.

CoSPLADE : Adaptation d’un Modèle Neuronal Basé sur des Représentations Parcimonieuses pour la Recherche d’Information Conversationnelle
Nam Le Hai | Thomas Gerald | Thibault Formal | Jian-Yun Nie | Benjamin Piwowarski | Laure Soulier
Actes de CORIA-TALN 2023. Actes de la 18e Conférence en Recherche d'Information et Applications (CORIA)

Conversational search is a task that aims to retrieve documents based on the user's current question together with the full conversation history. Most previous methods are based on a multi-stage approach relying on a reformulation of the question. This reformulation step is critical, as it can lead to suboptimal document ranking. Other approaches have tried to rank documents directly, but mostly rely on a dataset containing pseudo-labels. In this work, we propose a lightweight and innovative learning technique for a contextualized ranking model based on SPLADE. Building on SPLADE's sparse representations, we show that our model, when combined with the T5Mono re-ranking model, obtains results competitive with those of the participants in the TREC CAsT 2020 and 2021 evaluation campaigns. The source code is available at https://github.com/anonymous.
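For readers unfamiliar with SPLADE, the sparse representations it relies on can be illustrated as follows: queries and documents become vocabulary-indexed weight vectors, and relevance is their dot product. The weights below are toy values; real SPLADE derives them from a PLM.

```python
def sparse_score(query_weights, doc_weights):
    """Dot product of two sparse term-weight vectors stored as dicts."""
    return sum(w * doc_weights.get(term, 0.0) for term, w in query_weights.items())

q = {"conversational": 1.3, "search": 0.9}   # toy query term weights
d = {"search": 1.1, "dialogue": 0.7}         # toy document term weights
print(sparse_score(q, d))                    # 0.9 * 1.1 = 0.99
```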

Recherche d’information conversationnelle
Laure Soulier | Pierre Erbacher | Thomas Gerald | Hanane Djeddal | Jian-Yun Nie | Philippe Preux
Actes de CORIA-TALN 2023. Actes de la 30e Conférence sur le Traitement Automatique des Langues Naturelles (TALN), volume 6 : projets

Since 2018, the ANR JCJC SESAMS project has focused on the now-current paradigm of conversational information retrieval systems. The objective is to formalize information retrieval models capable of making interactions with users more fluid during a search session. We address several challenges: taking a natural language conversation into account in the context of information retrieval, generating interactions to clarify information needs, generating answers in natural language, and continual learning to adapt to new user needs. In this poster, we present these challenges and the associated contributions. We also discuss research perspectives in this area following the recent development of large language models.

ConvGQR: Generative Query Reformulation for Conversational Search
Fengran Mo | Kelong Mao | Yutao Zhu | Yihong Wu | Kaiyu Huang | Jian-Yun Nie
Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)

In conversational search, the user's real search intent for the current turn depends on the previous conversation history. It is challenging to determine a good search query from the whole conversation context. To avoid the expensive re-training of the query encoder, most existing methods learn a rewriting model that de-contextualizes the current query by mimicking manual query rewriting. However, manually rewritten queries are not always the best search queries, so training a rewriting model on them leads to sub-optimal queries. Another useful source of information for enhancing the search query is the potential answer to the question. In this paper, we propose ConvGQR, a new framework that reformulates conversational queries using two generative pre-trained language models (PLMs), one for query rewriting and the other for generating potential answers. By combining both, ConvGQR produces better search queries. In addition, to relate query reformulation to the retrieval task, we propose a knowledge infusion mechanism that optimizes both query reformulation and retrieval. Extensive experiments on four conversational search datasets demonstrate the effectiveness of ConvGQR.
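The combination of the two generators can be sketched as follows; both callables are placeholders for fine-tuned PLMs, and simple concatenation stands in for the paper's full reformulation pipeline:

```python
def reformulate(history, current_query, rewriter, answer_generator):
    """history: list of previous turns; the two generators are placeholder PLMs."""
    context = "\n".join(history + [current_query])
    rewrite = rewriter(context)                 # de-contextualized query
    pseudo_answer = answer_generator(context)   # guessed answer, used as expansion
    return f"{rewrite} {pseudo_answer}"         # final search query
```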

2022

Improving Few-Shot Relation Classification by Prototypical Representation Learning with Definition Text
Li Zhenzhen | Yuyang Zhang | Jian-Yun Nie | Dongsheng Li
Findings of the Association for Computational Linguistics: NAACL 2022

Few-shot relation classification is difficult because the few available instances may not represent the relation patterns well. Some existing approaches explore extra information, such as the relation definition, in addition to the instances, to learn a better relation representation. However, the extra information has been encoded independently of the labeled instances. In this paper, we propose to learn a prototype encoder from relation definitions in a way that is useful for relation instance classification. To this end, we jointly train a prototype encoder over definitions and an instance encoder. Extensive experiments on several datasets demonstrate the effectiveness and usefulness of our definition-based prototype encoder, enabling us to outperform state-of-the-art approaches.
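A minimal sketch of the joint objective, assuming generic encoder callables (the encoders, shapes, and loss below are illustrative, not the paper's exact training setup): class prototypes are encoded from definition texts, and instances are classified by similarity to them.

```python
import torch
import torch.nn.functional as F

def joint_loss(instance_enc, proto_enc, instances, definitions, labels):
    """definitions: one text per relation class; labels: (b,) class indices."""
    protos = torch.stack([proto_enc(d) for d in definitions])  # (n_classes, d)
    embeddings = instance_enc(instances)                       # (b, d)
    logits = embeddings @ protos.T                             # similarity scores
    # Backpropagating through both encoders trains them jointly.
    return F.cross_entropy(logits, labels)
```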

UPER: Boosting Multi-Document Summarization with an Unsupervised Prompt-based Extractor
Shangqing Tu | Jifan Yu | Fangwei Zhu | Juanzi Li | Lei Hou | Jian-Yun Nie
Proceedings of the 29th International Conference on Computational Linguistics

Multi-Document Summarization (MDS) commonly employs the two-stage extract-then-abstract paradigm, which first extracts a relatively short meta-document and then feeds it into a deep neural network to generate an abstract. Previous work usually takes the ROUGE score as the label for training a scoring model to evaluate source documents. However, the trained scoring model is prone to under-fitting in low-resource settings, as it relies on the training data. To extract documents effectively, we construct prompting templates that invoke the underlying knowledge in a Pre-trained Language Model (PLM) to calculate the perplexity of documents and keywords, which can assess a document's semantic salience. Our unsupervised approach can be applied as a plug-in to boost other metrics for evaluating a document's salience, thus improving the subsequent abstract generation. We obtain positive results on 2 MDS datasets, 2 data settings, and 2 abstractive backbone models, showing our method's effectiveness. Our code is available at https://github.com/THU-KEG/UPER.
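As a concrete (but hypothetical) illustration of perplexity-based salience scoring with an off-the-shelf PLM, the sketch below uses GPT-2 via Hugging Face transformers; the prompt template and the model choice are assumptions, not the paper's exact setup:

```python
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tok = GPT2TokenizerFast.from_pretrained("gpt2")
lm = GPT2LMHeadModel.from_pretrained("gpt2").eval()

@torch.no_grad()
def perplexity(text):
    ids = tok(text, return_tensors="pt", truncation=True).input_ids
    loss = lm(input_ids=ids, labels=ids).loss   # mean token negative log-likelihood
    return torch.exp(loss).item()

def salience(document, keywords):
    # Hypothetical template: a salient document should make the query keywords
    # easy to predict, i.e. yield low perplexity; negate so higher = better.
    return -perplexity(f"{document} In summary, this is about {keywords}.")
```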

Learning to Transfer Prompts for Text Generation
Junyi Li | Tianyi Tang | Jian-Yun Nie | Ji-Rong Wen | Xin Zhao
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies

Pretrained language models (PLMs) have made remarkable progress in text generation tasks via fine-tuning. However, it is challenging to fine-tune PLMs in a data-scarce situation. Therefore, it is non-trivial to develop a general and lightweight model that can adapt to various PLM-based text generation tasks. To fulfill this purpose, recent prompt-based learning offers a potential solution. In this paper, we improve this technique and propose a novel prompt-based method (PTG) for text generation in a transferable setting. First, PTG learns a set of source prompts for various source generation tasks and then transfers these prompts as target prompts to perform target generation tasks. To consider both task- and instance-level information, we design an adaptive attention mechanism to derive the target prompts. For each data instance, PTG learns a specific target prompt by attending to highly relevant source prompts. In extensive experiments, PTG yields competitive or better results than fine-tuning methods. We release our source prompts as an open resource, where users can add or reuse them to improve new text generation tasks for future research. Code and data are available at https://github.com/RUCAIBox/Transfer-Prompts-for-Text-Generation.
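A toy PyTorch version of deriving an instance-specific target prompt by attending over source prompts; the dimensions, the key construction, and the similarity function are illustrative, and the paper's adaptive attention is more elaborate:

```python
import torch
import torch.nn.functional as F

def derive_target_prompt(instance_repr, source_prompts):
    """instance_repr: (d,) encoding of the input instance.
    source_prompts: (n_tasks, prompt_len, d) learned source prompts."""
    keys = source_prompts.mean(dim=1)              # (n_tasks, d): one key per task
    attn = F.softmax(keys @ instance_repr, dim=0)  # relevance of each source task
    # Weighted combination of source prompts -> instance-specific target prompt
    return torch.einsum("t,tld->ld", attn, source_prompts)

source_prompts = torch.randn(5, 10, 768)   # 5 source tasks, length-10 prompts
instance = torch.randn(768)
target = derive_target_prompt(instance, source_prompts)   # (10, 768)
```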

ELMER: A Non-Autoregressive Pre-trained Language Model for Efficient and Effective Text Generation
Junyi Li | Tianyi Tang | Wayne Xin Zhao | Jian-Yun Nie | Ji-Rong Wen
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing

We study the text generation task under the approach of pre-trained language models (PLMs). Typically, an auto-regressive (AR) method is adopted to generate text token by token. Despite the many advantages of AR generation, its inference is usually inefficient. Therefore, non-autoregressive (NAR) models have been proposed to generate all target tokens simultaneously. However, NAR models usually generate text of lower quality due to the absence of token dependencies in the output text. In this paper, we propose ELMER, an efficient and effective PLM for NAR text generation that explicitly models token dependencies during NAR generation. By leveraging the early-exit technique, ELMER enables tokens to be generated at different layers according to their prediction confidence (a more confident token exits at a lower layer). In addition, we propose a novel pre-training objective, Layer Permutation Language Modeling, which pre-trains ELMER by permuting the exit layer of each token in a sequence. Experiments on three text generation tasks show that ELMER significantly outperforms NAR models and further narrows the performance gap with AR PLMs (ELMER 29.92 vs. BART 30.61 ROUGE-L on XSUM) while achieving over 10x inference speedup.
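A schematic rendering of confidence-based early exit during parallel (NAR) decoding; the layer stack, head, and threshold are toy assumptions, and ELMER's actual architecture and training differ in detail:

```python
import torch

def early_exit_decode(hidden, layers, lm_head, tau=0.9):
    """hidden: (seq_len, d) states for all positions, generated in parallel.
    layers: list of layer modules; lm_head maps d -> vocabulary logits."""
    done = torch.zeros(hidden.size(0), dtype=torch.bool)
    tokens = torch.zeros(hidden.size(0), dtype=torch.long)
    for layer in layers:
        hidden = layer(hidden)
        probs = torch.softmax(lm_head(hidden), dim=-1)
        conf, pred = probs.max(dim=-1)
        exit_now = (conf >= tau) & ~done   # confident tokens exit at this layer
        tokens[exit_now] = pred[exit_now]
        done |= exit_now
    tokens[~done] = pred[~done]            # remaining tokens exit at the top layer
    return tokens
```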

TextBox 2.0: A Text Generation Library with Pre-trained Language Models
Tianyi Tang | Junyi Li | Zhipeng Chen | Yiwen Hu | Zhuohao Yu | Wenxun Dai | Wayne Xin Zhao | Jian-yun Nie | Ji-rong Wen
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing: System Demonstrations

To facilitate research on text generation, this paper presents a comprehensive and unified library, TextBox 2.0, focusing on the use of pre-trained language models (PLMs). To be comprehensive, our library covers 13 common text generation tasks and their corresponding 83 datasets, and further incorporates 45 PLMs covering general, translation, Chinese, dialogue, controllable, distilled, prompting, and lightweight PLMs. We also implement 4 efficient training strategies and provide 4 generation objectives for pre-training new PLMs from scratch. To be unified, we design the interfaces to support the entire research pipeline (from data loading to training and evaluation), ensuring that each step can be fulfilled in a unified way. Despite the rich functionality, the library is easy to use, through either the friendly Python API or the command line. To validate the effectiveness of our library, we conduct extensive experiments and exemplify four types of research scenarios. The project is released at https://github.com/RUCAIBox/TextBox#2.0.

2021

An Investigation of Suitability of Pre-Trained Language Models for Dialogue Generation – Avoiding Discrepancies
Yan Zeng | Jian-Yun Nie
Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021

Inductive Topic Variational Graph Auto-Encoder for Text Classification
Qianqian Xie | Jimin Huang | Pan Du | Min Peng | Jian-Yun Nie
Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies

Graph convolutional networks (GCNs) have recently been applied to text classification with excellent performance. However, existing GCN-based methods do not assume an explicit latent semantic structure of documents, making the learned representations less effective and difficult to interpret. They are also transductive in nature and thus cannot handle out-of-graph documents. To address these issues, we propose a novel model named inductive Topic Variational Graph Auto-Encoder (T-VGAE), which incorporates a topic model into a variational graph auto-encoder (VGAE) to capture the hidden semantic information between documents and words. T-VGAE inherits the interpretability of the topic model and the efficient information propagation mechanism of VGAE. It learns probabilistic representations of words and documents by jointly encoding and reconstructing the global word-level graph and bipartite document graphs, where each document is considered individually and decoupled from the global correlation graph so as to enable inductive learning. Our experiments on several benchmark datasets show that our method outperforms existing competitive models on supervised and semi-supervised text classification, as well as on unsupervised text representation learning. In addition, it has higher interpretability and is able to deal with unseen documents.
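For context, the VGAE backbone that T-VGAE extends can be reduced to a reparameterized Gaussian encoding plus inner-product decoding of the adjacency; this is a generic VGAE sketch, not the paper's topic-augmented model:

```python
import torch

def vgae_step(mu, logvar):
    """mu, logvar: (n_nodes, d) Gaussian parameters from a graph encoder."""
    z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization
    adj_hat = torch.sigmoid(z @ z.T)                         # edge probabilities
    kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
    # Training loss = reconstruction of adj_hat against the real graph + kl.
    return z, adj_hat, kl
```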

A Simple and Efficient Multi-Task Learning Approach for Conditioned Dialogue Generation
Yan Zeng | Jian-Yun Nie
Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies

Conditioned dialogue generation suffers from the scarcity of labeled responses. In this work, we exploit labeled non-dialogue text data related to the condition, which is much easier to collect. We propose a multi-task learning approach to leverage both labeled dialogue and text data. The three tasks jointly optimize the same pre-trained Transformer: conditioned dialogue generation on the labeled dialogue data, and conditioned language encoding and conditioned language generation on the labeled text data. Experimental results show that our approach outperforms state-of-the-art models by leveraging the labeled texts, and it also obtains a larger performance improvement than previous methods for leveraging text data.
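The joint objective can be summarized in a few lines; the three loss methods are hypothetical names standing in for the tasks listed above, with uniform weighting assumed:

```python
def training_step(model, dialogue_batch, text_batch, optimizer):
    loss = (model.conditioned_dialogue_loss(dialogue_batch)    # labeled dialogues
            + model.conditioned_encoding_loss(text_batch)      # labeled texts
            + model.conditioned_generation_loss(text_batch))   # labeled texts
    optimizer.zero_grad()
    loss.backward()      # gradients flow into the one shared Transformer
    optimizer.step()
    return loss.item()
```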

Learning Syntactic Dense Embedding with Correlation Graph for Automatic Readability Assessment
Xinying Qiu | Yuan Chen | Hanwu Chen | Jian-Yun Nie | Yuming Shen | Dawei Lu
Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)

Deep learning models for automatic readability assessment generally discard the linguistic features traditionally used in machine learning models for the task. We propose to incorporate linguistic features into neural network models by learning syntactic dense embeddings based on those features. To cope with the relationships between the features, we form a correlation graph among features and use it to learn their embeddings, so that similar features are represented by similar embeddings. Experiments with six datasets at two proficiency levels demonstrate that our proposed methodology can complement a BERT-only model to achieve significantly better performance on automatic readability assessment.
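One simple way to realize "similar features get similar embeddings" from a correlation graph is a spectral embedding, sketched below with NumPy; the paper learns the embeddings neurally, and the threshold here is an illustrative choice:

```python
import numpy as np

def feature_embeddings(X, dim=16, threshold=0.5):
    """X: (n_samples, n_features) matrix of linguistic feature values."""
    corr = np.corrcoef(X, rowvar=False)                   # feature-feature correlations
    adj = np.where(np.abs(corr) >= threshold, corr, 0.0)  # keep strong edges only
    vals, vecs = np.linalg.eigh(adj)                      # spectral decomposition
    top = np.argsort(vals)[::-1][:dim]                    # leading eigenvectors
    return vecs[:, top]                                   # (n_features, dim) embeddings
```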

2020

ScriptWriter: Narrative-Guided Script Generation
Yutao Zhu | Ruihua Song | Zhicheng Dou | Jian-Yun Nie | Jin Zhou
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics

It is appealing to have a system that generates a story or scripts automatically from a storyline, even though this is still out of our reach. In dialogue systems, it would also be useful to drive dialogues by a dialogue plan. In this paper, we address a key problem involved in these applications - guiding a dialogue by a narrative. The proposed model ScriptWriter selects the best response among the candidates that fit the context as well as the given narrative. It keeps track of what in the narrative has been said and what is to be said. A narrative plays a different role than the context (i.e., previous utterances), which is generally used in current dialogue systems. Due to the unavailability of data for this new application, we construct a new large-scale data collection GraphMovie from a movie website where end-users can upload their narratives freely when watching a movie. Experimental results on the dataset show that our proposed approach based on narratives significantly outperforms the baselines that simply use the narrative as a kind of context.

2018

Mutux at SemEval-2018 Task 1: Exploring Impacts of Context Information On Emotion Detection
Pan Du | Jian-Yun Nie
Proceedings of the 12th International Workshop on Semantic Evaluation

This paper describes MuTuX, our system designed for Task 1-5a, emotion classification of tweets, at SemEval-2018. The system aims at exploring the potential of contextual information of terms for emotion analysis. A recurrent neural network is adopted to capture the context information of terms in tweets. Only term features and their sequential relations are used in our system. The submitted system ranks 16th out of 35 systems on the task of emotion detection in English-language tweets.

2015

TJUdeM: A Combination Classifier for Aspect Category Detection and Sentiment Polarity Classification
Zhifei Zhang | Jian-Yun Nie | Hongling Wang
Proceedings of the 9th International Workshop on Semantic Evaluation (SemEval 2015)

A Neural Network Approach to Context-Sensitive Generation of Conversational Responses
Alessandro Sordoni | Michel Galley | Michael Auli | Chris Brockett | Yangfeng Ji | Margaret Mitchell | Jian-Yun Nie | Jianfeng Gao | Bill Dolan
Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies

Préface [Foreword]
Vincent Claveau | Jian-Yun Nie
Traitement Automatique des Langues, Volume 56, Numéro 3 : Recherche d'Information [Information Retrieval]

2012

Bridging the Gap between Intrinsic and Perceived Relevance in Snippet Generation
Jing He | Pablo Duboue | Jian-Yun Nie
Proceedings of COLING 2012

2011

Summarize What You Are Interested In: An Optimization Framework for Interactive Personalized Summarization
Rui Yan | Jian-Yun Nie | Xiaoming Li
Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing

2010

Clinical Information Retrieval using Document and PICO Structure
Florian Boudin | Jian-Yun Nie | Martin Dawes
Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics

Positional Language Models for Clinical Information Retrieval
Florian Boudin | Jian-Yun Nie | Martin Dawes
Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing

RALI: Automatic Weighting of Text Window Distances
Bernard Brosseau-Villeneuve | Noriko Kando | Jian-Yun Nie
Proceedings of the 5th International Workshop on Semantic Evaluation

Towards an optimal weighting of context words based on distance
Bernard Brosseau-Villeneuve | Jian-Yun Nie | Noriko Kando
Proceedings of the 23rd International Conference on Computational Linguistics (Coling 2010)

2009

Search Engine Adaptation by Feedback Control Adjustment for Time-sensitive Query
Ruiqiang Zhang | Yi Chang | Zhaohui Zheng | Donald Metzler | Jian-yun Nie
Proceedings of Human Language Technologies: The 2009 Annual Conference of the North American Chapter of the Association for Computational Linguistics, Companion Volume: Short Papers

2008

A Comparative Study for Query Translation using Linear Combination and Confidence Measure
Youssef Kadri | Jian-Yun Nie
Proceedings of the Third International Joint Conference on Natural Language Processing: Volume-I

Selecting Query Term Alternations for Web Search by Exploiting Query Contexts
Guihong Cao | Stephen Robertson | Jian-Yun Nie
Proceedings of ACL-08: HLT

2007

A system to mine large-scale bilingual dictionaries from monolingual web pages
Guihong Cao | Jianfeng Gao | Jian-Yun Nie
Proceedings of Machine Translation Summit XI: Papers

2006

An Information-Theoretic Approach to Automatic Evaluation of Summaries
Chin-Yew Lin | Guihong Cao | Jianfeng Gao | Jian-Yun Nie
Proceedings of the Human Language Technology Conference of the NAACL, Main Conference

Effective Stemming for Arabic Information Retrieval
Youssef Kadri | Jian-Yun Nie
Proceedings of the International Conference on the Challenge of Arabic for NLP/MT

Arabic has a very rich and complex morphology, and appropriate morphological processing is very important for Information Retrieval (IR). In this paper, we propose a new stemming technique that tries to determine the stem of a word, representing its semantic core, according to Arabic morphology. This method is compared to a commonly used light stemming technique that truncates a word by simple rules. Our tests on TREC collections show that the new stemming technique is more effective than light stemming.
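For contrast, the kind of light stemmer the paper compares against can be written as a handful of truncation rules; the affix lists below are illustrative, not the exact rule set used in the experiments:

```python
# Toy light stemmer: strip one common prefix and one common suffix, keeping
# at least three characters of the word. Affix lists are illustrative only.
PREFIXES = ["وال", "بال", "كال", "فال", "ال", "لل", "و"]
SUFFIXES = ["ها", "ان", "ات", "ون", "ين", "ية", "ه", "ة", "ي"]

def light_stem(word):
    for p in PREFIXES:
        if word.startswith(p) and len(word) - len(p) >= 3:
            word = word[len(p):]
            break
    for s in SUFFIXES:
        if word.endswith(s) and len(word) - len(s) >= 3:
            word = word[:-len(s)]
            break
    return word
```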

An Iterative Implicit Feedback Approach to Personalized Search
Yuanhua Lv | Le Sun | Junlin Zhang | Jian-Yun Nie | Wan Chen | Wei Zhang
Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the Association for Computational Linguistics

Context-Dependent Term Relations for Information Retrieval
Jing Bai | Jian-Yun Nie | Guihong Cao
Proceedings of the 2006 Conference on Empirical Methods in Natural Language Processing

2004

Mots composés dans les modèles de langue pour la recherche d’information
Carmen Alvarez | Philippe Langlais | Jian-Yun Nie
Actes de la 11ème conférence sur le Traitement Automatique des Langues Naturelles. Posters

A classical approach in information retrieval (IR) consists of building a representation of documents and queries based on the single words they contain. The use of bigram models has been studied, but the constraints these works impose on word order and adjacency are not always justified for information retrieval. We propose a new approach based on language models that incorporate lexical affinities (LAs), i.e., unordered pairs of words that occur near each other in a text. We describe this model and compare it to the more traditional unigram and bigram models as well as to the vector space model.
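Extracting lexical affinities can be illustrated in a few lines: count unordered word pairs that co-occur within a small window (the window size is an arbitrary illustrative choice):

```python
from collections import Counter

def lexical_affinities(tokens, window=5):
    pairs = Counter()
    for i, w in enumerate(tokens):
        for v in tokens[i + 1 : i + window]:   # words within the window after w
            if v != w:
                pairs[frozenset((w, v))] += 1  # unordered pair
    return pairs

text = "information retrieval models for information access".split()
print(lexical_affinities(text))
```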

2003

Embedding Web-Based Statistical Translation Models in Cross-Language Information Retrieval
Wessel Kraaij | Jian-Yun Nie | Michel Simard
Computational Linguistics, Volume 29, Number 3, September 2003: Special Issue on the Web as Corpus

2000

Automatic construction of parallel English-Chinese corpus for cross-language information retrieval
Jiang Chen | Jian-Yun Nie
Sixth Applied Natural Language Processing Conference

1998

Using a Probabilistic Translation Model for Cross-Language Information Retrieval
Jian-Yun Nie | Pierre Isabelle | George Foster
Sixth Workshop on Very Large Corpora

1995

A Unifying Approach To Segmentation Of Chinese And Its Application To Text Retrieval
Jian-Yun Nie | Xiaobo Ren | Martin Brisebois
Proceedings of Rocling VIII Computational Linguistics Conference VIII