Giuseppe Carenini


2024

Discourse Relation Prediction and Discourse Parsing in Dialogues with Minimal Supervision
Chuyuan Li | Chloé Braud | Maxime Amblard | Giuseppe Carenini
Proceedings of the 5th Workshop on Computational Approaches to Discourse (CODI 2024)

Discourse analysis plays a crucial role in Natural Language Processing, with discourse relation prediction arguably being the most difficult task in discourse parsing. Previous studies have generally focused on explicit or implicit discourse relation classification in monologues, leaving dialogue an under-explored domain. To address the data scarcity issue, we propose to leverage self-training strategies based on a Transformer backbone. Moreover, we design the first semi-supervised pipeline that sequentially predicts discourse structures and relations. Using only 50 examples, our relation prediction module achieves an accuracy of 58.4 on the STAC corpus, close to the supervised state of the art. Full parsing results show notable improvements over supervised models both in-domain (gaming) and cross-domain (technical chat), with better stability.
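
As an aside, the self-training strategy the abstract alludes to follows a standard confidence-thresholding recipe. Below is a minimal, hypothetical sketch using scikit-learn's LogisticRegression as a stand-in for the Transformer backbone; the threshold and number of rounds are illustrative, not the paper's settings.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def self_train(X_lab, y_lab, X_unlab, rounds=3, threshold=0.9):
    # Warm-start on the small labeled seed set (e.g., 50 examples).
    clf = LogisticRegression(max_iter=1000).fit(X_lab, y_lab)
    for _ in range(rounds):
        probs = clf.predict_proba(X_unlab)
        conf = probs.max(axis=1)
        pseudo = clf.classes_[probs.argmax(axis=1)]
        keep = conf >= threshold          # keep only confident pseudo-labels
        if not keep.any():
            break
        X_aug = np.vstack([X_lab, X_unlab[keep]])
        y_aug = np.concatenate([y_lab, pseudo[keep]])
        clf = LogisticRegression(max_iter=1000).fit(X_aug, y_aug)
    return clf
```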

Personalized Abstractive Summarization by Tri-agent Generation Pipeline
Wen Xiao | Yujia Xie | Giuseppe Carenini | Pengcheng He
Findings of the Association for Computational Linguistics: EACL 2024

Tailoring outputs from large language models, like ChatGPT, to implicit user preferences remains a challenge despite their impressive generative capabilities. In this paper, we propose a tri-agent generation pipeline comprising a generator, an instructor, and an editor to enhance output personalization. The generator produces an initial output, the instructor automatically generates editing instructions based on user preferences, and the editor refines the output to align with those preferences. The inference-only large language model (ChatGPT) serves as both the generator and editor, with a smaller model acting as the instructor to guide output generation. We train the instructor using editor-steered reinforcement learning, leveraging feedback from a large-scale editor model to optimize instruction generation. Experimental results on two abstractive summarization datasets demonstrate the effectiveness of our approach in generating outputs that better meet user expectations.
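
The tri-agent division of labor can be outlined in a few lines. The following is a schematic sketch only: call_llm and instructor_model are hypothetical placeholders standing in for the ChatGPT generator/editor and the smaller trained instructor described in the abstract.

```python
def personalize_summary(document, user_prefs, call_llm, instructor_model):
    # Generator: an inference-only LLM produces an initial draft.
    draft = call_llm(f"Summarize the following document:\n{document}")
    # Instructor: a smaller model turns user preferences into an editing instruction.
    instruction = instructor_model.generate(document, draft, user_prefs)
    # Editor: the LLM revises the draft to follow the instruction.
    return call_llm(
        "Revise the summary below according to the instruction.\n"
        f"Instruction: {instruction}\nSummary: {draft}"
    )
```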

2023

Discourse Structure Extraction from Pre-Trained and Fine-Tuned Language Models in Dialogues
Chuyuan Li | Patrick Huber | Wen Xiao | Maxime Amblard | Chloe Braud | Giuseppe Carenini
Findings of the Association for Computational Linguistics: EACL 2023

Discourse processing suffers from data sparsity, especially for dialogues. As a result, we explore approaches to infer latent discourse structures for dialogues, based on attention matrices from Pre-trained Language Models (PLMs). We investigate multiple auxiliary tasks for fine-tuning and show that the dialogue-tailored Sentence Ordering task performs best. To locate and exploit discourse information in PLMs, we propose an unsupervised and a semi-supervised method. Our proposals thereby achieve encouraging results on the STAC corpus, with F1 scores of 57.2 and 59.3 for the unsupervised and semi-supervised methods, respectively. When restricted to projective trees, the scores improve to 63.3 and 68.1.

Mixture-of-Linguistic-Experts Adapters for Improving and Interpreting Pre-trained Language Models
Raymond Li | Gabriel Murray | Giuseppe Carenini
Findings of the Association for Computational Linguistics: EMNLP 2023

In this work, we propose a method that combines two popular research areas by injecting linguistic structures into pre-trained language models in the parameter-efficient fine-tuning (PEFT) setting. In our approach, parallel adapter modules encoding different linguistic structures are combined using a novel Mixture-of-Linguistic-Experts architecture, where Gumbel-Softmax gates are used to determine the importance of these modules at each layer of the model. To reduce the number of parameters, we first train the model for a fixed small number of steps before pruning the experts based on their importance scores. Our experimental results with three different pre-trained models show that our approach can outperform state-of-the-art PEFT methods with a comparable number of parameters. In addition, we analyze the experts selected by each model at each layer to provide insights for future studies.
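
For concreteness, the gating idea can be sketched in PyTorch as follows. This is a loose, illustrative reading of the abstract, not the authors' implementation; the layer sizes, the bottleneck adapter design, and the temperature are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MixtureOfLinguisticExperts(nn.Module):
    def __init__(self, hidden=768, bottleneck=64, n_experts=3, tau=1.0):
        super().__init__()
        # Parallel adapter "experts", each meant to encode one linguistic structure.
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(hidden, bottleneck), nn.ReLU(),
                          nn.Linear(bottleneck, hidden))
            for _ in range(n_experts))
        self.gate_logits = nn.Parameter(torch.zeros(n_experts))  # learned importance scores
        self.tau = tau

    def forward(self, x):
        # Gumbel-Softmax gate: a differentiable, near-one-hot mixture over experts.
        gate = F.gumbel_softmax(self.gate_logits, tau=self.tau, hard=False)
        mixed = sum(g * expert(x) for g, expert in zip(gate, self.experts))
        return x + mixed  # residual connection, standard for adapters
```

After a fixed small number of training steps, experts whose gates carry little weight could then be pruned, mirroring the importance-based pruning the abstract describes.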

Entity-based SpanCopy for Abstractive Summarization to Improve the Factual Consistency
Wen Xiao | Giuseppe Carenini
Proceedings of the 4th Workshop on Computational Approaches to Discourse (CODI 2023)

Discourse-aware techniques, including entity-aware approaches, play a crucial role in summarization. In this paper, we propose an entity-based SpanCopy mechanism to tackle the entity-level factual inconsistency problem in abstractive summarization, i.e., reducing the mismatched entities between generated summaries and source documents. Complemented by a Global Relevance component that identifies summary-worthy entities, our approach demonstrates improved factual consistency while preserving saliency on four summarization datasets, contributing to the effective application of discourse-aware methods to summarization tasks.

Proceedings of the 4th New Frontiers in Summarization Workshop
Yue Dong | Wen Xiao | Lu Wang | Fei Liu | Giuseppe Carenini
Proceedings of the 4th New Frontiers in Summarization Workshop

Tracing Influence at Scale: A Contrastive Learning Approach to Linking Public Comments and Regulator Responses
Linzi Xing | Brad Hackinen | Giuseppe Carenini
Proceedings of the Natural Legal Language Processing Workshop 2023

U.S. Federal Regulators receive over one million comment letters each year from businesses, interest groups, and members of the public, all advocating for changes to proposed regulations. These comments are believed to have wide-ranging impacts on public policy. However, measuring the impact of specific comments is challenging because regulators are required to respond to comments but do not have to specify which comments they are addressing. In this paper, we propose a simple yet effective solution to this problem: an iterative contrastive method for training a neural model to match text from public comments to responses written by regulators. We demonstrate that our proposal substantially outperforms a set of selected text-matching baselines on a human-annotated test set. Furthermore, it delivers performance comparable to the most advanced large-scale language model (i.e., GPT-4), and is more cost-effective when matching comments and regulator responses at scale.
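
The core training signal can be illustrated with a generic in-batch contrastive (InfoNCE) loss over paired comment/response embeddings. The sketch below does not reproduce the paper's iterative training scheme or encoder choices, and the temperature value is an assumption.

```python
import torch
import torch.nn.functional as F

def info_nce(comment_emb, response_emb, temperature=0.05):
    # Rows i of the two matrices form a matched comment/response pair.
    c = F.normalize(comment_emb, dim=-1)
    r = F.normalize(response_emb, dim=-1)
    logits = c @ r.t() / temperature                  # cosine similarity matrix
    targets = torch.arange(c.size(0), device=c.device)
    # Each comment must score its own response above all others in the batch.
    return F.cross_entropy(logits, targets)
```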

Diversity-Aware Coherence Loss for Improving Neural Topic Models
Raymond Li | Felipe Gonzalez-Pizarro | Linzi Xing | Gabriel Murray | Giuseppe Carenini
Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)

The standard approach for neural topic modeling uses a variational autoencoder (VAE) framework that jointly minimizes the KL divergence between the estimated posterior and prior, in addition to the reconstruction loss. Since neural topic models are trained by recreating individual input documents, they do not explicitly capture the coherence between words on the corpus level. In this work, we propose a novel diversity-aware coherence loss that encourages the model to learn corpus-level coherence scores while maintaining high diversity between topics. Experimental results on multiple datasets show that our method significantly improves the performance of neural topic models without requiring any pretraining or additional parameters.
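
Schematically, the objective adds a corpus-level coherence term to the usual VAE losses. The sketch below is an illustrative reading that assumes a precomputed NPMI word-pair matrix; the paper's exact diversity-aware weighting is not reproduced here.

```python
import torch

def ntm_loss(recon_log_probs, bow, kl, beta, npmi, lam=1.0, topk=10):
    # Standard neural-topic-model terms: reconstruction + KL divergence.
    recon = -(bow * recon_log_probs).sum(-1).mean()
    # Corpus-level coherence of each topic's top words, via a precomputed
    # NPMI matrix (vocab x vocab); beta is the topic-word matrix (K x vocab).
    top = beta.topk(topk, dim=-1).indices
    coherence = torch.stack([npmi[ix][:, ix].mean() for ix in top]).mean()
    return recon + kl - lam * coherence   # reward coherent topics
```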

2022

Improving Topic Segmentation by Injecting Discourse Dependencies
Linzi Xing | Patrick Huber | Giuseppe Carenini
Proceedings of the 3rd Workshop on Computational Approaches to Discourse

Recent neural supervised topic segmentation models achieve superior effectiveness over unsupervised methods, thanks to the availability of large-scale training corpora sampled from Wikipedia. These models may, however, suffer from limited robustness and transferability, as they exploit simple linguistic cues for prediction while overlooking the more important inter-sentential topical consistency. To address this issue, we present a discourse-aware neural topic segmentation model that injects above-sentence discourse dependency structures to encourage the model to base its topic boundary predictions more on the topical consistency between sentences. Our empirical study on English evaluation datasets shows that injecting above-sentence discourse structures into a neural topic segmenter with our proposed strategy can substantially improve its performance on intra-domain and out-of-domain data, with little increase in model complexity.

PRIMERA: Pyramid-based Masked Sentence Pre-training for Multi-document Summarization
Wen Xiao | Iz Beltagy | Giuseppe Carenini | Arman Cohan
Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)

We introduce PRIMERA, a pre-trained model for multi-document representation with a focus on summarization that reduces the need for dataset-specific architectures and large amounts of fine-tuning labeled data. PRIMERA uses our newly proposed pre-training objective designed to teach the model to connect and aggregate information across documents. It also uses efficient encoder-decoder transformers to simplify the processing of concatenated input documents. With extensive experiments on 6 multi-document summarization datasets from 3 different domains in zero-shot, few-shot and fully-supervised settings, PRIMERA outperforms current state-of-the-art dataset-specific and pre-trained models in most of these settings by large margins.

Towards Understanding Large-Scale Discourse Structures in Pre-Trained and Fine-Tuned Language Models
Patrick Huber | Giuseppe Carenini
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies

In this paper, we extend the line of BERTology work by focusing on the important, yet less explored, alignment of pre-trained and fine-tuned PLMs with large-scale discourse structures. We propose a novel approach to infer discourse information for arbitrarily long documents. In our experiments, we find that the captured discourse information is local and general, even across a collection of fine-tuning tasks. We compare the inferred discourse trees with supervised, distantly supervised and simple baselines to explore the structural overlap, finding that constituency discourse trees align well with supervised models, yet contain complementary discourse information. Lastly, we individually explore self-attention matrices to analyze the information redundancy. We find that similar discourse information is consistently captured in the same heads.

Human Guided Exploitation of Interpretable Attention Patterns in Summarization and Topic Segmentation
Raymond Li | Wen Xiao | Linzi Xing | Lanjun Wang | Gabriel Murray | Giuseppe Carenini
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing

The multi-head self-attention mechanism of the transformer model has been thoroughly investigated recently. In one vein of study, researchers are interested in understanding why and how transformers work. In another vein, researchers propose new attention augmentation methods to make transformers more accurate, efficient and interpretable. In this paper, we combine these two lines of research in a human-in-the-loop pipeline that first discovers important task-specific attention patterns. Those patterns are then injected not only into smaller models, but also into the original model. The benefits of our pipeline and discovered patterns are demonstrated in two case studies with extractive summarization and topic segmentation. After discovering interpretable patterns in BERT-based models fine-tuned for the two downstream tasks, experiments indicate that when we inject the patterns into attention heads, the models show considerable improvements in accuracy and efficiency.

2021

T3-Vis: visual analytic for Training and fine-Tuning Transformers in NLP
Raymond Li | Wen Xiao | Lanjun Wang | Hyeju Jang | Giuseppe Carenini
Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing: System Demonstrations

Transformers are the dominant architecture in NLP, but their training and fine-tuning remain very challenging. In this paper, we present the design and implementation of a visual analytic framework to assist researchers in this process, by providing them with valuable insights about the model’s intrinsic properties and behaviours. Our framework offers an intuitive overview that allows the user to explore different facets of the model (e.g., hidden states, attention) through interactive visualization, and provides a suite of built-in algorithms that compute the importance of model components and different parts of the input sequence. Case studies and feedback from a user focus group indicate that the framework is useful, and suggest several improvements. Our framework is available at: https://github.com/raymondzmc/T3-Vis.

Improving Unsupervised Dialogue Topic Segmentation with Utterance-Pair Coherence Scoring
Linzi Xing | Giuseppe Carenini
Proceedings of the 22nd Annual Meeting of the Special Interest Group on Discourse and Dialogue

Dialogue topic segmentation is critical in several dialogue modeling problems. However, popular unsupervised approaches only exploit surface features in assessing topical coherence among utterances. In this work, we address this limitation by leveraging supervisory signals from the utterance-pair coherence scoring task. First, we present a simple yet effective strategy to generate a training corpus for utterance-pair coherence scoring. Then, we train a BERT-based neural utterance-pair coherence model on the obtained training corpus. Finally, this model is used to measure the topical relevance between utterances, acting as the basis of the segmentation inference. Experiments on three public datasets in English and Chinese demonstrate that our proposal outperforms the state-of-the-art baselines.
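
The segmentation inference step is easy to outline: score every adjacent utterance pair and cut where predicted coherence is low. In this sketch, coherence_fn stands in for the trained BERT pair scorer, and the fixed threshold is an illustrative simplification of the inference rule.

```python
def segment_dialogue(utterances, coherence_fn, threshold=0.5):
    boundaries = []
    for i in range(len(utterances) - 1):
        # Low predicted coherence between neighbours suggests a topic shift.
        if coherence_fn(utterances[i], utterances[i + 1]) < threshold:
            boundaries.append(i + 1)   # new segment starts at utterance i+1
    return boundaries
```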

Predicting Discourse Trees from Transformer-based Neural Summarizers
Wen Xiao | Patrick Huber | Giuseppe Carenini
Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies

Previous work indicates that discourse information benefits summarization. In this paper, we explore whether this synergy between discourse and summarization is bidirectional, by inferring document-level discourse trees from pre-trained neural summarizers. In particular, we generate unlabeled RST-style discourse trees from the self-attention matrices of the transformer model. Experiments across models and datasets reveal that the summarizer learns both dependency- and constituency-style discourse information, which is typically encoded in a single head, covering long- and short-distance discourse dependencies. Overall, the experimental results suggest that the learned discourse information is general and transferable across domains.
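
To convey the idea of reading trees off attention, here is a deliberately naive dependency sketch. The paper derives its trees with proper decoding algorithms over attention scores; greedy head selection, as below, enforces no tree constraint and can even produce cycles, so treat it purely as an illustration.

```python
import numpy as np

def greedy_tree_from_attention(attn):
    # attn: (n x n) self-attention matrix over discourse units (rows = queries).
    scores = attn.copy().astype(float)
    np.fill_diagonal(scores, -np.inf)        # a unit cannot head itself
    root = int(attn.sum(axis=0).argmax())    # most-attended unit acts as root
    heads = scores.argmax(axis=1)            # head of i = unit i attends to most
    heads[root] = -1                         # mark the root
    return heads   # NOTE: illustration only; no tree constraint enforced
```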

W-RST: Towards a Weighted RST-style Discourse Framework
Patrick Huber | Wen Xiao | Giuseppe Carenini
Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)

Aiming for a better integration of data-driven and linguistically-inspired approaches, we explore whether RST Nuclearity, assigning a binary assessment of importance between text segments, can be replaced by automatically generated, real-valued scores, in what we call a Weighted-RST framework. In particular, we find that weighted discourse trees from auxiliary tasks can benefit key NLP downstream applications, compared to nuclearity-centered approaches. We further show that real-valued importance distributions partially and interestingly align with the assessment and uncertainty of human annotators.

Demoting the Lead Bias in News Summarization via Alternating Adversarial Learning
Linzi Xing | Wen Xiao | Giuseppe Carenini
Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 2: Short Papers)

In news articles, the lead bias is a common phenomenon that usually dominates the learning signals for neural extractive summarizers, severely limiting their performance on data with different or even no bias. In this paper, we introduce a novel technique to demote lead bias and make the summarizer focus more on the content semantics. Experiments on two news corpora with different degrees of lead bias show that our method can effectively demote the model’s learned lead bias and improve its generality on out-of-distribution data, with little to no performance loss on in-distribution data.

Proceedings of the Third Workshop on New Frontiers in Summarization
Giuseppe Carenini | Jackie Chi Kit Cheung | Yue Dong | Fei Liu | Lu Wang
Proceedings of the Third Workshop on New Frontiers in Summarization

KW-ATTN: Knowledge Infused Attention for Accurate and Interpretable Text Classification
Hyeju Jang | Seojin Bang | Wen Xiao | Giuseppe Carenini | Raymond Ng | Young ji Lee
Proceedings of Deep Learning Inside Out (DeeLIO): The 2nd Workshop on Knowledge Extraction and Integration for Deep Learning Architectures

Text classification has wide-ranging applications in various domains. While neural network approaches have drastically advanced performance in text classification, they tend to require large amounts of training data, and interpretability is often an issue. As a step towards better accuracy and interpretability, especially on small data, in this paper we present a new knowledge-infused attention mechanism, called KW-ATTN (KnoWledge-infused ATTentioN), to incorporate high-level concepts from external knowledge bases into neural network models. We show that KW-ATTN outperforms baseline models using only words, as well as other approaches using concepts, in classification accuracy, which indicates that high-level concepts help model prediction. Furthermore, crowdsourced human evaluation suggests that the additional concept information helps the interpretability of the model.

2020

MEGA RST Discourse Treebanks with Structure and Nuclearity from Scalable Distant Sentiment Supervision
Patrick Huber | Giuseppe Carenini
Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)

The lack of large and diverse discourse treebanks hinders the application of data-driven approaches, such as deep-learning, to RST-style discourse parsing. In this work, we present a novel scalable methodology to automatically generate discourse treebanks using distant supervision from sentiment annotated datasets, creating and publishing MEGA-DT, a new large-scale discourse-annotated corpus. Our approach generates discourse trees incorporating structure and nuclearity for documents of arbitrary length by relying on an efficient heuristic beam-search strategy, extended with a stochastic component. Experiments on multiple datasets indicate that a discourse parser trained on our MEGA-DT treebank delivers promising inter-domain performance gains when compared to parsers trained on human-annotated discourse corpora.

Systematically Exploring Redundancy Reduction in Summarizing Long Documents
Wen Xiao | Giuseppe Carenini
Proceedings of the 1st Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 10th International Joint Conference on Natural Language Processing

Our analysis of large summarization datasets indicates that redundancy is a very serious problem when summarizing long documents. Yet, redundancy reduction has not been thoroughly investigated in neural summarization. In this work, we systematically explore and compare different ways to deal with redundancy when summarizing long documents. Specifically, we organize existing methods into categories based on when and how the redundancy is considered. Then, in the context of these categories, we propose three additional methods balancing non-redundancy and importance in a general and flexible way. In a series of experiments, we show that our proposed methods achieve the state-of-the-art with respect to ROUGE scores on two scientific paper datasets, Pubmed and arXiv, while reducing redundancy significantly.
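
The importance-vs-redundancy trade-off at the heart of these methods is classically instantiated by Maximal Marginal Relevance (MMR), sketched below as a reference point. This is not the paper's proposed decoders; the lam and k values are illustrative.

```python
import numpy as np

def mmr_select(sent_embs, importance, k=5, lam=0.7):
    # sent_embs: (n x d), rows assumed L2-normalized; importance: (n,) scores.
    sims = sent_embs @ sent_embs.T
    selected, candidates = [], list(range(len(importance)))
    while candidates and len(selected) < k:
        def mmr(i):
            # Penalize similarity to anything already in the summary.
            redundancy = max((sims[i][j] for j in selected), default=0.0)
            return lam * importance[i] - (1 - lam) * redundancy
        best = max(candidates, key=mmr)
        selected.append(best)
        candidates.remove(best)
    return selected
```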

Improving Context Modeling in Neural Topic Segmentation
Linzi Xing | Brad Hackinen | Giuseppe Carenini | Francesco Trebbi
Proceedings of the 1st Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 10th International Joint Conference on Natural Language Processing

Topic segmentation is critical in key NLP tasks, and recent works favor highly effective neural supervised approaches. However, current neural solutions are arguably limited in how they model context. In this paper, we enhance a segmenter based on a hierarchical attention BiLSTM network to better model context, by adding a coherence-related auxiliary task and restricted self-attention. Our optimized segmenter outperforms SOTA approaches when trained and tested on three datasets. We also demonstrate the robustness of our proposed model in a domain-transfer setting by training it on a large-scale dataset and testing it on four challenging real-world benchmarks. Furthermore, we apply our proposed strategy to two other languages (German and Chinese), and show its effectiveness in multilingual scenarios.

Neural RST-based Evaluation of Discourse Coherence
Grigorii Guz | Peyman Bateni | Darius Muglich | Giuseppe Carenini
Proceedings of the 1st Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 10th International Joint Conference on Natural Language Processing

This paper evaluates the utility of Rhetorical Structure Theory (RST) trees and relations in discourse coherence evaluation. We show that incorporating silver-standard RST features can increase accuracy when classifying coherence. We demonstrate this through our tree-recursive neural model, namely RST-Recursive, which takes advantage of the text’s RST features produced by a state-of-the-art RST parser. We evaluate our approach on the Grammarly Corpus for Discourse Coherence (GCDC) and show that, when ensembled with the current state of the art, we can achieve new state-of-the-art accuracy on this benchmark. Furthermore, when deployed alone, RST-Recursive achieves competitive accuracy while having 62% fewer parameters.

Exploratory Analysis of COVID-19 Related Tweets in North America to Inform Public Health Institutes
Hyeju Jang | Emily Rempel | Giuseppe Carenini | Naveed Janjua
Proceedings of the 1st Workshop on NLP for COVID-19 (Part 2) at EMNLP 2020

Social media is a rich source for learning about people’s reactions to social issues. As COVID-19 has significantly impacted people’s lives, it is essential to capture how people react to public health interventions and to understand their concerns. In this paper, we aim to investigate people’s reactions and concerns about COVID-19 in North America, especially focusing on Canada. We analyze COVID-19 related tweets using topic modeling and aspect-based sentiment analysis, and interpret the results with public health experts. We compare the timeline of the topics discussed with the timing of the public health interventions implemented for COVID-19. We also examine people’s sentiment about COVID-19 related issues. We discuss how the results can be helpful for public health agencies when designing a policy for new interventions. Our work shows how Natural Language Processing (NLP) techniques could be applied to public health questions with domain expert involvement.

Towards Domain-Independent Text Structuring Trainable on Large Discourse Treebanks
Grigorii Guz | Giuseppe Carenini
Findings of the Association for Computational Linguistics: EMNLP 2020

Text structuring is a fundamental step in NLG, especially when generating multi-sentential text. With the goal of fostering more general and data-driven approaches to text structuring, we propose the new and domain-independent NLG task of structuring and ordering a (possibly large) set of EDUs. We then present a solution for this task that combines neural dependency tree induction with pointer networks, and can be trained on large discourse treebanks that have only recently become available. Further, we propose a new evaluation metric that is arguably more suitable for our new task compared to existing content ordering metrics. Finally, we empirically show that our approach outperforms competitive alternatives on the proposed measure and is equivalent in performance with respect to previously established measures.

Do We Really Need That Many Parameters In Transformer For Extractive Summarization? Discourse Can Help!
Wen Xiao | Patrick Huber | Giuseppe Carenini
Proceedings of the First Workshop on Computational Approaches to Discourse

The multi-head self-attention of popular transformer models is widely used within Natural Language Processing (NLP), including for the task of extractive summarization. With the goal of analyzing and pruning the parameter-heavy self-attention mechanism, multiple approaches have proposed more parameter-light self-attention alternatives. In this paper, we present a novel parameter-lean self-attention mechanism using discourse priors. Our new tree self-attention is based on document-level discourse information, extending the recently proposed “Synthesizer” framework with another lightweight alternative. We show empirically that our tree self-attention approach achieves competitive ROUGE scores on the task of extractive summarization. Compared to the original single-head transformer model, the tree attention approach reaches similar performance at both the EDU and sentence level, despite the significant reduction of parameters in the attention component. We further significantly outperform the 8-head transformer model at the sentence level when applying a more balanced hyper-parameter setting, while requiring an order of magnitude fewer parameters.

Coreference for Discourse Parsing: A Neural Approach
Grigorii Guz | Giuseppe Carenini
Proceedings of the First Workshop on Computational Approaches to Discourse

We present preliminary results on investigating the benefits of coreference resolution features for neural RST discourse parsing by considering different levels of coupling of the discourse parser with the coreference resolver. In particular, starting with a strong baseline neural parser unaware of any coreference information, we compare a parser which utilizes only the output of a neural coreference resolver, with a more sophisticated model, where discourse parsing and coreference resolution are jointly learned in a neural multitask fashion. Results indicate that these initial attempts to incorporate coreference information do not boost the performance of discourse parsing in a statistically significant way.

From Sentiment Annotations to Sentiment Prediction through Discourse Augmentation
Patrick Huber | Giuseppe Carenini
Proceedings of the 28th International Conference on Computational Linguistics

Sentiment analysis, especially for long documents, plausibly requires methods capturing complex linguistic structures. To accommodate this, we propose a novel framework to exploit task-related discourse for the task of sentiment analysis. More specifically, we combine the large-scale, sentiment-dependent MEGA-DT treebank with a novel neural architecture for sentiment prediction, based on a hybrid TreeLSTM hierarchical attention model. Experiments show that our framework using sentiment-related discourse augmentations for sentiment prediction enhances the overall performance for long documents, even beyond previous approaches using well-established discourse parsers trained on human-annotated data. We show that a simple ensemble approach can further enhance performance by selectively using discourse, depending on the document length.

Unleashing the Power of Neural Discourse Parsers - A Context and Structure Aware Approach Using Large Scale Pretraining
Grigorii Guz | Patrick Huber | Giuseppe Carenini
Proceedings of the 28th International Conference on Computational Linguistics

RST-based discourse parsing is an important NLP task with numerous downstream applications, such as summarization, machine translation and opinion mining. In this paper, we demonstrate a simple, yet highly accurate discourse parser, incorporating recent contextual language models. Our parser establishes the new state-of-the-art (SOTA) performance for predicting structure and nuclearity on two key RST datasets, RST-DT and Instr-DT. We further demonstrate that pretraining our parser on the recently available large-scale “silver-standard” discourse treebank MEGA-DT provides even larger performance benefits, suggesting a novel and promising research direction in the field of discourse analysis.

Towards Domain-Independent Text Structuring Trainable on Large Discourse Treebanks
Grigorii Guz | Giuseppe Carenini
Proceedings of the Workshop on Discourse Theories for Text Planning

2019

Discourse Analysis and Its Applications
Shafiq Joty | Giuseppe Carenini | Raymond Ng | Gabriel Murray
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics: Tutorial Abstracts

Discourse processing is a suite of Natural Language Processing (NLP) tasks to uncover linguistic structures from texts at several levels, which can support many downstream applications. This involves identifying the topic structure, the coherence structure, the coreference structure, and the conversation structure for conversational discourse. Taken together, these structures can inform text summarization, machine translation, essay scoring, sentiment analysis, information extraction, question answering, and thread recovery. The tutorial starts with an overview of basic concepts in discourse analysis – monologue vs. conversation, synchronous vs. asynchronous conversation, and key linguistic structures in discourse analysis. We also give an overview of linguistic structures and corresponding discourse analysis tasks that discourse researchers are generally interested in, as well as key applications on which these discourse structures have an impact.

Predicting Discourse Structure using Distant Supervision from Sentiment
Patrick Huber | Giuseppe Carenini
Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)

Discourse parsing has not yet been able to take full advantage of the neural NLP revolution, mostly due to the lack of annotated datasets. We propose a novel approach that uses distant supervision on an auxiliary task (sentiment classification) to generate abundant data for RST-style discourse structure prediction. Our approach combines a neural variant of multiple-instance learning, using document-level supervision, with an optimal CKY-style tree generation algorithm. In a series of experiments, we train a discourse parser (for structure prediction only) on our automatically generated dataset and compare it with parsers trained on human-annotated corpora (news-domain RST-DT and the Instructional domain). Results indicate that while our parser does not yet match the performance of a parser trained and tested on the same dataset (intra-domain), it does perform remarkably well on the much more difficult and arguably more useful task of inter-domain discourse structure prediction, where the parser is trained on one domain and tested/applied on another one.

Extractive Summarization of Long Documents by Combining Global and Local Context
Wen Xiao | Giuseppe Carenini
Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)

In this paper, we propose a novel neural single-document extractive summarization model for long documents, incorporating both the global context of the whole document and the local context within the current topic. We evaluate the model on two datasets of scientific papers, Pubmed and arXiv, where it outperforms previous work, both extractive and abstractive, on ROUGE-1, ROUGE-2 and METEOR scores. We also show that, consistent with our goal, the benefits of our method become stronger as we apply it to longer documents. Rather surprisingly, an ablation study indicates that the benefits of our model seem to come exclusively from modeling the local context, even for the longest documents.

Evaluating Topic Quality with Posterior Variability
Linzi Xing | Michael J. Paul | Giuseppe Carenini
Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)

Probabilistic topic models such as latent Dirichlet allocation (LDA) are popularly used with Bayesian inference methods such as Gibbs sampling to learn posterior distributions over topic model parameters. We derive a novel measure of LDA topic quality using the variability of the posterior distributions. Compared to several existing baselines for automatic topic evaluation, the proposed metric achieves state-of-the-art correlations with human judgments of topic quality in experiments on three corpora. We additionally demonstrate that topic quality estimation can be further improved using a supervised estimator that combines multiple metrics.
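
A simple stand-in for the idea: given Gibbs samples of a topic's word distribution, a stable posterior indicates a well-identified topic. The coefficient-of-variation statistic and the sign convention below are illustrative assumptions, not the paper's derived measure.

```python
import numpy as np

def posterior_variability_score(samples):
    # samples: (n_gibbs_samples x vocab) draws of one topic's word distribution.
    mean = samples.mean(axis=0)
    std = samples.std(axis=0)
    cv = std / (mean + 1e-12)      # per-word coefficient of variation
    return -float(cv.mean())       # assumed: lower variability -> better topic
```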

Proceedings of the 2nd Workshop on New Frontiers in Summarization
Lu Wang | Jackie Chi Kit Cheung | Giuseppe Carenini | Fei Liu
Proceedings of the 2nd Workshop on New Frontiers in Summarization

Comparing the Intrinsic Performance of Clinical Concept Embeddings by Their Field of Medicine
John-Jose Nunez | Giuseppe Carenini
Proceedings of the Tenth International Workshop on Health Text Mining and Information Analysis (LOUHI 2019)

Pre-trained word embeddings are becoming increasingly popular for natural language processing tasks. This includes medical applications, where embeddings are trained for clinical concepts using specific medical data. Recent work continues to improve on these embeddings. However, no one has yet sought to determine whether these embeddings work as well for one field of medicine as they do in others. In this work, we use intrinsic methods to evaluate embeddings from the various fields of medicine as defined by their ICD-9 systems. We find significant differences between fields, and motivate future work to investigate whether extrinsic tasks will follow a similar pattern.

2018

NLP for Conversations: Sentiment, Summarization, and Group Dynamics
Gabriel Murray | Giuseppe Carenini | Shafiq Joty
Proceedings of the 27th International Conference on Computational Linguistics: Tutorial Abstracts

2017

Detecting Dementia through Retrospective Analysis of Routine Blog Posts by Bloggers with Dementia
Vaden Masrani | Gabriel Murray | Thalia Field | Giuseppe Carenini
BioNLP 2017

We investigate if writers with dementia can be automatically distinguished from those without by analyzing linguistic markers in written text, in the form of blog posts. We have built a corpus of several thousand blog posts, some by people with dementia and others by people with loved ones with dementia. We use this dataset to train and test several machine learning methods, and achieve prediction performance at a level far above the baseline.

Proceedings of the Workshop on New Frontiers in Summarization
Lu Wang | Jackie Chi Kit Cheung | Giuseppe Carenini | Fei Liu
Proceedings of the Workshop on New Frontiers in Summarization

Multimedia Summary Generation from Online Conversations: Current Approaches and Future Directions
Enamul Hoque | Giuseppe Carenini
Proceedings of the Workshop on New Frontiers in Summarization

With the proliferation of Web-based social media, asynchronous conversations have become very common for supporting online communication and collaboration. Yet the increasing volume and complexity of conversational data often make it very difficult to get insights about the discussions. We consider combining textual summary with visual representation of conversational data as a promising way of supporting the user in exploring conversations. In this paper, we report our current work on developing visual interfaces that present multimedia summary combining text and visualization for online conversations and how our solutions have been tailored for a variety of domain problems. We then discuss the key challenges and opportunities for future work in this research space.

Automatic Community Creation for Abstractive Spoken Conversations Summarization
Karan Singla | Evgeny Stepanov | Ali Orkan Bayer | Giuseppe Carenini | Giuseppe Riccardi
Proceedings of the Workshop on New Frontiers in Summarization

Summarization of spoken conversations is a challenging task, since it requires deep understanding of dialogs. Abstractive summarization techniques rely on linking the summary sentences to sets of original conversation sentences, i.e., communities. Unfortunately, such linking information is rarely available or requires trained annotators. We propose and experiment with automatic community creation using cosine similarity on different levels of representation: raw text, WordNet SynSet IDs, and word embeddings. We show that abstractive summarization systems with automatic communities significantly outperform previously published results on both English and Italian corpora.
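
At its simplest, the community step links each summary sentence to all conversation sentences whose representation is close enough. The sketch below uses embedding cosine similarity; the threshold is an illustrative assumption, and the paper also explores raw-text and WordNet SynSet representations.

```python
import numpy as np

def create_communities(summary_embs, sent_embs, threshold=0.6):
    def l2norm(m):
        return m / (np.linalg.norm(m, axis=1, keepdims=True) + 1e-12)
    sims = l2norm(summary_embs) @ l2norm(sent_embs).T   # (n_summary x n_sents)
    # Community of summary sentence i = all conversation sentences above threshold.
    return [np.where(row >= threshold)[0].tolist() for row in sims]
```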

Generating and Evaluating Summaries for Partial Email Threads: Conversational Bayesian Surprise and Silver Standards
Jordon Johnson | Vaden Masrani | Giuseppe Carenini | Raymond Ng
Proceedings of the 18th Annual SIGdial Meeting on Discourse and Dialogue

We define and motivate the problem of summarizing partial email threads. This problem introduces the challenge of generating reference summaries for partial threads when human annotation is only available for the threads as a whole, particularly when the human-selected sentences are not uniformly distributed within the threads. We propose an oracular algorithm for generating these reference summaries of arbitrary length, and we are making the resulting dataset publicly available. In addition, we apply a recent unsupervised method based on Bayesian Surprise that incorporates background knowledge into partial thread summarization, extend it with conversational features, and modify the mechanism by which it handles redundancy. Experiments with our method indicate improved performance over the baseline for shorter partial threads; and our results suggest that the potential benefits of background knowledge to partial thread summarization should be further investigated with larger datasets.
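
Bayesian Surprise has a compact closed form when beliefs over the vocabulary are Dirichlet: the surprise of a sentence is the KL divergence between the posterior (prior counts plus the sentence's word counts) and the prior. The helper below computes that KL; it is a generic formulation, not the paper's full conversational extension.

```python
import numpy as np
from scipy.special import gammaln, digamma

def dirichlet_kl(alpha_post, alpha_prior):
    # KL( Dir(alpha_post) || Dir(alpha_prior) );
    # here alpha_post = alpha_prior + word counts of the observed sentence.
    a = np.asarray(alpha_post, dtype=float)
    b = np.asarray(alpha_prior, dtype=float)
    return (gammaln(a.sum()) - gammaln(b.sum())
            - (gammaln(a) - gammaln(b)).sum()
            + ((a - b) * (digamma(a) - digamma(a.sum()))).sum())
```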

Exploring Joint Neural Model for Sentence Level Discourse Parsing and Sentiment Analysis
Bita Nejat | Giuseppe Carenini | Raymond Ng
Proceedings of the 18th Annual SIGdial Meeting on Discourse and Dialogue

Discourse Parsing and Sentiment Analysis are two fundamental tasks in Natural Language Processing that have been shown to be mutually beneficial. In this work, we design and compare two Neural Based models for jointly learning both tasks. In the proposed approach, we first create a vector representation for all the text segments in the input sentence. Next, we apply three different Recursive Neural Net models: one for discourse structure prediction, one for discourse relation prediction and one for sentiment analysis. Finally, we combine these Neural Nets in two different joint models: Multi-tasking and Pre-training. Our results on two standard corpora indicate that both methods result in improvements in each task but Multi-tasking has a bigger impact than Pre-training. Specifically for Discourse Parsing, we see improvements in the prediction of the set of contrastive relations.

Chat Disentanglement: Identifying Semantic Reply Relationships with Random Forests and Recurrent Neural Networks
Shikib Mehri | Giuseppe Carenini
Proceedings of the Eighth International Joint Conference on Natural Language Processing (Volume 1: Long Papers)

Thread disentanglement is a precursor to any high-level analysis of multiparticipant chats. Existing research approaches the problem by calculating the likelihood of two messages belonging in the same thread. Our approach leverages a newly annotated dataset to identify reply relationships. Furthermore, we explore the usage of an RNN, along with large quantities of unlabeled data, to learn semantic relationships between messages. Our proposed pipeline, which utilizes a reply classifier and an RNN to generate a set of disentangled threads, is novel and performs well against previous work.

2016

Proceedings of the 17th Annual Meeting of the Special Interest Group on Discourse and Dialogue
Raquel Fernandez | Wolfgang Minker | Giuseppe Carenini | Ryuichiro Higashinaka | Ron Artstein | Alesia Gainer
Proceedings of the 17th Annual Meeting of the Special Interest Group on Discourse and Dialogue

Training Data Enrichment for Infrequent Discourse Relations
Kailang Jiang | Giuseppe Carenini | Raymond Ng
Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers

Discourse parsing is a popular technique widely used in text understanding, sentiment analysis and other NLP tasks. However, for most discourse parsers, the performance varies significantly across different discourse relations. In this paper, we first validate the underfitting hypothesis, i.e., the less frequent a relation is in the training data, the poorer the performance on that relation. We then explore how to increase the number of positive training instances, without resorting to manually creating additional labeled data. We propose a training data enrichment framework that relies on co-training of two different discourse parsers on unlabeled documents. Importantly, we show that co-training alone is not sufficient. The framework requires a filtering step to ensure that only “good quality” unlabeled documents can be used for enrichment and re-training. We propose and evaluate two ways to perform the filtering. The first is to use an agreement score between the two parsers. The second is to use only the confidence score of the faster parser. Our empirical results show that agreement score can help to boost the performance on infrequent relations, and that the confidence score is a viable approximation of the agreement score for infrequent relations.
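
The filtering step can be sketched as follows; parser_a, parser_b, agreement_fn and the threshold are hypothetical names standing in for the two discourse parsers and the agreement (or confidence) score described above.

```python
def enrich_training_data(unlabeled_docs, parser_a, parser_b,
                         agreement_fn, min_agreement=0.8):
    enriched = []
    for doc in unlabeled_docs:
        tree_a, tree_b = parser_a.parse(doc), parser_b.parse(doc)
        # Keep only documents on which the two parsers strongly agree.
        if agreement_fn(tree_a, tree_b) >= min_agreement:
            enriched.append((doc, tree_a))   # adopt one parse as a silver label
    return enriched
```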

An Interactive System for Exploring Community Question Answering Forums
Enamul Hoque | Shafiq Joty | Lluís Màrquez | Alberto Barrón-Cedeño | Giovanni Da San Martino | Alessandro Moschitti | Preslav Nakov | Salvatore Romeo | Giuseppe Carenini
Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: System Demonstrations

We present an interactive system to provide effective and efficient search capabilities in Community Question Answering (cQA) forums. The system integrates state-of-the-art technology for answer search with a Web-based user interface specifically tailored to support the cQA forum readers. The answer search module automatically finds relevant answers for a new question by exploring related questions and the comments within their threads. The graphical user interface presents the search results and supports the exploration of related information. The system is running live at http://www.qatarliving.com/betasearch/.

2015

CODRA: A Novel Discriminative Framework for Rhetorical Analysis
Shafiq Joty | Giuseppe Carenini | Raymond T. Ng
Computational Linguistics, Volume 41, Issue 3 - September 2015

2014

Interactive Exploration of Asynchronous Conversations: Applying a User-centered Approach to Design a Visual Text Analytic System
Enamul Hoque | Giuseppe Carenini | Shafiq Joty
Proceedings of the Workshop on Interactive Language Learning, Visualization, and Interfaces

Extractive Summarization and Dialogue Act Modeling on Email Threads: An Integrated Probabilistic Approach
Tatsuro Oya | Giuseppe Carenini
Proceedings of the 15th Annual Meeting of the Special Interest Group on Discourse and Dialogue (SIGDIAL)

A Template-based Abstractive Meeting Summarization: Leveraging Summary and Source Text Relationships
Tatsuro Oya | Yashar Mehdad | Giuseppe Carenini | Raymond Ng
Proceedings of the 8th International Natural Language Generation Conference (INLG)

Abstractive Summarization of Spoken and Written Conversations Based on Phrasal Queries
Yashar Mehdad | Giuseppe Carenini | Raymond T. Ng
Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)

Detecting Disagreement in Conversations using Pseudo-Monologic Rhetorical Structure
Kelsey Allen | Giuseppe Carenini | Raymond Ng
Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP)

Abstractive Summarization of Product Reviews Using Discourse Structure
Shima Gerani | Yashar Mehdad | Giuseppe Carenini | Raymond T. Ng | Bita Nejat
Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP)

2013

Towards Topic Labeling with Phrase Entailment and Aggregation
Yashar Mehdad | Giuseppe Carenini | Raymond T. Ng | Shafiq Joty
Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies

Abstractive Meeting Summarization with Entailment and Fusion
Yashar Mehdad | Giuseppe Carenini | Frank Tompa | Raymond T. Ng
Proceedings of the 14th European Workshop on Natural Language Generation

Dialogue Act Recognition in Synchronous and Asynchronous Conversations
Maryam Tavafi | Yashar Mehdad | Shafiq Joty | Giuseppe Carenini | Raymond Ng
Proceedings of the SIGDIAL 2013 Conference

Combining Intra- and Multi-sentential Rhetorical Parsing for Document-level Discourse Analysis
Shafiq Joty | Giuseppe Carenini | Raymond Ng | Yashar Mehdad
Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)

2012

Using the Omega Index for Evaluating Abstractive Community Detection
Gabriel Murray | Giuseppe Carenini | Raymond Ng
Proceedings of Workshop on Evaluation Metrics and System Comparison for Automatic Summarization

A Novel Discriminative Framework for Sentence-Level Discourse Analysis
Shafiq Joty | Giuseppe Carenini | Raymond Ng
Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning

2010

Domain Adaptation to Summarize Human Conversations
Oana Sandu | Giuseppe Carenini | Gabriel Murray | Raymond Ng
Proceedings of the 2010 Workshop on Domain Adaptation for Natural Language Processing

Generating and Validating Abstracts of Meeting Conversations: a User Study
Gabriel Murray | Giuseppe Carenini | Raymond Ng
Proceedings of the 6th International Natural Language Generation Conference

Interpretation and Transformation for Abstracting Conversations
Gabriel Murray | Giuseppe Carenini | Raymond Ng
Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics

Exploiting Conversation Structure in Unsupervised Topic Segmentation for Emails
Shafiq Joty | Giuseppe Carenini | Gabriel Murray | Raymond T. Ng
Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing

2009

Predicting Subjectivity in Multimodal Conversations
Gabriel Murray | Giuseppe Carenini
Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing

Optimization-based Content Selection for Opinion Summarization
Jackie Chi Kit Cheung | Giuseppe Carenini | Raymond T. Ng
Proceedings of the 2009 Workshop on Language Generation and Summarisation (UCNLG+Sum 2009)

2008

Summarizing Spoken and Written Conversations
Gabriel Murray | Giuseppe Carenini
Proceedings of the 2008 Conference on Empirical Methods in Natural Language Processing

Summarizing Emails with Conversational Cohesion and Subjectivity
Giuseppe Carenini | Raymond T. Ng | Xiaodong Zhou
Proceedings of ACL-08: HLT

Extractive vs. NLG-based Abstractive Summarization of Evaluative Text: The Effect of Corpus Controversiality
Giuseppe Carenini | Jackie C. K. Cheung
Proceedings of the Fifth International Natural Language Generation Conference

2006

Multi-Document Summarization of Evaluative Text
Giuseppe Carenini | Raymond Ng | Adam Pauls
11th Conference of the European Chapter of the Association for Computational Linguistics

2000

An Empirical Study of the Influence of Argument Conciseness on Argument Effectiveness
Giuseppe Carenini | Johanna D. Moore
Proceedings of the 38th Annual Meeting of the Association for Computational Linguistics

A Task-based Framework to Evaluate Evaluative Arguments
Giuseppe Carenini
INLG’2000 Proceedings of the First International Conference on Natural Language Generation

A strategy for generating evaluative arguments
Giuseppe Carenini | Johanna Moore
INLG’2000 Proceedings of the First International Conference on Natural Language Generation

1998

Describing Complex Charts in Natural Language: A Caption Generation System
Vibhu O. Mittal | Johanna D. Moore | Giuseppe Carenini | Steven Roth
Computational Linguistics, Volume 24, Number 3, September 1998

A Media-Independent Content Language for Integrated Text and Graphics Generation
Nancy Green | Giuseppe Carenini | Stephan Kerpedjiev | Steven Roth | Johanna Moore
Content Visualization and Intermedia Representations (CVIR’98)

A Principled Representation of Attributive Descriptions for Generating Integrated Text and Information Graphics Presentations
Nancy Green | Giuseppe Carenini | Johanna Moore
Natural Language Generation