Alessandro Oltramari


2022

Coalescing Global and Local Information for Procedural Text Understanding
Kaixin Ma | Filip Ilievski | Jonathan Francis | Eric Nyberg | Alessandro Oltramari
Proceedings of the 29th International Conference on Computational Linguistics

Procedural text understanding is a challenging language reasoning task that requires models to track entity states across the development of a narrative. We identify three core aspects required for modeling this task, namely the local and global views of the inputs, as well as the global view of the outputs. Prior methods have considered only a subset of these aspects, which leads to either low precision or low recall. In this paper, we propose a new model, Coalescing Global and Local Information (CGLI), which builds entity- and timestep-aware input representations (local input) considering the whole context (global input), and which jointly models the entity states with a structured prediction objective (global output). Thus, CGLI simultaneously optimizes for both precision and recall. Moreover, we extend CGLI with additional output layers and integrate it into a story reasoning framework. Extensive experiments on a popular procedural text understanding dataset show that our model achieves state-of-the-art results, while experiments on a story reasoning benchmark show the positive impact of our model on downstream reasoning.

2021

Building Goal-oriented Document-grounded Dialogue Systems
Xi Chen | Faner Lin | Yeju Zhou | Kaixin Ma | Jonathan Francis | Eric Nyberg | Alessandro Oltramari
Proceedings of the 1st Workshop on Document-grounded Dialogue and Conversational Question Answering (DialDoc 2021)

In this paper, we describe our systems for the two Doc2Dial shared task subtasks: knowledge identification and response generation. We propose several pre-processing and post-processing methods, and we experiment with data augmentation by pre-training the models on other relevant datasets. Our best model for knowledge identification outperforms the baseline by more than 10.5 F1 points on the test-dev split, and our best model for response generation outperforms the baseline by more than 11 SacreBLEU points on the test-dev split.

Exploring Strategies for Generalizable Commonsense Reasoning with Pre-trained Models
Kaixin Ma | Filip Ilievski | Jonathan Francis | Satoru Ozaki | Eric Nyberg | Alessandro Oltramari
Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing

Commonsense reasoning benchmarks have been largely solved by fine-tuning language models. The downside is that fine-tuning may cause models to overfit to task-specific data and thereby forget their knowledge gained during pre-training. Recent works only propose lightweight model updates as models may already possess useful knowledge from past experience, but a challenge remains in understanding what parts and to what extent models should be refined for a given task. In this paper, we investigate what models learn from commonsense reasoning datasets. We measure the impact of three different adaptation methods on the generalization and accuracy of models. Our experiments with two models show that fine-tuning performs best, by learning both the content and the structure of the task, but suffers from overfitting and limited generalization to novel answers. We observe that alternative adaptation methods like prefix-tuning have comparable accuracy, but generalize better to unseen answers and are more robust to adversarial splits.

2019

Towards Generalizable Neuro-Symbolic Systems for Commonsense Question Answering
Kaixin Ma | Jonathan Francis | Quanyang Lu | Eric Nyberg | Alessandro Oltramari
Proceedings of the First Workshop on Commonsense Inference in Natural Language Processing

Non-extractive commonsense QA remains a challenging AI task, as it requires systems to reason about, synthesize, and gather disparate pieces of information in order to generate responses to queries. Recent approaches to such tasks show increased performance only when models are either pre-trained with additional information or when domain-specific heuristics are used, without any special consideration of the knowledge resource type. In this paper, we survey recent commonsense QA methods and provide a systematic analysis of popular knowledge resources and knowledge-integration methods, across benchmarks from multiple commonsense datasets. Our results and analysis show that attention-based injection seems to be a preferable choice for knowledge integration and that the degree of domain overlap between knowledge bases and datasets plays a crucial role in determining model success.

2011

Senso Comune, an Open Knowledge Base of Italian Language
Guido Vetere | Alessandro Oltramari | Isabella Chiari | Elisabetta Jezek | Laure Vieu | Fabio Massimo Zanzotto
Traitement Automatique des Langues, Volume 52, Number 3: Ressources linguistiques libres [Free Language Resources]

2010

Proceedings of the 6th Workshop on Ontologies and Lexical Resources
Alessandro Oltramari | Piek Vossen | Qin Lu
Proceedings of the 6th Workshop on Ontologies and Lexical Resources

Data-Driven and Ontological Analysis of FrameNet for Natural Language Reasoning
Ekaterina Ovchinnikova | Laure Vieu | Alessandro Oltramari | Stefano Borgo | Theodore Alexandrov
Proceedings of the Seventh International Conference on Language Resources and Evaluation (LREC'10)

This paper focuses on improving the conceptual structure of FrameNet (FN) so that the resource can be applied to knowledge-intensive NLP tasks requiring reasoning, such as question answering and information extraction. We show that, in addition to incomplete coverage, the current version of FN suffers from conceptual inconsistency and lacks the axiomatization needed for appropriate inferences. To discover and classify conceptual problems in FN, we investigate the FrameNet-Annotated corpus for Textual Entailment. We then propose a methodology for improving the conceptual organization of FN, focusing on enriching, axiomatizing, and cleaning up frame relations. Our methodology includes a data-driven analysis of frames, which yields new frame relations, and an ontological analysis of frames and frame relations, which yields axiomatized relations and constraints on them. Frames and frame relations are analyzed in terms of the DOLCE formal ontology. Additionally, we describe a case study that demonstrates how the proposed methodology works in practice and investigates the impact of the restructured and axiomatized frame relations on recognizing textual entailment.

Senso Comune
Alessandro Oltramari | Guido Vetere | Maurizio Lenzerini | Aldo Gangemi | Nicola Guarino
Proceedings of the Seventh International Conference on Language Resources and Evaluation (LREC'10)

This paper introduces the general features of Senso Comune, an open knowledge base for the Italian language, focusing on the interplay of lexical and ontological knowledge and outlining our approach to conceptual knowledge elicitation. Senso Comune consists of a machine-readable lexicon constrained by an ontological infrastructure. The idea at the basis of Senso Comune is that natural languages exist in use and belong to their users. In line with Saussure's linguistics, natural languages are seen as a social product whose main strength lies in the users' consensus. At the same time, language has specific goals: referring to entities that belong to the users' world (be it physical or not) and that are shaped in the social environments where expressions are produced and understood. This usage leverages the creativity of those who produce words and try to understand them. This is why ontology, i.e. a shared conceptualization of the world, can be regarded as the soil in which the speakers' consensus may be rooted. Some final remarks concerning future work and applications are also given.

2006

LexiPass methodology: a conceptual path from frames to senses and back
Alessandro Oltramari
Proceedings of the Fifth International Conference on Language Resources and Evaluation (LREC’06)

In this paper we claim that an integration of FrameNet and WordNet will improve the interoperability, user-friendliness, and usability of both lexical resources. While the former provides a sophisticated representational structure but only narrow lexical coverage, the latter supplies a dense network of word senses and semantic relations yet does not support advanced (i.e., frame-based) access. Following this integration perspective, we introduce the LexiPass methodology, which combines Burchardt's tool “WordNet Detour of FrameNet” with basic statistical analysis, enabling frame-guided search and extraction of domain synsets from WordNet.

2005

Interfacing Ontologies and Lexical Resources
Laurent Prevot | Stefano Borgo | Alessandro Oltramari
Proceedings of OntoLex 2005 - Ontologies and Lexical Resources