Edmundo-Pavel Soriano-Morales

Also published as: Edmundo Pavel Soriano Morales


2020

Project PIAF: Building a Native French Question-Answering Dataset
Rachel Keraron | Guillaume Lancrenon | Mathilde Bras | Frédéric Allary | Gilles Moyse | Thomas Scialom | Edmundo-Pavel Soriano-Morales | Jacopo Staiano
Proceedings of the Twelfth Language Resources and Evaluation Conference

Motivated by the lack of data for non-English languages, in particular for the evaluation of downstream tasks such as Question Answering, we present a participatory effort to collect a native French Question Answering Dataset. Furthermore, we describe and publicly release the annotation tool developed for our collection effort, along with the data obtained and preliminary baselines.
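The abstract does not specify the release format of the collected data; the following is a minimal sketch assuming a SQuAD-style JSON layout (a common convention for extractive QA datasets), with a hypothetical file name, showing how such question-answer annotations could be iterated over.

```python
import json

# Hypothetical file name and assumed SQuAD-style layout:
# "data" -> articles -> "paragraphs" -> "qas" -> "answers".
with open("piaf.json", encoding="utf-8") as f:
    dataset = json.load(f)

for article in dataset["data"]:
    for paragraph in article["paragraphs"]:
        context = paragraph["context"]
        for qa in paragraph["qas"]:
            question = qa["question"]
            # Each answer span is located by its character offset in the context.
            for answer in qa["answers"]:
                start = answer["answer_start"]
                end = start + len(answer["text"])
                span = context[start:end]
                print(question, "->", span)
```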

2016

Hypergraph Modelization of a Syntactically Annotated English Wikipedia Dump
Edmundo Pavel Soriano Morales | Julien Ah-Pine | Sabine Loudcher
Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC'16)

Wikipedia, the well-known internet encyclopedia, is nowadays a widely used source of information. To leverage its rich information, already-parsed versions of Wikipedia have been proposed. We present an annotated dump of the English Wikipedia. This dump draws upon previously released parsed Wikipedia dumps, but heads in a different direction: we focus on the syntactic characteristics of words. Aside from the classical Part-of-Speech (PoS) tags and dependency parsing relations, we provide the full constituent parse branch for each word in a succinct way. Additionally, we propose a hypergraph network representation of the extracted linguistic information. The proposed modelization aims to take advantage of the information stored within our parsed Wikipedia dump. We hope that by releasing these resources, researchers from the concerned communities will have a ready-to-experiment Wikipedia corpus with which to compare and distribute their work. We make publicly available our parsed Wikipedia dump as well as the tool (and its source code) used to perform the parse. The hypergraph network and its related metadata are also distributed.
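The exact hypergraph schema of the released resource is not given in the abstract; the sketch below is only an illustration, with toy token values, of the general idea of a hypergraph over linguistic annotations: word occurrences are nodes, and each shared feature (PoS tag, dependency relation, constituent branch prefix) defines a hyperedge grouping the tokens it covers.

```python
from collections import defaultdict

# Toy annotated tokens: (token, PoS tag, dependency relation, constituent branch).
tokens = [
    ("Wikipedia", "NNP", "nsubj", "S/NP"),
    ("is", "VBZ", "cop", "S/VP"),
    ("an", "DT", "det", "S/VP/NP"),
    ("encyclopedia", "NN", "root", "S/VP/NP"),
]

# feature -> set of token positions sharing that feature (one hyperedge per feature).
hyperedges = defaultdict(set)
for position, (word, pos, dep, branch) in enumerate(tokens):
    hyperedges[("pos", pos)].add(position)
    hyperedges[("dep", dep)].add(position)
    # Every prefix of the constituent branch also defines a (nested) hyperedge.
    parts = branch.split("/")
    for depth in range(1, len(parts) + 1):
        hyperedges[("const", "/".join(parts[:depth]))].add(position)

for edge, members in sorted(hyperedges.items()):
    print(edge, sorted(members))
```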