Luís Fernando Costa

Also published as: Luís Costa


2012

pdf bib
Págico: Evaluating Wikipedia-based information retrieval in Portuguese
Cristina Mota | Alberto Simões | Cláudia Freitas | Luís Costa | Diana Santos
Proceedings of the Eighth International Conference on Language Resources and Evaluation (LREC'12)

How do people behave in their everyday information-seeking tasks, which often involve Wikipedia? Are there systems which can help them, or do a similar job? In this paper we describe Págico, an evaluation contest whose main purpose is to foster research in these topics. We describe its motivation, the collection of documents created, the evaluation setup, the topics and the rationale for their choice, the participation, as well as the measures used for evaluation and the gathered resources. The task, lying between information retrieval and question answering, can be further described as answering questions related to Portuguese-speaking culture in the Portuguese Wikipedia, across a number of different themes and geographic and temporal angles. This initiative allowed us to create interesting datasets and perform some assessment of Wikipedia, while also improving a public-domain open-source system for further Wikipedia-based evaluations. In the paper, we provide examples of questions, report the results obtained by the participants, and discuss some complex issues.

2006

pdf bib
Esfinge — a Question Answering System in the Web using the Web
Luís Fernando Costa
Demonstrations

pdf bib
Component Evaluation in a Question Answering System
Luís Fernando Costa | Luís Sarmento
Proceedings of the Fifth International Conference on Language Resources and Evaluation (LREC’06)

Automatic question answering (QA) is a complex task, which lies at the crossroads of Natural Language Processing, Information Retrieval and Human Computer Interaction. A typical QA system has four modules: question processing, document retrieval, answer extraction and answer presentation. In each of these modules, a multitude of tools can be used. Therefore, evaluating the performance of each of these components is of great importance in order to assess their impact on overall performance, and to determine whether each component is necessary, needs improvement, or should be replaced. This paper describes some experiments performed in order to evaluate several components of the question answering system Esfinge. We describe the experimental setup and present the results of error analysis based on runtime logs of Esfinge. We present the results of component analysis, which provide good insights into the importance of the individual components and pre-processing modules at various levels, namely stemming, named-entity recognition, PoS filtering and filtering of undesired answers. We also present the results of substituting the document source in which Esfinge searches for candidate answers, comparing the results obtained using web sources such as Google, Yahoo and BACO, a large database of web documents in Portuguese.
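The four-module architecture described in the abstract can be illustrated with a minimal sketch. The stage names follow the abstract; every function body below is a deliberately naive placeholder (keyword overlap, capitalized-token extraction, frequency voting), not Esfinge's actual implementation, and the toy corpus is invented for the example.

```python
# Minimal sketch of a four-stage QA pipeline: question processing,
# document retrieval, answer extraction, answer presentation.
# All heuristics are illustrative placeholders, not Esfinge's methods.

def process_question(question):
    """Question processing: derive search terms by keeping longer words."""
    return [w.strip("?").lower() for w in question.split() if len(w) > 3]

def retrieve_documents(terms, corpus):
    """Document retrieval: keep documents containing at least one term."""
    return [doc for doc in corpus if any(t in doc.lower() for t in terms)]

def extract_answers(terms, documents):
    """Answer extraction: collect capitalized tokens not in the query."""
    candidates = []
    for doc in documents:
        for token in doc.split():
            word = token.strip(".,")
            if word and word[0].isupper() and word.lower() not in terms:
                candidates.append(word)
    return candidates

def present_answer(candidates):
    """Answer presentation: pick the most frequent candidate."""
    if not candidates:
        return "No answer found"
    return max(set(candidates), key=candidates.count)

# Toy corpus, invented for illustration only.
corpus = [
    "Lisbon is the capital of Portugal.",
    "Lisbon, the Portuguese capital, lies on the Tagus.",
]
terms = process_question("What is the capital of Portugal?")
docs = retrieve_documents(terms, corpus)
answer = present_answer(extract_answers(terms, docs))
print(answer)  # prints "Lisbon"
```

Because each stage is a separate function, any one of them can be swapped out (for instance, replacing the retrieval stage with a different document source) and the pipeline re-run, which is exactly the kind of component-level substitution experiment the paper reports.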