Sabrina Kirrane


2022

Comparing Annotated Datasets for Named Entity Recognition in English Literature
Rositsa Ivanova | Marieke van Erp | Sabrina Kirrane
Proceedings of the Thirteenth Language Resources and Evaluation Conference

The growing interest in named entity recognition (NER) across domains has led to the creation of various benchmark datasets, often with slightly different annotation guidelines. To better understand the NER benchmark datasets for the domain of English literature and their impact on the evaluation of NER tools, we analyse two existing annotated datasets and create two additional gold standard datasets. Following on from this, we evaluate the performance of two NER tools, one domain-specific and one general-purpose, against the four gold standards, and analyse the sources of the differences in measured performance. Our results show that the performance of the two tools varies significantly depending on the gold standard used for the evaluation.

2012

Expertise Mining for Enterprise Content Management
Georgeta Bordea | Sabrina Kirrane | Paul Buitelaar | Bianca Pereira
Proceedings of the Eighth International Conference on Language Resources and Evaluation (LREC'12)

Enterprise content analysis and platform configuration for enterprise content management are often carried out by external consultants who are not necessarily domain experts. In this paper, we propose a set of methods for automatic content analysis that allow users to gain a high-level view of the enterprise content. A main concern here is the automatic identification of key stakeholders who should ideally be involved in analysis interviews. The proposed approach employs recent advances in term extraction, semantic term grounding, expert profiling, and expert finding in an enterprise content management setting. Extracted terms are evaluated using human judges, while term grounding is evaluated using a manually created gold standard for the DBpedia data source.