Eva Forsbom


2009

Extending the View: Explorations in Bootstrapping a Swedish PoS Tagger
Eva Forsbom
Proceedings of the 17th Nordic Conference of Computational Linguistics (NODALIDA 2009)

2008

Language Resources and Tools for Swedish: A Survey
Kjell Elenius | Eva Forsbom | Beáta Megyesi
Proceedings of the Sixth International Conference on Language Resources and Evaluation (LREC'08)

Language resources, and the tools to create and process them, are necessary components in human language technology and natural language applications. In this paper, we describe a survey of existing language resources for Swedish, and of the need for Swedish language resources in real-world language technology applications as well as in linguistic research. The survey is based on a questionnaire sent to industry and academia, institutions and organizations, and to experts involved in the development of Swedish language resources in Sweden, the Nordic countries and worldwide.

2007

Inducing Baseform Models from a Swedish Vocabulary Pool
Eva Forsbom
Proceedings of the 16th Nordic Conference of Computational Linguistics (NODALIDA 2007)

2004

MT Goes Farming: Comparing Two Machine Translation Approaches on a New Domain
Per Weijnitz | Eva Forsbom | Ebba Gustavii | Eva Pettersson | Jörg Tiedemann
Proceedings of the Fourth International Conference on Language Resources and Evaluation (LREC'04)

2003

MATS – a glass box machine translation system
Anna Sågvall Hein | Eva Forsbom | Per Weijnitz | Ebba Gustavii | Jörg Tiedemann
Proceedings of Machine Translation Summit IX: System Presentations

Training a super model look-alike
Eva Forsbom
Workshop on Systemizing MT Evaluation

Two string comparison measures, edit distance and n-gram co-occurrence, are tested for automatic evaluation of translation quality, where candidate translations are compared to one or several reference translations. The measures are tested in combination for diagnostic evaluation at segment level. Both measures have been used for evaluation of translation quality before, but for a different evaluation purpose (performance) and at a different granularity (system). Preliminary experiments showed that the measures are not portable without redefinition, so two new measures, WAFT and NEVA, are defined. The new measures can be applied for both purposes and both granularities.
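The two underlying measures named in the abstract can be illustrated generically. The sketch below shows word-level edit distance and n-gram precision against a single reference; it is a hedged illustration of the standard techniques, not the paper's WAFT or NEVA definitions, whose exact formulations are not given here.

```python
# Illustrative segment-level measures (generic versions, not WAFT/NEVA):
# word-level Levenshtein edit distance and n-gram co-occurrence precision.
from collections import Counter

def edit_distance(hyp, ref):
    """Word-level Levenshtein distance between two token lists."""
    m, n = len(hyp), len(ref)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i
    for j in range(n + 1):
        d[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if hyp[i - 1] == ref[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + cost) # substitution
    return d[m][n]

def ngram_precision(hyp, ref, n=2):
    """Fraction of hypothesis n-grams that also occur in the reference
    (clipped by reference counts, as in BLEU-style modified precision)."""
    hyp_ngrams = Counter(tuple(hyp[i:i + n]) for i in range(len(hyp) - n + 1))
    ref_ngrams = Counter(tuple(ref[i:i + n]) for i in range(len(ref) - n + 1))
    if not hyp_ngrams:
        return 0.0
    matches = sum(min(c, ref_ngrams[g]) for g, c in hyp_ngrams.items())
    return matches / sum(hyp_ngrams.values())

hyp = "the cat sat on a mat".split()
ref = "the cat sat on the mat".split()
print(edit_distance(hyp, ref))    # -> 1 (one substitution: "a" vs "the")
print(ngram_precision(hyp, ref))  # -> 0.6 (3 of 5 bigrams match)
```

With multiple references, each measure would typically be taken against the closest (best-scoring) reference; the paper's redefined measures address portability across evaluation purposes and granularities.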

2002

Scaling Up an MT Prototype for Industrial Use - Databases and Data Flow
Anna Sågvall Hein | Eva Forsbom | Jörg Tiedemann | Per Weijnitz | Ingrid Almqvist | Leif-Jöran Olsson | Sten Thaning
Proceedings of the Third International Conference on Language Resources and Evaluation (LREC'02)