Hideki Shima


2013

An English Reading Tool as a NLP Showcase
Mahmoud Azab | Ahmed Salama | Kemal Oflazer | Hideki Shima | Jun Araki | Teruko Mitamura
The Companion Volume of the Proceedings of IJCNLP 2013: System Demonstrations

An NLP-based Reading Tool for Aiding Non-native English Readers
Mahmoud Azab | Ahmed Salama | Kemal Oflazer | Hideki Shima | Jun Araki | Teruko Mitamura
Proceedings of the International Conference Recent Advances in Natural Language Processing RANLP 2013

2012

Diversifiable Bootstrapping for Acquiring High-Coverage Paraphrase Resource
Hideki Shima | Teruko Mitamura
Proceedings of the Eighth International Conference on Language Resources and Evaluation (LREC'12)

Recognizing similar or close meanings expressed in different surface forms is a common challenge in many Natural Language Processing and Information Access applications. However, we identified multiple limitations in existing resources that can be used for solving this vocabulary mismatch problem. To this end, we propose the Diversifiable Bootstrapping algorithm, which learns paraphrase patterns with high lexical coverage. The algorithm works in a lightly-supervised iterative fashion, where instance and pattern acquisition are interleaved, each using information provided by the other. By tweaking a parameter in the algorithm, the resulting patterns can be diversified to a degree one can control.
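The interleaved instance/pattern loop described in the abstract can be illustrated with a minimal sketch. This is not the paper's implementation: the corpus representation (left context, phrase, right context), the scoring, and the `diversity` parameter's penalty formulation are all assumptions made here for illustration.

```python
def diversifiable_bootstrap(seeds, corpus, iterations=3, diversity=0.5):
    """Lightly-supervised bootstrapping sketch: alternate between
    pattern acquisition and instance acquisition.

    `diversity` in [0, 1] penalizes candidate patterns whose contexts
    overlap with already-selected patterns (an assumed formulation).
    """
    instances = set(seeds)
    patterns = []
    for _ in range(iterations):
        # Pattern acquisition: count contexts in which known instances occur.
        candidates = {}
        for left, phrase, right in corpus:
            if phrase in instances:
                pat = (left, right)
                candidates[pat] = candidates.get(pat, 0) + 1
        # Diversity-aware selection: discount patterns similar to chosen ones.
        for pat, score in sorted(candidates.items(), key=lambda kv: -kv[1]):
            overlap = any(pat[0] == p[0] or pat[1] == p[1] for p in patterns)
            penalized = score * (1 - diversity) if overlap else score
            if penalized >= 1 and pat not in patterns:
                patterns.append(pat)
        # Instance acquisition: harvest phrases matching the learned patterns.
        for left, phrase, right in corpus:
            if (left, right) in patterns:
                instances.add(phrase)
    return patterns, instances

# Toy corpus of (left context, phrase, right context) triples.
corpus = [
    ("buy", "purchase", "goods"),
    ("buy", "acquire", "goods"),
    ("to", "purchase", "items"),
]
pats, insts = diversifiable_bootstrap({"purchase"}, corpus)
# "acquire" is harvested because it shares the ("buy", "goods") context
# with the seed instance "purchase".
```

Raising `diversity` makes overlapping contexts less likely to be selected, which is one simple way to realize the controllable diversification the abstract describes.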

2011

Diversity-aware Evaluation for Paraphrase Patterns
Hideki Shima | Teruko Mitamura
Proceedings of the TextInfer 2011 Workshop on Textual Entailment

2006

Keyword Translation Accuracy and Cross-Lingual Question Answering in Chinese and Japanese
Teruko Mitamura | Mengqiu Wang | Hideki Shima | Frank Lin
Proceedings of the Workshop on Multilingual Question Answering - MLQA '06

Modular Approach to Error Analysis and Evaluation for Multilingual Question Answering
Hideki Shima | Mengqiu Wang | Frank Lin | Teruko Mitamura
Proceedings of the Fifth International Conference on Language Resources and Evaluation (LREC’06)

Multilingual Question Answering systems are generally very complex, integrating several sub-modules to achieve their result. Global metrics (such as average precision and recall) are insufficient when evaluating the performance of individual sub-modules and their influence on each other. In this paper, we present a modular approach to error analysis and evaluation; we use manually-constructed, gold-standard input for each module to obtain an upper-bound for the (local) performance of that module. This approach enables us to identify existing problem areas quickly, and to target improvements accordingly.
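The evaluation idea in the abstract, scoring each sub-module on gold-standard input to get a local upper bound unconfounded by upstream errors, can be sketched as follows. The interfaces, the toy dictionary-based translator, and the accuracy metric are assumptions for illustration, not the paper's actual pipeline.

```python
def local_upper_bound(module, gold_inputs, gold_outputs, metric):
    """Run a sub-module on manually-constructed gold-standard inputs and
    score it against gold outputs, isolating its local performance."""
    predictions = [module(x) for x in gold_inputs]
    return metric(predictions, gold_outputs)

def accuracy(preds, golds):
    return sum(p == g for p, g in zip(preds, golds)) / len(golds)

# Hypothetical sub-module of a cross-lingual QA pipeline:
# a toy English-to-Japanese keyword translator.
def keyword_translator(keyword):
    return {"dog": "犬", "cat": "猫"}.get(keyword, "?")

score = local_upper_bound(keyword_translator,
                          ["dog", "cat", "bird"],
                          ["犬", "猫", "鳥"],
                          accuracy)
# score is 2/3: "bird" falls outside the toy dictionary, so this
# sub-module is a bottleneck regardless of downstream components.
```

Repeating this for each sub-module with its own gold input quickly shows which component's local ceiling is dragging down the end-to-end metrics.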