Yasuhiro Akiba


2004

Using a Mixture of N-Best Lists from Multiple MT Systems in Rank-Sum-Based Confidence Measure for MT Outputs
Yasuhiro Akiba | Eiichiro Sumita | Hiromi Nakaiwa | Seiichi Yamamoto | Hiroshi G. Okuno
COLING 2004: Proceedings of the 20th International Conference on Computational Linguistics

Incremental Methods to Select Test Sentences for Evaluating Translation Ability
Yasuhiro Akiba | Eiichiro Sumita | Hiromi Nakaiwa | Seiichi Yamamoto | Hiroshi G. Okuno
Proceedings of the Fourth International Conference on Language Resources and Evaluation (LREC’04)

How Does Automatic Machine Translation Evaluation Correlate with Human Scoring as the Number of Reference Translations Increases?
Andrew Finch | Yasuhiro Akiba | Eiichiro Sumita
Proceedings of the Fourth International Conference on Language Resources and Evaluation (LREC’04)

Overview of the IWSLT evaluation campaign
Yasuhiro Akiba | Marcello Federico | Noriko Kando | Hiromi Nakaiwa | Michael Paul | Jun’ichi Tsujii
Proceedings of the First International Workshop on Spoken Language Translation: Evaluation Campaign

EBMT, SMT, hybrid and more: ATR spoken language translation system
Eiichiro Sumita | Yasuhiro Akiba | Takao Doi | Andrew Finch | Kenji Imamura | Hideo Okuma | Michael Paul | Mitsuo Shimohata | Taro Watanabe
Proceedings of the First International Workshop on Spoken Language Translation: Evaluation Campaign

2003

A corpus-centered approach to spoken language translation
Eiichiro Sumita | Yasuhiro Akiba | Takao Doi | Andrew Finch | Kenji Imamura | Michael Paul | Mitsuo Shimohata | Taro Watanabe
10th Conference of the European Chapter of the Association for Computational Linguistics

Automatic Expansion of Equivalent Sentence Set Based on Syntactic Substitution
Kenji Imamura | Yasuhiro Akiba | Eiichiro Sumita
Companion Volume of the Proceedings of HLT-NAACL 2003 - Short Papers

Experimental comparison of MT evaluation methods: RED vs. BLEU
Yasuhiro Akiba | Eiichiro Sumita | Hiromi Nakaiwa | Seiichi Yamamoto | Hiroshi G. Okuno
Proceedings of Machine Translation Summit IX: Papers

This paper experimentally compares two automatic evaluators, RED and BLEU, to determine how close the evaluation results of each automatic evaluator are to average evaluation results by human evaluators, following the ATR standard of MT evaluation. This paper gives several cautionary remarks intended to prevent MT developers from drawing misleading conclusions when using the automatic evaluators. In addition, this paper reports a way of using the automatic evaluators so that their results agree with those of human evaluators.

2002

Using Language and Translation Models to Select the Best among Outputs from Multiple MT Systems
Yasuhiro Akiba | Taro Watanabe | Eiichiro Sumita
COLING 2002: The 19th International Conference on Computational Linguistics

2001

Using multiple edit distances to automatically rank machine translation output
Yasuhiro Akiba | Kenji Imamura | Eiichiro Sumita
Proceedings of Machine Translation Summit VIII

This paper addresses the challenging problem of automatically evaluating output from machine translation (MT) systems in order to support the developers of these systems. Conventional approaches to the problem include methods that automatically assign a rank such as A, B, C, or D to MT output according to a single edit distance between this output and a correct translation example. The single edit distance can be designed in different ways, but any one design that makes assigning a certain rank more accurate tends to make assigning another rank less accurate. This trade-off inhibits improving the overall accuracy of rank assignment. To overcome this obstacle, this paper proposes an automatic ranking method that, by using multiple edit distances, encodes machine-translated sentences with a rank assigned by humans into multi-dimensional vectors from which a classifier of ranks is learned in the form of a decision tree (DT). The proposed method assigns a rank to MT output through the learned DT. The proposed method is evaluated using transcribed texts of real conversations in the travel arrangement domain. Experimental results show that the proposed method is more accurate than the single-edit-distance-based ranking methods, in both closed and open tests. Moreover, the proposed method could estimate MT quality within 3% error in some cases.
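The encoding step the abstract describes can be sketched as follows: compute several differently designed edit distances between an MT output and one or more reference translations, and collect them into a feature vector that an off-the-shelf decision-tree learner could then classify into ranks. This is a minimal stdlib sketch, not the paper's implementation; the choice of word-level and character-level Levenshtein distances, the normalization, and all names here are illustrative assumptions.

```python
def edit_distance(ref, hyp):
    # Classic Levenshtein distance over sequences; works for both
    # word lists and character strings.
    m, n = len(ref), len(hyp)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i
    for j in range(n + 1):
        d[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,      # deletion
                          d[i][j - 1] + 1,      # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return d[m][n]

def encode(hyp, references):
    # Multiple edit distances as one multi-dimensional feature vector:
    # min and average of length-normalized word-level and character-level
    # distances to the reference translations.
    word_ds = [edit_distance(r.split(), hyp.split()) / max(len(r.split()), 1)
               for r in references]
    char_ds = [edit_distance(r, hyp) / max(len(r), 1)
               for r in references]
    return [min(word_ds), sum(word_ds) / len(word_ds),
            min(char_ds), sum(char_ds) / len(char_ds)]

refs = ["i would like to book a room", "i want to reserve a room"]
vec = encode("i would like to reserve a room", refs)
```

Vectors like `vec`, paired with human-assigned ranks (A-D) on a training set, would then be handed to a decision-tree learner; the learned tree assigns a rank to unseen MT outputs.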

1994

Two Methods for Learning ALT-J/E Translation Rules from Examples and a Semantic Hierarchy
Hussein Almuallim | Yasuhiro Akiba | Takefumi Yamazaki | Akio Yokoo | Shigeo Kaneda
COLING 1994 Volume 1: The 15th International Conference on Computational Linguistics