Fang Xu


2010

Paragraph Acquisition and Selection for List Question Using Amazon’s Mechanical Turk
Fang Xu | Dietrich Klakow
Proceedings of the Seventh International Conference on Language Resources and Evaluation (LREC'10)

Creating annotated data at a finer granularity than the previous relevant-document sets is important for evaluating individual components of automatic question answering systems. In this paper, we describe using Amazon's Mechanical Turk (AMT) to judge whether paragraphs in relevant documents answer corresponding list questions from the TREC 2004 QA track. Based on the AMT results, we build a collection of 1,300 gold-standard supporting paragraphs for list questions. Our online experiments suggest that recruiting more workers per task yields better annotation quality. To learn true labels from the AMT annotations, we investigated three approaches on two datasets with different levels of annotation error. Experimental studies show that the Naive Bayes model and the EM-based GLAD model produce results that agree closely with the gold-standard annotations and significantly outperform the majority voting method for true-label learning. We also suggest setting a higher HIT approval rate to ensure better online annotation quality, which in turn improves the performance of the learning methods.
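The majority-voting baseline the abstract compares against can be sketched as follows. This is a minimal illustration, not the paper's implementation: the item identifiers and label values are hypothetical, and ties are broken by the first-seen label.

```python
from collections import Counter

def majority_vote(annotations):
    """Aggregate per-item crowd labels by majority vote.

    annotations: dict mapping item id -> list of worker labels.
    Returns a dict mapping item id -> most frequent label
    (ties broken by first-seen order, as Counter preserves insertion order).
    """
    return {item: Counter(labels).most_common(1)[0][0]
            for item, labels in annotations.items()}

# Toy example: three workers judge whether a paragraph supports a list question.
votes = {
    "q1-p1": ["yes", "yes", "no"],
    "q1-p2": ["no", "no", "yes"],
}
print(majority_vote(votes))  # {'q1-p1': 'yes', 'q1-p2': 'no'}
```

Model-based aggregators such as Naive Bayes or GLAD improve on this baseline by additionally estimating per-worker reliability, so that votes from consistently accurate annotators carry more weight.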

2006

A Hybrid Approach to Chinese Base Noun Phrase Chunking
Fang Xu | Chengqing Zong | Jun Zhao
Proceedings of the Fifth SIGHAN Workshop on Chinese Language Processing