Ran Xu


2023

Cold-Start Data Selection for Better Few-shot Language Model Fine-tuning: A Prompt-based Uncertainty Propagation Approach
Yue Yu | Rongzhi Zhang | Ran Xu | Jieyu Zhang | Jiaming Shen | Chao Zhang
Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)

We present PATRON, a prompt-based data selection method for pre-trained language model fine-tuning under cold-start scenarios, i.e., when no initial labeled data are available. In PATRON, we design (1) a prompt-based uncertainty propagation approach to estimate the importance of data points and (2) a partition-then-rewrite (PTR) strategy to promote sample diversity when querying for annotations. Experiments on six text classification datasets show that PATRON outperforms the strongest cold-start data selection baselines by up to 6.9%. Moreover, with only 128 labels, PATRON achieves 91.0% and 92.1% of the fully supervised performance with vanilla fine-tuning and prompt-based learning, respectively. Our implementation of PATRON will be published upon acceptance.
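To make the selection recipe concrete, here is a minimal Python sketch of the two ingredients the abstract names: an uncertainty score propagated over nearby examples, followed by a partition step that picks one sample per cluster for diversity. The entropy-based uncertainty, the kNN propagation rule, and the hyperparameters (n_neighbors, alpha) are assumptions for illustration, not the authors' released implementation.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.neighbors import NearestNeighbors

def propagated_uncertainty(embeddings, probs, n_neighbors=10, alpha=0.5):
    """Entropy of prompt-based pseudo-label distributions, smoothed over a kNN graph.

    probs: (n_samples, n_classes) predicted label distributions from prompting.
    """
    entropy = -np.sum(probs * np.log(probs + 1e-12), axis=1)
    nn = NearestNeighbors(n_neighbors=n_neighbors).fit(embeddings)
    _, idx = nn.kneighbors(embeddings)  # neighbor lists include the point itself
    # Propagate: mix each point's entropy with its neighborhood average.
    return (1 - alpha) * entropy + alpha * entropy[idx].mean(axis=1)

def select_cold_start(embeddings, probs, budget):
    """Partition the unlabeled pool, then pick the most uncertain point per cluster."""
    scores = propagated_uncertainty(embeddings, probs)
    clusters = KMeans(n_clusters=budget, n_init=10).fit_predict(embeddings)
    chosen = []
    for c in range(budget):
        members = np.where(clusters == c)[0]
        chosen.append(members[np.argmax(scores[members])])
    return np.array(chosen)
```

The cluster-then-pick step stands in for the paper's partition-then-rewrite (PTR) strategy; it keeps the annotation budget spread across distinct regions of the pool rather than concentrating queries on one uncertain cluster.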

2022

DocQueryNet: Value Retrieval with Arbitrary Queries for Form-like Documents
Mingfei Gao | Le Xue | Chetan Ramaiah | Chen Xing | Ran Xu | Caiming Xiong
Proceedings of the 29th International Conference on Computational Linguistics

We propose DocQueryNet, a value retrieval method with arbitrary queries for form-like documents, to reduce the human effort of processing forms. Unlike previous methods that only address a fixed set of field items, our method predicts the target value for an arbitrary query based on its understanding of the layout and semantics of a form. To further boost model performance, we propose a simple document language modeling (SimpleDLM) strategy to improve document understanding during large-scale pre-training. Experimental results show that DocQueryNet significantly outperforms previous designs, and SimpleDLM further improves value retrieval by around 17% in F1 score over the state-of-the-art pre-training method. Code is available at https://github.com/salesforce/QVR-SimpleDLM.
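The core idea, predicting a value span conditioned on a free-form query rather than a fixed field set, can be sketched as query-conditioned token scoring. The following PyTorch snippet is a hypothetical illustration: the class name, the elementwise fusion scheme, and the assumption that token features already encode text plus layout are all ours, and DocQueryNet's actual architecture may differ.

```python
import torch
import torch.nn as nn

class QueryValueScorer(nn.Module):
    """Scores each document token as the start/end of the value for a free-form query.

    Hypothetical sketch: assumes token_feats come from a layout-aware encoder.
    """
    def __init__(self, hidden=768):
        super().__init__()
        self.query_proj = nn.Linear(hidden, hidden)
        self.span_head = nn.Linear(hidden, 2)  # start / end logits per token

    def forward(self, token_feats, query_feat):
        # token_feats: (batch, seq_len, hidden); query_feat: (batch, hidden)
        # Condition every token on the query via elementwise gating.
        gate = torch.tanh(self.query_proj(query_feat)).unsqueeze(1)
        fused = token_feats * gate
        return self.span_head(fused)  # (batch, seq_len, 2)
```

Because the query is an input rather than a class label, the same scorer handles field items never seen during training, which is what distinguishes arbitrary-query value retrieval from fixed-schema field extraction.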

Field Extraction from Forms with Unlabeled Data
Mingfei Gao | Zeyuan Chen | Nikhil Naik | Kazuma Hashimoto | Caiming Xiong | Ran Xu
Proceedings of the 1st Workshop on Semiparametric Methods in NLP: Decoupling Logic from Knowledge

We propose a novel framework for field extraction from forms with unlabeled data. To bootstrap the training process, we develop a rule-based method for mining noisy pseudo-labels from unlabeled forms. Using the supervisory signal from these pseudo-labels, we extract a discriminative token representation from a transformer-based model by modeling the interactions between text tokens in the form. To prevent the model from overfitting to label noise, we introduce a refinement module based on a progressive pseudo-label ensemble. Experimental results demonstrate the effectiveness of our framework.
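One way to read the refinement module is as an ensemble of the model's own predictions accumulated over training rounds, keeping only stable, confident pseudo-labels as targets. The sketch below is an assumption-laden illustration of that idea; the confidence threshold and round-averaging rule are not the paper's exact procedure.

```python
import numpy as np

def refine_pseudo_labels(pred_history, threshold=0.8):
    """Progressive pseudo-label ensemble (hypothetical sketch).

    pred_history: (rounds, n_samples, n_classes) softmax outputs collected
    across training rounds. Averaging over rounds smooths label noise;
    only confident, stable predictions are kept as training targets.
    """
    ensemble = pred_history.mean(axis=0)       # average predictions over rounds
    labels = ensemble.argmax(axis=1)           # refined pseudo-label per sample
    keep = ensemble.max(axis=1) >= threshold   # mask out low-agreement samples
    return labels, keep
```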

2014

Evaluating Term Extraction Methods for Interpreters
Ran Xu | Serge Sharoff
Proceedings of the 4th International Workshop on Computational Terminology (Computerm)