Chi Wang


2021

An Empirical Study on Hyperparameter Optimization for Fine-Tuning Pre-trained Language Models
Xueqing Liu | Chi Wang
Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)

The performance of fine-tuning pre-trained language models largely depends on the hyperparameter configuration. In this paper, we investigate the performance of modern hyperparameter optimization (HPO) methods for fine-tuning pre-trained language models. First, we study and report the performance of three HPO algorithms when fine-tuning two state-of-the-art language models on the GLUE dataset. We find that, given the same time budget, HPO often fails to outperform grid search for two reasons: insufficient time budget and overfitting. We propose two general strategies and an experimental procedure to systematically troubleshoot HPO’s failure cases. By applying the procedure, we observe that HPO can succeed with more appropriate settings of the search space and time budget; however, in certain cases overfitting remains. Finally, we make suggestions for future work. Our implementation can be found at https://github.com/microsoft/FLAML/tree/main/flaml/nlp/
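To illustrate the comparison the abstract describes, here is a minimal toy sketch (not the paper's code, which lives in the linked repository) contrasting grid search with randomized HPO under the same evaluation budget. The objective `toy_objective` and all hyperparameter ranges are hypothetical stand-ins for a real fine-tuning run.

```python
# Toy illustration: grid search vs. random search over a fine-tuning-style
# search space, given the same number of trials. `toy_objective` is a
# hypothetical stand-in for dev-set accuracy after one fine-tuning run.
import itertools
import random

random.seed(0)

def toy_objective(learning_rate, num_epochs):
    # Hypothetical surrogate for fine-tuning quality: peaks near
    # lr=3e-5 and 3 epochs, with noise mimicking run-to-run variance.
    base = 1.0 - abs(learning_rate - 3e-5) / 3e-5 * 0.1 - abs(num_epochs - 3) * 0.02
    return base + random.gauss(0, 0.005)

budget = 9  # identical trial budget for both methods

# Grid search: a fixed, coarse grid over the two hyperparameters.
grid = list(itertools.product([1e-5, 3e-5, 5e-5], [2, 3, 4]))
best_grid = max(grid[:budget], key=lambda cfg: toy_objective(*cfg))

# Random search: samples learning rates from a continuous log-range,
# spending the same budget of trials.
samples = [(10 ** random.uniform(-5.5, -4.0), random.choice([2, 3, 4]))
           for _ in range(budget)]
best_rand = max(samples, key=lambda cfg: toy_objective(*cfg))

print("grid search best config:  ", best_grid)
print("random search best config:", best_rand)
```

With a tight budget like this, neither method dominates reliably, which mirrors the paper's observation that HPO needs an appropriately sized search space and time budget to beat a well-chosen grid.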

2017

Identifying Semantically Deviating Outlier Documents
Honglei Zhuang | Chi Wang | Fangbo Tao | Lance Kaplan | Jiawei Han
Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing

A document outlier is a document that substantially deviates in semantics from the majority of documents in a corpus. Automatic identification of document outliers can be valuable in many applications, such as screening health records for medical mistakes. In this paper, we study the problem of mining semantically deviating document outliers in a given corpus. We develop a generative model to identify frequent and characteristic semantic regions in the word embedding space to represent the given corpus, and a robust outlierness measure that is resistant to noisy content in documents. Experiments conducted on two real-world textual data sets show that our method can achieve up to a 135% improvement over baselines in recall at the top 1% of the outlier ranking.
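As a simplified sketch of the general idea (not the paper's generative model or its robust outlierness measure), one can score each document by its distance to the nearest "semantic region" centroid in embedding space. All embeddings, region counts, and corpus sizes below are hypothetical placeholders.

```python
# Toy sketch: document outlierness as distance from a document's embedding
# to the nearest semantic-region centroid found by k-means. Higher score
# means the document deviates more from the corpus's dense regions.
import numpy as np

rng = np.random.default_rng(0)

def kmeans(X, k, iters=50):
    # Plain Lloyd's algorithm; centroids initialized from random points.
    centroids = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        dists = np.linalg.norm(X[:, None] - centroids[None], axis=2)
        labels = np.argmin(dists, axis=1)
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = X[labels == j].mean(axis=0)
    return centroids

# Hypothetical corpus: 200 documents represented by mean word embeddings,
# drawn from two dense topics plus 5 semantically deviating outliers.
topics = rng.normal(0, 1, (2, 50))
docs = np.vstack([
    topics[rng.integers(0, 2, 195)] + rng.normal(0, 0.1, (195, 50)),
    rng.normal(0, 1, (5, 50)),  # the last 5 documents are outliers
])

centroids = kmeans(docs, k=2)
# Outlierness = distance to the nearest region centroid.
outlierness = np.min(np.linalg.norm(docs[:, None] - centroids[None], axis=2), axis=1)
print("top-5 most outlying doc indices:", np.argsort(-outlierness)[:5])
```

The paper's measure is more robust than this nearest-centroid distance, since a single noisy passage should not make an otherwise typical document look deviating.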

2014

The Wisdom of Minority: Unsupervised Slot Filling Validation based on Multi-dimensional Truth-Finding
Dian Yu | Hongzhao Huang | Taylor Cassidy | Heng Ji | Chi Wang | Shi Zhi | Jiawei Han | Clare Voss | Malik Magdon-Ismail
Proceedings of COLING 2014, the 25th International Conference on Computational Linguistics: Technical Papers