Yibin Liu


2021

Fine-grained Entity Typing without Knowledge Base
Jing Qian | Yibin Liu | Lemao Liu | Yangming Li | Haiyun Jiang | Haisong Zhang | Shuming Shi
Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing

Existing work on Fine-grained Entity Typing (FET) typically trains models on datasets obtained by using a Knowledge Base (KB) as distant supervision. However, this reliance on a KB means training can be hampered by the absence or incompleteness of the KB. To alleviate this limitation, we propose a novel setting for training FET models: FET without access to any knowledge base. Under this setting, we propose a two-step framework to train FET models. In the first step, we automatically create pseudo data with fine-grained labels from a large unlabeled dataset. A neural network model is then trained on the pseudo data, either in an unsupervised way or by self-training under weak guidance from a coarse-grained Named Entity Recognition (NER) model. Experimental results show that our method achieves performance competitive with models trained on the original KB-supervised datasets.
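
To make the two-step pipeline concrete, below is a minimal, hypothetical Python sketch. The appositive pattern used for pseudo labeling, the HeadWordTyper model, the COARSE_OF mapping, and the gazetteer-based coarse_ner stub are all illustrative assumptions standing in for the paper's actual components; none of it is the authors' implementation.

```python
from collections import Counter, defaultdict

# Illustrative fine-grained types and the coarse NER type each projects to.
# Used to check agreement with the coarse model during self-training.
COARSE_OF = {
    "/person/artist": "PER",
    "/organization/company": "ORG",
    "/location/city": "LOC",
}


def build_pseudo_data(corpus):
    """Step 1 (sketch): harvest (mention, pseudo_label) pairs from raw text
    via an appositive pattern like 'Paris , a city in France'. The paper
    derives pseudo labels automatically; this pattern is a stand-in."""
    head_to_type = {"singer": "/person/artist",
                    "company": "/organization/company",
                    "city": "/location/city"}
    pairs = []
    for sent in corpus:
        tokens = sent.replace(",", " ,").split()
        for i, tok in enumerate(tokens):
            if tok == "," and i >= 1 and i + 2 < len(tokens) and tokens[i + 1] == "a":
                head = tokens[i + 2].lower()
                if head in head_to_type:
                    pairs.append((tokens[i - 1], head_to_type[head]))
    return pairs


class HeadWordTyper:
    """Toy typing 'model': memorizes each mention's most frequent label."""

    def __init__(self):
        self.counts = defaultdict(Counter)

    def fit(self, pairs):
        for mention, label in pairs:
            self.counts[mention][label] += 1
        return self

    def predict(self, mention):
        if self.counts[mention]:
            return self.counts[mention].most_common(1)[0][0]
        return None


def coarse_ner(mention):
    """Stand-in for a coarse-grained NER model (PER/ORG/LOC)."""
    gazetteer = {"Adele": "PER", "Paris": "LOC", "Toyota": "ORG"}
    return gazetteer.get(mention)


def self_train(pseudo_pairs, unlabeled_mentions, rounds=3):
    """Step 2 (sketch): retrain on self-labeled mentions, keeping a
    prediction only when its coarse projection agrees with the NER
    model -- the 'weak guidance' the abstract mentions."""
    train = list(pseudo_pairs)
    model = HeadWordTyper().fit(train)
    for _ in range(rounds):
        added = []
        for mention in unlabeled_mentions:
            fine = model.predict(mention)
            if fine and COARSE_OF[fine] == coarse_ner(mention):
                added.append((mention, fine))
        train.extend(added)
        model = HeadWordTyper().fit(train)
    return model


if __name__ == "__main__":
    corpus = ["Adele , a singer from London", "Paris , a city in France"]
    model = self_train(build_pseudo_data(corpus), ["Adele", "Paris"])
    print(model.predict("Adele"))  # -> /person/artist
```

The design point mirrored from the abstract is the agreement filter in self_train: a self-labeled example is added back to the training set only when its fine-grained prediction projects onto the same coarse type the NER model assigns, so the coarse model steers, but never directly produces, the fine-grained labels.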