On Task-personalized Multimodal Few-shot Learning for Visually-rich Document Entity Retrieval

Jiayi Chen, Hanjun Dai, Bo Dai, Aidong Zhang, Wei Wei
Abstract
Visually-rich document entity retrieval (VDER), which extracts key information (e.g., date, address) from document images like invoices and receipts, has become an important topic in industrial NLP applications. The emergence of new document types at a constant pace, each with its unique entity types, presents a unique challenge: many documents contain unseen entity types that occur only a couple of times. Addressing this challenge requires models to have the ability to learn entities in a few-shot manner. However, prior work on Few-shot VDER mainly addresses the problem at the document level with a predefined global entity space, which doesn't account for the entity-level few-shot scenario: target entity types are locally personalized by each task, and entity occurrences vary significantly among documents. To address this unexplored scenario, this paper studies a novel entity-level few-shot VDER task. The challenges lie in the uniqueness of the label space for each task and the increased complexity of out-of-distribution (OOD) contents. To tackle this novel task, we present a task-aware meta-learning based framework, with a central focus on achieving effective task personalization that distinguishes between in-task and out-of-task distribution. Specifically, we adopt a hierarchical decoder (HC) and employ contrastive learning (ContrastProtoNet) to achieve this goal. Furthermore, we introduce a new dataset, FewVEX, to boost future research in the field of entity-level few-shot VDER. Experimental results demonstrate that our approaches significantly improve the robustness of popular meta-learning baselines.
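To make the prototypical-network idea behind ContrastProtoNet concrete, the sketch below shows a minimal, generic few-shot episode: class prototypes are the mean embeddings of support examples for each task-local entity type, and query tokens are assigned to the nearest prototype. This is an illustrative toy with random synthetic embeddings, not the paper's actual model or code; the function names and dimensions are assumptions for the example.

```python
import numpy as np

def prototypes(support_emb, support_labels):
    """Mean support embedding per (task-local) entity type."""
    classes = sorted(set(support_labels))
    labels = np.asarray(support_labels)
    protos = np.stack([support_emb[labels == c].mean(axis=0) for c in classes])
    return classes, protos

def classify(query_emb, protos):
    """Assign each query embedding to its nearest prototype (Euclidean)."""
    dists = ((query_emb[:, None, :] - protos[None, :, :]) ** 2).sum(axis=-1)
    return dists.argmin(axis=1)

# Toy 2-way, 2-shot episode with well-separated synthetic embeddings.
rng = np.random.default_rng(0)
support = np.vstack([rng.normal(0.0, 0.1, (2, 4)),   # entity type 0 near 0
                     rng.normal(5.0, 0.1, (2, 4))])  # entity type 1 near 5
support_labels = [0, 0, 1, 1]
queries = np.vstack([rng.normal(0.0, 0.1, (1, 4)),
                     rng.normal(5.0, 0.1, (1, 4))])

classes, protos = prototypes(support, support_labels)
pred = [classes[i] for i in classify(queries, protos)]
print(pred)  # the two queries fall back to types 0 and 1
```

In the entity-level setting the abstract describes, each episode carries its own personalized label space, so the prototypes (and hence the classifier) are rebuilt per task from that task's support set; the paper's contrastive objective additionally pushes apart in-task and out-of-task content, which this toy omits.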
Anthology ID:
2023.findings-emnlp.604
Volume:
Findings of the Association for Computational Linguistics: EMNLP 2023
Month:
December
Year:
2023
Address:
Singapore
Editors:
Houda Bouamor, Juan Pino, Kalika Bali
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
9006–9025
URL:
https://aclanthology.org/2023.findings-emnlp.604
DOI:
10.18653/v1/2023.findings-emnlp.604
Cite (ACL):
Jiayi Chen, Hanjun Dai, Bo Dai, Aidong Zhang, and Wei Wei. 2023. On Task-personalized Multimodal Few-shot Learning for Visually-rich Document Entity Retrieval. In Findings of the Association for Computational Linguistics: EMNLP 2023, pages 9006–9025, Singapore. Association for Computational Linguistics.
Cite (Informal):
On Task-personalized Multimodal Few-shot Learning for Visually-rich Document Entity Retrieval (Chen et al., Findings 2023)
PDF:
https://aclanthology.org/2023.findings-emnlp.604.pdf