Summarization-based Data Augmentation for Document Classification

Yueguan Wang, Naoki Yoshinaga


Abstract
Despite the prevalence of pretrained language models in natural language understanding tasks, understanding lengthy text such as documents remains challenging due to the data sparseness problem. Inspired by the fact that humans develop their ability to understand lengthy text by first reading shorter text, we propose a simple yet effective summarization-based data augmentation, SUMMaug, for document classification. We first obtain easy-to-learn examples for the target document classification task by summarizing the inputs of the original training examples, while optionally merging the original labels to conform to the summarized input. We then use the generated pseudo examples to perform curriculum learning. Experimental results on two datasets confirm the advantage of our method over existing baseline methods in terms of robustness and accuracy. We release our code and data at https://github.com/etsurin/summaug.
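
The abstract outlines a two-step recipe: generate shorter, easier-to-learn pseudo examples by summarizing each training document, then fine-tune the classifier with curriculum learning (summaries first, original documents second). Below is a minimal sketch of that idea, assuming a generic Hugging Face summarization pipeline (facebook/bart-large-cnn here) and illustrative helper names (summarize_documents, build_curriculum) that are not taken from the paper's released code.

```python
# Hedged sketch of summarization-based data augmentation with curriculum
# learning, following the abstract. The summarizer checkpoint and the
# two-stage schedule are assumptions, not the authors' exact setup.
from transformers import pipeline

# Any off-the-shelf abstractive summarizer can serve as the augmenter.
summarizer = pipeline("summarization", model="facebook/bart-large-cnn")

def summarize_documents(documents, max_length=128, min_length=32):
    """Create shorter pseudo inputs from long training documents."""
    outputs = summarizer(documents, max_length=max_length,
                         min_length=min_length, truncation=True)
    return [o["summary_text"] for o in outputs]

def build_curriculum(documents, labels):
    """Stage 1: pseudo examples (summaries paired with the original labels).
    Stage 2: the original long documents."""
    summaries = summarize_documents(documents)
    stage1 = list(zip(summaries, labels))   # easy examples first
    stage2 = list(zip(documents, labels))   # then the full documents
    return stage1, stage2

# Usage: fine-tune the classifier on stage1, then continue training on stage2.
```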
Anthology ID: 2023.newsum-1.5
Volume: Proceedings of the 4th New Frontiers in Summarization Workshop
Month: December
Year: 2023
Address: Singapore
Editors: Yue Dong, Wen Xiao, Lu Wang, Fei Liu, Giuseppe Carenini
Venue: NewSum
Publisher: Association for Computational Linguistics
Pages: 49–55
URL: https://aclanthology.org/2023.newsum-1.5
DOI: 10.18653/v1/2023.newsum-1.5
Cite (ACL): Yueguan Wang and Naoki Yoshinaga. 2023. Summarization-based Data Augmentation for Document Classification. In Proceedings of the 4th New Frontiers in Summarization Workshop, pages 49–55, Singapore. Association for Computational Linguistics.
Cite (Informal): Summarization-based Data Augmentation for Document Classification (Wang & Yoshinaga, NewSum 2023)
PDF: https://aclanthology.org/2023.newsum-1.5.pdf
Supplementary material: 2023.newsum-1.5.SupplementaryMaterial.txt