Enhancing Abstractiveness of Summarization Models through Calibrated Distillation

Hwanjun Song, Igor Shalyminov, Hang Su, Siffi Singh, Kaisheng Yao, Saab Mansour


Abstract
In this paper, we propose a novel approach named DisCal to enhance the level of abstractiveness (measured by n-gram overlap) without sacrificing the informativeness (measured by ROUGE) of generated summaries. DisCal exposes the student model to diverse pseudo summaries with two types of supervision. First, the best pseudo summary is identified in terms of abstractiveness and informativeness and used for sequence-level distillation. Second, the ranks of the pseudo summaries are used to ensure that the student model assigns higher prediction scores to summaries with higher ranks. Our experiments show that DisCal outperforms prior methods in abstractive summarization distillation, producing highly abstractive and informative summaries.
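The abstract describes the second supervision signal, rank calibration, only at a high level. The sketch below shows one plausible form of such a signal: a pairwise margin ranking loss over length-normalized candidate log-probabilities, in the spirit of BRIO-style calibration. The function name, the rank-scaled margin, and the assumption that candidates arrive pre-sorted by their combined abstractiveness/informativeness rank are illustrative assumptions, not details taken from the paper.

```python
import torch
import torch.nn.functional as F

def calibration_loss(seq_logprobs: torch.Tensor, margin: float = 0.001) -> torch.Tensor:
    """Pairwise margin ranking loss over candidate pseudo summaries.

    seq_logprobs: length-normalized log-probabilities that the student model
    assigns to the candidate summaries, ordered from highest-ranked (index 0)
    to lowest-ranked. The loss pushes the student to score higher-ranked
    candidates above lower-ranked ones by a rank-scaled margin.

    NOTE: this is a hedged sketch of a generic rank-calibration objective,
    not the exact loss used by DisCal.
    """
    loss = seq_logprobs.new_zeros(())
    n = seq_logprobs.size(0)
    for i in range(n):
        for j in range(i + 1, n):
            # Candidate i outranks candidate j; enforce a margin that grows
            # with the rank gap (j - i).
            loss = loss + F.relu(seq_logprobs[j] - seq_logprobs[i] + margin * (j - i))
    return loss

# Example: three candidates already sorted by rank (best first).
loss = calibration_loss(torch.tensor([-0.8, -1.1, -1.3]))
```

In this framing, the first supervision signal (sequence-level distillation on the best-ranked pseudo summary) would be an ordinary cross-entropy term added to the calibration loss above.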
Anthology ID:
2023.findings-emnlp.468
Volume:
Findings of the Association for Computational Linguistics: EMNLP 2023
Month:
December
Year:
2023
Address:
Singapore
Editors:
Houda Bouamor, Juan Pino, Kalika Bali
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
7026–7036
URL:
https://aclanthology.org/2023.findings-emnlp.468
DOI:
10.18653/v1/2023.findings-emnlp.468
Cite (ACL):
Hwanjun Song, Igor Shalyminov, Hang Su, Siffi Singh, Kaisheng Yao, and Saab Mansour. 2023. Enhancing Abstractiveness of Summarization Models through Calibrated Distillation. In Findings of the Association for Computational Linguistics: EMNLP 2023, pages 7026–7036, Singapore. Association for Computational Linguistics.
Cite (Informal):
Enhancing Abstractiveness of Summarization Models through Calibrated Distillation (Song et al., Findings 2023)
PDF:
https://aclanthology.org/2023.findings-emnlp.468.pdf