Knowledge Distillation ≈ Label Smoothing: Fact or Fallacy?

Md Arafat Sultan


Abstract
Originally proposed as a method for transferring knowledge from one model to another, knowledge distillation (KD) has recently been argued to be, in fact, a form of regularization. Perhaps the strongest argument for this new perspective comes from KD's apparent similarities with label smoothing (LS). Here we re-examine the stated equivalence between the two methods by comparing the predictive confidences of the models they train. Experiments on four text classification tasks involving models of different sizes show that: (a) in most settings, KD and LS drive model confidence in completely opposite directions, and (b) in KD, the student inherits not only its knowledge but also its confidence from the teacher, reinforcing the classical knowledge-transfer view.
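
To make the comparison above concrete, here is a minimal PyTorch sketch (not from the paper) of the two training objectives whose claimed equivalence the paper re-examines: cross-entropy against a smoothed label distribution (LS), and the standard KD objective of Hinton et al. (2015). The hyperparameter values (eps, alpha, T) and tensor shapes are illustrative assumptions.

```python
# A minimal sketch, assuming the standard formulations of LS and KD.
# Hyperparameter values below are illustrative, not from the paper.
import torch
import torch.nn.functional as F

def label_smoothing_loss(student_logits, labels, eps=0.1):
    """Cross-entropy against a smoothed target: mass (1 - eps) on the
    gold class, with eps spread uniformly over all K classes."""
    K = student_logits.size(-1)
    one_hot = F.one_hot(labels, K).float()
    smoothed = (1.0 - eps) * one_hot + eps / K
    return -(smoothed * F.log_softmax(student_logits, dim=-1)).sum(-1).mean()

def distillation_loss(student_logits, teacher_logits, labels, alpha=0.5, T=2.0):
    """Standard KD objective (Hinton et al., 2015): a mix of hard-label
    cross-entropy and KL divergence to the temperature-softened teacher,
    scaled by T^2 to keep gradient magnitudes comparable."""
    hard = F.cross_entropy(student_logits, labels)
    soft = F.kl_div(F.log_softmax(student_logits / T, dim=-1),
                    F.softmax(teacher_logits / T, dim=-1),
                    reduction="batchmean") * (T * T)
    return (1.0 - alpha) * hard + alpha * soft

# Hypothetical usage with random logits (batch=8, K=3 classes):
logits_s = torch.randn(8, 3)   # student logits
logits_t = torch.randn(8, 3)   # teacher logits
y = torch.randint(0, 3, (8,))
print(label_smoothing_loss(logits_s, y))
print(distillation_loss(logits_s, logits_t, y))
```

The "KD ≈ LS" analogy rests on the observation that both methods replace the hard one-hot target with a softened distribution; the paper's experiments probe where that analogy breaks down by tracking the predictive confidence each objective induces.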
Anthology ID:
2023.emnlp-main.271
Volume:
Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing
Month:
December
Year:
2023
Address:
Singapore
Editors:
Houda Bouamor, Juan Pino, Kalika Bali
Venue:
EMNLP
Publisher:
Association for Computational Linguistics
Pages:
4469–4477
URL:
https://aclanthology.org/2023.emnlp-main.271
DOI:
10.18653/v1/2023.emnlp-main.271
Cite (ACL):
Md Arafat Sultan. 2023. Knowledge Distillation ≈ Label Smoothing: Fact or Fallacy?. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pages 4469–4477, Singapore. Association for Computational Linguistics.
Cite (Informal):
Knowledge Distillation ≈ Label Smoothing: Fact or Fallacy? (Sultan, EMNLP 2023)
PDF:
https://aclanthology.org/2023.emnlp-main.271.pdf
Video:
https://aclanthology.org/2023.emnlp-main.271.mp4