Toward Zero-Shot Instruction Following

Renze Lou, Wenpeng Yin


Abstract
This work proposes a challenging yet more realistic setting for zero-shot cross-task generalization: zero-shot instruction following, which presumes the existence of a paragraph-style task definition but no demonstrations. To better learn task supervision from the definition, we propose two strategies: first, automatically identifying the critical sentences in the definition; second, applying a ranking objective that forces the model to generate the gold outputs with higher probabilities when those critical parts are highlighted in the definition. Together, the two strategies yield state-of-the-art performance on Super-NaturalInstructions. Our code is available on GitHub.
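The ranking objective can be sketched as a margin-based comparison: the gold output's (log-)probability under the definition with critical sentences highlighted should exceed its probability under the plain definition. The sketch below is illustrative only; the function name, margin value, and scalar inputs are assumptions, not the paper's exact formulation.

```python
def ranking_loss(logp_highlighted: float, logp_plain: float, margin: float = 0.001) -> float:
    """Margin ranking loss over two log-probabilities of the gold output.

    logp_highlighted: log-prob of the gold output given the definition
                      with critical sentences highlighted.
    logp_plain:       log-prob given the unmodified definition.
    Penalizes the model when highlighting does not raise the gold
    output's probability by at least `margin` (value is illustrative).
    """
    return max(0.0, margin - (logp_highlighted - logp_plain))

# Toy usage: if highlighting already wins by a clear margin, loss is zero;
# otherwise the loss is positive and pushes the two scores apart.
print(ranking_loss(-1.2, -1.8))  # highlighted version more probable -> 0.0
print(ranking_loss(-1.8, -1.2))  # highlighted version less probable -> positive
```

In practice such a term would be added to the standard generation (cross-entropy) loss and computed from length-normalized sequence log-probabilities.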
Anthology ID:
2024.eacl-srw.5
Volume:
Proceedings of the 18th Conference of the European Chapter of the Association for Computational Linguistics: Student Research Workshop
Month:
March
Year:
2024
Address:
St. Julian’s, Malta
Editors:
Neele Falk, Sara Papi, Mike Zhang
Venue:
EACL
Publisher:
Association for Computational Linguistics
Pages:
50–60
URL:
https://aclanthology.org/2024.eacl-srw.5
Cite (ACL):
Renze Lou and Wenpeng Yin. 2024. Toward Zero-Shot Instruction Following. In Proceedings of the 18th Conference of the European Chapter of the Association for Computational Linguistics: Student Research Workshop, pages 50–60, St. Julian’s, Malta. Association for Computational Linguistics.
Cite (Informal):
Toward Zero-Shot Instruction Following (Lou & Yin, EACL 2024)
PDF:
https://aclanthology.org/2024.eacl-srw.5.pdf