G-SPEED: General SParse Efficient Editing MoDel

Haoke Zhang, Yue Wang, Juntao Li, Xiabing Zhou, Min Zhang


Abstract
Large Language Models (LLMs) have demonstrated remarkable capabilities in understanding, generating, and manipulating language. Through human-model interaction, LLMs can automatically interpret human-issued instructions and produce the expected output, which can significantly improve working efficiency. Among real-world demands, editing-oriented tasks account for a considerable proportion; they involve an interactive process of continuously refining existing text until it meets specific criteria. Because such tasks require multi-round human-model interaction and complicated edits, there is a pressing need for efficient general editing models. In this paper, we propose the General SParse Efficient Editing MoDel (G-SPEED), which fulfills diverse editing requirements with a single model while keeping computational costs low. Specifically, we first propose a novel unsupervised clustering algorithm for text editing data to address the data scarcity problem. We then introduce a sparse editing model architecture to mitigate the inherently limited learning capacity of small language models. Experimental results show that G-SPEED, with 508M parameters, can surpass LLMs with 175B parameters. Our code and model checkpoints are available at https://github.com/Banner-Z/G-SPEED.
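To make the "sparse editing model architecture" mentioned in the abstract concrete, below is a minimal PyTorch sketch of an expert-routed feed-forward layer of the kind such architectures typically use: each editing intent activates its own expert sub-network while the rest of the Transformer is shared, so per-input compute stays constant as the number of supported editing tasks grows. The class name, layer sizes, and hard routing by a known task id are illustrative assumptions, not the paper's actual implementation; consult the repository above for the real model.

```python
import torch
import torch.nn as nn

class SparseEditingFFN(nn.Module):
    """Hypothetical sparse feed-forward block: one expert FFN per editing intent."""

    def __init__(self, d_model: int = 768, d_ff: int = 3072, num_experts: int = 4):
        super().__init__()
        # One feed-forward expert per editing intent; only one runs per input.
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, d_ff), nn.GELU(), nn.Linear(d_ff, d_model))
            for _ in range(num_experts)
        )

    def forward(self, hidden: torch.Tensor, task_id: int) -> torch.Tensor:
        # Hard routing on a known task id: compute cost is that of a single
        # dense FFN, regardless of how many editing tasks the model supports.
        return self.experts[task_id](hidden)

# Usage: route a batch of token states through expert 1 (e.g., a "clarity" edit).
layer = SparseEditingFFN()
states = torch.randn(2, 16, 768)   # (batch, seq_len, d_model)
out = layer(states, task_id=1)     # output has the same shape as the input
```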
Anthology ID:
2023.findings-emnlp.142
Volume:
Findings of the Association for Computational Linguistics: EMNLP 2023
Month:
December
Year:
2023
Address:
Singapore
Editors:
Houda Bouamor, Juan Pino, Kalika Bali
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
2160–2175
URL:
https://aclanthology.org/2023.findings-emnlp.142
DOI:
10.18653/v1/2023.findings-emnlp.142
Cite (ACL):
Haoke Zhang, Yue Wang, Juntao Li, Xiabing Zhou, and Min Zhang. 2023. G-SPEED: General SParse Efficient Editing MoDel. In Findings of the Association for Computational Linguistics: EMNLP 2023, pages 2160–2175, Singapore. Association for Computational Linguistics.
Cite (Informal):
G-SPEED: General SParse Efficient Editing MoDel (Zhang et al., Findings 2023)
PDF:
https://aclanthology.org/2023.findings-emnlp.142.pdf