Learning to Compose Representations of Different Encoder Layers towards Improving Compositional Generalization

Lei Lin, Shuangtao Li, Yafang Zheng, Biao Fu, Shan Liu, Yidong Chen, Xiaodong Shi


Abstract
Recent studies have shown that sequence-to-sequence (seq2seq) models struggle with compositional generalization (CG), i.e., the ability to systematically generalize to unseen compositions of seen components. There is mounting evidence that one of the obstacles to CG is that the representation produced by the encoder's uppermost layer is entangled, i.e., the syntactic and semantic representations of a sequence are intertwined. We argue that this previously identified entanglement problem is not the whole picture: we further hypothesize that the source key and value representations passed into different decoder layers are also entangled. Starting from this intuition, and motivated by recent findings that the bottom layers of a Transformer encoder carry more syntactic information while the top layers carry more semantic information, we propose CompoSition (Compose Syntactic and Semantic Representations), an extension to seq2seq models that learns to compose the representations of different encoder layers dynamically for different tasks. Specifically, we introduce a composed layer between the encoder and decoder that combines the encoder layers' representations to generate layer-specific keys and values for each decoder layer. CompoSition achieves competitive results on two comprehensive and realistic benchmarks, empirically demonstrating the effectiveness of our proposal. Code is available at https://github.com/thinkaboutzero/COMPOSITION.
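The abstract's description of the composed layer suggests a simple reading: for each decoder layer, the model learns a normalized weighting over the outputs of all encoder layers, and the weighted sum serves as that decoder layer's source key/value representation. The sketch below illustrates this reading in PyTorch; the class name ComposedLayer, the softmax parameterization, and the tensor shapes are assumptions for illustration, not the authors' exact implementation (see the linked repository for that).

```python
import torch
import torch.nn as nn


class ComposedLayer(nn.Module):
    """Hypothetical sketch: learns, for each decoder layer, a
    softmax-normalized weighting over the stacked outputs of all
    encoder layers; the weighted sum becomes that decoder layer's
    source key/value representation."""

    def __init__(self, num_encoder_layers: int, num_decoder_layers: int):
        super().__init__()
        # One learnable weight vector over encoder layers per decoder
        # layer (assumed parameterization; zeros = uniform mixture).
        self.layer_logits = nn.Parameter(
            torch.zeros(num_decoder_layers, num_encoder_layers)
        )

    def forward(self, encoder_states: torch.Tensor) -> torch.Tensor:
        # encoder_states: (L_enc, batch, seq_len, d_model), i.e. every
        # encoder layer's output stacked along the first dimension.
        weights = torch.softmax(self.layer_logits, dim=-1)  # (L_dec, L_enc)
        # Composed source representations, one per decoder layer:
        # shape (L_dec, batch, seq_len, d_model).
        return torch.einsum("de,ebsh->dbsh", weights, encoder_states)
```

Under this reading, decoder layer j would compute its cross-attention keys and values from the j-th composed representation rather than from the final encoder state alone, letting lower (more syntactic) and upper (more semantic) encoder layers contribute in different proportions at different decoding depths.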
Anthology ID:
2023.findings-emnlp.108
Volume:
Findings of the Association for Computational Linguistics: EMNLP 2023
Month:
December
Year:
2023
Address:
Singapore
Editors:
Houda Bouamor, Juan Pino, Kalika Bali
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
1599–1614
URL:
https://aclanthology.org/2023.findings-emnlp.108
DOI:
10.18653/v1/2023.findings-emnlp.108
Cite (ACL):
Lei Lin, Shuangtao Li, Yafang Zheng, Biao Fu, Shan Liu, Yidong Chen, and Xiaodong Shi. 2023. Learning to Compose Representations of Different Encoder Layers towards Improving Compositional Generalization. In Findings of the Association for Computational Linguistics: EMNLP 2023, pages 1599–1614, Singapore. Association for Computational Linguistics.
Cite (Informal):
Learning to Compose Representations of Different Encoder Layers towards Improving Compositional Generalization (Lin et al., Findings 2023)
PDF:
https://aclanthology.org/2023.findings-emnlp.108.pdf