Cross-Modal Attribute Insertions for Assessing the Robustness of Vision-and-Language Learning

Shivaen Ramshetty, Gaurav Verma, Srijan Kumar


Abstract
The robustness of multimodal deep learning models to realistic changes in the input text is critical for applicability on important tasks such as text-to-image retrieval and cross-modal entailment. To measure robustness, several existing approaches edit the text data, but without leveraging the cross-modal information present in multimodal data. Such information from the visual modality, such as color, size, and shape, provides additional attributes that users can include in their inputs. Thus, we propose cross-modal attribute insertions as a realistic perturbation strategy for vision-and-language data that inserts visual attributes of the objects in the image into the corresponding text (e.g., “girl on a chair” to “little girl on a wooden chair”). Our proposed approach for cross-modal attribute insertions is modular, controllable, and task-agnostic. We find that augmenting input text using cross-modal insertions causes state-of-the-art approaches for text-to-image retrieval and cross-modal entailment to perform poorly, resulting in relative drops of ~15% in MRR and ~20% in F1 score, respectively. Crowd-sourced annotations demonstrate that cross-modal insertions lead to higher quality augmentations for multimodal data than augmentations using text-only data, and are equivalent in quality to original examples. We release the code to encourage robustness evaluations of deep vision-and-language models: https://github.com/claws-lab/multimodal-robustness-xmai
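To make the perturbation concrete, the following is a minimal, hypothetical Python sketch of a cross-modal attribute insertion; it is not the authors' released implementation (see the repository linked above for that). The object-to-attribute mapping is assumed to be supplied by an attribute detector run on the paired image; here it is simply passed in as a dictionary.

import re

def insert_attributes(caption: str, object_attributes: dict[str, str]) -> str:
    """Prepend a visual attribute to each matching object word in the caption.

    caption           -- original text, e.g., "girl on a chair"
    object_attributes -- object word -> attribute extracted from the image,
                         e.g., {"girl": "little", "chair": "wooden"} (assumed
                         to come from an image attribute detector)
    """
    perturbed = caption
    for obj, attr in object_attributes.items():
        # Whole-word match so "chair" does not fire inside "chairs", etc.
        pattern = rf"\b{re.escape(obj)}\b"
        # Insert the attribute immediately before the first occurrence of the object.
        perturbed = re.sub(pattern, f"{attr} {obj}", perturbed, count=1)
    return perturbed

if __name__ == "__main__":
    # Reproduces the example from the abstract:
    # "girl on a chair" -> "little girl on a wooden chair"
    print(insert_attributes("girl on a chair", {"girl": "little", "chair": "wooden"}))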
Anthology ID:
2023.acl-long.890
Volume:
Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
Month:
July
Year:
2023
Address:
Toronto, Canada
Editors:
Anna Rogers, Jordan Boyd-Graber, Naoaki Okazaki
Venue:
ACL
Publisher:
Association for Computational Linguistics
Pages:
15974–15990
URL:
https://aclanthology.org/2023.acl-long.890
DOI:
10.18653/v1/2023.acl-long.890
Cite (ACL):
Shivaen Ramshetty, Gaurav Verma, and Srijan Kumar. 2023. Cross-Modal Attribute Insertions for Assessing the Robustness of Vision-and-Language Learning. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 15974–15990, Toronto, Canada. Association for Computational Linguistics.
Cite (Informal):
Cross-Modal Attribute Insertions for Assessing the Robustness of Vision-and-Language Learning (Ramshetty et al., ACL 2023)
PDF:
https://aclanthology.org/2023.acl-long.890.pdf
Video:
https://aclanthology.org/2023.acl-long.890.mp4