Knowledge Generation for Zero-shot Knowledge-based VQA

Rui Cao, Jing Jiang


Abstract
Previous solutions to knowledge-based visual question answering (K-VQA) retrieve knowledge from external knowledge bases and use supervised learning to train the K-VQA model. Recently, pre-trained LLMs have been used as both a knowledge source and a zero-shot QA model for K-VQA, demonstrating promising results. However, these recent methods do not explicitly show the knowledge needed to answer the questions and thus lack interpretability. Inspired by recent work on knowledge generation from LLMs for text-based QA, in this work we propose and test a similar knowledge-generation-based K-VQA method, which first generates knowledge from an LLM and then incorporates the generated knowledge for K-VQA in a zero-shot manner. We evaluate our method on two K-VQA benchmarks and find that it performs better than previous zero-shot K-VQA methods and that the generated knowledge is generally relevant and helpful.
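The two-stage pipeline described in the abstract (generate knowledge from an LLM, then answer zero-shot conditioned on that knowledge) could be sketched roughly as follows. This is an illustrative assumption, not the authors' implementation: the prompt templates, the `llm` callable, and the use of an image caption as a textual stand-in for the image are all hypothetical choices.

```python
# Hypothetical sketch of a knowledge-generation-based K-VQA pipeline:
# stage 1 prompts an LLM for question-related background knowledge; stage 2
# feeds that knowledge (plus a caption standing in for the image) back to the
# LLM to answer in a zero-shot manner. `llm` is any text-completion callable.

def generate_knowledge(llm, caption: str, question: str, n: int = 3) -> list:
    """Stage 1: elicit n pieces of background knowledge from the LLM."""
    prompt = (
        "Generate background knowledge helpful for answering the question.\n"
        f"Context: {caption}\nQuestion: {question}\nKnowledge:"
    )
    return [llm(prompt) for _ in range(n)]

def answer_with_knowledge(llm, caption: str, question: str, knowledge: list) -> str:
    """Stage 2: answer zero-shot, conditioning on the generated knowledge."""
    facts = "\n".join(f"- {k}" for k in knowledge)
    prompt = (
        f"Context: {caption}\nKnowledge:\n{facts}\n"
        f"Question: {question}\nAnswer:"
    )
    return llm(prompt).strip()

# Toy stand-in for a real LLM so the sketch runs end to end; in practice
# this would be a call to a pre-trained language model.
def toy_llm(prompt: str) -> str:
    if prompt.rstrip().endswith("Knowledge:"):
        return "Fire hydrants supply water to firefighters."
    return "firefighters"

caption = "A red fire hydrant on a sidewalk."
question = "Who uses the object in the image?"
ks = generate_knowledge(toy_llm, caption, question)
print(answer_with_knowledge(toy_llm, caption, question, ks))  # prints "firefighters"
```

Because both stages are plain prompting, no K-VQA training data is needed, which is what makes the method zero-shot; the intermediate knowledge strings are also human-readable, which is the interpretability benefit the abstract highlights.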
Anthology ID:
2024.findings-eacl.36
Volume:
Findings of the Association for Computational Linguistics: EACL 2024
Month:
March
Year:
2024
Address:
St. Julian’s, Malta
Editors:
Yvette Graham, Matthew Purver
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
533–549
URL:
https://aclanthology.org/2024.findings-eacl.36
Cite (ACL):
Rui Cao and Jing Jiang. 2024. Knowledge Generation for Zero-shot Knowledge-based VQA. In Findings of the Association for Computational Linguistics: EACL 2024, pages 533–549, St. Julian’s, Malta. Association for Computational Linguistics.
Cite (Informal):
Knowledge Generation for Zero-shot Knowledge-based VQA (Cao & Jiang, Findings 2024)
PDF:
https://aclanthology.org/2024.findings-eacl.36.pdf
Software:
 2024.findings-eacl.36.software.zip