Relation-Aware Image Captioning with Hybrid-Attention for Explainable Visual Question Answering

Ying Jia Lin, Ching Shan Tseng, Hung Yu Kao*

*Corresponding author for this work

Research output: Contribution to journal › Journal Article › peer-review

Abstract

Recent studies that leverage object detection as a preliminary step for Visual Question Answering (VQA) ignore the relationships between different objects inside an image with respect to the textual question. In addition, previous VQA models work like black-box functions, making it difficult to explain why a model provides a particular answer for the corresponding inputs. To address the issues above, we propose a new model structure to strengthen the representations of different objects and provide explainability for the VQA task. We construct a relation graph to capture the relative positions between region pairs and then create relation-aware visual features with a relation encoder based on graph attention networks. To make the final VQA predictions explainable, we introduce a multi-task learning framework with an additional explanation generator that helps our model produce reasonable explanations. Simultaneously, the generated explanations are incorporated with the visual features through a novel Hybrid-Attention mechanism to enhance cross-modal understanding. Experiments show that the proposed method performs better on the VQA task than several baselines. In addition, incorporating the explanation generator allows the model to provide reasonable explanations along with the predicted answers.
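The sketch below illustrates the two core ideas named in the abstract: a graph-attention relation encoder over detected object regions and a Hybrid-Attention style fusion of generated explanation tokens with the relation-aware visual features. It is a minimal, hypothetical reading of the abstract, not the authors' implementation; all module names, dimensions, and the gating design are assumptions introduced here for illustration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class RelationAwareEncoder(nn.Module):
    """One graph-attention layer over object regions (illustrative sketch).

    `adj` is a [B, N, N] mask derived from relative spatial positions:
    1 where a relation edge exists between two regions, 0 otherwise.
    """

    def __init__(self, dim: int):
        super().__init__()
        self.proj = nn.Linear(dim, dim)
        self.attn = nn.Linear(2 * dim, 1)

    def forward(self, regions: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        B, N, D = regions.shape
        h = self.proj(regions)                                  # [B, N, D]
        # Pairwise concatenation of node features to score each edge.
        hi = h.unsqueeze(2).expand(B, N, N, D)
        hj = h.unsqueeze(1).expand(B, N, N, D)
        e = F.leaky_relu(self.attn(torch.cat([hi, hj], dim=-1))).squeeze(-1)
        e = e.masked_fill(adj == 0, float("-inf"))              # keep only graph edges
        alpha = torch.softmax(e, dim=-1)                        # attention over neighbours
        return alpha @ h + regions                              # residual update


class HybridAttentionFusion(nn.Module):
    """Cross-attention from relation-aware regions to explanation tokens,
    followed by a gated combination of the two modalities (assumed design)."""

    def __init__(self, dim: int, heads: int = 4):
        super().__init__()
        self.cross = nn.MultiheadAttention(dim, num_heads=heads, batch_first=True)
        self.gate = nn.Linear(2 * dim, dim)

    def forward(self, regions: torch.Tensor, expl_tokens: torch.Tensor) -> torch.Tensor:
        attended, _ = self.cross(regions, expl_tokens, expl_tokens)
        g = torch.sigmoid(self.gate(torch.cat([regions, attended], dim=-1)))
        return g * regions + (1 - g) * attended


if __name__ == "__main__":
    B, N, T, D = 2, 36, 12, 256
    regions = torch.randn(B, N, D)                  # detected-object features
    # Toy spatial-relation graph with self-loops so every row has a neighbour.
    adj = ((torch.rand(B, N, N) > 0.5) | torch.eye(N, dtype=torch.bool)).long()
    expl = torch.randn(B, T, D)                     # embedded explanation tokens
    fused = HybridAttentionFusion(D)(RelationAwareEncoder(D)(regions, adj), expl)
    print(fused.shape)                              # torch.Size([2, 36, 256])
```

In this reading, the gate decides per region how much of the explanation-conditioned signal to mix into the visual representation; the actual paper may use a different fusion rule and multi-task losses not shown here.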

Original language: English
Pages (from-to): 649-659
Number of pages: 11
Journal: Journal of Information Science and Engineering
Volume: 40
Issue number: 3
DOIs
State: Published - May 2024
Externally published: Yes

Bibliographical note

Publisher Copyright:
© 2024 Institute of Information Science. All rights reserved.

Keywords

  • explainable VQA
  • graph attention networks
  • multi-task learning
  • vision-language model
  • visual question answering
