Query-Guided Spatial-Temporal-Frequency Interaction for Music Audio-Visual Question Answering
This paper proposes QSTar, a query-guided spatial-temporal-frequency interaction method for music Audio-Visual Question Answering, enhanced by a Query Context Reasoning block. By deeply integrating question-guided cues and audio frequency characteristics with visual perception, QSTar significantly improves Audio-Visual Question Answering performance, outperforming existing multimodal approaches on multiple benchmarks.
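The core idea of query-guided interaction is to let the question embedding steer which spatial-temporal visual features and frequency-domain audio features are attended to before fusion. A minimal sketch of this pattern, assuming a simple dot-product attention and randomly initialized feature tensors (the function names and dimensions here are illustrative, not the paper's actual architecture):

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def query_guided_attention(query, feats):
    """Attend over a feature sequence guided by the question embedding.

    query: (d,) question embedding
    feats: (T, d) per-step features (e.g. temporal visual or frequency audio)
    returns: (d,) question-conditioned summary of the sequence
    """
    scores = feats @ query / np.sqrt(feats.shape[-1])  # (T,) relevance to the question
    weights = softmax(scores)                          # (T,) attention distribution
    return weights @ feats                             # weighted sum over steps

rng = np.random.default_rng(0)
d, T = 8, 5
question = rng.normal(size=d)          # hypothetical question embedding
visual = rng.normal(size=(T, d))       # spatial-temporal visual features
audio_freq = rng.normal(size=(T, d))   # frequency-domain audio features

# Fuse the two question-conditioned summaries by concatenation
fused = np.concatenate([
    query_guided_attention(question, visual),
    query_guided_attention(question, audio_freq),
])
print(fused.shape)  # (16,)
```

This only illustrates the query-guided attention pattern; the paper's method additionally reasons over spatial, temporal, and frequency axes jointly via its Query Context Reasoning block.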