Listening with the Eyes: Benchmarking Egocentric Co-Speech Grounding across Space and Time
This paper introduces EcoG-Bench, a rigorous bilingual benchmark for egocentric co-speech grounding that reveals a significant performance gap between humans and state-of-the-art MLLMs, and it highlights that multimodal interface limitations, rather than reasoning deficits, hinder the alignment of speech with pointing gestures in situated collaboration.