Uncertainty Quantification for Multimodal Large Language Models with Incoherence-adjusted Semantic Volume
This paper introduces UMPIRE, a training-free, efficient uncertainty quantification framework for Multimodal Large Language Models (MLLMs). UMPIRE leverages internal modality features to compute incoherence-adjusted semantic volumes, and demonstrates superior error detection and calibration across diverse modalities and challenging settings without relying on external tools.