Scaling Test-Time Robustness of Vision-Language Models via Self-Critical Inference Framework
This paper proposes Self-Critical Inference (SCI), a framework that improves the robustness of Large Vision-Language Models to language bias and input sensitivity through multi-round counterfactual reasoning over textual and visual perturbations. It also introduces the Dynamic Robustness Benchmark (DRBench) for model-specific robustness evaluation.
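The abstract describes multi-round inference in which the model's answer is checked against its own answers under perturbed inputs. The following is a minimal hypothetical sketch of that idea, not the paper's actual implementation: `query_model`, the perturbation functions, and the consistency-voting critique step are all illustrative assumptions.

```python
# Hypothetical sketch of multi-round counterfactual inference with
# textual and visual perturbations. All functions here are toy
# placeholders standing in for a real VLM and real perturbations.
from collections import Counter
import random


def perturb_text(question: str, rng: random.Random) -> str:
    # Toy textual perturbation: rephrase the question via a template.
    templates = ["{} Please answer concisely.", "Consider the image: {}", "{}"]
    return rng.choice(templates).format(question)


def perturb_image(image, rng: random.Random):
    # Toy visual perturbation: add small Gaussian pixel noise
    # (the "image" is just a list of floats in [0, 1]).
    return [max(0.0, min(1.0, p + rng.gauss(0, 0.05))) for p in image]


def query_model(image, question: str) -> str:
    # Stub VLM: answers from mean brightness, ignoring the question.
    return "bright" if sum(image) / len(image) > 0.5 else "dark"


def self_critical_inference(image, question: str, rounds: int = 5, seed: int = 0):
    # Query once on the original input, then on several perturbed variants.
    rng = random.Random(seed)
    answers = [query_model(image, question)]
    for _ in range(rounds):
        answers.append(
            query_model(perturb_image(image, rng), perturb_text(question, rng))
        )
    # Self-critique step: keep the answer that is most stable under
    # perturbation, and report how consistent the model was.
    answer, count = Counter(answers).most_common(1)[0]
    return answer, count / len(answers)


if __name__ == "__main__":
    img = [0.8] * 16  # toy 16-"pixel" image
    ans, consistency = self_critical_inference(img, "Is the scene bright or dark?")
    print(ans, round(consistency, 2))
```

A robust model, by this criterion, is one whose majority answer survives both kinds of perturbation; the consistency score could then feed a benchmark such as the DRBench evaluation the abstract mentions.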