Induced Numerical Instability: Hidden Costs in Multimodal Large Language Models
This paper introduces a novel attack that induces numerical instability in multimodal large language models (MLLMs). By optimizing a dedicated loss function, the attack generates images that cause significant performance degradation across state-of-the-art models and datasets; unlike traditional adversarial perturbations, the effect stems from numerical instability rather than semantic misdirection.
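The abstract does not give the paper's actual loss or models, but the general recipe — gradient-based optimization of image pixels against an objective that provokes instability — can be illustrated with a toy sketch. Everything here is a hypothetical stand-in: the tiny linear "encoder", the `instability_loss` objective (pushing activation magnitudes toward extremes, one plausible way to provoke overflow or NaNs downstream), and the finite-difference optimizer.

```python
# Toy sketch: gradient-ascent image optimization against an
# instability-style objective. All names and the loss are hypothetical
# illustrations, not the paper's method.
import random

def encoder(pixels, weights):
    # Hypothetical linear "vision encoder": one activation per weight row.
    return [sum(w * p for w, p in zip(row, pixels)) for row in weights]

def instability_loss(acts):
    # Hypothetical objective: mean squared activation magnitude.
    # Driving activations to extreme values is one way numerical
    # instability (overflow, NaN propagation) can be triggered downstream.
    return sum(a * a for a in acts) / len(acts)

def optimize_image(pixels, weights, steps=100, lr=0.1, eps=1e-4):
    # Finite-difference gradient ascent on the loss, with pixels
    # clamped to the valid [0, 1] range after each step.
    for _ in range(steps):
        base = instability_loss(encoder(pixels, weights))
        grad = []
        for i in range(len(pixels)):
            bumped = pixels[:]
            bumped[i] += eps
            grad.append((instability_loss(encoder(bumped, weights)) - base) / eps)
        pixels = [min(1.0, max(0.0, p + lr * g)) for p, g in zip(pixels, grad)]
    return pixels

random.seed(0)
weights = [[random.uniform(-1, 1) for _ in range(8)] for _ in range(4)]
image = [0.5] * 8                      # start from a neutral gray image
adv = optimize_image(image, weights)   # optimized "attack" image
before = instability_loss(encoder(image, weights))
after = instability_loss(encoder(adv, weights))
```

In a realistic setting the encoder would be a real vision tower, the gradient would come from automatic differentiation, and the loss would be the paper's own objective; the sketch only conveys the optimization loop's shape.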