Attribute Distribution Modeling and Semantic-Visual Alignment for Generative Zero-shot Learning
This paper proposes ADiVA, a generative zero-shot learning framework that addresses two domain gaps: the class-instance gap and the semantic-visual gap. ADiVA jointly models attribute distributions to capture instance-specific variability and employs visual-guided alignment to refine semantic representations. Experiments on benchmark datasets show that ADiVA significantly outperforms state-of-the-art methods.
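One illustrative reading of "modeling attribute distributions to capture instance-specific variability" is to treat each class-level semantic vector as the mean of a per-class Gaussian and sample instance-specific attribute vectors from it. The sketch below is only a minimal, hypothetical interpretation (the function name, the Gaussian parameterization, and the 85-dimensional attribute size are assumptions, not the paper's actual design):

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_instance_attributes(class_attr_mean, class_attr_logvar, n_samples):
    """Draw instance-specific attribute vectors from a per-class Gaussian.

    This is an illustrative sketch of attribute distribution modeling;
    ADiVA's actual parameterization may differ.
    """
    std = np.exp(0.5 * class_attr_logvar)                      # log-variance -> std
    eps = rng.standard_normal((n_samples, class_attr_mean.shape[0]))
    return class_attr_mean + eps * std                         # reparameterized samples

# Hypothetical class-level semantic vector (e.g., an 85-dim attribute annotation)
mean = rng.random(85)
logvar = np.full(85, -2.0)  # small variance around the class prototype
samples = sample_instance_attributes(mean, logvar, n_samples=4)
print(samples.shape)  # (4, 85)
```

Such sampled attribute vectors could then condition a generator so that synthesized features vary per instance rather than collapsing to one point per class.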