LaVCa: LLM-assisted Visual Cortex Captioning
The paper proposes LaVCa, a data-driven approach that uses large language models (LLMs) to generate natural-language captions for the images to which individual voxels in the human visual cortex are selective. Compared with existing deep neural network-based methods, these captions provide more accurate and more detailed interpretations of voxel selectivity and reveal fine-grained functional differentiation within the visual cortex.
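To make the idea concrete, below is a minimal, heavily simplified sketch of what captioning a voxel's selectivity could look like: pick the stimulus images an encoding model predicts the voxel responds to most strongly, caption those images, and ask an LLM to summarize the shared content into one voxel-level description. All function and variable names here are hypothetical placeholders, not the authors' implementation.

```python
# Illustrative sketch only: names and pipeline details are assumptions,
# not LaVCa's actual code.
import numpy as np


def top_images_for_voxel(pred_responses: np.ndarray, voxel: int, k: int = 10) -> np.ndarray:
    """Return indices of the k stimuli with the highest predicted response
    for one voxel (pred_responses has shape [n_images, n_voxels])."""
    return np.argsort(pred_responses[:, voxel])[::-1][:k]


def caption_voxel(image_captions: list[str], summarize) -> str:
    """Collapse per-image captions into a single voxel caption using an
    LLM-backed `summarize` callable supplied by the caller."""
    prompt = (
        "These captions describe images that strongly drive one visual-cortex voxel. "
        "Summarize their shared visual content in one sentence:\n"
        + "\n".join(f"- {c}" for c in image_captions)
    )
    return summarize(prompt)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    pred = rng.normal(size=(1000, 5))            # toy encoding-model predictions
    idx = top_images_for_voxel(pred, voxel=3)    # images the voxel "prefers"
    captions = [f"caption of image {i}" for i in idx]  # stand-in for an image captioner
    print(caption_voxel(captions, summarize=lambda p: "a toy summary"))
```

The point of the sketch is only the overall structure (select preferred images, caption, summarize); the paper's actual pipeline and models may differ.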