MV-Fashion: Towards Enabling Virtual Try-On and Size Estimation with Multi-View Paired Data
This paper introduces MV-Fashion, a large-scale multi-view video dataset comprising 3,273 sequences with pixel-level annotations, ground-truth material properties, and paired flat/worn garment images. The dataset is designed to close the realism and annotation gaps in existing datasets for virtual try-on and size estimation.