Controllable Dance Generation with Style-Guided Motion Diffusion
This paper proposes Style-Guided Motion Diffusion (SGMD), a framework that combines a Transformer-based diffusion backbone with a Style Modulation module and spatial-temporal masking. SGMD generates realistic, music-aligned dance sequences that remain stylistically consistent, while the masking mechanism enables flexible control for tasks such as trajectory-conditioned generation, motion in-betweening, and inpainting.
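To illustrate the control tasks mentioned above, here is a minimal sketch of how spatial-temporal masking is typically used in motion diffusion editing: at each denoising step, entries marked by the mask are overwritten with the observed motion so the model only generates the unconstrained part. All names and shapes here are hypothetical, not taken from the paper.

```python
import numpy as np

def apply_spatial_temporal_mask(x_generated, x_observed, mask):
    """Blend generated and observed motion.

    mask == 1 marks constrained entries (kept from the observation);
    mask == 0 marks free entries (filled in by the model).
    """
    return mask * x_observed + (1 - mask) * x_generated

# Hypothetical motion tensor: (frames, joints, xyz).
T, J = 8, 4
x_obs = np.zeros((T, J, 3))                 # known keyframes / trajectory
rng = np.random.default_rng(0)
x_gen = rng.standard_normal((T, J, 3))      # model's denoised proposal at one diffusion step

# In-betweening: constrain the first and last frames, generate the middle.
mask = np.zeros((T, J, 3))
mask[0] = 1.0
mask[-1] = 1.0

x_next = apply_spatial_temporal_mask(x_gen, x_obs, mask)
```

The same masking pattern covers the other tasks: a mask over root-joint positions gives trajectory control, and a mask over an arbitrary set of frames or joints gives inpainting.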