SynHLMA: Synthesizing Hand Language Manipulation for Articulated Objects with a Discrete Human-Object Interaction Representation
This paper introduces SynHLMA, a novel framework that synthesizes hand manipulation sequences for articulated objects by aligning natural language instructions with a discrete human-object interaction representation, enabling robust grasp generation, prediction, and interpolation for applications in embodied AI and robotics.