Interpretable Multimodal Gesture Recognition for Drone and Mobile Robot Teleoperation via Log-Likelihood Ratio Fusion
This paper proposes an interpretable multimodal gesture recognition framework that fuses inertial and capacitive sensor data via log-likelihood ratios, enabling robust, real-time, hands-free teleoperation of drones and mobile robots. The framework is supported by a new gesture dataset and achieves recognition performance comparable to vision-based methods at a significantly lower computational cost.
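The core fusion idea can be illustrated with a minimal sketch. Assuming the two modalities are conditionally independent given the gesture class, per-class log-likelihoods from each sensor stream simply add, and the fused decision is the argmax. The function name `llr_fuse` and the toy scores below are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def llr_fuse(log_lik_imu, log_lik_cap):
    """Fuse per-class log-likelihoods from two sensor modalities.

    Under conditional independence given the gesture class, the joint
    log-likelihood is the sum of the per-modality log-likelihoods;
    the predicted class is the argmax of the fused scores.
    """
    fused = np.asarray(log_lik_imu) + np.asarray(log_lik_cap)
    return int(np.argmax(fused)), fused

# Toy example: 3 gesture classes with illustrative log-likelihoods.
imu = [-1.2, -0.4, -2.0]   # inertial stream favors class 1
cap = [-0.9, -1.1, -3.0]   # capacitive stream slightly favors class 0
pred, fused = llr_fuse(imu, cap)  # fused = [-2.1, -1.5, -5.0] -> class 1
```

Because the decision reduces to a sum of per-modality scores, each modality's contribution to the final prediction can be inspected directly, which is one way such a fusion rule supports interpretability.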