Continuous prosthetic control from EMG

In this project, I develop machine learning strategies to enable continuous control of prosthetic limbs using signals from implanted electromyography (EMG) electrodes. I work with both upper- and lower-limb amputees, aiming to decode natural movement intent, such as individual finger or leg motions, without direct access to ground-truth kinematics. This is a fundamental challenge in prosthetics: because the limb is absent, human users cannot produce ground-truth motion data for training decoders, so we must rely on indirect cues and carefully designed movement prompts to guide model learning. My work focuses on building robust algorithms that can learn from weak or noisy training labels and still produce stable, intuitive control in real time.
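To make the weak-supervision setup concrete, here is a minimal sketch, not our actual pipeline: EMG windows are reduced to simple features and regressed against the on-screen cue trajectory the participant mirrors, which serves as a surrogate label in place of the unavailable kinematics. The channel count, window sizes, mean-absolute-value features, and ridge regressor are all illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import Ridge

def mav_features(emg, win=100, step=50):
    """Mean absolute value of each EMG channel in sliding windows."""
    starts = np.arange(0, len(emg) - win + 1, step)
    return np.stack([np.abs(emg[s:s + win]).mean(axis=0) for s in starts])

rng = np.random.default_rng(0)
emg = rng.standard_normal((10_000, 8))        # stand-in for recorded EMG (1 kHz, 8 channels)
cue = np.sin(np.linspace(0, 8 * np.pi, 199))  # prompted finger trajectory: a weak label,
                                              # not the user's true motion

X = mav_features(emg)                         # (199, 8) feature windows
decoder = Ridge(alpha=1.0).fit(X, cue)        # trained on the cue, not ground truth

# At run time, each new EMG window maps to a continuous position command.
position_cmd = decoder.predict(X[-1:])
```

The key design point this illustrates is that the regression target is whatever prompt the participant attempts to follow, so any mismatch between prompt and actual intent shows up as label noise the learning method must tolerate.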
To address this, I’ve developed a framework that leverages controlled experiments in non-human primates, where true kinematics are available. By systematically corrupting the ground-truth labels in the monkey data, we evaluate how different learning strategies cope with noisy supervision, allowing us to identify the methods most promising for real-world application in humans. This approach bridges the gap between experimental neuroscience and clinical translation, helping improve the performance and reliability of prosthetic systems used outside the lab.
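A toy version of that corrupt-and-evaluate loop is sketched below, with hedges: the synthetic data, the linear decoder, and the two corruption types (additive noise and temporal lag) are illustrative assumptions, not the actual primate dataset or the specific corruptions we study. The essential structure is that decoders are trained on corrupted labels but always scored against the clean ground truth.

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(1)

# Hypothetical stand-ins for the primate data: X are neural/EMG features,
# y_true are recorded finger kinematics (available in monkeys, not humans).
X = rng.standard_normal((2_000, 16))
w = rng.standard_normal(16)
y_true = X @ w + 0.1 * rng.standard_normal(2_000)

def corrupt(y, noise_sd=0.0, lag=0):
    """Simulate weak labels: additive Gaussian noise plus a temporal shift."""
    y_bad = y + noise_sd * rng.standard_normal(len(y))
    return np.roll(y_bad, lag)

# Sweep corruption levels; train on corrupted labels, score on clean ones.
for noise_sd, lag in [(0.0, 0), (0.5, 0), (0.5, 10), (1.0, 25)]:
    y_train = corrupt(y_true, noise_sd, lag)
    pred = Ridge(alpha=1.0).fit(X[:1500], y_train[:1500]).predict(X[1500:])
    r = np.corrcoef(pred, y_true[1500:])[0, 1]
    print(f"noise_sd={noise_sd}, lag={lag}: r = {r:.2f}")
```

Because the clean labels exist only in the monkey experiments, this is where candidate learning strategies can be ranked quantitatively before being deployed with human participants, where no such evaluation is possible.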
In the video above, you can see one of our transradial amputee participants using a real-time system to independently control the index and middle–ring–small (MRS) fingers of a virtual hand. Although the decoder was trained without access to true movement trajectories, it still enables smooth, differentiated finger control, showcasing the potential of our weakly supervised learning approach.