Selected Projects

I enjoy working on interesting problems. Here is a selection of projects I have worked on over the years.

Continuous prosthetic control from EMG

In this project, I develop machine learning strategies to enable continuous control of prosthetic limbs using signals from implanted electromyography (EMG) electrodes. I work with both upper- and lower-limb amputees, aiming to decode natural movement intent, such as individual finger or leg motions, without direct access to ground-truth kinematics. This is a fundamental challenge in prosthetics: human users cannot produce ground-truth motion data for training decoders, so we must rely on indirect cues and creative prompting to guide model learning. My work focuses on building robust algorithms that can learn from weak or noisy training labels and still produce stable, intuitive control in real time.

To address this, I have developed a framework that leverages controlled experiments in non-human primates, where true kinematics are available. By systematically corrupting the ground-truth labels in the monkey data, we evaluate how different learning strategies cope with noisy supervision, allowing us to identify the methods most promising for real-world application in humans. This approach bridges the gap between experimental neuroscience and clinical translation, helping improve the performance and reliability of prosthetic systems used outside the lab.

In the video above, you can see one of our transradial amputee participants using a real-time system to independently control the index and middle–ring–small (MRS) fingers of a virtual hand. Although the decoder was trained without access to true movement trajectories, it still enables smooth, differentiated finger control, showcasing the potential of our weakly supervised learning approach.
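
To make the evaluation framework above concrete, here is a minimal sketch of the label-corruption idea (not our actual lab pipeline): start from clean kinematics of the kind available in non-human primate experiments, corrupt the training labels with additive noise, a temporal lag, and dropped samples, fit a simple linear decoder, and score it against the uncorrupted ground truth. The synthetic data, corruption parameters, and decoder choice are all illustrative assumptions.

```python
# Minimal sketch of evaluating decoders under corrupted supervision.
# Everything here (data, dimensions, corruption model) is illustrative.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins: 96 EMG features, 2 outputs (e.g. index and MRS finger positions)
T, n_features, n_outputs = 5000, 96, 2
true_kinematics = np.cumsum(rng.normal(size=(T, n_outputs)), axis=0)  # smooth trajectories
mixing = rng.normal(size=(n_outputs, n_features))
features = true_kinematics @ mixing + rng.normal(scale=5.0, size=(T, n_features))

def corrupt_labels(y, noise_std=0.5, lag=10, dropout=0.2, rng=rng):
    """Corrupt ground-truth kinematics to mimic weak supervision:
    additive noise, a temporal lag, and randomly zeroed-out samples."""
    y_noisy = y + rng.normal(scale=noise_std, size=y.shape)
    y_lagged = np.roll(y_noisy, lag, axis=0)   # prompt/response misalignment
    mask = rng.random(len(y)) < dropout
    y_lagged[mask] = 0.0                       # missing supervision
    return y_lagged

def fit_linear_decoder(X, y):
    """Ordinary least squares with a bias term (placeholder for richer decoders)."""
    Xb = np.hstack([X, np.ones((len(X), 1))])
    W, *_ = np.linalg.lstsq(Xb, y, rcond=None)
    return W

def score(W, X, y_true):
    """Correlation between decoded and clean held-out kinematics, averaged over outputs."""
    Xb = np.hstack([X, np.ones((len(X), 1))])
    y_hat = Xb @ W
    return np.mean([np.corrcoef(y_hat[:, i], y_true[:, i])[0, 1] for i in range(y_true.shape[1])])

split = T // 2
for noise_std in (0.0, 0.5, 2.0):
    y_train = corrupt_labels(true_kinematics[:split], noise_std=noise_std)
    W = fit_linear_decoder(features[:split], y_train)
    r = score(W, features[split:], true_kinematics[split:])
    print(f"additive label noise std={noise_std}: held-out correlation vs clean labels = {r:.3f}")
```

The logic is the same regardless of the decoder: only the training labels are degraded, so the held-out score against clean kinematics isolates how well a given learning strategy tolerates noisy supervision.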

Explainable deep-learning for brain-machine interfaces

The motivation for this project arose from a critical challenge in brain-machine interface (BMI) technology: balancing the high performance of deep-learning models with the explainability and safety required for real-world applications. Current state-of-the-art deep-learning decoders excel at decoding brain activity for tasks like speech and movement but function as “black boxes,” posing risks in sensitive applications like controlling prosthetic limbs. Our goal was to explore whether KalmanNet, a hybrid model combining Kalman filter principles with deep-learning flexibility, could achieve both high performance and interpretability in BMI systems.

To test this hypothesis, we designed a BMI experiment using data from two non-human primates performing dexterous finger movements (see video above). We compared KalmanNet’s decoding performance against traditional Kalman filters and advanced deep-learning algorithms, including LSTM and tcFNN models. KalmanNet uses a recurrent neural network to dynamically adjust how much trust is placed in the neural inputs versus the model’s dynamic predictions, retaining an interpretable filter structure while gaining the flexibility of learned components. We evaluated these models both offline (on pre-recorded data) and online (real-time task execution), focusing on metrics such as prediction accuracy, task completion rates, and path efficiency.

The results showed that KalmanNet matched or exceeded the performance of deep-learning decoders on most metrics while retaining its explainable structure. Offline, KalmanNet achieved prediction accuracies comparable to the leading LSTM model. Online, it outperformed simpler models like the Kalman filter in task completion rates and movement smoothness, demonstrating that it can operate in real-time scenarios. However, we also found that KalmanNet, like the other decoders, struggled to generalize to new task contexts and was sensitive to out-of-distribution noise, highlighting areas for further optimization. This work was accepted for publication at NeurIPS 2024 and is available here.
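
To make the hybrid structure concrete, here is a simplified sketch of the core idea described above: keep the Kalman filter’s predict/update loop, but let a small recurrent network emit the gain instead of propagating covariances analytically. This is not the published KalmanNet implementation; the state and observation dimensions, the GRU architecture, and the placeholder data are illustrative assumptions.

```python
# Sketch of a KalmanNet-style decoder: a learned Kalman gain inside an
# explicit predict/update loop. Dimensions and architecture are assumptions.
import torch
import torch.nn as nn

class KalmanNetLikeDecoder(nn.Module):
    def __init__(self, state_dim=4, obs_dim=96, hidden_dim=64):
        super().__init__()
        self.A = nn.Parameter(torch.eye(state_dim))                  # state-transition model
        self.H = nn.Parameter(0.01 * torch.randn(obs_dim, state_dim))  # observation model
        # GRU + linear head replace the analytic gain computation
        self.gain_rnn = nn.GRUCell(obs_dim + state_dim, hidden_dim)
        self.gain_out = nn.Linear(hidden_dim, state_dim * obs_dim)
        self.state_dim, self.obs_dim, self.hidden_dim = state_dim, obs_dim, hidden_dim

    def forward(self, observations):
        """observations: (T, obs_dim) neural features; returns (T, state_dim) decoded states."""
        x = torch.zeros(self.state_dim)
        h = torch.zeros(self.hidden_dim)
        decoded = []
        for t in range(observations.shape[0]):
            x_pred = self.A @ x                                # predict step
            innovation = observations[t] - self.H @ x_pred     # mismatch with the neural input
            # The RNN sees the innovation and predicted state, then emits a gain matrix
            h = self.gain_rnn(torch.cat([innovation, x_pred]).unsqueeze(0),
                              h.unsqueeze(0)).squeeze(0)
            K = self.gain_out(h).view(self.state_dim, self.obs_dim)
            x = x_pred + K @ innovation                        # update step with learned gain
            decoded.append(x)
        return torch.stack(decoded)

# Usage: train with ordinary supervised regression against recorded kinematics.
model = KalmanNetLikeDecoder()
neural_features = torch.randn(200, 96)   # placeholder data
states = model(neural_features)
print(states.shape)                      # torch.Size([200, 4])
```

Because the learned component only replaces the gain computation, the decoded state still follows an explicit predict-then-correct structure, which is what makes the filter’s behavior easier to inspect than an end-to-end recurrent decoder.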