I enjoy working on interesting problems. Here is a selection of projects I have worked on over the years.
In this project, I develop machine learning strategies to enable continuous control of prosthetic limbs using signals from implanted electromyography (EMG) electrodes. I work with both upper- and lower-limb amputees, aiming to decode natural movement intent, such as individual finger or leg motions, without direct access to ground-truth kinematics. This is a fundamental challenge in prosthetics: human users cannot produce ground-truth motion data for training decoders, so we must rely on indirect cues and creative prompting to guide model learning. My work focuses on building robust algorithms that can learn from weak or noisy training labels and still produce stable, intuitive control in real time.

To address this, I developed a framework that leverages controlled experiments in non-human primates, where true kinematics are available. By systematically corrupting the ground-truth labels in the monkey data, we evaluate how different learning strategies cope with noisy supervision, allowing us to identify the methods most promising for real-world application in humans. This approach bridges the gap between experimental neuroscience and clinical translation, helping improve the performance and reliability of prosthetic systems used outside the lab.

In the video above, you can see one of our transradial amputee participants using a real-time system to independently control the index and middle-ring-small (MRS) fingers of a virtual hand. Although the decoder was trained without access to true movement trajectories, it still enables smooth, differentiated finger control, showcasing the potential of our weakly supervised learning approach.
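As a rough illustration of the label-corruption experiments, here is a minimal Python sketch: it degrades clean kinematic labels with noise and dropout, trains a simple ridge decoder on the corrupted labels, and scores it against the clean ground truth. The decoder choice, array shapes, and parameter values are illustrative assumptions, not our actual pipeline.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.metrics import r2_score

def corrupt_labels(kinematics, noise_std=0.1, dropout_prob=0.2, rng=None):
    """Simulate weak supervision by degrading ground-truth kinematics."""
    rng = np.random.default_rng(rng)
    corrupted = kinematics + rng.normal(0.0, noise_std, kinematics.shape)
    # Randomly zero out time bins to mimic missing or unreliable labels.
    mask = rng.random(len(corrupted)) < dropout_prob
    corrupted[mask] = 0.0
    return corrupted

def evaluate_robustness(emg_features, kinematics, noise_levels):
    """Train on corrupted labels, evaluate against clean held-out labels."""
    scores = {}
    n_train = int(0.8 * len(emg_features))
    for noise_std in noise_levels:
        noisy = corrupt_labels(kinematics[:n_train], noise_std=noise_std)
        decoder = Ridge(alpha=1.0).fit(emg_features[:n_train], noisy)
        pred = decoder.predict(emg_features[n_train:])
        scores[noise_std] = r2_score(kinematics[n_train:], pred)
    return scores
```

Sweeping `noise_levels` in this way makes it possible to compare how quickly different learning strategies degrade as supervision gets noisier.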
The motivation for this project arose from a critical challenge in brain-machine interface (BMI) technology: balancing the high performance of deep-learning models with the explainability and safety required for real-world applications. Current state-of-the-art deep-learning decoders excel at decoding brain activity for tasks like speech and movement but function as "black boxes," posing risks in sensitive applications like controlling prosthetic limbs. Our goal was to explore whether KalmanNet, a hybrid model combining Kalman filter principles with deep-learning flexibility, could achieve both high performance and interpretability in BMI systems.

To test this hypothesis, we designed a BMI experiment using data from two non-human primates performing dexterous finger movements (see video above). We compared KalmanNet's decoding performance against traditional Kalman filters and advanced deep-learning algorithms, including LSTM and tcFNN models. KalmanNet uses a recurrent neural network to adaptively balance trust between incoming neural measurements and the model's dynamics predictions, retaining an explainable structure while gaining flexibility. We evaluated these models both offline (on pre-recorded data) and online (real-time task execution), focusing on metrics like prediction accuracy, task completion rates, and path efficiency.

The results showed that KalmanNet matched or exceeded the performance of deep-learning decoders on most metrics while retaining its explainable structure. Offline, KalmanNet achieved prediction accuracies comparable to the leading LSTM model. Online, it outperformed simpler models like the Kalman filter in task completion rate and movement smoothness, demonstrating that it can operate in real-time scenarios. However, we also found that KalmanNet, like the other decoders, struggled to generalize to new task contexts and was sensitive to out-of-distribution noise, highlighting areas for further optimization. This work was accepted for publication at NeurIPS 2024 and is available here.
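To make the hybrid structure concrete, below is a simplified PyTorch sketch of the KalmanNet idea: a Kalman-style predict/update loop in which a small GRU computes the gain from the innovation, in place of the closed-form covariance recursion. All dimensions and module choices are assumptions for illustration; this is not the published KalmanNet implementation.

```python
import torch
import torch.nn as nn

class KalmanNetSketch(nn.Module):
    """Kalman-style state estimation with a learned gain (simplified)."""

    def __init__(self, state_dim, obs_dim, hidden_dim=64):
        super().__init__()
        # Learned linear state-transition (A) and observation (C) models,
        # playing the same roles as in a classical Kalman filter.
        self.A = nn.Linear(state_dim, state_dim, bias=False)
        self.C = nn.Linear(state_dim, obs_dim, bias=False)
        # A GRU maps the innovation to a gain, replacing the covariance update.
        self.gru = nn.GRU(obs_dim, hidden_dim, batch_first=True)
        self.gain = nn.Linear(hidden_dim, state_dim * obs_dim)
        self.state_dim, self.obs_dim = state_dim, obs_dim

    def forward(self, observations):
        # observations: (batch, time, obs_dim), e.g. binned neural features
        batch, T, _ = observations.shape
        x = observations.new_zeros(batch, self.state_dim)
        h = None
        states = []
        for t in range(T):
            x_pred = self.A(x)                              # predict step
            innovation = observations[:, t] - self.C(x_pred)
            out, h = self.gru(innovation.unsqueeze(1), h)
            K = self.gain(out.squeeze(1)).view(batch, self.state_dim, self.obs_dim)
            # Update step: large gain trusts the measurement, small gain
            # trusts the dynamics prediction.
            x = x_pred + (K @ innovation.unsqueeze(-1)).squeeze(-1)
            states.append(x)
        return torch.stack(states, dim=1)                   # (batch, time, state_dim)
```

Because the state update keeps the predict/correct form of a Kalman filter, the learned gain remains inspectable at every time step, which is the explainability property the project set out to test.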
The inspiration for this project stemmed from the need to improve brain-machine interface (BMI) performance, particularly in decoding complex multi-degree-of-freedom (DoF) movements. While the brain handles high-DoF tasks effortlessly, current BMI systems struggle to match this level of control. Our study was motivated by the hypothesis that the brain simplifies high-DoF tasks through synergies: functional groupings of muscle and neural activity that are recruited together. By examining whether these synergies could enhance decoding in implanted BMIs, we aimed to uncover strategies for bridging the gap between natural and artificial control.
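To make "synergies" concrete: one common way to extract them is non-negative matrix factorization (NMF), which factors recorded activity into a small set of shared patterns and their time-varying activations. The sketch below uses toy data and illustrative parameters; it shows the general approach, not necessarily the exact method used in this study.

```python
import numpy as np
from sklearn.decomposition import NMF

# Toy example: non-negative firing rates of 96 channels over 5000 time bins.
rng = np.random.default_rng(0)
rates = rng.poisson(5.0, size=(5000, 96)).astype(float)

# Factor the activity into a few synergies: rates ~ activations @ synergy_patterns.
model = NMF(n_components=8, init="nndsvda", max_iter=500)
activations = model.fit_transform(rates)   # (time, n_synergies)
synergy_patterns = model.components_       # (n_synergies, channels)

# Decoding from the low-dimensional activations rather than the raw channels
# tests whether the synergies retain the movement-relevant structure.
print(activations.shape, synergy_patterns.shape)
```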
In this project, I explored how reliably we can measure the mechanical impedance of the ankle during standing and walking. Using a custom-built robotic platform and a motion capture setup, we measured the ankle's torque and angle responses to small perturbations (an example perturbation is shown in the GIF above). The goal was to evaluate how reliably we can estimate impedance parameters such as stiffness, damping, and inertia: an accurate, repeatable measurement could serve as a clinical metric and inform the design of biomimetic prostheses and exoskeletons.
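Conceptually, impedance estimation fits a second-order mechanical model, torque = I * angular acceleration + b * angular velocity + k * angle, to the perturbation responses. Here is a minimal least-squares sketch, with illustrative variable names and assuming the signals have already been detrended around the operating point:

```python
import numpy as np

def fit_impedance(theta, torque, dt):
    """Fit tau = I*theta_ddot + b*theta_dot + k*theta by least squares.

    theta, torque: 1-D arrays of ankle angle (rad) and torque (N*m)
    sampled at interval dt around a perturbation.
    """
    # Numerically differentiate angle to get velocity and acceleration.
    theta_dot = np.gradient(theta, dt)
    theta_ddot = np.gradient(theta_dot, dt)
    # Regress torque onto [acceleration, velocity, angle].
    X = np.column_stack([theta_ddot, theta_dot, theta])
    (inertia, damping, stiffness), *_ = np.linalg.lstsq(X, torque, rcond=None)
    return inertia, damping, stiffness
```

Repeating this fit across many perturbations (and across days) is what lets us quantify how reliable the stiffness, damping, and inertia estimates actually are.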
The motivation for this project was to address significant barriers in procedural skill learning research. Traditional motor learning experiments require controlled lab environments, which limit sample size, accessibility, and diversity. Inspired by the need for more inclusive and scalable research methods, I developed an open-source online platform. This platform democratizes access to motor learning studies by removing coding prerequisites and simplifying experimental setup, making cutting-edge research feasible for a broader audience.