Explainable deep learning for brain-machine interfaces

Dec 29, 2024 · 2 min read

The motivation for this project arose from a critical challenge in brain-machine interface (BMI) technology: balancing the high performance of deep-learning models with the explainability and safety required for real-world applications. Current state-of-the-art deep-learning decoders excel at decoding brain activity for tasks like speech and movement, but they function as “black boxes,” which poses risks in sensitive applications like controlling prosthetic limbs. Our goal was to explore whether KalmanNet, a hybrid model combining Kalman filter principles with deep-learning flexibility, could achieve both high performance and interpretability in BMI systems.

To test this hypothesis, we designed a BMI experiment using data from two non-human primates performing dexterous finger movements. We compared KalmanNet’s decoding performance against the traditional Kalman filter and against advanced deep-learning decoders, including LSTM and tcFNN models. KalmanNet replaces the Kalman filter’s covariance-based gain computation with a recurrent neural network, which dynamically adjusts how much to trust incoming neural activity versus the predictions of the dynamics model; this preserves the filter’s interpretable predict-and-correct structure while adding deep-learning flexibility. We evaluated these models both offline (on pre-recorded data) and online (real-time task execution), focusing on metrics such as prediction accuracy, task completion rate, and path efficiency.
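To make the idea concrete, here is a minimal sketch of a KalmanNet-style decoder in PyTorch. This is not the implementation from the paper: the class name, the features fed to the recurrent network, and all dimensions are illustrative assumptions. The point is the structure: the predict and correct steps of a Kalman filter stay explicit, and only the gain is produced by a learned network.

```python
# Minimal sketch of a KalmanNet-style decoder (illustrative, not the paper's code).
# A GRU replaces the covariance-based Kalman gain; the linear dynamics (A) and
# observation model (C) stay explicit, so each step is a readable predict/correct.
import torch
import torch.nn as nn

class KalmanNetDecoder(nn.Module):
    def __init__(self, state_dim: int, obs_dim: int, hidden_dim: int = 64):
        super().__init__()
        # Linear state-transition and observation models, as in a Kalman filter.
        # In practice these would be fit to data (e.g., by least squares).
        self.A = nn.Parameter(torch.eye(state_dim))
        self.C = nn.Parameter(0.1 * torch.randn(obs_dim, state_dim))
        # Recurrent network that emits the Kalman gain at each timestep.
        self.gru = nn.GRU(obs_dim + state_dim, hidden_dim, batch_first=True)
        self.gain_head = nn.Linear(hidden_dim, state_dim * obs_dim)
        self.state_dim, self.obs_dim = state_dim, obs_dim

    def forward(self, y: torch.Tensor) -> torch.Tensor:
        """y: (batch, time, obs_dim) neural features -> (batch, time, state_dim)."""
        batch, T, _ = y.shape
        x = y.new_zeros(batch, self.state_dim)  # state estimate (e.g., kinematics)
        h = None                                # GRU hidden state
        out = []
        for t in range(T):
            # Predict: propagate the state through the linear dynamics.
            x_pred = x @ self.A.T
            # Innovation: mismatch between observed and predicted neural activity.
            innov = y[:, t] - x_pred @ self.C.T
            # The GRU sees the innovation and predicted state and emits a gain,
            # implicitly tracking how much to trust observations vs. dynamics.
            feats = torch.cat([innov, x_pred], dim=-1).unsqueeze(1)
            g, h = self.gru(feats, h)
            K = self.gain_head(g.squeeze(1)).view(batch, self.state_dim, self.obs_dim)
            # Correct: standard Kalman update, but with the learned gain.
            x = x_pred + (K @ innov.unsqueeze(-1)).squeeze(-1)
            out.append(x)
        return torch.stack(out, dim=1)

# Hypothetical usage: decode 2-D fingertip kinematics from 96 channels.
# decoder = KalmanNetDecoder(state_dim=2, obs_dim=96)
# x_hat = decoder(torch.randn(1, 500, 96))  # -> (1, 500, 2)
```

Because the gain K is an explicit quantity at every timestep, it can be logged and inspected to see when the decoder leans on neural input versus its dynamics model, which is the kind of transparency a purely black-box decoder does not offer.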

The results showed that KalmanNet matched or exceeded the performance of the deep-learning decoders on most metrics while retaining its explainable structure. Offline, KalmanNet achieved prediction accuracy comparable to the leading LSTM model. Online, it outperformed simpler models like the Kalman filter in task completion rate and movement smoothness, demonstrating that it can run in real time. However, we also found that KalmanNet, like the other decoders, struggled to generalize to new task contexts and was sensitive to out-of-distribution noise, highlighting areas for further optimization.

This work has been accepted for publication at NeurIPS 2024 and is available here.