Motivation
Brain-computer interfaces (BCIs) aim to restore lost neural function—such as vision or movement—by interfacing directly with the brain's remaining functional circuits. With recent advances in high-resolution, large-scale neural implants, we can now record from and stimulate large populations of neurons at cellular resolution across multiple brain regions. These developments have generated unprecedented datasets, providing new opportunities to understand the brain and interact with it in real-time.
Our overarching goal is to develop a computational toolbox capable of decoding neural activity and delivering targeted stimulation—effectively "reading from" and "writing to" the brain—in real-time and at cellular resolution.
Our Solution
We developed a real-time BCI system for decoding speech and full-body movement in VR using OmniGibson, supporting clinical human trials. To enable closed-loop experimentation, we built a modular 2D/3D BCI task system in Python with real-time logging, goal tracking, and adjustable difficulty. A custom Pygame-based interface provided dynamic feedback and supported multiple input modalities (keyboard, SpaceMouse, Utah Array) for both in-lab and at-home use. We implemented a BLENT-inspired shared-autonomy algorithm that blends user input with assistive actions, weighted by goal belief and decoder confidence. We also contributed to an adaptive decoder framework that combines online multinomial logistic regression with entropy-weighted sampling and automated metrics for performance evaluation.
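The core of the shared-autonomy loop can be sketched in a few lines. This is a minimal illustration, not the production code: the function names (`update_goal_belief`, `blended_action`), the softmax temperature `beta`, and the choice of peak belief as the confidence weight are all assumptions made for the sketch, but the shape of the computation (a max-entropy-style belief update over candidate goals, then confidence-weighted blending of user input with an assistive action) follows the description above.

```python
import numpy as np

def update_goal_belief(belief, user_input, goals, cursor, beta=2.0):
    """Bayesian belief update: goals aligned with the user's input gain mass.

    The likelihood is a softmax over the cosine alignment between the input
    direction and the direction to each goal (a max-entropy-style model).
    """
    directions = goals - cursor
    directions /= np.maximum(np.linalg.norm(directions, axis=1, keepdims=True), 1e-9)
    u = user_input / max(np.linalg.norm(user_input), 1e-9)
    likelihood = np.exp(beta * (directions @ u))
    posterior = belief * likelihood
    return posterior / posterior.sum()

def blended_action(belief, user_input, goals, cursor, gain=1.0):
    """Blend user input with an assistive action toward the most likely goal,
    weighted by decoder confidence (here, the peak of the belief)."""
    target = goals[np.argmax(belief)]
    assist = target - cursor
    assist = gain * assist / max(np.linalg.norm(assist), 1e-9)
    alpha = belief.max()  # high confidence -> more autonomous assistance
    return alpha * assist + (1.0 - alpha) * user_input
```

Each control tick, the loop updates the belief from the latest decoded input and emits the blended action; at low confidence the user retains nearly full control, and assistance ramps up smoothly as the belief concentrates on one goal.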
My Contributions
On RiceBCI’s shared-autonomy BCI platform, I led development of the robotics-control and adaptive-learning modules that support real-time brain-driven experiments. I designed a 2D cursor and robotic control framework implementing a belief-based intent-inference algorithm (BLENT), which uses a maximum-entropy model to predict user goals and blend user input with autonomous assistive actions. The module has a modular, cross-platform architecture supporting multiple control modes (human, autonomous, noisy, and shared), along with a real-time Pygame interface for dynamic task execution, comprehensive logging, and automated performance reporting. On the learning side, I developed a novel adaptive trial-selection algorithm that accelerates BCI decoder training by combining online multinomial logistic regression with a surrogate model of expected information gain, enabling real-time selection of the most informative targets. I also engineered a custom game interface that connects neural data streams (e.g., Utah array signals) directly to task environments, closing the loop for BCI control.
Project Outcomes
Manuscripts in Preparation
Looking to discuss further? Contact me at research@mkmaharana.com