Motivation
Brain-computer interfaces have the potential to restore lost function — giving movement back to the paralyzed, communication back to the nonverbal — by reading directly from the brain's surviving circuits. With recent advances in high-resolution neural implants capable of recording from thousands of neurons simultaneously across multiple brain regions, the bottleneck is no longer the hardware. It's the intelligence layer: decoding what the brain intends, fast enough and accurately enough to drive real-world actions in real time. That's the problem I went to Rice to work on.
Our Solution
I built a real-time shared-autonomy BCI platform designed to decode speech and full-body movement in virtual reality, targeting closed-loop clinical human trials. The system combines a modular task environment, a live neural signal interface, an intent-inference controller, and an adaptive decoder — all running in real time.
My Contributions
I designed and built BLENT, a shared-autonomy controller that infers what a user is trying to do from their movement alone, without ever being told the true goal. It maintains a probabilistic belief over candidate targets (a continuously updated probability distribution over which target the user is heading toward), and it updates that belief in real time using a maximum-entropy model over input direction (an algorithm that makes the fewest possible assumptions about intent, staying maximally open until the data speaks). It then blends user control with autonomous assistance via a confidence-weighted sigmoid (a smooth curve that scales how much the system helps based on how certain it is of the user's goal): backing off when intent is ambiguous, stepping in when it's clear.
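The belief update and confidence-weighted blend can be sketched as follows. This is a minimal illustration of the approach, not the BLENT implementation: the likelihood sharpness `beta`, sigmoid steepness `k`, and confidence threshold `c0` are hypothetical parameters chosen for the example.

```python
import numpy as np

def belief_update(belief, input_dir, target_dirs, beta=5.0):
    """Bayesian update of the belief over candidate targets.

    The likelihood is a maximum-entropy (softmax) model over how well
    the user's input direction aligns with each target's direction.
    """
    align = target_dirs @ input_dir            # cosine alignment per target
    likelihood = np.exp(beta * align)
    posterior = belief * likelihood
    return posterior / posterior.sum()         # renormalize to a distribution

def blend(user_cmd, auto_cmd, belief, k=10.0, c0=0.6):
    """Confidence-weighted sigmoid blend of user and autonomous commands.

    Low confidence -> alpha near 0 (user keeps control);
    high confidence -> alpha near 1 (assistance steps in).
    """
    confidence = belief.max()                  # peak posterior probability
    alpha = 1.0 / (1.0 + np.exp(-k * (confidence - c0)))
    return (1.0 - alpha) * user_cmd + alpha * auto_cmd
```

With three targets and an input pointing straight at the first, the posterior mass shifts toward that target on every update, and the blend hands over control smoothly rather than snapping between modes.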
I built the full task architecture around it: a modular Python system supporting human, autonomous, shared, and noisy control modes, a real-time Pygame interface with dynamic feedback, and support for multiple input devices — with all testing conducted using a SpaceMouse as a stand-in for neural control, simulating the kind of noisy, imprecise input a BCI user would produce. Automated logging and performance reporting spanned 154 systematically varied conditions (every combination of 11 assistance levels and 14 noise levels, run 100 times each), fully verifying the system ahead of real neural data collection.
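The factorial sweep behind those numbers can be outlined in a few lines. The specific level spacings below are assumptions for illustration; only the counts (11 assistance levels, 14 noise levels, 100 repetitions each) come from the design described above.

```python
import itertools

# Level values are illustrative assumptions; only the counts
# (11 x 14 = 154 conditions, 100 repetitions each) are from the design.
assistance_levels = [i / 10 for i in range(11)]     # 11 assistance levels, 0.0-1.0
noise_levels = [i * 0.05 for i in range(14)]        # 14 noise magnitudes
REPS = 100

conditions = list(itertools.product(assistance_levels, noise_levels))
print(len(conditions))   # 154 conditions, each run REPS times
```

Each (assistance, noise) pair then drives 100 simulated SpaceMouse trials, with outcomes captured by the automated logging and reporting layer.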
I also developed an adaptive trial-selection algorithm to accelerate decoder training — pairing online multinomial logistic regression (a model that learns in real time which neural patterns map to which intended actions) with a surrogate model that computes expected information gain (a measure of how much each possible next target would teach the system about the user's neural signals) in real time, so the system always presents the most informative target next rather than selecting randomly. This module was built and validated in simulation, designed to plug directly into a live neural data stream when human trials begin.
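A simplified sketch of the idea, assuming an SGD-trained multinomial logistic regression and using predictive entropy as a cheap stand-in for full expected information gain (the actual surrogate model is not specified here): the system presents the candidate target it is currently least certain about, then folds the observed response back into the decoder.

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)   # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

class OnlineDecoder:
    """Online multinomial logistic regression, updated one trial at a time."""
    def __init__(self, n_features, n_classes, lr=0.1):
        self.W = np.zeros((n_features, n_classes))
        self.lr = lr

    def predict_proba(self, x):
        return softmax(x @ self.W)

    def update(self, x, y):
        """One SGD step on the cross-entropy loss for sample (x, y)."""
        p = self.predict_proba(x)
        grad = np.outer(x, p)
        grad[:, y] -= x
        self.W -= self.lr * grad

def select_next_target(decoder, x_candidates):
    """Pick the candidate whose predicted response is most uncertain.

    Predictive entropy here approximates expected information gain:
    high-entropy candidates are the ones the decoder stands to learn
    the most from.
    """
    def entropy(p):
        return -np.sum(p * np.log(p + 1e-12))
    scores = [entropy(decoder.predict_proba(x)) for x in x_candidates]
    return int(np.argmax(scores))
```

In a live session this loop would run between trials: score the candidate targets, present the highest-scoring one, observe the neural response, and update the decoder before the next selection.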
Project Outcomes
Manuscripts in Preparation
Looking to discuss further? Contact me at research@mkmaharana.com