Motivation
Memory is not stored in a single place in the brain — it's assembled and reconstructed across distributed neural circuits every time we recall something. Understanding how the brain encodes and retrieves structured information is one of the core open questions in systems neuroscience, and it's directly relevant to restoring memory in patients with epilepsy, Alzheimer's, and other neurological conditions. My collaboration with the Texas Computational Memory Lab investigates how schema — prior knowledge structures — shape what gets encoded and how reliably it's recalled. The goal is to probe these memory circuits directly using intracranial sEEG recordings in human patients, capturing neural activity at millisecond resolution across hundreds of channels simultaneously.
This project was conducted in collaboration with Dr. Bradley Lega's group at the Texas Computational Memory Lab.
Our Solution
The experimental platform I built presents participants with word lists that follow a hidden category-position rule — a schema — and tests whether learning that schema improves serial recall. The system uses an adaptive training phase to bring each participant to criterion before the main experiment begins, then runs 20 study-test cycles mixing schema-conforming and random lists, with a math distractor between encoding and recall to prevent active rehearsal. Every encoding event, recall placement, and sync pulse is logged with millisecond precision and aligned to the sEEG hardware for direct neural analysis.
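To make the session structure concrete, here is a minimal sketch of how the 20-cycle plan could be laid out, with each cycle running encoding, then the math distractor, then recall. Function and field names are illustrative, not the actual experiment code:

```python
import random

def build_session_plan(n_cycles=20, schema_fraction=0.8, seed=0):
    """Lay out the study-test cycles with an 80/20 schema/random mix.
    Each cycle runs encoding -> math distractor -> serial recall.
    (Names and structure are illustrative, not the real code.)"""
    n_schema = round(n_cycles * schema_fraction)   # 16 schema-conforming cycles
    conditions = ["schema"] * n_schema + ["random"] * (n_cycles - n_schema)
    rng = random.Random(seed)
    rng.shuffle(conditions)                        # interleave the two conditions
    return [{"cycle": i, "condition": cond,
             "phases": ["encoding", "math_distractor", "recall"]}
            for i, cond in enumerate(conditions, start=1)]

plan = build_session_plan()
```

Fixing the counts before shuffling guarantees the exact 16/4 split in every session rather than a random draw that merely averages to 80/20.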
My Contributions
I built the entire experimental system from scratch in Python using PsychoPy. That includes the adaptive training engine, which scores only the schema-relevant odd positions and loops until the participant achieves three consecutive perfect trials; the 80/20 schema/random list generator, which draws from a disjoint, category-labeled word pool; and a drag-and-drop serial recall interface that scales dynamically to any monitor, scrambles item display order, and allows submission only once every slot is filled.
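The two pieces of training logic described above, scoring only the odd serial positions and checking the three-in-a-row criterion, can be sketched in a few lines. This is a simplified illustration of the idea, not the production code:

```python
def odd_position_score(response, target):
    """Score only the schema-relevant odd serial positions
    (1st, 3rd, 5th, ...), ignoring the even filler positions."""
    odd = range(0, len(target), 2)   # 0-based indices of odd serial positions
    return all(response[i] == target[i] for i in odd)

def reached_criterion(trial_results, streak_needed=3):
    """Training ends once the most recent `streak_needed` trials
    were all perfect on the odd positions."""
    return (len(trial_results) >= streak_needed
            and all(trial_results[-streak_needed:]))
```

In the real engine these checks would gate the training loop: keep generating lists until `reached_criterion` returns true, then hand off to the main 20-cycle experiment.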
I also built the EEG synchronization layer: a PennSyncBox USB module that sends hardware sync pulses at every encoding and retrieval event, timestamps them against the experiment clock, and writes them to a dedicated pulses file — all with a clean TEST_MODE fallback so the experiment runs identically in mock sessions without hardware. Every session produces three structured CSV files ready for direct alignment with neural recordings the moment human data collection begins.
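The shape of that synchronization layer can be sketched as follows. The PennSyncBox driver API is not shown here, so `send_pulse()` is a hypothetical stand-in, and `TEST_MODE` simply skips the hardware call so the same code path runs in mock sessions:

```python
import csv
import time

TEST_MODE = True  # no PennSyncBox attached during mock sessions

class SyncPulseLogger:
    """Fire a hardware sync pulse at each event and log it against the
    experiment clock. `syncbox` stands in for the real PennSyncBox
    driver; its actual API is an assumption here."""
    def __init__(self, pulse_path, syncbox=None):
        self.pulse_path = pulse_path
        self.syncbox = syncbox
        self.rows = []

    def pulse(self, event_label):
        t = time.perf_counter()              # sub-millisecond experiment clock
        if not TEST_MODE and self.syncbox is not None:
            self.syncbox.send_pulse()        # hypothetical driver call
        self.rows.append({"time_s": t, "event": event_label})

    def write(self):
        with open(self.pulse_path, "w", newline="") as f:
            w = csv.DictWriter(f, fieldnames=["time_s", "event"])
            w.writeheader()
            w.writerows(self.rows)
```

Because the timestamp is taken whether or not the hardware pulse fires, mock-session logs have the same columns and timing semantics as real ones, which is what makes the TEST_MODE fallback "identical" from the analysis side.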
Project Outcomes
Looking to discuss further? Contact me at research@mkmaharana.com
Motivation
Stereo-EEG (sEEG — thin electrode wires implanted directly into brain tissue during epilepsy surgery evaluation) monitoring gives researchers rare access to high-resolution neural activity across deep and distributed brain regions simultaneously. Because patients spend several days implanted with multi-contact depth electrodes during routine clinical monitoring, this setting enables controlled experiments on finger-specific motor representations, error detection, conflict resolution, and short-timescale motor learning — signals that are simply inaccessible with non-invasive methods like EEG or fMRI. The goal of this project is to determine whether sEEG contains the temporal precision, spatial selectivity, and stability required for future minimally invasive, bidirectional BCIs (brain-computer interfaces that both read motor commands and write stimulation back to the brain). Achieving this requires a behavioral paradigm with precisely timed motor events, explicit error structure, and synchronized logging aligned to clinical-grade neural recordings — all running reliably on a hospital workstation with a patient connected.
Our Solution
We built a fully modular EMU (epilepsy monitoring unit — the clinical setting where patients are observed with implanted electrodes) experimental stack centered on a Typing Blind paradigm, orchestrated through the Dareplane Control Room (a web-based operator interface that launches and manages every module from a single screen). The entire system starts from a single macro configuration file, and from one UI the operator can launch the typing task, control LabRecorder (the tool that captures all synchronized data streams), and run a mock neural streamer for dry runs without a patient connected. The typing task emits structured JSON markers over LSL (Lab Streaming Layer — a protocol that synchronizes data streams from multiple devices in real time) for every trial-level event — motor targets, keypresses, hits, misses, error types, and feedback delivery. These markers are recorded in lockstep with the Neurofax sEEG stream, producing unified XDF files (a standardized format for time-synchronized multimodal recordings) where neural and behavioral data are precisely aligned. In parallel, the task writes TSV logs sharing the same filename stem as the corresponding XDF, ensuring clean alignment during analysis. Sessions are fully parameterized through macros and can be configured as single-finger blocks, mixed-hand blocks, or error-enriched runs — supporting clean isolation of motor execution, error detection, and learning-related neural dynamics.
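A trial-level marker in this scheme is just a small JSON document pushed as a string sample. The field names below are illustrative (the real schema is richer), and the pylsl push is shown only as a comment since it requires a running LSL stack:

```python
import json
import time

def make_marker(event, **fields):
    """Serialize one trial-level event as a JSON marker string,
    stamped with the local experiment clock. Field names here are
    illustrative stand-ins for the real schema."""
    marker = {"event": event, "t_local": time.perf_counter(), **fields}
    return json.dumps(marker)

# Pushing over LSL would look roughly like this with pylsl:
#   info = StreamInfo("TypingBlindMarkers", "Markers", 1, 0, "string", "tb-001")
#   outlet = StreamOutlet(info)
#   outlet.push_sample([make_marker("keypress", finger="L_index", hit=True)])
```

Encoding markers as JSON strings keeps the LSL stream a single string channel while still letting each event carry an arbitrary, self-describing payload.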
My Contributions
I built the entire experimental stack from the ground up. At the core, I engineered the Typing Blind task end-to-end: its logic, its timing guarantees, and the complete JSON marker schema defining exactly what gets recorded at every event, from targets and responses to error types and feedback delivery. I implemented synchronized TSV logging that shares filename stems with LabRecorder XDF outputs, keeping behavioral and neural data tightly coupled throughout the pipeline.
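The filename-stem pairing is simple but load-bearing: given the XDF path LabRecorder is about to write, the behavioral TSV path is derived from the same stem, so the two files can never drift apart. A minimal sketch:

```python
from pathlib import Path

def tsv_for_xdf(xdf_path):
    """Derive the behavioral TSV path from the XDF path by swapping
    the extension, so both files share one filename stem."""
    return Path(xdf_path).with_suffix(".tsv")

# e.g. sub-01_ses-02_typingblind.xdf pairs with sub-01_ses-02_typingblind.tsv
```

At analysis time, matching on the stem (rather than on timestamps or a lookup table) makes the XDF/TSV pairing unambiguous even across many sessions in one directory.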
I configured and extended the Dareplane Control Room and macro system to parameterize experimental blocks across finger and hand sets, cue speeds, and feedback modes, and to orchestrate LabRecorder PREP/START/STOP/CLEAR actions so streams and filenames stay aligned across every run. I integrated the full LSL pipeline combining task-level markers with Neurofax sEEG acquisition, and implemented a mock Neurofax streamer enabling dry runs, debugging, and operator training without a patient connected.
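How a macro file fans out into concrete blocks can be sketched as a parameter-grid expansion. The macro format and parameter names below are hypothetical, chosen only to show the shape of the idea:

```python
from itertools import product

# A hypothetical macro: one entry per varying block parameter.
MACRO = {
    "hand": ["left", "right"],
    "cue_speed_ms": [800, 600],
    "feedback": ["on", "off"],
}

def expand_blocks(macro):
    """Expand a macro configuration into the full grid of experimental
    blocks; the operator then runs each block as one
    PREP/START/STOP/CLEAR cycle from the Control Room."""
    keys = list(macro)
    return [dict(zip(keys, combo)) for combo in product(*macro.values())]

blocks = expand_blocks(MACRO)   # 2 x 2 x 2 = 8 parameterized blocks
```

Driving every block from one declarative file means the operator never hand-edits parameters between runs, which is what keeps stream names and output filenames aligned across a session.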
Beyond the core task, I built the broader session infrastructure from scratch: a session-sheet generator that randomizes 9/12/18-block designs under finger and hand constraints and exports JSON/Markdown/HTML run plans; a browser-based response pad with per-finger calibration and LSL marker streaming; and a Cedrus cPOD/XID bridge (a device converting software events into electrical pulses sent to recording hardware) delivering TTL triggers (hardware synchronization signals) locked to every task event.
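The session-sheet generator's core job, randomizing a 9/12/18-block design under constraints, can be illustrated like this. The finger set and the "no consecutive repeat" constraint are simplified stand-ins for the real design rules:

```python
import random

FINGERS = {"left": ["L_index", "L_middle", "L_ring"],
           "right": ["R_index", "R_middle", "R_ring"]}

def make_session_sheet(n_blocks=12, seed=0):
    """Randomize a 9/12/18-block run plan under a simple constraint:
    the same finger never appears in two consecutive blocks.
    (Finger set and constraint are illustrative.)"""
    assert n_blocks in (9, 12, 18)
    rng = random.Random(seed)
    pool = FINGERS["left"] + FINGERS["right"]
    sheet, prev = [], None
    for b in range(1, n_blocks + 1):
        finger = rng.choice([f for f in pool if f != prev])
        sheet.append({"block": b, "finger": finger,
                      "hand": "left" if finger.startswith("L") else "right"})
        prev = finger
    return sheet
```

A sheet in this form serializes directly to the JSON run plan, with the Markdown/HTML views rendered from the same list.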
Finally, I developed the full post-session analysis suite: tools that replay XDF recordings, validate synchronization against TSV logs, overlay behavioral events on multichannel sEEG, and generate QA figures (quality assurance visualizations confirming data integrity) including rolling accuracy and latency curves, KDE reaction time distributions (smooth statistical curves showing how response times are spread across trials), Wilson confidence intervals (a robust method for estimating accuracy ranges from small samples), commission confusion matrices (grids showing which finger presses were correctly identified versus misclassified), and inter-response-interval analysis (measuring timing gaps between responses to detect anomalies). The entire system has been verified end-to-end in mock sessions and is ready for live human data collection.
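Of the QA statistics listed above, the Wilson interval is worth showing because it is just a short closed-form formula. This is the standard Wilson score interval for a binomial proportion, which behaves far better than the naive normal approximation when a block has few trials or near-ceiling accuracy:

```python
from math import sqrt

def wilson_interval(k, n, z=1.96):
    """Wilson score interval for k successes in n trials at ~95%
    confidence (z = 1.96). Unlike the normal approximation, it never
    escapes [0, 1] and stays sensible for small n."""
    p = k / n
    denom = 1 + z * z / n
    center = (p + z * z / (2 * n)) / denom
    half = (z / denom) * sqrt(p * (1 - p) / n + z * z / (4 * n * n))
    return center - half, center + half
```

For example, 16 correct out of 20 trials gives an interval of roughly (0.58, 0.92): a wide range that honestly reflects how little 20 trials pin down true accuracy.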
Project Outcomes
Looking to discuss further? Contact me at research@mkmaharana.com