Research

My research group specializes in designing biologically informed statistical models of the spectrotemporal properties of neural signals. Our work is organized around three questions: (1) how the shape of the power spectrum of meso-scale recordings can be used to decompose them into dominant sub-processes, such as excitatory and inhibitory population activity; (2) how the relationship between blood-oxygen-level-dependent responses and neural activity varies across time, location, and brain state, complicating the interpretation of functional magnetic resonance imaging (fMRI) data; and (3) how to resolve cross-scale mysteries linking micro-scale biophysics, meso-scale network effects, and macro-scale noninvasive brain recordings such as fMRI and electroencephalography (EEG).

Filtered Point Process Framework

Developing and validating the Filtered Point Process framework to study broadband and rhythmic neural dynamics

The power spectra of neural voltage recordings (e.g. local field potentials, LFPs) change robustly across a wide variety of brain states, exhibiting both narrowband (i.e. rhythmic) and broadband (i.e. spanning a large frequency range) effects. Broadband effects have attracted recent interest and are typically interpreted in terms of asynchronous population activity and excitatory/inhibitory balance. Many approaches have been developed to algorithmically decompose power spectra into rhythmic and broadband components, leading to a large body of empirical results: for example, the specparam/fooof method was cited more than 1500 times in its first five years.
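As a concrete illustration of this style of decomposition, here is a minimal sketch using the fooof package (assuming its standard FOOOF interface in version 1.x); the toy spectrum and all settings are illustrative, not a recommended analysis.

```python
import numpy as np
from fooof import FOOOF  # the original package name for specparam

# Toy power spectrum: a 1/f^2 aperiodic background plus an alpha-band peak.
freqs = np.linspace(1, 50, 99)
spectrum = 10 / freqs**2 + 0.5 * np.exp(-0.5 * ((freqs - 10) / 1.5) ** 2)

# Fit the spectral parameterization model over 1-50 Hz.
fm = FOOOF(peak_width_limits=(1, 8), max_n_peaks=4)
fm.fit(freqs, spectrum, freq_range=(1, 50))

print(fm.aperiodic_params_)  # broadband component: offset and exponent of the 1/f fit
print(fm.peak_params_)       # rhythmic components: center frequency, power, bandwidth per peak
```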

Our group is working to extend this literature by (1) developing stochastic models that explicitly link power spectral components to their biophysical generators, and (2) developing better statistical and machine learning tools for spectral decomposition. Our most recent publication (Bloniasz et al., 2025) develops the Filtered Point Process framework to capture both narrowband and broadband contributors to electrophysiological power spectra and to predict sub-second cross-frequency coupling. Ongoing work focuses on (1) developing probabilistic machine learning techniques for inference on these models and (2) validating the models against experimental data and theoretical models, and updating them to capture nonlinear effects.
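For intuition about the basic building block of this framework, the toy simulation below generates a filtered point process: Poisson spike events convolved with a synaptic-current-like kernel, whose power spectrum combines a flat point-process contribution with the kernel's frequency response. This is a generic sketch, not the published model; the rate, kernel, and time constants are illustrative.

```python
import numpy as np
from scipy.signal import welch

rng = np.random.default_rng(0)
fs = 1000.0        # sampling rate (Hz)
duration = 100.0   # seconds of simulated data
rate = 50.0        # population event rate (Hz)
n = int(fs * duration)

# 1) Point process: a binary event train from a homogeneous Poisson process.
events = rng.random(n) < rate / fs

# 2) Filter: an exponentially decaying kernel standing in for a postsynaptic current.
tau = 0.01  # decay time constant (s); shapes the broadband roll-off
t = np.arange(0, 10 * tau, 1 / fs)
kernel = np.exp(-t / tau)

# 3) Filtered point process: the convolution is a continuous-valued signal whose
#    power spectrum is approximately rate * |kernel frequency response|^2.
signal = np.convolve(events.astype(float), kernel)[:n]

freqs, psd = welch(signal, fs=fs, nperseg=4096)
```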

  1. Bloniasz, P. F., Oyama, S., & Stephen, E. P. (2025). Filtered Point Processes Tractably Capture Rhythmic and Broadband Power Spectral Structure in Neural Electrophysiological Recordings. Journal of Neural Engineering. doi:10.1088/1741-2552/ade28b
Functional Connectivity

Statistical modeling and software for dynamic functional connectivity estimation

Understanding how communication between brain areas evolves to support dynamic function remains a fundamental challenge in neuroscience. One approach to this question is functional connectivity analysis, in which statistical coupling measures are used to detect signatures of interactions between brain regions. Because the brain uses multiple communication mechanisms at different temporal and spatial scales, and because the neuronal signatures of communication are often weak, powerful connectivity inference methods require continued development tailored to these challenges.

For my graduate work, I developed a statistical framework for dynamic functional connectivity estimation using pairwise coupling statistics, with a deep investigation of coherence and canonical coherence (Stephen et al., 2014). To make these metrics more accessible to the community, Dr. Eric Denovellis and I developed an open-source Python software package, spectral_connectivity, to compute frequency-domain connectivity measures and their statistics (Denovellis et al., 2022). This package has been forked over 40 times and starred over 100 times on GitHub.
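A minimal usage sketch of the package's Multitaper/Connectivity interface is shown below; the white-noise data, array shape convention (time × trials × signals), and parameter values are illustrative, and the package documentation should be treated as authoritative.

```python
import numpy as np
from spectral_connectivity import Multitaper, Connectivity

# Illustrative data: two channels of white noise, 50 trials, 2 s at 1 kHz.
fs = 1000
rng = np.random.default_rng(0)
signals = rng.standard_normal((2 * fs, 50, 2))  # (n_time_samples, n_trials, n_signals)

multitaper = Multitaper(
    signals,
    sampling_frequency=fs,
    time_halfbandwidth_product=3,
)
connectivity = Connectivity.from_multitaper(multitaper)

coherence = connectivity.coherence_magnitude()  # (n_time_windows, n_freqs, n_signals, n_signals)
frequencies = connectivity.frequencies
```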

The classical methods for rhythmic functional connectivity estimation that I studied early in my career are most sensitive to sinusoidal coupling, because they either operate in the frequency domain or rely on filtering to identify rhythmic activity. Actual neural rhythms, however, are often non-stationary, non-periodic, and non-sinusoidal, and multiple rhythms are often present in the same data. My most significant recent contribution is a new functional connectivity estimation framework based on time-domain rhythm models, or “State Space Oscillators” (SSOs), which are not subject to these limitations. Our models can detect discrete changes in brain state, such as those occurring at the moment of loss of consciousness under anesthesia (Hsin et al., 2022), when both slow waves (<1Hz) and alpha rhythms (8-12Hz) change their network structures.
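For readers unfamiliar with this model class, the sketch below simulates a single generic state-space oscillator, a noise-driven damped 2D rotation observed through one noisy channel. This is only an illustration of the building block, not our switching network model, and all parameter values are made up.

```python
import numpy as np

def simulate_oscillator(freq_hz, damping, fs, n_samples, state_sd=1.0, obs_sd=1.0, seed=0):
    """Simulate one state-space oscillator: a damped rotation of a 2D latent state
    driven by Gaussian noise. The latent state tracks the rhythm's instantaneous
    amplitude and phase directly in the time domain."""
    rng = np.random.default_rng(seed)
    theta = 2 * np.pi * freq_hz / fs
    rotation = damping * np.array([[np.cos(theta), -np.sin(theta)],
                                   [np.sin(theta),  np.cos(theta)]])
    x = np.zeros(2)
    y = np.empty(n_samples)
    for t in range(n_samples):
        x = rotation @ x + rng.normal(scale=state_sd, size=2)
        y[t] = x[0] + rng.normal(scale=obs_sd)  # observe one noisy component
    return y

# Example: an alpha-band (10 Hz) oscillator sampled at 250 Hz for 10 seconds.
alpha_trace = simulate_oscillator(freq_hz=10, damping=0.98, fs=250, n_samples=2500)
```

In models of this class, the latent oscillators and their parameters are typically estimated from data with Kalman filtering and expectation-maximization, and dynamic connectivity enters through how the latent oscillators are shared or coupled across channels.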

Most recently, we have extended this framework to account for three different types of connectivity, in order to capture situations when (1) multiple rhythms are driven by a common source, (2) rhythms are influenced by shared non-rhythmic noise, or (3) rhythms are directly influencing each other (Hsin et al., in submission). We show that these models are much more powerful than traditional methods at detecting changes in functional connectivity.

  1. Stephen, E. P., Lepage, K. Q., Eden, U. T., Brunner, P., Schalk, G., Brumberg, J. S., Guenther, F. H., Kramer, M. A. (2014). Assessing dynamics, spatial scale, and uncertainty in task-related brain network analyses. Frontiers in Computational Neuroscience, 8. doi:10.3389/fncom.2014.00031
  2. Denovellis, E. L., Myroshnychenko, M., Sarmashghi, M., and Stephen, E. P. (2022). Spectral Connectivity: a Python package for computing multitaper spectral estimates and frequency-domain brain connectivity measures on the CPU and GPU. Journal of Open Source Software, 7(80), p.4840. doi:10.21105/joss.04840
  3. Hsin, W., Eden, U. T., Stephen, E. P. (2022). Switching Functional Network Models of Oscillatory Brain Dynamics. In 2022 56th Asilomar Conference on Signals, Systems, and Computers (pp. 607-612). IEEE. doi:10.1109/IEEECONF56349.2022.10052077
  4. Hsin, W., Eden, U. T., Stephen, E. P. (in submission). Switching Models of Oscillatory Networks Improve Inference of Dynamic Functional Connectivity. Preprint. doi:10.48550/arXiv.2404.18854
Anesthesia

Broadband and rhythmic interactions during anesthesia distinguish between two distinct unconscious states

During my postdoctoral training with Drs. Emery Brown and Patrick Purdon at MIT, I studied human electroencephalography (EEG) during propofol anesthesia. I found that broadband power is coupled to the phase of the slow oscillation, likely a macro-scale consequence of up- and down-states in neural spiking at the cortical surface. The spatial pattern of this broadband-rhythmic interaction in EEG distinguishes between light and deep anesthesia (Stephen et al., 2020). In other words, there are at least two distinct brain states during what would traditionally be lumped together as “unconsciousness”. It is unknown what causes this dissociation between slow waves alone and slow waves with broadband coupling, because the up- and down-state hypothesis predicts that broadband coupling should occur whenever slow waves do. This mystery is observed at the largest spatial scale (scalp recordings), but its generators likely involve micro-scale cortical biophysics and meso-scale network effects. In addition, it involves interactions between rhythmic and broadband contributors to the power spectrum, which current models of field potentials are not designed to capture. In my current research, I am designing broadband-rhythmic models (based on Filtered Point Processes) that can span spatial scales, to study this kind of mystery.
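As a rough sketch of how this kind of broadband-rhythmic (phase-amplitude) coupling can be quantified, the function below bins high-frequency amplitude by slow-oscillation phase using Hilbert transforms. This is a generic textbook-style estimator, not the analysis from the paper, and the frequency bands are illustrative placeholders.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def phase_amplitude_coupling(eeg, fs, slow_band=(0.1, 1.0), high_band=(30.0, 45.0), n_bins=18):
    """Return phase-bin centers and the mean high-band amplitude in each
    slow-oscillation phase bin (a simple phase-amplitude coupling profile)."""
    def bandpass(x, low, high):
        b, a = butter(2, [low / (fs / 2), high / (fs / 2)], btype="band")
        return filtfilt(b, a, x)

    slow_phase = np.angle(hilbert(bandpass(eeg, *slow_band)))  # phase of the slow oscillation
    high_amp = np.abs(hilbert(bandpass(eeg, *high_band)))      # amplitude envelope of the high band

    edges = np.linspace(-np.pi, np.pi, n_bins + 1)
    which_bin = np.clip(np.digitize(slow_phase, edges) - 1, 0, n_bins - 1)
    mean_amp = np.array([high_amp[which_bin == k].mean() for k in range(n_bins)])
    return edges[:-1] + np.diff(edges) / 2, mean_amp
```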

I have also continued with anesthesia research since starting my faculty position. I provided statistical expertise in a study of human electrocorticography (ECoG) during propofol-induced unconsciousness (Weiner et al., 2023), which supported the idea that anesthetic cortical alpha rhythms (8-12Hz) are synchronized by a common subcortical (thalamic) source. More recently, I co-mentored Dr. John Tauber with Drs. Emery Brown and Earl Miller, developing statistical models of up- and down-states in macaques under propofol anesthesia and investigating whether sensory signals are processed during up-states in the same way they are during wakefulness (Tauber et al., 2024).

  1. Weiner, V. S., Zhou, D. W., Kahali, P., Stephen, E. P., Peterfreund, R. A., Aglio, L. S., Szabo, M. D., Eskandar, E. N., Salazar-Gomez, A. F., Sampson, A. L., Cash, S. S., Brown, E. N., Purdon, P. L. (2023). Propofol disrupts alpha dynamics in distinct thalamocortical networks underlying sensory and cognitive function during loss of consciousness. PNAS, 120(11), e2207831120. doi:10.1073/pnas.2207831120
  2. Tauber, J. M., Brincat, S. L., Stephen, E. P., Donoghue, J. A., Kozachkov, L., Brown, E. N., & Miller, E. K. (2024). Propofol-mediated Unconsciousness Disrupts Progression of Sensory Signals through the Cortical Hierarchy. Journal of Cognitive Neuroscience, 1-20. doi:10.1162/jocn_a_02081
  3. Stephen, E. P., Hotan, G. C., Pierce, E. T., Harrell, P. G., Walsh, J. L., Brown, E. N., Purdon, P. L. (2020). Broadband slow-wave modulation in posterior and anterior cortex tracks distinct states of propofol-induced unconsciousness. Scientific Reports, 10(1), 1-11. doi:10.1038/s41598-020-68756-y
  4. Guidera, J. A., Taylor, N. E., Lee, J. T., Vlasov, K. Y., Pei, J., Stephen, E. P., Mayo, J. P., Brown, E. N., Solt, K. (2017). Sevoflurane induces coherent slow-delta oscillations in rats. Frontiers in Neural Circuits, 11, 36. doi:10.3389/fncir.2017.00036
Speech
Figure 5 from Stephen et al., 2023

Latent state analysis of neural responses during speech perception reveals a possible mechanism of temporal binding

During my postdoctoral research with Dr. Edward Chang at UCSF, I applied statistical machine learning to human electrocorticography (ECoG) recordings during speech perception (Stephen et al., 2023). Research into the cortical basis of auditory speech perception has successfully modeled how high gamma (70-150Hz) responses in ECoG over the superior temporal gyrus (STG) encode phonetic features such as consonants and vowels. Furthermore, some STG populations respond to sentence onsets and acoustic onset edges (“peak rate” events), which represent sentence-level and syllable-level timing, respectively. Using a modern statistical machine learning approach, I demonstrated that these timing representations are largely shared across neural populations in a low-dimensional latent state. These spatially distributed timing signals could serve to provide temporal context for, and possibly bind across time, the concurrent processing of individual phonetic features, to compose higher-order phonological (e.g. word-level) representations. In other words, the observed geometry of the dynamics could be used by brain networks to bind short speech features such as phonemes into longer sequences such as words and phrases.
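As a schematic of this style of analysis, the sketch below reduces multi-electrode high-gamma responses to a low-dimensional latent trajectory with plain PCA; the actual paper uses a more specialized latent-state estimator, and the data here are simulated placeholders.

```python
import numpy as np
from sklearn.decomposition import PCA

# Illustrative stand-in for high-gamma (70-150 Hz) power: trials x time x electrodes,
# time-aligned to sentence onset.
rng = np.random.default_rng(0)
n_trials, n_time, n_electrodes = 200, 150, 64
high_gamma = rng.standard_normal((n_trials, n_time, n_electrodes))

# Trial-average, then look for a low-dimensional state shared across electrodes.
mean_response = high_gamma.mean(axis=0)                # (time, electrodes)
pca = PCA(n_components=3)
latent_trajectory = pca.fit_transform(mean_response)   # (time, 3) shared latent dimensions

# Fraction of across-electrode variance captured by the few shared components.
print(pca.explained_variance_ratio_)
```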

The same statistical machine learning approach is proving highly relevant in our current research, where we are using it to quantify dynamic neurovascular coupling in fast fMRI in humans (with Dr. Tyler Perrachione) and in joint optical and local field potential recordings in mice (with Drs. Anna Devor and David Boas). Preliminary work has led to a manuscript currently in revision at Nature Neuroscience (Rauscher et al., in revision). We have also provided statistical support for more general speech research at BU (Tomassi et al., 2025).

  1. Stephen, E. P., Li, Y., Metzger, S., Oganian, Y., and Chang, E. F. (2023). Latent neural dynamics encode temporal context in speech. Hearing Research, 437, 108838. doi:10.1016/j.heares.2023.108838
  2. Rauscher, B. C., Fomin-Thunemann, N., Kura, S., Doran, P. R., Perez, P. D., … , Stephen, E. P., Thunemann, M., Boas, D., & Devor, A. (in revision, Nature Neuroscience). Neurovascular Impulse Response Function (IRF) during spontaneous activity differentially reflects intrinsic neuromodulation across cortical regions. bioRxiv, 2024-09. doi:10.1101/2024.09.14.612514
  3. Tomassi, N. E., Turshvili, D., Williams, A., Walsh, B., Stephen, E. P., Stepp, C. E. (2025). Investigating cognitive load and autonomic arousal during voice production and vocal auditory-motor adaptation. Journal of Speech, Language, and Hearing Research. doi:10.1044/2024_jslhr-24-00601