Currently I’m working on several different projects, focusing on methods for novel two-photon microscopy (including volumetric imaging and validation tools), statistical modeling of neural spiking behavior, theoretical analysis of recurrent network properties, and statistical models for correlated/structured signals (e.g. new dynamic filtering methods and fMRI analysis). Previous work includes analysis of hyperspectral imagery, computational neural networks, and stochastic filtering.

Neural Anatomy and Optical Microscopy (NAOMi) Simulation

Functional fluorescence microscopy has become a staple for measuring neural activity during behavior in rodents, birds, and fish (and more recently primates!). Recent advances have produced both novel optical set-ups (e.g., vTwINS for volumetric imaging) and more sophisticated algorithms for demixing neural activity from the recorded fluorescence videos; however, methods to validate these improvements are lacking. NAOMi draws on the extensive literature on neural anatomy and knowledge of cellular calcium dynamics to provide simulations of fluorescence data with full ground truth of the underlying neural activity for validation purposes.
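NAOMi itself builds detailed anatomical volumes and optical models; the sketch below is only a toy illustration of the general ground-truth idea (the `simulate_fluorescence` helper and all parameter values are hypothetical, not part of NAOMi): random spikes drive an exponentially decaying calcium indicator, which is observed with additive noise while the true spikes are retained for validating downstream analyses.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_fluorescence(n_neurons=5, n_frames=500, rate=0.02,
                          decay=0.9, amplitude=1.0, noise_sd=0.1):
    """Toy ground-truth simulator: random spikes drive an exponentially
    decaying calcium indicator, observed under additive noise."""
    spikes = rng.random((n_neurons, n_frames)) < rate       # ground-truth spikes
    calcium = np.zeros((n_neurons, n_frames))
    for t in range(1, n_frames):
        # first-order indicator dynamics: decay plus spike-driven influx
        calcium[:, t] = decay * calcium[:, t - 1] + amplitude * spikes[:, t]
    fluorescence = calcium + noise_sd * rng.standard_normal(calcium.shape)
    return spikes, calcium, fluorescence

spikes, calcium, fluo = simulate_fluorescence()
```

Because the simulator keeps the spikes it generated, any demixing or deconvolution algorithm run on `fluo` can be scored against exact ground truth, which is the validation role NAOMi plays at far greater physical fidelity.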

Detecting and correcting false transients in calcium image analysis

Accurate scientific results in circuits neuroscience using multi-photon calcium imaging depend heavily on accurately segmenting the video into the neural activity of each individual neuron. Current automated time-trace extraction, regardless of whether sources were discovered manually or automatically, can erroneously attribute activity, resulting in scientifically significant errors such as spurious correlations or false representations (e.g., false place fields in the hippocampus). We devised methods to diagnose and correct such contaminating activity. To detect these errors we combined visualization of metrics via frequency-and-level-of-contamination (FaLCon) plots with a user interface that lets practitioners easily explore their data and identify falsely attributed activity. Furthermore, we derived a new time-trace estimation algorithm robust to contamination from other sources. Our estimation procedure, Sparse Emulation of Unknown Dictionary Objects (SEUDO), directly models structured contaminants to the benefit of scientific discovery.
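SEUDO's actual model and contaminant dictionary are specified in our work; purely to illustrate the underlying idea — estimating a cell's weight while a sparse set of contaminant shapes can "explain away" overlapping fluorescence — here is a minimal ISTA-style sketch (the function name, penalty choices, and dictionary are illustrative assumptions, not the SEUDO implementation):

```python
import numpy as np

def robust_trace(frame, profile, contam_dict, lam=0.5, n_iter=200, step=None):
    """Estimate one cell's activity in one frame while explaining away
    contamination with a sparse dictionary (ISTA sketch)."""
    # design matrix: the cell's spatial profile plus contaminant atoms
    A = np.column_stack([profile] + [c for c in contam_dict.T])
    # no penalty on the cell coefficient, l1 penalty on contaminant coefficients
    pen = np.zeros(A.shape[1])
    pen[1:] = lam
    if step is None:
        step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1 / Lipschitz constant
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - frame)
        x = x - step * grad
        x = np.sign(x) * np.maximum(np.abs(x) - step * pen, 0.0)  # soft-threshold
    return x[0], x[1:]          # cell weight, contaminant weights
```

The key design choice mirrored here is that the cell's own coefficient is left unpenalized while the contaminant coefficients are sparsity-penalized, so structured contamination is modeled explicitly rather than being absorbed into the cell's time trace.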

Two-Photon Microscopy

Two-Photon Microscopy (TPM) is a vital tool for recording large numbers of neurons over long time-scales. TPM recordings have allowed researchers to analyze entire networks of neurons in a number of cortical areas during awake behavior. Traditional TPM raster-scans a single slice of neural tissue at relatively high frame-rates. To record additional neurons, volumetric imaging has been explored as well; volumetric scanning, however, requires imaging many planes sequentially, lowering the overall scan rate. In this area I am collaborating with Dr. David Tank’s lab on the volumetric Two-photon Imaging of Neurons using Stereoscopy (vTwINS) method, which images entire volumes with no reduction in frame rate. Specifically, vTwINS records stereoscopic projections of the volume in which each neuron is imaged twice, and the distance between a neuron’s two images encodes its depth. Our novel greedy demixing method, SCISM, can then decode these projections and return the neural locations and activity patterns. In addition to volumetric imaging, I am also working on alternative pre-processing techniques to denoise TPM data and to robustly filter out structured fluorescence contamination, yielding more accurate estimates of neural activity.
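The core geometric idea of vTwINS — depth encoded in the separation of an image pair — can be illustrated with a toy 1-D example. This is not SCISM; the peak-finding helper and the linear calibration constants below are assumptions for illustration only:

```python
import numpy as np

def pair_separation(trace):
    """Separation (in samples) between the two brightest local maxima of
    a 1-D profile containing a stereoscopic image pair (toy sketch)."""
    t = np.asarray(trace, dtype=float)
    # strict local maxima of the profile
    is_peak = (t[1:-1] > t[:-2]) & (t[1:-1] > t[2:])
    peaks = np.where(is_peak)[0] + 1
    top2 = peaks[np.argsort(t[peaks])[-2:]]       # two brightest peaks
    return abs(int(top2[0]) - int(top2[1]))

def depth_from_separation(sep, sep_per_micron=0.5, offset=0.0):
    """Hypothetical linear calibration mapping pair separation to depth."""
    return (sep - offset) / sep_per_micron

# Toy profile: one neuron imaged as two Gaussian spots 20 pixels apart
x = np.arange(200)
trace = np.exp(-(x - 90) ** 2 / 8.0) + np.exp(-(x - 110) ** 2 / 8.0)
sep = pair_separation(trace)
```

In the real method the projection geometry, the V-shaped point-spread function, and the demixing of many overlapping neurons make the problem far harder, which is what SCISM's greedy demixing addresses.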

Statistical Modeling of Neural Spiking

Interpreting neural spike trains is a critical step in decoding the relationship between neural activity and external stimuli. Basic probabilistic models of neural firing assume Poisson statistics. Recent work studying the over-dispersion of neural firing, however, has found that the statistics of neural spiking are often decidedly non-Poisson and may vary between neurons. Subsequent work has sought more flexible models that can account for this variability. In this area I am working on creating simple yet flexible models of neural firing for which a small number of parameters can be learned directly from neural recordings (e.g., see code here).
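As one concrete example of a simple over-dispersed alternative to Poisson firing, a negative binomial count model can be fit by the method of moments. This is an illustrative sketch, not necessarily the model or fitting procedure used in my own work:

```python
import numpy as np

def dispersion_index(counts):
    """Fano factor (variance/mean): ~1 for Poisson spiking,
    > 1 when firing is over-dispersed."""
    counts = np.asarray(counts, dtype=float)
    return counts.var(ddof=1) / counts.mean()

def fit_negbin_moments(counts):
    """Method-of-moments negative-binomial fit: with sample mean m and
    variance v > m, p = m / v and r = m * p / (1 - p)."""
    counts = np.asarray(counts, dtype=float)
    m, v = counts.mean(), counts.var(ddof=1)
    p = m / v
    return m * p / (1 - p), p

# Simulated over-dispersed spike counts: NB(r=5, p=0.5) has mean 5, variance 10
rng = np.random.default_rng(0)
counts = rng.negative_binomial(5, 0.5, size=20000)
```

The appeal of models in this family is exactly what the paragraph above describes: a small number of interpretable parameters (here `r` and `p`) that can be estimated directly from recorded counts, with the Poisson model recovered in a limiting case.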

Matrix-normal Models of fMRI

Functional Magnetic Resonance Imaging (fMRI) is a widely used modality for whole-brain imaging in humans. To better understand the cognitive processing taking place in the brain, many methods have been independently developed to infer correlations between activity in different brain areas and across subjects and tasks. To further improve such inference, Michael Shvartsman, Mikio Aoi, and I have shown that the most prominent of these models can be placed in a single matrix-normal framework. This observation allows us to connect these models and devise faster, more flexible, and more accurate inference techniques for fMRI.
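To see why the matrix-normal view helps: a matrix-normal distribution couples a row covariance (e.g., over space) and a column covariance (e.g., over time) through a Kronecker product, which never needs to be formed explicitly. A minimal generic sketch (not our actual fMRI inference pipeline) of sampling and likelihood evaluation using only the two small factors:

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_matrix_normal(M, row_cov, col_cov):
    """Draw X ~ MN(M, row_cov, col_cov); vec(X) has covariance
    col_cov ⊗ row_cov, but the big Kronecker matrix is never built."""
    A = np.linalg.cholesky(row_cov)
    B = np.linalg.cholesky(col_cov)
    Z = rng.standard_normal(M.shape)
    return M + A @ Z @ B.T

def matrix_normal_logpdf(X, M, row_cov, col_cov):
    """Log-density using the factored form: the quadratic term is
    tr(C^{-1} D' R^{-1} D) and log|C ⊗ R| = n log|C| + p log|R|."""
    n, p = X.shape
    D = X - M
    quad = np.trace(np.linalg.inv(col_cov) @ D.T @ np.linalg.inv(row_cov) @ D)
    _, ld_r = np.linalg.slogdet(row_cov)
    _, ld_c = np.linalg.slogdet(col_cov)
    return -0.5 * (n * p * np.log(2 * np.pi) + p * ld_r + n * ld_c + quad)
```

For an n-by-p data matrix, working with the two factors costs on the order of n^3 + p^3 operations instead of (np)^3 for the full covariance; exploiting this kind of structure is what enables the faster inference mentioned above.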

Analysis of Recurrent Neural Networks

Interconnected networks of simple nodes have been shown to have computational abilities far beyond the sum of their individual parts. Understanding how network connectivity facilitates this large increase in computational capacity has become increasingly important, in particular for relating better-understood theoretical network models to biological neural systems. In this area I’ve worked both on mathematical models of networks that compute the solutions to various optimization problems and on deriving theoretical bounds on the short-term memory (STM) of linear neural networks. In particular, I’ve worked on theory for single-input networks with sparse inputs and for multiple-input networks with either sparse or low-rank inputs.
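For intuition on STM in linear networks: with state dynamics x_t = W x_{t-1} + z s_t, the current state is a linear encoding of the entire input history, and how well the history can be read back out is governed by the conditioning of that encoding matrix. A small generic sketch (not the specific constructions or bounds from my papers); when inputs are sparse, compressed-sensing-style arguments extend recovery well beyond this plain least-squares regime:

```python
import numpy as np

rng = np.random.default_rng(3)

def input_history_matrix(W, z, T):
    """Columns [W^{T-1} z, ..., W z, z]: the linear map taking the last
    T inputs s_1..s_T to the state x_T of x_t = W x_{t-1} + z s_t."""
    cols, v = [], z.astype(float)
    for _ in range(T):
        cols.append(v.copy())
        v = W @ v
    return np.stack(cols[::-1], axis=1)

n, T = 60, 8
W = rng.standard_normal((n, n))
W *= 0.95 / np.max(np.abs(np.linalg.eigvals(W)))   # scale to spectral radius 0.95
z = rng.standard_normal(n)                          # input feedback vector
Phi = input_history_matrix(W, z, T)

s = rng.standard_normal(T)                          # an input history
x = Phi @ s                                         # network state after T steps
s_hat = np.linalg.lstsq(Phi, x, rcond=None)[0]      # read the history back out
```

Here T is well below n, so the encoding matrix has full column rank and recovery is exact; as T grows past n the matrix becomes ill-conditioned and then rank-deficient, which is precisely where structural assumptions like sparse or low-rank inputs become the deciding factor for STM.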

Sparsity-aware Stochastic Filtering

While the majority of work on inferring and learning sparse signals and their representations has focused on static signals (e.g., block processing of video), many applications with non-trivial temporal dynamics must be considered in a causal setting. I work to extend the ideas of sparse signal estimation to the realm of dynamic state tracking. The foundational formulation of the Kalman filter does not trivially extend to regimes with sparse signal and noise models, due to the loss of Gaussian structure in the state statistics as well as the impracticality of assuming linear dynamics or retaining full covariance matrices; other methods must therefore be developed. I am currently working on a series of algorithms based on probabilistic modeling that yield fast updates for sequential sparse state estimation. As an extension, similar methods can be applied to spatially correlated signals, resulting in more general, efficient multi-dimensional stochastic filtering techniques for correlated sparse signals.
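One simple way to combine sparsity with causal tracking is to augment each per-frame sparse estimation problem with a quadratic penalty pulling the estimate toward the dynamics prediction from the previous step. The sketch below is a generic illustration of that idea with hypothetical objective weights, not one of my published algorithms verbatim; each step is solved with ISTA:

```python
import numpy as np

def soft(x, t):
    """Elementwise soft-thresholding (proximal operator of the l1 norm)."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def dynamic_sparse_filter(ys, A, F, lam=0.1, kappa=0.5, n_iter=300):
    """Causal sparse tracking sketch. At each step solve
        min_x 0.5||y_t - A x||^2 + 0.5*kappa*||x - F x_{t-1}||^2 + lam*||x||_1
    by ISTA, using the previous estimate (through dynamics F) as a prior."""
    n = A.shape[1]
    x_prev = np.zeros(n)
    L = np.linalg.norm(A, 2) ** 2 + kappa     # Lipschitz constant of smooth part
    out = []
    for y in ys:
        pred = F @ x_prev                      # dynamics prediction
        x = x_prev.copy()
        for _ in range(n_iter):
            grad = A.T @ (A @ x - y) + kappa * (x - pred)
            x = soft(x - grad / L, lam / L)
        out.append(x)
        x_prev = x
    return np.array(out)
```

Unlike a Kalman filter, no covariance matrix is propagated: the prediction enters only through a quadratic penalty, keeping each update cheap while the l1 term preserves sparsity of the state estimate.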

Hyperspectral Imagery (HSI)

The sparse coding framework is particularly fitting for remote imaging using HSI. HSI takes many more spectral measurements than other imaging modalities (e.g., multispectral imaging (MSI), which typically takes ~8-12 spectral measurements), capturing data at 200-300+ wavelengths spanning the infrared to ultraviolet ranges. This level of spectral detail allows HSI to capture much richer information about the materials and features present in a scene. To discover the materials in a dataset, we can perform sparsity-based dictionary learning (code here). This unsupervised method extracts the spectra corresponding to different materials using only the basic assumption that few pure materials are present in any voxel. In addition to spectral demixing, I also use sparsity-based inference procedures and learned dictionaries for unmixing, classification, and other inverse problems that arise in the use of HSI data. In particular, I have focused on spectral super-resolution of multispectral measurements to hyperspectral-level resolutions.
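A bare-bones version of sparsity-based dictionary learning alternates between sparse-coding the pixel spectra against the current atoms and refitting the atoms by least squares. This is a toy sketch of the general recipe (real HSI pipelines add constraints such as nonnegativity and more careful dictionary updates):

```python
import numpy as np

def sparse_code(Y, D, lam=0.05, n_iter=100):
    """ISTA for  min_X 0.5||Y - D X||_F^2 + lam ||X||_1  (columnwise)."""
    L = np.linalg.norm(D, 2) ** 2
    X = np.zeros((D.shape[1], Y.shape[1]))
    for _ in range(n_iter):
        G = X - (D.T @ (D @ X - Y)) / L
        X = np.sign(G) * np.maximum(np.abs(G) - lam / L, 0.0)
    return X

def dictionary_learn(Y, n_atoms, lam=0.05, n_outer=25, seed=0):
    """Toy alternating minimization: sparse-code the pixel spectra,
    then refit the spectral atoms by least squares and renormalize."""
    rng = np.random.default_rng(seed)
    D = rng.standard_normal((Y.shape[0], n_atoms))
    D /= np.linalg.norm(D, axis=0)
    for _ in range(n_outer):
        X = sparse_code(Y, D, lam)
        D = Y @ np.linalg.pinv(X)                  # least-squares atom update
        D /= np.linalg.norm(D, axis=0) + 1e-12     # keep atoms unit-norm
    return D, sparse_code(Y, D, lam)
```

Each column of `Y` plays the role of one voxel's spectrum, each learned atom the role of one material's spectral signature, and the l1 penalty encodes the assumption that only a few pure materials mix in any one voxel.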