Determining Effects of IP & IE on Ensemble Activity & Assembly Structure for Flexible & Stable Memory
PhD Student in David J Freedman’s and Brent Doiron’s labs at the University of Chicago
Thesis Project: The significance of this work lies in its potential to unravel the mechanisms governing memory formation in the brain, with a focus on intrinsic excitability (IE) and inhibitory plasticity (IP). I am particularly interested in understanding how the brain integrates external information into complex representations and robustly employs those representations for contextual learning and memory. My project will first identify conditions for producing stable neural assemblies in simple recurrent excitatory-inhibitory (E-I) population (mean-field-like) rate networks with IE and IP. Second, it will use the full rate models (N neurons per model population) to simulate the effects of IP and IE on neural assembly and ensemble overlap across contexts and time; such overlap between neural ensembles can be considered a proxy for, or a mechanism of, the mutual association of memories. The analysis will involve modeling with differential equations, linear stability analysis from dynamical systems, and correlation analysis to identify correlated neural activity.

I will address the first aim by calculating the conditions for stable assembly formation given a set of circuit and cellular properties. I have made headway on this analysis by performing the equivalent calculations in the two-population E-I network with IP and IE in project VI above, and I will work systematically from 2 populations and 1 context up to 2m populations and m contexts (m = 2). I will address the second aim by simulating populations of N neurons across contexts and days and making predictions for ensemble and assembly overlap.

Our collaborators in the Cai lab have two datasets tracking neural activity across days in freely moving mice performing water-port reward tasks, and they have given me permission to analyze both; the data are already processed, with cells identified and tracked across days. This will allow me to study the activity of thousands of neurons (~700 per mouse, across 17 and 15 mice) in a task-updating, no-context experiment (dataset 1), and to perform a similar analysis on their 4-task, 20-day context-dependent experiment (dataset 2). I will take advantage of this large-scale recorded neural data to test the predictions of my model, using the same ensemble-identification pipeline, based on principal component analysis and independent component analysis, that the Cai lab used for dataset 1. I already have a working full model of 2 populations with N neurons across two contexts and need only expand the system to include more populations.

My project will thus produce a general characterization of the population-model circuit with IP and IE, which is novel. It will also determine how IP and IE collectively shape engram formation in a memory-updating paradigm and in a multi-context paradigm, which I will compare with experiment. This will give insight into how IP and IE also shape ensembles and assemblies in vivo, the latter of which cannot be observed directly in experiment. This work will advance our understanding of relevant theoretical neural circuit motifs and of how memories emerge.
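As a small illustration of the overlap measure in the second aim, the sketch below shows how ensemble overlap across two contexts could be quantified from binary ensemble-membership vectors. The toy data and function name are made up for the example; this is neither the Cai lab's pipeline nor my model code.

```python
import numpy as np

def ensemble_overlap(members_a, members_b):
    """Jaccard overlap between two binary ensemble-membership vectors
    (1 = the neuron participates in the ensemble, 0 = it does not)."""
    a = np.asarray(members_a, dtype=bool)
    b = np.asarray(members_b, dtype=bool)
    union = np.logical_or(a, b).sum()
    return np.logical_and(a, b).sum() / union if union else 0.0

# Toy example: 100 tracked neurons, one ensemble identified per context.
rng = np.random.default_rng(0)
context1 = rng.random(100) < 0.2   # ~20% of neurons belong to the context-1 ensemble
context2 = rng.random(100) < 0.2
print(f"ensemble overlap across contexts: {ensemble_overlap(context1, context2):.2f}")
```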
Stability of Recurrent Networks with Excitatory & Inhibitory Plasticity
[2023] PhD Student in David J Freedman’s and Brent Doiron’s labs at the University of Chicago
This project is an initial stage of analysis for the work I have proposed in this application. I modeled a two-population, fully recurrent excitatory-inhibitory (E-I) rate model with inhibitory and excitatory plasticity. Previous analyses had focused on feedforward-coupled models with excitatory-to-excitatory (EE) and inhibitory-to-excitatory (EI) plasticity; I considered the learning dynamics in a fully recurrent network. By assuming that the plasticity dynamics are slow relative to the firing-rate dynamics, we imposed conditions that the E-to-E and I-to-E weights must satisfy to produce stable firing rates. This allowed me to calculate the conditions for stability of the weight dynamics under these constraints, which gave an upper bound on the plasticity threshold in both models. I showed that the region of stability in the recurrent case can be constrained to a basin of attraction around a line of fixed points (a line attractor) in the weight dynamics, which is fully parameterized by the initial conditions, the recurrent E-to-I and I-to-I weights, and the stimulus input. I have submitted this result to the Cosyne 2024 conference and will begin writing the results into a paper.
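For concreteness, the equations below sketch a generic model of this type: fast E-I rate dynamics with slow plasticity on the E-to-E and I-to-E weights. The functional forms, threshold θ, and timescales are illustrative placeholders rather than the exact equations analyzed in this project.

```latex
% Generic two-population E-I rate model with slow EE and EI plasticity (illustrative forms)
\begin{align}
\tau_r \frac{dr_E}{dt} &= -r_E + \big[\, W_{EE}\, r_E - W_{EI}\, r_I + s_E \,\big]_+ \\
\tau_r \frac{dr_I}{dt} &= -r_I + \big[\, W_{IE}\, r_E - W_{II}\, r_I + s_I \,\big]_+ \\
\tau_w \frac{dW_{EE}}{dt} &= r_E\,(r_E - \theta), \qquad
\tau_w \frac{dW_{EI}}{dt} = r_I\,(r_E - \theta), \qquad \tau_w \gg \tau_r
\end{align}
```

The separation of timescales (τ_w ≫ τ_r) is what allows the rates to be treated as quasi-stationary when deriving stability conditions on the weights; in this sketch, the weight configurations for which r_E sits at the common threshold θ trace out a line of fixed points of the weight dynamics, analogous to the line attractor described above.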
Results: Paper in progress
Skills: mathematical analysis of unsupervised computational models, data analysis, and simulation
Ensemble remodeling supports memory-updating (modeled with a 1-layer network with RL)
[2022-2023] PhD Student in David J Freedman’s lab at the University of Chicago; collaborating with Denise Cai’s lab at Mount Sinai
This project was the beginning of our collaboration with the Cai lab. Experimental results from the Cai lab suggested that task updating remodels neural ensembles. In the experimental setup, a mouse runs on a circular track with 8 possible water-reward ports; on any given day, two of the ports are rewarded with water. The mice are water-deprived and enter the track every day for 20 minutes, and on the fifth day the water reward switches to two new ports. To model this and ask questions about neural activity during memory updating, I built a custom 1-layer artificial neural network (ANN) modeling a reinforcement learning agent on a virtual track. I studied the network activations as the model agent learns the water-port task across 4 “days” in the model; on “day” 5, the rewarded ports switch to two new locations. The model recaptures the behavior of the real animals: hit rate, correct-rejection rate, and discriminability index peak across sessions, drop upon task switching, and then begin to improve again. The network also exhibits over-representation of the reward ports. I found that the model relies on a stable spatial representation of co-active neurons in order to modulate its output activity into actions, and that a large proportion of co-active neurons “fade” in their co-activity upon task updating. We are writing up this result for a paper, in which we hope to discern whether these fading ensembles reflect the switching of reward representations.
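To illustrate this kind of setup, here is a minimal sketch of a single-layer reinforcement-learning agent on a toy circular track with 8 ports. The port locations, licking cost, learning rule, and parameters are made up for the example and are not the exact model used in the project.

```python
import numpy as np

rng = np.random.default_rng(1)
n_ports = 8
W = np.zeros(n_ports)                      # the single layer of weights: port -> lick logit
rewarded = {1, 5}                          # rewarded ports for "days" 1-4 (arbitrary choice)
lr, lick_cost = 0.1, 0.2                   # learning rate and small cost for unrewarded licks

def run_day(rewarded_ports, n_laps=200):
    """One simulated session: the agent circles the track, deciding whether to lick at each port."""
    hits, rewarded_visits = 0, 0
    for _ in range(n_laps):
        for port in range(n_ports):
            x = np.zeros(n_ports)
            x[port] = 1.0                                  # one-hot position input
            p_lick = 1.0 / (1.0 + np.exp(-W @ x))          # single-layer sigmoid policy
            lick = rng.random() < p_lick
            if port in rewarded_ports:
                rewarded_visits += 1
            reward = 0.0
            if lick:
                reward = 1.0 if port in rewarded_ports else -lick_cost
                hits += port in rewarded_ports
            # REINFORCE-style update: reward-weighted gradient of the log-policy
            grad_logp = (1.0 - p_lick) * x if lick else -p_lick * x
            W[:] += lr * reward * grad_logp
    return hits / rewarded_visits

for day in range(1, 9):
    if day == 5:
        rewarded = {2, 6}                  # "day" 5: the reward locations switch
    print(f"day {day}: hit rate = {run_day(rewarded):.2f}")
```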
Results: Paper in progress, pending re-analysis of experimental data; poster at SfN 2023
Artificial Neuronal Ensembles with Learned Context Dependent Gating
[2022] Rotating PhD student in David J Freedman’s lab at the University of Chicago
I began this project during a rotation in the Freedman lab prior to joining. Biological neural networks are capable of recruiting subsets of neurons to encode different memories. Artificial NNs (ANNs) have no such mechanism, so when trained on a set of tasks sequentially they suffer from catastrophic forgetting, in which performance on earlier tasks rapidly deteriorates. We expanded upon a prior continual-learning model called Context Dependent Gating (XDG), in which subnetworks of weights are allocated randomly. We call our method Learned Context Dependent Gating (LXDG). We introduced three new regularization terms that allow the subnetworks to be learned, by supporting the changing of old weights for new tasks, the keeping of old weights, and the maintenance of sparsity in the network. We found that the model learned to allocate subnetworks of neurons effectively relative to control models on a continual-learning benchmark, rotated MNIST, and that the method effectively mitigated catastrophic forgetting. This produced a paper accepted at ICLR (2023) and a poster at the Society for Neuroscience (SfN) 2022 meeting.
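As an illustration of this style of regularization (not the three terms used in LXDG itself), the sketch below shows two hypothetical penalties on a learned per-task gate vector: one keeping the gate sparse, and one discouraging overlap with subnetworks already claimed by earlier tasks. The names and the target sparsity are assumptions for the example.

```python
import torch

def gating_regularizers(gate_logits, prev_gates, target_sparsity=0.2):
    """Two illustrative penalties on a learned per-task gate vector (hypothetical names).

    gate_logits: learnable logits, one per hidden unit, for the current task's gate.
    prev_gates:  list of (detached) gate vectors from previously learned tasks.
    """
    gate = torch.sigmoid(gate_logits)                        # soft gate in [0, 1] per unit
    # 1) keep the gate sparse: only a fraction of units should be active for each task
    sparsity_loss = (gate.mean() - target_sparsity).pow(2)
    # 2) discourage overlap with subnetworks already allocated to earlier tasks
    if prev_gates:
        overlap_loss = torch.stack([(gate * g).mean() for g in prev_gates]).mean()
    else:
        overlap_loss = torch.tensor(0.0)
    return sparsity_loss, overlap_loss

# Hypothetical use inside a training step:
#   s_loss, o_loss = gating_regularizers(gate_logits, prev_gates)
#   loss = task_loss + lam_sparse * s_loss + lam_overlap * o_loss
```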
Result: Paper accepted by ICLR 2023 and Poster at SfN 2022
Skills: model architecture development and weight optimization.
Investigation of Divisive Normalization (DN) in Image Classification
[2019-2021] Research Assistant in Ken Miller’s lab, Columbia University Center for Theoretical Neuroscience
A year after college, having decided to switch into computational neuroscience, I joined Ken D. Miller’s lab. There, I investigated divisive normalization (DN), a phenomenon observed in cortex in which a neuron’s response is suppressed by the activity of nearby neurons (e.g., lateral inhibition). My work involved incorporating a custom DN technique into a 5-layer convolutional NN, aiming to understand its impact on performance and learning. The key findings of my research, which resulted in a paper at the International Conference on Learning Representations (ICLR), revealed that DN differs from traditional normalization methods, such as batch, group, and layer norm, by inducing competition among neurons. Through parameter tuning I achieved the best results, and I compared the DN model to models with combinations of normalization methods, which performed best on both the ImageNet and CIFAR-100 datasets. Furthermore, I explored changes in neural manifolds, observed increased sparsity influenced by DN, and identified shifts in the radial profile of Fourier power, potentially selecting for large-scale structures. This project honed my skills in customizing NN architectures, analyzing their activity, and presenting research findings effectively at conferences.
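To illustrate the general idea (not the exact DN formulation from the paper), here is a minimal PyTorch-style sketch of a divisive normalization layer in which each unit is divided by pooled activity from its spatial and feature-map neighborhood; the kernel size and the stabilizing constant are arbitrary placeholders.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DivisiveNorm(nn.Module):
    """Toy divisive normalization: each unit is divided by pooled squared activity
    from a local spatial window, averaged across feature maps."""
    def __init__(self, kernel_size=5, sigma=1.0):
        super().__init__()
        self.kernel_size = kernel_size
        self.sigma = sigma

    def forward(self, x):
        # local pooled energy over a spatial window, for each feature map
        pooled = F.avg_pool2d(x.pow(2), self.kernel_size, stride=1,
                              padding=self.kernel_size // 2)
        norm = pooled.mean(dim=1, keepdim=True)        # pool across channels as well
        return x / torch.sqrt(self.sigma + norm)       # divisive suppression

# e.g., inserted after a convolutional layer:
# y = DivisiveNorm()(torch.randn(4, 16, 32, 32))
```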
Result:
● Paper accepted for ICLR 2022; attended the Cold Spring Harbor NAISys conference 2020 (online)
● Developed skills in benchmarking convolutional neural networks and vision models
● NN metrics: multidimensional scaling (MDS), manifold capacity, clustering algorithms, dimensionality reduction (PCA, t-SNE, Isomap), representational dissimilarity matrices, adversarial attacks, etc.
Effect of Space Charge on Oscillation Frequency Distributions
[2018-2021] Accelerator Physics Master’s Student, advised by Steven Lund through Indiana University / USPAS
Upon graduating from Brown, I was offered a scholarship to continue at the US Particle Accelerator School (USPAS), which I had been attending part time since freshman year, and to enroll in its joint Master’s in Physics with Indiana University. It is a unique, intensive program offering accelerator physics courses every six months at sites around the country. For my thesis project, I focused on understanding the behavior of particle beams in high-intensity particle accelerators. I used a particle-in-cell code to study the effect of different distributions of space charge (i.e., the beam’s own charged particles) on the stability of the particle beam, and I coded (in Python) a custom fast Fourier transform routine to analyze the accumulated particle trajectories and extract their oscillation frequencies. Through these simulations, I found that when these space-charge forces are very strong, the particle beam becomes more stable rather than less stable, contrary to what was historically thought. This research is important for accelerator physics because it showed that high-magnitude electromagnetic forces in accelerators can actually help stabilize particle beams, preventing what is called “beam blowup”. It provides valuable insight into how we can control and optimize high-intensity particle accelerators, which are vital tools in physics.
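The sketch below illustrates the frequency-extraction step on toy harmonic trajectories (made-up tunes and parameters), not the particle-in-cell simulation or the custom routine itself: each trajectory is Fourier transformed, its peak frequency is taken as that particle's tune, and the tunes are histogrammed into a frequency distribution.

```python
import numpy as np

rng = np.random.default_rng(2)
n_particles, n_turns = 500, 1024                      # toy ensemble and record length

# Toy trajectories: each particle oscillates at a slightly different (depressed) tune.
tunes = 0.18 + 0.02 * rng.random(n_particles)         # oscillations per focusing period
turns = np.arange(n_turns)
phases = 2 * np.pi * rng.random((n_particles, 1))
trajectories = np.cos(2 * np.pi * tunes[:, None] * turns + phases)

# FFT each accumulated trajectory; the peak frequency estimates that particle's tune.
spectra = np.abs(np.fft.rfft(trajectories, axis=1))
freqs = np.fft.rfftfreq(n_turns, d=1.0)
est_tunes = freqs[np.argmax(spectra[:, 1:], axis=1) + 1]   # skip the DC bin

hist, edges = np.histogram(est_tunes, bins=30)        # distribution of oscillation frequencies
print(f"estimated tunes span {est_tunes.min():.3f} to {est_tunes.max():.3f}")
```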
Result:
Master’s Thesis: Effect of Self-Consistent Space Charge on the Distribution of Particle Oscillation Frequencies in Continuously Focused Beams
Gained skills working with particle-in-cell simulations and running simulation tests; custom algorithm development.
Comparative Resilience of Machine Learning Models in Inference
[2019] Summer Research Student: Los Alamos National Laboratory
In order to make the transition to working in computational neuroscience, I attended the Radiation Effects Summer School at Los Alamos National Laboratory. I loaded a neural network (NN) model (MobileNet) trained on ImageNet onto an Intel Movidius chip interfacing with an NVIDIA GPU and a CPU. We irradiated this configuration with 14.1 MeV neutrons to test the resilience of the model during the inference stage of image classification. My goal was to test whether different modifications to the NN architecture would make it more resilient to single-event neutron radiation damage, which would help identify architectures suited to such environments. Specifically, I tested whether dropout is a useful regularization method for mitigating these radiation effects; we found that it did not mitigate them to a statistically significant degree relative to the control.
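For illustration only, here is a software analogue of the single-event upsets probed in hardware: flipping random bits in a network's float32 weights and measuring the resulting perturbation. This is not the irradiation setup itself; the array sizes, flip counts, and names are made up for the sketch.

```python
import numpy as np

def flip_random_bits(weights, n_flips, rng):
    """Software analogue of single-event upsets: flip random bits in float32 weights."""
    hit = weights.astype(np.float32)             # copy of the weights to damage
    bits = hit.reshape(-1).view(np.uint32)       # reinterpret the floats as raw bits (shared memory)
    idx = rng.integers(0, bits.size, n_flips)    # which weights get struck
    which_bit = rng.integers(0, 32, n_flips)     # which bit within each struck weight flips
    bits[idx] ^= (1 << which_bit).astype(np.uint32)
    return hit

rng = np.random.default_rng(3)
W = rng.standard_normal((64, 64)).astype(np.float32)
W_hit = flip_random_bits(W, n_flips=10, rng=rng)
print("largest weight perturbation:", float(np.abs(W_hit - W).max()))
```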
Result:
Wrote up a project summary and presented the work at the end-of-summer symposium to scientists from across the division
Gained skills in optimizing neural network architectures and motifs that are robust to radiation effects