Fitzgerald Lab / Publications

29 Publications

Showing 1-10 of 29 results
02/12/25 | Learning produces an orthogonalized state machine in the hippocampus.
Sun W, Winnubst J, Natrajan M, Lai C, Kajikawa K, Michaelos M, Gattoni R, Stringer C, Flickinger D, Fitzgerald JE, Spruston N
Nature. 2025 Feb 12;640. doi: 10.1038/s41586-024-08548-w

Cognitive maps confer animals with flexible intelligence by representing spatial, temporal and abstract relationships that can be used to shape thought, planning and behaviour. Cognitive maps have been observed in the hippocampus, but their algorithmic form and learning mechanisms remain obscure. Here we used large-scale, longitudinal two-photon calcium imaging to record activity from thousands of neurons in the CA1 region of the hippocampus while mice learned to efficiently collect rewards from two subtly different linear tracks in virtual reality. Throughout learning, both animal behaviour and hippocampal neural activity progressed through multiple stages, gradually revealing improved task representation that mirrored improved behavioural efficiency. The learning process involved progressive decorrelations in initially similar hippocampal neural activity within and across tracks, ultimately resulting in orthogonalized representations resembling a state machine capturing the inherent structure of the task. This decorrelation process was driven by individual neurons acquiring task-state-specific responses (that is, 'state cells'). Although various standard artificial neural networks did not naturally capture these dynamics, the clone-structured causal graph, a hidden Markov model variant, uniquely reproduced both the final orthogonalized states and the learning trajectory seen in animals. The observed cellular and population dynamics constrain the mechanisms underlying cognitive map formation in the hippocampus, pointing to hidden state inference as a fundamental computational principle, with implications for both biological and artificial intelligence.
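A minimal sketch of what "orthogonalized" population representations mean in practice, assuming nothing beyond the abstract above: the activity below is synthetic and the cosine-similarity measure is generic, not the paper's analysis code. As neurons acquire track-specific ("state cell") responses, the similarity between the two tracks' population vectors falls from near 1 toward 0.

```python
# Illustrative only: synthetic stand-ins for trial-averaged CA1 population
# activity on two similar tracks, early versus late in learning.
import numpy as np

rng = np.random.default_rng(0)
n_neurons = 1000

def cosine_similarity(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Early in learning: the two tracks evoke nearly identical population activity.
shared = rng.poisson(2.0, n_neurons).astype(float)
early_track1 = shared + 0.1 * rng.standard_normal(n_neurons)
early_track2 = shared + 0.1 * rng.standard_normal(n_neurons)

# Late in learning: many neurons respond in only one track ("state cells"),
# so the two population vectors decorrelate toward orthogonality.
mask = rng.random(n_neurons) < 0.5
late_track1 = np.where(mask, shared, 0.0)
late_track2 = np.where(mask, 0.0, shared)

print("early cross-track similarity:", cosine_similarity(early_track1, early_track2))  # ~1
print("late cross-track similarity: ", cosine_similarity(late_track1, late_track2))    # ~0
```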

Romani Lab, Fitzgerald Lab
11/01/24 | From the fly connectome to exact ring attractor dynamics
Biswas T, Stanoev A, Romani S, Fitzgerald JE
bioRxiv. 2024 Nov 01. doi: 10.1101/2024.11.01.621596

A cognitive compass enabling spatial navigation requires neural representation of heading direction (HD), yet the neural circuit architecture enabling this representation remains unclear. While various network models have been proposed to explain HD systems, these models rely on simplified circuit architectures that are incompatible with empirical observations from connectomes. Here we construct a novel network model for the fruit fly HD system that satisfies both connectome-derived architectural constraints and the functional requirement of continuous heading representation. We characterize an ensemble of continuous attractor networks where compass neurons providing local mutual excitation are coupled to inhibitory neurons. We discover a new mechanism where continuous heading representation emerges from combining symmetric and anti-symmetric activity patterns. Our analysis reveals three distinct realizations of these networks that all match observed compass neuron activity but differ in their predictions for inhibitory neuron activation patterns. Further, we found that deviations from these realizations can be compensated by cell-type-specific rescaling of synaptic weights, which could be potentially achieved through neuromodulation. This framework can be extended to incorporate the complete fly central complex connectome and could reveal principles of neural circuits representing other continuous quantities, such as spatial location, across insects and vertebrates.
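For orientation, one familiar way a continuous heading can be read out from a ring of compass neurons is the textbook population-vector decode shown below. This is a generic illustration of how an even (symmetric) and an odd (anti-symmetric) component of the activity pattern jointly determine a heading; it is not the specific mechanism identified in the paper.

```latex
\hat{\theta} \;=\; \operatorname{atan2}\!\left( \sum_i r_i \sin\phi_i ,\ \sum_i r_i \cos\phi_i \right)
```

Here $r_i$ is the activity of compass neuron $i$ and $\phi_i$ its preferred heading; the cosine and sine sums are the components of the population activity that are even and odd, respectively, about the reference direction $\theta = 0$.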

Fitzgerald Lab
04/25/24 | Optimization in Visual Motion Estimation.
Clark DA, Fitzgerald JE
Annu Rev Vis Sci. 2024 Apr 25. doi: 10.1146/annurev-vision-101623-025432

Sighted animals use visual signals to discern directional motion in their environment. Motion is not directly detected by visual neurons, and it must instead be computed from light signals that vary over space and time. This makes visual motion estimation a near universal neural computation, and decades of research have revealed much about the algorithms and mechanisms that generate directional signals. The idea that sensory systems are optimized for performance in natural environments has deeply impacted this research. In this article, we review the many ways that optimization has been used to quantitatively model visual motion estimation and reveal its underlying principles. We emphasize that no single optimization theory has dominated the literature. Instead, researchers have adeptly incorporated different computational demands and biological constraints that are pertinent to the specific brain system and animal model under study. The successes and failures of the resulting optimization models have thereby provided insights into how computational demands and biological constraints together shape neural computation.
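As a concrete point of reference for the class of algorithms reviewed here, below is a minimal sketch of the classic correlation-based motion detector (a Hassenstein-Reichardt-style correlator). This is textbook material included only for illustration; the filter, parameter values, and stimulus are arbitrary choices, not code from the review.

```python
# Two spatially offset inputs are temporally filtered and cross-multiplied;
# subtracting the mirror-symmetric arm makes the mean output direction selective.
import numpy as np

def lowpass(signal, tau, dt=1.0):
    """First-order low-pass filter used as a simple delay stand-in."""
    out = np.zeros_like(signal)
    for t in range(1, len(signal)):
        out[t] = out[t - 1] + (dt / tau) * (signal[t] - out[t - 1])
    return out

def reichardt_correlator(a, b, tau=10.0):
    """Opponent correlation of delayed signal a with undelayed signal b."""
    return lowpass(a, tau) * b - a * lowpass(b, tau)

# A drifting sinusoid sampled at two nearby points (b lags a in phase).
t = np.arange(500)
a = np.sin(2 * np.pi * 0.01 * t)
b = np.sin(2 * np.pi * 0.01 * t - 0.5)

# Swapping the inputs corresponds to motion in the opposite direction,
# and the time-averaged output flips sign.
print("one direction:     ", reichardt_correlator(a, b).mean())
print("opposite direction:", reichardt_correlator(b, a).mean())
```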

Turner Lab, Fitzgerald Lab, Funke Lab
12/12/23 | Model-Based Inference of Synaptic Plasticity Rules
Yash Mehta, Danil Tyulmankov, Adithya E. Rajagopalan, Glenn C. Turner, James E. Fitzgerald, Jan Funke
bioRxiv. 2023 Dec 12. doi: 10.1101/2023.12.11.571103

Understanding learning through synaptic plasticity rules in the brain is a grand challenge for neuroscience. Here we introduce a novel computational framework for inferring plasticity rules from experimental data on neural activity trajectories and behavioral learning dynamics. Our methodology parameterizes the plasticity function to provide theoretical interpretability and facilitate gradient-based optimization. For instance, we use Taylor series expansions or multilayer perceptrons to approximate plasticity rules, and we adjust their parameters via gradient descent over entire trajectories to closely match observed neural activity and behavioral data. Notably, our approach can learn intricate rules that induce long nonlinear time-dependencies, such as those incorporating postsynaptic activity and current synaptic weights. We validate our method through simulations, accurately recovering established rules, like Oja’s, as well as more complex hypothetical rules incorporating reward-modulated terms. We assess the resilience of our technique to noise and, as a tangible application, apply it to behavioral data from Drosophila during a probabilistic reward-learning experiment. Remarkably, we identify an active forgetting component of reward learning in flies that enhances the predictive accuracy of previous models. Overall, our modeling framework provides an exciting new avenue to elucidate the computational principles governing synaptic plasticity and learning in the brain.
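A brief sketch of the kind of plasticity-rule parameterization described above: the weight update is written as a truncated Taylor series in presynaptic activity x, postsynaptic activity y, and the current weight w, and the coefficient array would then be fit by gradient descent over entire activity or behavior trajectories. The code below only shows that one coefficient setting reproduces Oja's rule; the variable names and settings are illustrative, not taken from the paper's code.

```python
import numpy as np

def taylor_plasticity(x, y, w, theta):
    """dw = sum_{a,b,c} theta[a, b, c] * x**a * y**b * w**c, with a, b, c in 0..2."""
    dw = 0.0
    for a in range(3):
        for b in range(3):
            for c in range(3):
                dw += theta[a, b, c] * x**a * y**b * w**c
    return dw

eta = 0.1
theta_oja = np.zeros((3, 3, 3))
theta_oja[1, 1, 0] = eta    # + eta * x * y
theta_oja[0, 2, 1] = -eta   # - eta * y**2 * w

x, w = 0.8, 0.3
y = w * x  # linear postsynaptic response for this single synapse
assert np.isclose(taylor_plasticity(x, y, w, theta_oja), eta * (x * y - y**2 * w))
print("Oja-rule update recovered from the Taylor parameterization:",
      taylor_plasticity(x, y, w, theta_oja))
```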

Fitzgerald Lab
10/31/23 | Tensor formalism for predicting synaptic connections with ensemble modeling or optimization.
Tirthabir Biswas, Tianzhi Lambus Li, James E. Fitzgerald
arXiv. 2023 Oct 31. doi: 10.48550/arXiv.2310.20309

Theoretical neuroscientists often try to understand how the structure of a neural network relates to its function by focusing on structural features that would either follow from optimization or occur consistently across possible implementations. Both optimization theories and ensemble modeling approaches have repeatedly proven their worth, and it would simplify theory building considerably if predictions from both theory types could be derived and tested simultaneously. Here we show how tensor formalism from theoretical physics can be used to unify and solve many optimization and ensemble modeling approaches to predicting synaptic connectivity from neuronal responses. We specifically focus on analyzing the solution space of synaptic weights that allow a threshold-linear neural network to respond in a prescribed way to a limited number of input conditions. For optimization purposes, we compute the synaptic weight vector that minimizes an arbitrary quadratic loss function. For ensemble modeling, we identify synaptic weight features that occur consistently across all solutions bounded by an arbitrary quadratic function. We derive a common solution to this suite of nonlinear problems by showing how each of them reduces to an equivalent linear problem that can be solved analytically. Although identifying the equivalent linear problem is nontrivial, our tensor formalism provides an elegant geometrical perspective that allows us to solve the problem numerically. The final algorithm is applicable to a wide range of interesting neuroscience problems, and the associated geometric insights may carry over to other scientific problems that require constrained optimization.
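In schematic form (the notation here is mine and is meant only for orientation, not as the paper's exact statement), the optimization version of the problem for a single model neuron reads

```latex
\min_{\mathbf{w}} \ \mathbf{w}^{\top} Q\, \mathbf{w}
\quad \text{subject to} \quad
\big[\mathbf{w}^{\top}\mathbf{u}_k\big]_+ = r_k , \qquad k = 1,\dots,K,
```

where $\mathbf{w}$ is the neuron's incoming synaptic weight vector, $\mathbf{u}_k$ the drive in input condition $k$, $r_k$ the prescribed threshold-linear response, $[\cdot]_+ = \max(\cdot, 0)$, and $Q$ a positive-definite matrix defining the quadratic loss. The ensemble-modeling counterpart asks which weight features hold for every $\mathbf{w}$ that satisfies the same response constraints within a quadratic bound $\mathbf{w}^{\top} Q\, \mathbf{w} \le c$.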

09/26/23 | Reward expectations direct learning and drive operant matching in Drosophila
Adithya E. Rajagopalan, Ran Darshan, Karen L. Hibbard, James E. Fitzgerald, Glenn C. Turner
Proceedings of the National Academy of Sciences of the U.S.A. 2023 Sep 26;120(39):e2221415120. doi: 10.1073/pnas.2221415120

Foraging animals must use decision-making strategies that dynamically adapt to the changing availability of rewards in the environment. A wide diversity of animals do this by distributing their choices in proportion to the rewards received from each option, Herrnstein’s operant matching law. Theoretical work suggests an elegant mechanistic explanation for this ubiquitous behavior, as operant matching follows automatically from simple synaptic plasticity rules acting within behaviorally relevant neural circuits. However, no past work has mapped operant matching onto plasticity mechanisms in the brain, leaving the biological relevance of the theory unclear. Here we discovered operant matching in Drosophila and showed that it requires synaptic plasticity that acts in the mushroom body and incorporates the expectation of reward. We began by developing a novel behavioral paradigm to measure choices from individual flies as they learn to associate odor cues with probabilistic rewards. We then built a model of the fly mushroom body to explain each fly’s sequential choice behavior using a family of biologically-realistic synaptic plasticity rules. As predicted by past theoretical work, we found that synaptic plasticity rules could explain fly matching behavior by incorporating stimulus expectations, reward expectations, or both. However, by optogenetically bypassing the representation of reward expectation, we abolished matching behavior and showed that the plasticity rule must specifically incorporate reward expectations. Altogether, these results reveal the first synaptic level mechanisms of operant matching and provide compelling evidence for the role of reward expectation signals in the fly brain.
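For reference, Herrnstein's operant matching law in its standard two-alternative form states that the fraction of choices allocated to an option equals the fraction of rewards earned from it:

```latex
\frac{C_1}{C_1 + C_2} \;=\; \frac{R_1}{R_1 + R_2},
```

where $C_i$ is the number of choices of option $i$ and $R_i$ the number of rewards obtained from it.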

Spruston Lab, Fitzgerald Lab
08/01/23 | Organizing memories for generalization in complementary learning systems.
Weinan Sun, Madhu Advani, Nelson Spruston, Andrew Saxe, James E. Fitzgerald
Nature Neuroscience. 2023 Aug 01;26(8):1438-1448. doi: 10.1038/s41593-023-01382-9

Our ability to remember the past is essential for guiding our future behavior. Psychological and neurobiological features of declarative memories are known to transform over time in a process known as systems consolidation. While many theories have sought to explain the time-varying role of hippocampal and neocortical brain areas, the computational principles that govern these transformations remain unclear. Here we propose a theory of systems consolidation in which hippocampal-cortical interactions serve to optimize generalizations that guide future adaptive behavior. We use mathematical analysis of neural network models to characterize fundamental performance tradeoffs in systems consolidation, revealing that memory components should be organized according to their predictability. The theory shows that multiple interacting memory systems can outperform just one, normatively unifying diverse experimental observations and making novel experimental predictions. Our results suggest that the psychological taxonomy and neurobiological organization of declarative memories reflect a system optimized for behaving well in an uncertain future.

12/22/22 | A brainstem integrator for self-localization and positional homeostasis
Yang E, Zwart MF, Rubinov M, James B, Wei Z, Narayan S, Vladimirov N, Mensh BD, Fitzgerald JE, Ahrens MB
Cell. 2022 Dec 22;185(26):5011-5027.e20. doi: 10.1101/2021.11.26.468907

To accurately track self-location, animals need to integrate their movements through space. In amniotes, representations of self-location have been found in regions such as the hippocampus. It is unknown whether more ancient brain regions contain such representations and by which pathways they may drive locomotion. Fish displaced by water currents must prevent uncontrolled drift to potentially dangerous areas. We found that larval zebrafish track such movements and can later swim back to their earlier location. Whole-brain functional imaging revealed the circuit enabling this process of positional homeostasis. Position-encoding brainstem neurons integrate optic flow, then bias future swimming to correct for past displacements by modulating inferior olive and cerebellar activity. Manipulation of position-encoding or olivary neurons abolished positional homeostasis or evoked behavior as if animals had experienced positional shifts. These results reveal a multiregional hindbrain circuit in vertebrates for optic flow integration, memory of self-location, and its neural pathway to behavior.
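A toy sketch of the computation described above: optic flow is integrated into a persistent position signal, which then biases swimming so the animal drifts back toward its starting location after the displacing current ends. The dynamics, gains, and time constants below are invented for illustration; this is not the paper's model or data.

```python
import numpy as np

dt = 0.1
time = np.arange(0, 60, dt)
position = 0.0      # true displacement from the starting location
integrator = 0.0    # activity of the position-encoding integrator neurons
leak = 0.01         # slow leak so the stored estimate persists for tens of seconds
gain = 0.5          # strength of the corrective swim bias

trace = []
for t in time:
    current = 1.0 if t < 20 else 0.0           # water current displaces the fish early on
    optic_flow = current - gain * integrator   # net self-motion, including corrective swimming
    integrator += dt * (optic_flow - leak * integrator)
    position += dt * optic_flow
    trace.append(position)

print("peak displacement: ", max(trace))
print("final displacement:", trace[-1])  # partially relaxes back toward the start
```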

Fitzgerald Lab
12/09/22 | Exact learning dynamics of deep linear networks with prior knowledge
Lukas Braun, Clémentine Dominé, James Fitzgerald, Andrew Saxe
Neural Information Processing Systems. 2022

Learning in deep neural networks is known to depend critically on the knowledge embedded in the initial network weights. However, few theoretical results have precisely linked prior knowledge to learning dynamics. Here we derive exact solutions to the dynamics of learning with rich prior knowledge in deep linear networks by generalising Fukumizu's matrix Riccati solution (Fukumizu, 1998). We obtain explicit expressions for the evolving network function, hidden representational similarity, and neural tangent kernel over training for a broad class of initialisations and tasks. The expressions reveal a class of task-independent initialisations that radically alter learning dynamics from slow non-linear dynamics to fast exponential trajectories while converging to a global optimum with identical representational similarity, dissociating learning trajectories from the structure of initial internal representations. We characterise how network weights dynamically align with task structure, rigorously justifying why previous solutions successfully described learning from small initial weights without incorporating their fine-scale structure. Finally, we discuss the implications of these findings for continual learning, reversal learning and learning of structured knowledge. Taken together, our results provide a mathematical toolkit for understanding the impact of prior knowledge on deep learning.
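As background for the learning-dynamics contrast the abstract builds on, here is an illustrative simulation (not the paper's exact solutions or code; the task and hyperparameters are arbitrary) showing that a two-layer linear network trained from small random weights approaches a target input-output map through slow, stage-like dynamics, whereas an equivalent shallow linear model converges exponentially fast.

```python
import numpy as np

rng = np.random.default_rng(1)
d_in = d_hidden = d_out = 5
W_target = np.diag([3.0, 2.0, 1.0, 0.5, 0.25])  # target linear input-output map
Sigma_x = np.eye(d_in)                           # whitened inputs

lr, steps = 0.01, 8000
W1 = 1e-3 * rng.standard_normal((d_hidden, d_in))   # small initial weights
W2 = 1e-3 * rng.standard_normal((d_out, d_hidden))
W_shallow = np.zeros((d_out, d_in))

for step in range(steps):
    E_deep = W_target - W2 @ W1                  # error in the composed map
    W1 += lr * W2.T @ E_deep @ Sigma_x           # gradient descent on squared error
    W2 += lr * E_deep @ Sigma_x @ W1.T
    W_shallow += lr * (W_target - W_shallow) @ Sigma_x
    if step % 1000 == 0:
        print(f"step {step:5d}  deep error {np.linalg.norm(W_target - W2 @ W1):.3f}"
              f"  shallow error {np.linalg.norm(W_target - W_shallow):.3f}")
```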

Fitzgerald Lab
06/29/22 | A geometric framework to predict structure from function in neural networks
Biswas T, Fitzgerald JE
Physical Review Research. 2022 Jun 29;4(2):023255. doi: 10.1103/PhysRevResearch.4.023255

Neural computation in biological and artificial networks relies on nonlinear synaptic integration. The structural connectivity matrix of synaptic weights between neurons is a critical determinant of overall network function. However, quantitative links between neural network structure and function are complex and subtle. For example, many networks can give rise to similar functional responses, and the same network can function differently depending on context. Whether certain patterns of synaptic connectivity are required to generate specific network-level computations is largely unknown. Here we introduce a geometric framework for identifying synaptic connections required by steady-state responses in recurrent networks of rectified-linear neurons. Assuming that the number of specified response patterns does not exceed the number of input synapses, we analytically calculate all feedforward and recurrent connectivity matrices that can generate the specified responses from the network inputs. We then use this analytical characterization to rigorously analyze the solution space geometry and derive certainty conditions guaranteeing a non-zero synapse between neurons. Numerical simulations of feedforward and recurrent networks verify our analytical results. Our theoretical framework could be applied to neural activity data to make anatomical predictions that follow generally from the model architecture. It thus provides novel opportunities for discerning what model features are required to accurately relate neural network structure and function.
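A minimal sketch in the spirit of the framework described above (the setup and notation are mine, not the paper's code): for a single rectified-linear neuron, prescribing nonzero responses to a limited set of input conditions turns into linear equality constraints on the incoming weight vector, so one particular solution follows from a pseudoinverse and the full solution space is that vector plus the null space of the constraint matrix.

```python
import numpy as np

rng = np.random.default_rng(2)
n_inputs, n_conditions = 8, 3                        # fewer conditions than synapses
U = rng.standard_normal((n_conditions, n_inputs))    # inputs to the neuron per condition
r = np.array([1.0, 0.5, 2.0])                        # prescribed (supra-threshold) responses

w_min_norm = np.linalg.pinv(U) @ r                   # minimum-norm weights with U w = r
null_basis = np.linalg.svd(U)[2][n_conditions:]      # directions that leave all responses unchanged

# Every solution has the form w_min_norm + null_basis.T @ c for some coefficients c.
c = rng.standard_normal(n_inputs - n_conditions)
w_other = w_min_norm + null_basis.T @ c

assert np.allclose(U @ w_min_norm, r)
assert np.allclose(U @ w_other, r)
print("minimum-norm solution:", np.round(w_min_norm, 3))
```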
