3945 Publications

Showing 2011-2020 of 3945 results
01/01/12 | Learning hierarchical similarity metrics.
Verma N, Mahajan D, Sellamanickam S, Nair V
IEEE Conference on Computer Vision and Pattern Recognition. 2012.
06/01/05 | Learning in realistic networks of spiking neurons and spike-driven plastic synapses.
Mongillo G, Curti E, Romani S, Amit DJ
European Journal of Neuroscience. 2005 Jun;21(11):3143-60. doi: 10.1111/j.1460-9568.2005.04087.x

We have used simulations to study the learning dynamics of an autonomous, biologically realistic recurrent network of spiking neurons connected via plastic synapses, subjected to a stream of stimulus-delay trials, in which one of a set of stimuli is presented followed by a delay. Long-term plasticity, produced by the neural activity experienced during training, structures the network and endows it with active (working) memory, i.e. enhanced, selective delay activity for every stimulus in the training set. Short-term plasticity produces transient synaptic depression. Each stimulus used in training excites a selective subset of neurons in the network, and stimuli can share neurons (overlapping stimuli). Long-term plasticity dynamics are driven by presynaptic spikes and coincident postsynaptic depolarization; stability is ensured by a refresh mechanism. In the absence of stimulation, the acquired synaptic structure persists for a very long time. The dependence of long-term plasticity dynamics on the characteristics of the stimulus response (average emission rates, time course and synchronization), and on the single-cell emission statistics (coefficient of variation) is studied. The study clarifies the specific roles of short-term synaptic depression, NMDA receptors, stimulus representation overlaps, selective stimulation of inhibition, and spike asynchrony during stimulation. Patterns of network spiking activity before, during and after training reproduce most of the in vivo physiological observations in the literature.
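
The spike-driven plasticity rule summarized above can be caricatured in a few lines: a synaptic variable jumps up when a presynaptic spike coincides with strong postsynaptic depolarization, jumps down when it coincides with weak depolarization, and a refresh term drifts each synapse toward one of two stable values. The sketch below is a loose illustration of that scheme in Python/NumPy; the thresholds, step sizes, and drift rate are made-up values, not parameters from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

N = 100                              # number of neurons (illustrative)
J = rng.uniform(0.0, 1.0, (N, N))    # internal synaptic variables in [0, 1]
theta_up, theta_dn = 0.7, 0.3        # depolarization thresholds (assumed)
step, drift = 0.05, 0.01             # jump size and refresh drift (assumed)

def plasticity_step(J, pre_spikes, post_depol):
    """One update: potentiate synapses whose presynaptic cell spiked while the
    postsynaptic cell was strongly depolarized, depress those whose target was
    weakly depolarized, then let the refresh drift push each synapse toward
    its nearer stable value (0 or 1)."""
    pre = pre_spikes.astype(float)[None, :]              # columns = presynaptic cells
    up = (post_depol > theta_up).astype(float)[:, None]  # rows = postsynaptic cells
    dn = (post_depol < theta_dn).astype(float)[:, None]
    J = J + step * pre * up - step * pre * dn            # spike-driven jumps
    J = J + drift * np.where(J > 0.5, 1.0, -1.0)         # refresh toward 0 or 1
    return np.clip(J, 0.0, 1.0)

# a single illustrative update with random spikes and depolarizations
J = plasticity_step(J, rng.random(N) < 0.05, rng.random(N))
print(J.mean())
```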

View Publication Page
09/14/22 | Learning of probabilistic punishment as a model of anxiety produces changes in action but not punisher encoding in the dmPFC and VTA.
Jacobs DS, Allen MC, Park J, Moghaddam B
eLife. 2022 Sep 14;11. doi: 10.7554/eLife.78912

Previously, we developed a novel model for anxiety during motivated behavior by training rats to perform a task where actions executed to obtain a reward were probabilistically punished and observed that after learning, neuronal activity in the ventral tegmental area (VTA) and dorsomedial prefrontal cortex (dmPFC) represent the relationship between action and punishment risk (Park & Moghaddam, 2017). Here we used male and female rats to expand on the previous work by focusing on neural changes in the dmPFC and VTA that were associated with the learning of probabilistic punishment, and anxiolytic treatment with diazepam after learning. We find that adaptive neural responses of dmPFC and VTA during the learning of anxiogenic contingencies are independent from the punisher experience and occur primarily during the peri-action and reward period. Our results also identify peri-action ramping of VTA neural calcium activity, and VTA-dmPFC correlated activity, as potential markers for the anxiolytic properties of diazepam.

View Publication Page
08/07/23 | Learning produces a hippocampal cognitive map in the form of an orthogonalized state machine.
Weinan Sun, Johan Winnubst, Maanasa Natrajan, Chongxi Lai, Koichiro Kajikawa, Michalis Michaelos, Rachel Gattoni, Carsen Stringer, Daniel Flickinger, James E. Fitzgerald, Nelson Spruston
bioRxiv. 2023 Aug 07. doi: 10.1101/2023.08.03.551900

Cognitive maps confer animals with flexible intelligence by representing spatial, temporal, and abstract relationships that can be used to shape thought, planning, and behavior. Cognitive maps have been observed in the hippocampus, but their algorithmic form and the processes by which they are learned remain obscure. Here, we employed large-scale, longitudinal two-photon calcium imaging to record activity from thousands of neurons in the CA1 region of the hippocampus while mice learned to efficiently collect rewards from two subtly different versions of linear tracks in virtual reality. The results provide a detailed view of the formation of a cognitive map in the hippocampus. Throughout learning, both the animal behavior and hippocampal neural activity progressed through multiple intermediate stages, gradually revealing improved task understanding and behavioral efficiency. The learning process led to progressive decorrelations in initially similar hippocampal neural activity within and across tracks, ultimately resulting in orthogonalized representations resembling a state machine capturing the inherent structure of the task. We show that a Hidden Markov Model (HMM) and a biologically plausible recurrent neural network trained using Hebbian learning can both capture core aspects of the learning dynamics and the orthogonalized representational structure in neural activity. In contrast, we show that gradient-based learning of sequence models such as Long Short-Term Memory networks (LSTMs) and Transformers does not naturally produce such representations. We further demonstrate that mice exhibited adaptive behavior in novel task settings, with neural activity reflecting flexible deployment of the state machine. These findings shed light on the mathematical form of cognitive maps, the learning rules that sculpt them, and the algorithms that promote adaptive behavior in animals. The work thus charts a course toward a deeper understanding of biological intelligence and offers insights toward developing more robust learning algorithms in artificial intelligence.
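
A simple way to see what "orthogonalized representations" means in practice is to compare population vectors across the two track contexts before and after learning. The toy sketch below (Python/NumPy, entirely synthetic data) computes a cosine-similarity index that is near 1 when the two contexts drive overlapping activity and near 0 once the codes have been pulled apart; it illustrates the metric idea, not the paper's analysis pipeline.

```python
import numpy as np

def orthogonality_index(pop_a, pop_b):
    """Cosine similarity between mean population vectors for two conditions;
    values near 0 indicate orthogonalized (decorrelated) representations."""
    va, vb = pop_a.mean(axis=0), pop_b.mean(axis=0)
    return float(va @ vb / (np.linalg.norm(va) * np.linalg.norm(vb) + 1e-12))

rng = np.random.default_rng(1)
n_trials, n_cells = 50, 200

# 'early' sessions: both tracks drive largely the same cells
shared = rng.random(n_cells)
early_a = shared + 0.1 * rng.random((n_trials, n_cells))
early_b = shared + 0.1 * rng.random((n_trials, n_cells))

# 'late' sessions: the two tracks recruit disjoint subsets of cells
late_a = np.zeros((n_trials, n_cells))
late_a[:, :100] = rng.random((n_trials, 100))
late_b = np.zeros((n_trials, n_cells))
late_b[:, 100:] = rng.random((n_trials, 100))

print(orthogonality_index(early_a, early_b))   # close to 1: overlapping code
print(orthogonality_index(late_a, late_b))     # close to 0: orthogonalized code
```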

View Publication Page
11/01/16 | Learning recurrent representations for hierarchical behavior modeling.
Eyjolfsdottir E, Branson K, Yue Y, Perona P
arXiv. 2016 Nov 1;arXiv:1611.00094.

We propose a framework for detecting action patterns from motion sequences and modeling the sensory-motor relationship of animals, using a generative recurrent neural network. The network has a discriminative part (classifying actions) and a generative part (predicting motion), whose recurrent cells are laterally connected, allowing higher levels of the network to represent high level phenomena. We test our framework on two types of data, fruit fly behavior and online handwriting. Our results show that 1) taking advantage of unlabeled sequences, by predicting future motion, significantly improves action detection performance when training labels are scarce, 2) the network learns to represent high level phenomena such as writer identity and fly gender, without supervision, and 3) simulated motion trajectories, generated by treating motion prediction as input to the network, look realistic and may be used to qualitatively evaluate whether the model has learnt generative control rules.
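
The architecture sketched in the abstract combines a discriminative head (action classification) with a generative head (next-motion prediction) on top of shared recurrent state. The toy NumPy class below illustrates that division of labor with a single recurrent layer; the layer sizes, the single-layer recurrence, and the absence of any training code are simplifications relative to the laterally connected, multi-level network the authors describe.

```python
import numpy as np

class MotionRNN:
    """Toy recurrent model: a shared hidden state feeds two heads, one
    discriminative (action logits) and one generative (next-motion prediction).
    Dimensions and the single recurrent layer are illustrative only."""
    def __init__(self, n_in=4, n_hid=32, n_actions=3, seed=0):
        rng = np.random.default_rng(seed)
        self.W_in = rng.normal(0, 0.1, (n_hid, n_in))
        self.W_rec = rng.normal(0, 0.1, (n_hid, n_hid))
        self.W_act = rng.normal(0, 0.1, (n_actions, n_hid))   # discriminative head
        self.W_gen = rng.normal(0, 0.1, (n_in, n_hid))        # generative head

    def step(self, x, h):
        h = np.tanh(self.W_in @ x + self.W_rec @ h)
        action_logits = self.W_act @ h    # classify the current action
        next_motion = self.W_gen @ h      # predict the next motion frame
        return h, action_logits, next_motion

rnn = MotionRNN()
h = np.zeros(32)
for x in np.random.default_rng(1).normal(size=(10, 4)):   # a 10-frame motion sequence
    h, logits, pred = rnn.step(x, h)
```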

View Publication Page
08/21/24 | Learning reshapes the hippocampal representation hierarchy.
Chiossi HS, Nardin M, Tkačik G, Csicsvari JL
bioRxiv. 2024 Aug 21. doi: 10.1101/2024.08.21.608911

A key feature of biological and artificial neural networks is the progressive refinement of their neural representations with experience. In neuroscience, this fact has inspired several recent studies in sensory and motor systems. However, less is known about how higher associational cortical areas, such as the hippocampus, modify representations throughout the learning of complex tasks. Here we focus on associative learning, a process that requires forming a connection between the representations of different variables for appropriate behavioral response. We trained rats in a spatial-context associative task and monitored hippocampal neural activity throughout the entire learning period, over several days. This allowed us to assess changes in the representations of context, movement direction and position, as well as their relationship to behavior. We identified a hierarchical representational structure in the encoding of these three task variables that was preserved throughout learning. Nevertheless, we also observed changes at the lower levels of the hierarchy where context was encoded. These changes were local in neural activity space and restricted to physical positions where context identification was necessary for correct decision making, supporting better context decoding and contextual code compression. Our results demonstrate that the hippocampal code not only accommodates hierarchical relationships between different variables but also enables efficient learning through minimal changes in neural activity space. Beyond the hippocampus, our work reveals a representation learning mechanism that might be implemented in other biological and artificial networks performing similar tasks.
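
One core measurement in a study like this is how well context identity can be decoded from population activity at different stages of learning. The sketch below fabricates two sessions in which only the strength of the context signal differs and scores them with a leave-one-out nearest-centroid decoder; the synthetic data and the particular decoder are stand-ins for illustration, not the analyses used in the paper.

```python
import numpy as np

rng = np.random.default_rng(2)
n_cells, n_trials = 120, 60   # population size and trials per context (made up)

def make_session(context_gain):
    """Synthetic population vectors for two contexts; only the strength of the
    context signal (context_gain) changes between 'early' and 'late' sessions."""
    signal = rng.normal(size=n_cells)
    signal /= np.linalg.norm(signal)
    X, y = [], []
    for c in (0, 1):
        X.append(context_gain * (2 * c - 1) * signal
                 + rng.normal(size=(n_trials, n_cells)))
        y.append(np.full(n_trials, c))
    return np.vstack(X), np.concatenate(y)

def decode_accuracy(X, y):
    """Leave-one-out nearest-centroid decoding of context identity."""
    correct = 0
    for i in range(len(y)):
        mask = np.arange(len(y)) != i
        c0 = X[mask & (y == 0)].mean(axis=0)
        c1 = X[mask & (y == 1)].mean(axis=0)
        pred = int(np.linalg.norm(X[i] - c1) < np.linalg.norm(X[i] - c0))
        correct += pred == y[i]
    return correct / len(y)

print(decode_accuracy(*make_session(0.25)))   # weak context signal: near chance
print(decode_accuracy(*make_session(2.0)))    # strong context signal: high accuracy
```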

View Publication Page
12/17/11 | Learning to Agglomerate Superpixel Hierarchies.
Viren Jain, Srinivas C. Turaga, K Briggman, Moritz N. Helmstaedter, Winfried Denk, H. S. Seung
Advances in Neural Information Processing Systems 24 (NIPS 2011). 12/2011;24.

An agglomerative clustering algorithm merges the most similar pair of clusters at every iteration. The function that evaluates similarity is traditionally hand-designed, but there has been recent interest in supervised or semisupervised settings in which ground-truth clustered data is available for training. Here we show how to train a similarity function by regarding it as the action-value function of a reinforcement learning problem. We apply this general method to segment images by clustering superpixels, an application that we call Learning to Agglomerate Superpixel Hierarchies (LASH). When applied to a challenging dataset of brain images from serial electron microscopy, LASH dramatically improved segmentation accuracy when clustering supervoxels generated by state-of-the-art boundary detection algorithms. The naive strategy of directly training only supervoxel similarities and applying single linkage clustering produced less improvement.
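
The agglomeration loop itself is simple: repeatedly score all adjacent cluster pairs with a similarity function and merge the best-scoring pair until no score clears a threshold. In LASH that similarity function is learned as a reinforcement-learning action-value function; the sketch below uses a hand-written stand-in scorer purely to show the surrounding greedy-merge machinery.

```python
import numpy as np

def agglomerate(features, adjacency, similarity_fn, threshold=0.5):
    """Greedy agglomeration: merge the most similar adjacent pair of clusters
    until no pair scores above `threshold`. The similarity function is passed
    in; in LASH it would be a trained action-value function."""
    clusters = {i: [i] for i in range(len(features))}
    feats = {i: features[i].astype(float) for i in clusters}
    edges = set(map(tuple, map(sorted, adjacency)))
    while True:
        scored = [(similarity_fn(feats[a], feats[b]), a, b) for a, b in edges]
        if not scored:
            break
        s, a, b = max(scored)
        if s < threshold:
            break
        # merge b into a: pool members, average features, rewire edges
        clusters[a] += clusters.pop(b)
        feats[a] = (feats[a] + feats.pop(b)) / 2
        edges = {tuple(sorted((a if x == b else x, a if y == b else y)))
                 for x, y in edges if {x, y} != {a, b}}
        edges = {e for e in edges if e[0] != e[1]}
    return list(clusters.values())

def toy_similarity(u, v):
    # hand-written stand-in for a learned scorer
    return float(np.exp(-np.linalg.norm(u - v)))

segments = agglomerate(np.array([[0.0], [0.1], [5.0], [5.2]]),
                       [(0, 1), (1, 2), (2, 3)], toy_similarity)
print(segments)   # e.g. [[0, 1], [2, 3]]
```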

View Publication Page
01/01/11 | Learning to agglomerate superpixel hierarchies.
Jain V, Turaga S, Briggman K, Helmstaedter MN, Denk W, Seung S
Neural Information Processing Systems. 2011;24:648-56

An agglomerative clustering algorithm merges the most similar pair of clusters at every iteration. The function that evaluates similarity is traditionally hand-designed, but there has been recent interest in supervised or semisupervised settings in which ground-truth clustered data is available for training. Here we show how to train a similarity function by regarding it as the action-value function of a reinforcement learning problem. We apply this general method to segment images by clustering superpixels, an application that we call Learning to Agglomerate Superpixel Hierarchies (LASH). When applied to a challenging dataset of brain images from serial electron microscopy, LASH dramatically improved segmentation accuracy when clustering supervoxels generated by state-of-the-art boundary detection algorithms. The naive strategy of directly training only supervoxel similarities and applying single linkage clustering produced less improvement.

View Publication Page
Darshan Lab
04/05/22 | Learning to represent continuous variables in heterogeneous neural networks.
Ran Darshan, Alexander Rivkind
Cell Reports. 2022 Apr 05;39(1):110612. doi: 10.1016/j.celrep.2022.110612

Manifold attractors are a key framework for understanding how continuous variables, such as position or head direction, are encoded in the brain. In this framework, the variable is represented along a continuum of persistent neuronal states which forms a manifold attractor. Neural networks with symmetric synaptic connectivity that can implement manifold attractors have become the dominant model in this framework. In addition to a symmetric connectome, these networks imply homogeneity of individual-neuron tuning curves and symmetry of the representational space; these features are largely inconsistent with neurobiological data. Here, we developed a theory for computations based on manifold attractors in trained neural networks and show how these manifolds can cope with diverse neuronal responses, imperfections in the geometry of the manifold and a high level of synaptic heterogeneity. In such heterogeneous trained networks, a continuous representational space emerges from a small set of stimuli used for training. Furthermore, we find that the network response to external inputs depends on the geometry of the representation and on the level of synaptic heterogeneity in an analytically tractable and interpretable way. Finally, we show that an overly complex geometry of the neuronal representation impairs the attractiveness of the manifold and may lead to its destabilization. Our framework reveals that continuous features can be represented in the recurrent dynamics of heterogeneous networks without assuming unrealistic symmetry. It suggests that the representational space of putative manifold attractors in the brain dictates the dynamics in their vicinity.
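
As a point of reference for the symmetric networks the paper generalizes, the sketch below builds the simplest possible manifold attractor: a linear recurrent network whose weight matrix has a two-dimensional eigenspace at eigenvalue 1, spanned by cosine and sine tuning across the population. Any state in that plane persists, so the phase of the activity pattern stores a continuous variable such as heading. All parameters are illustrative; the trained networks studied in the paper are nonlinear and heterogeneous rather than hand-constructed like this.

```python
import numpy as np

N = 200
angles = np.linspace(0, 2 * np.pi, N, endpoint=False)
u1 = np.cos(angles) * np.sqrt(2 / N)       # orthonormal pair spanning the manifold
u2 = np.sin(angles) * np.sqrt(2 / N)
W = np.outer(u1, u1) + np.outer(u2, u2)    # eigenvalue 1 on the plane, 0 elsewhere

rng = np.random.default_rng(3)
r = np.cos(angles - 1.3) + 0.5 * rng.normal(size=N)   # noisy bump near 1.3 rad

dt, tau = 0.1, 1.0
for _ in range(1000):                      # off-manifold components decay; the phase persists
    r += dt / tau * (-r + W @ r)

decoded = np.angle(r @ np.exp(1j * angles))   # population-vector decode of bump position
print(round(float(decoded), 2))               # ~1.3: the continuous variable is retained
```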

View Publication Page
12/03/12 | Learning visual motion in recurrent neural networks.
Pachitariu M, Sahani M
Neural Information Processing Systems (NIPS 2012). 12/2012.

We present a dynamic nonlinear generative model for visual motion based on a latent representation of binary-gated Gaussian variables. Trained on sequences of images, the model learns to represent different movement directions in different variables. We use an online approximate-inference scheme that can be mapped to the dynamics of networks of neurons. Probed with drifting grating stimuli and moving bars of light, neurons in the model show patterns of responses analogous to those of direction-selective simple cells in primary visual cortex. Most model neurons also show speed tuning and respond equally well to a range of motion directions and speeds aligned to the constraint line of their respective preferred speed. We show how these computations are enabled by a specific pattern of recurrent connections learned by the model.
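
The latent code in the model is built from binary-gated Gaussian variables: each latent is the product of a binary gate and a Gaussian amplitude, and an image is generated from the gated code. The snippet below samples such a code and projects it linearly to "pixels"; the dimensions, priors, and random decoder are invented for illustration, and the temporal dynamics and learned online inference of the actual model are omitted.

```python
import numpy as np

rng = np.random.default_rng(4)

n_latents, n_pixels, n_frames = 16, 64, 5
W = rng.normal(0, 1.0, (n_pixels, n_latents))    # assumed generative weights

def sample_frame(p_on=0.2, noise=0.05):
    """Draw one frame from a binary-gated Gaussian latent code."""
    gate = rng.random(n_latents) < p_on          # binary gates: which features are 'on'
    amplitude = rng.normal(0, 1.0, n_latents)    # Gaussian magnitudes
    z = gate * amplitude                         # binary-gated Gaussian latent
    return W @ z + noise * rng.normal(size=n_pixels)

frames = np.stack([sample_frame() for _ in range(n_frames)])
print(frames.shape)   # (5, 64): a short generated image sequence
```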

View Publication Page