Lippincott-Schwartz Lab / Publications

4106 Publications

Showing 2101-2110 of 4106 results
10/09/19 | Learning from action: reconsidering movement signaling in midbrain dopamine neuron activity.
Coddington LT, Dudman JT
Neuron. 2019 Oct 09;104(1):63-77. doi: 10.1016/j.neuron.2019.08.036

Animals infer when and where a reward is available from experience with informative sensory stimuli and their own actions. In vertebrates, this is thought to depend upon the release of dopamine from midbrain dopaminergic neurons. Studies of the role of dopamine have focused almost exclusively on how these neurons encode informative sensory stimuli; however, many dopaminergic neurons are active just prior to movement initiation, even in the absence of sensory stimuli. How should current frameworks for understanding the role of dopamine incorporate these observations? To address this question, we review recent anatomical and functional evidence for action-related dopamine signaling. We conclude by proposing a framework in which dopaminergic neurons encode subjective signals of action initiation to solve an internal credit assignment problem.

View Publication Page
10/04/20 | Learning Guided Electron Microscopy with Active Acquisition
Mi L, Wang H, Meirovitch Y, Schalek R, Turaga SC, Lichtman JW, Samuel AD, Shavit N
Medical Image Computing and Computer Assisted Intervention – MICCAI 2020. 10/2020:

Single-beam scanning electron microscopes (SEM) are widely used to acquire massive datasets for biomedical study, material analysis, and fabrication inspection. Datasets are typically acquired with uniform acquisition: applying the electron beam with the same power and duration to all image pixels, even if there is great variety in the pixels' importance for eventual use. Many SEMs are now able to move the beam to any pixel in the field of view without delay, enabling them, in principle, to invest their time budget more effectively with non-uniform imaging.
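
The time-budget idea in the last sentence can be made concrete. The sketch below (hypothetical names and parameters, not code from the paper) distributes a fixed total dwell-time budget over pixels in proportion to an estimated importance map, with a minimum dwell per pixel:

```python
import numpy as np

def allocate_dwell_times(importance, total_budget, min_dwell=1.0):
    """Distribute a fixed acquisition-time budget across pixels.

    Every pixel receives at least `min_dwell`; the remaining budget is
    split in proportion to the (estimated) importance map.
    """
    importance = np.asarray(importance, dtype=float)
    base = min_dwell * importance.size
    if base > total_budget:
        raise ValueError("budget too small for the minimum dwell time")
    weights = importance / importance.sum()
    return min_dwell + (total_budget - base) * weights

# Uniform acquisition would spend 10.0 time units on every pixel;
# non-uniform acquisition concentrates the budget on the important one.
imp = np.array([[0.1, 0.1],
                [0.1, 0.7]])
dwell = allocate_dwell_times(imp, total_budget=40.0)
```

The same total dose is spent either way; only its spatial distribution changes, which is what lets a beam-steerable SEM invest its time budget more effectively.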

View Publication Page
01/01/12 | Learning hierarchical similarity metrics.
Verma N, Mahajan D, Sellamanickam S, Nair V
IEEE Conference on Computer Vision and Pattern Recognition. 2012:
06/01/05 | Learning in realistic networks of spiking neurons and spike-driven plastic synapses.
Mongillo G, Curti E, Romani S, Amit DJ
European Journal of Neuroscience. 2005 Jun;21(11):3143-60. doi: 10.1111/j.1460-9568.2005.04087.x

We have used simulations to study the learning dynamics of an autonomous, biologically realistic recurrent network of spiking neurons connected via plastic synapses, subjected to a stream of stimulus-delay trials, in which one of a set of stimuli is presented followed by a delay. Long-term plasticity, produced by the neural activity experienced during training, structures the network and endows it with active (working) memory, i.e. enhanced, selective delay activity for every stimulus in the training set. Short-term plasticity produces transient synaptic depression. Each stimulus used in training excites a selective subset of neurons in the network, and stimuli can share neurons (overlapping stimuli). Long-term plasticity dynamics are driven by presynaptic spikes and coincident postsynaptic depolarization; stability is ensured by a refresh mechanism. In the absence of stimulation, the acquired synaptic structure persists for a very long time. The dependence of long-term plasticity dynamics on the characteristics of the stimulus response (average emission rates, time course and synchronization), and on the single-cell emission statistics (coefficient of variation) is studied. The study clarifies the specific roles of short-term synaptic depression, NMDA receptors, stimulus representation overlaps, selective stimulation of inhibition, and spike asynchrony during stimulation. Patterns of network spiking activity before, during and after training reproduce most of the in vivo physiological observations in the literature.
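
The long-term plasticity rule described above, with updates driven by presynaptic spikes and coincident postsynaptic depolarization and stability ensured by a refresh mechanism, can be caricatured for a single synapse as follows (an illustrative sketch, not the paper's model; all constants are invented):

```python
def update_synapse(w, pre_spike, post_v, theta=0.6, dw=0.1, refresh=0.01):
    """One plasticity step for a single synaptic efficacy w in [0, 1].

    A presynaptic spike gates the update; its sign depends on whether the
    coincident postsynaptic depolarization `post_v` exceeds `theta`.
    Without a spike, a slow 'refresh' drifts w toward the nearest stable
    value (0 or 1), so acquired structure persists without stimulation.
    """
    if pre_spike:
        w += dw if post_v > theta else -dw
    else:
        w += refresh if w > 0.5 else -refresh
    return min(1.0, max(0.0, w))

w = 0.5
for _ in range(10):   # repeated pre/post coincident activity potentiates
    w = update_synapse(w, pre_spike=True, post_v=0.9)
```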

View Publication Page
09/14/22 | Learning of probabilistic punishment as a model of anxiety produces changes in action but not punisher encoding in the dmPFC and VTA.
Jacobs DS, Allen MC, Park J, Moghaddam B
eLife. 2022 Sep 14;11:. doi: 10.7554/eLife.78912

Previously, we developed a novel model for anxiety during motivated behavior by training rats to perform a task in which actions executed to obtain a reward were probabilistically punished. After learning, neuronal activity in the ventral tegmental area (VTA) and dorsomedial prefrontal cortex (dmPFC) represented the relationship between action and punishment risk (Park & Moghaddam, 2017). Here we used male and female rats to expand on the previous work by focusing on neural changes in the dmPFC and VTA that were associated with the learning of probabilistic punishment, and with anxiolytic treatment with diazepam after learning. We find that adaptive neural responses of the dmPFC and VTA during the learning of anxiogenic contingencies are independent of the punisher experience and occur primarily during the peri-action and reward period. Our results also identify peri-action ramping of VTA neural calcium activity, and VTA-dmPFC correlated activity, as potential markers of the anxiolytic properties of diazepam.

View Publication Page
02/12/25 | Learning produces an orthogonalized state machine in the hippocampus.
Sun W, Winnubst J, Natrajan M, Lai C, Kajikawa K, Michaelos M, Gattoni R, Stringer C, Flickinger D, Fitzgerald JE, Spruston N
Nature. 2025 Feb 12;640:. doi: 10.1038/s41586-024-08548-w

Cognitive maps endow animals with flexible intelligence by representing spatial, temporal and abstract relationships that can be used to shape thought, planning and behaviour. Cognitive maps have been observed in the hippocampus, but their algorithmic form and learning mechanisms remain obscure. Here we used large-scale, longitudinal two-photon calcium imaging to record activity from thousands of neurons in the CA1 region of the hippocampus while mice learned to efficiently collect rewards from two subtly different linear tracks in virtual reality. Throughout learning, both animal behaviour and hippocampal neural activity progressed through multiple stages, gradually revealing improved task representation that mirrored improved behavioural efficiency. The learning process involved progressive decorrelations in initially similar hippocampal neural activity within and across tracks, ultimately resulting in orthogonalized representations resembling a state machine capturing the inherent structure of the task. This decorrelation process was driven by individual neurons acquiring task-state-specific responses (that is, 'state cells'). Although various standard artificial neural networks did not naturally capture these dynamics, the clone-structured causal graph, a hidden Markov model variant, uniquely reproduced both the final orthogonalized states and the learning trajectory seen in animals. The observed cellular and population dynamics constrain the mechanisms underlying cognitive map formation in the hippocampus, pointing to hidden state inference as a fundamental computational principle, with implications for both biological and artificial intelligence.
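
The progressive decorrelation described above can be quantified simply. This sketch (illustrative, not the paper's analysis code) measures the mean pairwise cosine similarity between per-state population vectors; orthogonalized representations drive it toward zero:

```python
import numpy as np

def mean_offdiag_cosine(pop):
    """Mean pairwise cosine similarity between per-state population
    vectors (rows = task states, columns = neurons). Values near 0
    indicate orthogonalized representations."""
    unit = pop / np.linalg.norm(pop, axis=1, keepdims=True)
    sim = unit @ unit.T
    return float(sim[~np.eye(len(pop), dtype=bool)].mean())

# Early in learning: similar (correlated) activity across task states.
early = np.array([[1.0, 1.0, 0.2],
                  [1.0, 0.8, 0.3]])
# After learning: state-specific 'state cell' responses.
late = np.array([[1.0, 0.0, 0.0],
                 [0.0, 1.0, 0.0]])
```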

View Publication Page
11/01/16 | Learning recurrent representations for hierarchical behavior modeling.
Eyjolfsdottir E, Branson K, Yue Y, Perona P
arXiv. 2016 Nov 1;arXiv:1611.00094:

We propose a framework for detecting action patterns from motion sequences and modeling the sensory-motor relationship of animals, using a generative recurrent neural network. The network has a discriminative part (classifying actions) and a generative part (predicting motion), whose recurrent cells are laterally connected, allowing higher levels of the network to represent high level phenomena. We test our framework on two types of data, fruit fly behavior and online handwriting. Our results show that 1) taking advantage of unlabeled sequences, by predicting future motion, significantly improves action detection performance when training labels are scarce, 2) the network learns to represent high level phenomena such as writer identity and fly gender, without supervision, and 3) simulated motion trajectories, generated by treating motion prediction as input to the network, look realistic and may be used to qualitatively evaluate whether the model has learnt generative control rules.
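
The architecture — one recurrent state feeding both a discriminative (action-classification) head and a generative (motion-prediction) head — can be sketched in a few lines of NumPy. Sizes and initialization are simplified assumptions, not the paper's implementation, and the lateral connections between levels are omitted:

```python
import numpy as np

rng = np.random.default_rng(0)

# One recurrent state feeds two heads: a discriminative head scoring
# action classes and a generative head predicting the next motion frame.
n_in, n_hid, n_actions = 4, 16, 3
W_in = rng.normal(0.0, 0.1, (n_hid, n_in))
W_rec = rng.normal(0.0, 0.1, (n_hid, n_hid))
W_act = rng.normal(0.0, 0.1, (n_actions, n_hid))  # discriminative head
W_mot = rng.normal(0.0, 0.1, (n_in, n_hid))       # generative head

def step(h, x):
    h = np.tanh(W_in @ x + W_rec @ h)
    return h, W_act @ h, W_mot @ h  # state, action scores, predicted motion

h = np.zeros(n_hid)
x = rng.normal(size=n_in)
for t in range(5):
    h, action_scores, next_motion = step(h, x)
    x = next_motion  # feeding predictions back in simulates a trajectory
```

Feeding the motion prediction back in as the next input, as in the last line, is what the abstract refers to as generating simulated motion trajectories.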

View Publication Page
03/10/25 | Learning reshapes the hippocampal representation hierarchy
Chiossi HS, Nardin M, Tkačik G, Csicsvari JL
Proc. Natl. Acad. Sci. U.S.A. 2025 Mar 10:. doi: 10.1073/pnas.2417025122

Biological neural networks seem to efficiently select and represent task-relevant features of their inputs, an ability that is also highly sought after in artificial networks. Much work has gone into identifying such representations in both sensory and motor systems; however, less is understood about how representations form during complex learning conditions to support behavior, especially in higher associative brain areas. Our work shows that the hippocampus maintains a robust hierarchical representation of task variables and that this structure can support new learning through minimal changes to the neural representations.

bioRxiv Preprint: https://www.doi.org/10.1101/2024.08.21.608911

View Publication Page
12/17/11 | Learning to Agglomerate Superpixel Hierarchies
Jain V, Turaga SC, Briggman K, Helmstaedter MN, Denk W, Seung HS
Advances in Neural Information Processing Systems 24 (NIPS 2011). 12/2011;24:648-56

An agglomerative clustering algorithm merges the most similar pair of clusters at every iteration. The function that evaluates similarity is traditionally hand-designed, but there has been recent interest in supervised or semisupervised settings in which ground-truth clustered data is available for training. Here we show how to train a similarity function by regarding it as the action-value function of a reinforcement learning problem. We apply this general method to segment images by clustering superpixels, an application that we call Learning to Agglomerate Superpixel Hierarchies (LASH). When applied to a challenging dataset of brain images from serial electron microscopy, LASH dramatically improved segmentation accuracy when clustering supervoxels generated by state-of-the-art boundary detection algorithms. The naive strategy of directly training only supervoxel similarities and applying single linkage clustering produced less improvement.
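
The core loop of LASH — greedy agglomeration driven by a learned similarity that plays the role of an action-value function — can be sketched as follows. The similarity function here is a toy stand-in; in LASH it is trained by reinforcement learning on ground-truth segmentations:

```python
def agglomerate(items, similarity, threshold=0.5):
    """Greedy agglomerative clustering with a supplied similarity.

    `similarity(a, b)` stands in for the learned action-value of the
    'merge clusters a and b' action. We repeatedly merge the
    best-scoring pair until no merge scores above `threshold`.
    """
    clusters = [frozenset([x]) for x in items]
    while len(clusters) > 1:
        score, a, b = max(
            ((similarity(a, b), a, b)
             for i, a in enumerate(clusters) for b in clusters[i + 1:]),
            key=lambda t: t[0],
        )
        if score < threshold:
            break
        clusters = [c for c in clusters if c not in (a, b)] + [a | b]
    return clusters

# Toy similarity: supervoxels with numerically close ids belong together
# (a stand-in for a function trained on ground-truth clustered data).
def toy_similarity(a, b):
    return 1.0 / (1.0 + abs(min(a) - min(b)))

result = agglomerate([1, 2, 10, 11], toy_similarity, threshold=0.4)
```

Replacing `toy_similarity` with a trained action-value function, and restricting candidate pairs to spatially adjacent supervoxels, recovers the shape of the method the abstract describes.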

View Publication Page