2529 Janelia Publications

Showing 1311-1320 of 2529 results
10/24/14 | Lattice light-sheet microscopy: imaging molecules to embryos at high spatiotemporal resolution.
Chen B, Legant WR, Wang K, Shao L, Milkie DE, Davidson MW, Janetopoulos C, Wu XS, Hammer JA, Liu Z, English BP, Mimori-Kiyosue Y, Romero DP, Ritter AT, Lippincott-Schwartz J, Fritz-Laylin L, Mullins RD, Mitchell DM, Bembenek JN, Reymann A, Böhme R, Grill SW, Wang JT, Seydoux G, Tulu US, Kiehart DP, Betzig E
Science. 2014 Oct 24;346(6208):1257998. doi: 10.1126/science.1257998

Although fluorescence microscopy provides a crucial window into the physiology of living specimens, many biological processes are too fragile, are too small, or occur too rapidly to see clearly with existing tools. We crafted ultrathin light sheets from two-dimensional optical lattices that allowed us to image three-dimensional (3D) dynamics for hundreds of volumes, often at subsecond intervals, at the diffraction limit and beyond. We applied this to systems spanning four orders of magnitude in space and time, including the diffusion of single transcription factor molecules in stem cell spheroids, the dynamic instability of mitotic microtubules, the immunological synapse, neutrophil motility in a 3D matrix, and embryogenesis in Caenorhabditis elegans and Drosophila melanogaster. The results provide a visceral reminder of the beauty and the complexity of living systems.

View Publication Page
Svoboda Lab
10/17/16 | Layer 4 fast-spiking interneurons filter thalamocortical signals during active somatosensation.
Yu J, Gutnisky DA, Hires SA, Svoboda K
Nature Neuroscience. 2016 Oct 17;19(12):1647-57. doi: 10.1038/nn.4412

We rely on movement to explore the environment, for example, by palpating an object. In somatosensory cortex, activity related to movement of digits or whiskers is suppressed, which could facilitate detection of touch. Movement-related suppression is generally assumed to involve corollary discharges. Here we uncovered a thalamocortical mechanism in which cortical fast-spiking interneurons, driven by sensory input, suppress movement-related activity in layer 4 (L4) excitatory neurons. In mice locating objects with their whiskers, neurons in the ventral posteromedial nucleus (VPM) fired in response to touch and whisker movement. Cortical L4 fast-spiking interneurons inherited these responses from VPM. In contrast, L4 excitatory neurons responded mainly to touch. Optogenetic experiments revealed that fast-spiking interneurons reduced movement-related spiking in excitatory neurons, enhancing selectivity for touch-related information during active tactile sensation. These observations suggest a fundamental computation performed by the thalamocortical circuit to accentuate salient tactile information.

View Publication Page
03/10/20 | Layer 6b is driven by intracortical long-range projection neurons.
Zolnik TA, Ledderose J, Toumazou M, Trimbuch T, Oram T, Rosenmund C, Eickholt BJ, Sachdev RN, Larkum ME
Cell Reports. 2020 Mar 10;30(10):3492-3505.e5. doi: 10.1016/j.celrep.2020.02.044

Layer 6b (L6b), the deepest neocortical layer, projects to cortical targets and higher-order thalamus and is the only layer responsive to the wake-promoting neuropeptide orexin/hypocretin. These characteristics suggest that L6b can strongly modulate brain state, but projections to L6b and their influence remain unknown. Here, we examine the inputs to L6b ex vivo in the mouse primary somatosensory cortex with rabies-based retrograde tracing and channelrhodopsin-assisted circuit mapping in brain slices. We find that L6b receives its strongest excitatory input from intracortical long-range projection neurons, including those in the contralateral hemisphere. In contrast, local intracortical input and thalamocortical input were significantly weaker. Moreover, our data suggest that L6b receives far less thalamocortical input than other cortical layers. L6b was most strongly inhibited by PV and SST interneurons. This study shows that L6b integrates long-range intracortical information and is not part of the traditional thalamocortical loop.

View Publication Page
10/31/16 | Learning a metric for class-conditional KNN.
Im DJ, Taylor GW
International Joint Conference on Neural Networks, IJCNN 2016. 2016 Oct 31. doi: 10.1109/IJCNN.2016.7727436

Naïve Bayes Nearest Neighbour (NBNN) is a simple and effective framework which addresses many of the pitfalls of K-Nearest Neighbour (KNN) classification. It has yielded competitive results on several computer vision benchmarks. Its central tenet is that, during NN search, a query should not be compared against every example in the database while ignoring class information; instead, NN searches are performed within each class, generating a score per class. A key problem with NN techniques, including NBNN, is that they fail when the data representation does not capture perceptual (e.g. class-based) similarity. NBNN circumvents this by using independent engineered descriptors (e.g. SIFT). To extend its applicability outside of image-based domains, we propose to learn a metric which captures perceptual similarity. Similar to how Neighbourhood Components Analysis optimizes a differentiable form of KNN classification, we propose 'Class Conditional' metric learning (CCML), which optimizes a soft form of the NBNN selection rule. Typical metric learning algorithms learn either a global or local metric. However, our proposed method can be adjusted to a particular level of locality by tuning a single parameter. An empirical evaluation on classification and retrieval tasks demonstrates that our proposed method clearly outperforms existing learned distance metrics across a variety of image and non-image datasets.

View Publication Page
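The NBNN selection rule described in the abstract above can be illustrated with a minimal sketch: each query descriptor is matched only against the reference descriptors of one class at a time, and the class with the smallest summed nearest-neighbour distance wins. This is an illustrative reconstruction of plain NBNN, not the paper's CCML method, and all names here are hypothetical.

```python
# Minimal NBNN classification sketch (illustrative, not the paper's CCML).
import numpy as np

def nbnn_classify(query_descriptors, class_descriptors):
    """query_descriptors: (m, d) array of descriptors for one query item.
    class_descriptors: dict mapping class label -> (n_c, d) reference array.
    Returns the label whose per-class NN distances sum to the minimum."""
    scores = {}
    for label, refs in class_descriptors.items():
        # Pairwise squared distances between query and this class's references.
        d2 = ((query_descriptors[:, None, :] - refs[None, :, :]) ** 2).sum(-1)
        # The NN search is restricted to this class: take the closest
        # reference for each query descriptor, then sum over descriptors.
        scores[label] = d2.min(axis=1).sum()
    return min(scores, key=scores.get)
```

CCML would replace the hard `min` over references with a soft, differentiable form so that the metric itself can be learned; the sketch above shows only the class-conditional search that rule is built on.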
11/01/12 | Learning animal social behavior from trajectory features.
Eyjolfsdottir E, Burgos-Artizzu XP, Branson S, Branson K, Anderson D, Perona P
Workshop on Visual Observation and Analysis of Animal and Insect Behavior. 2012 Nov:
10/09/19 | Learning from action: reconsidering movement signaling in midbrain dopamine neuron activity.
Coddington LT, Dudman JT
Neuron. 2019 Oct 09;104(1):63-77. doi: 10.1016/j.neuron.2019.08.036

Animals infer when and where a reward is available from experience with informative sensory stimuli and their own actions. In vertebrates, this is thought to depend upon the release of dopamine from midbrain dopaminergic neurons. Studies of the role of dopamine have focused almost exclusively on their encoding of informative sensory stimuli; however, many dopaminergic neurons are active just prior to movement initiation, even in the absence of sensory stimuli. How should current frameworks for understanding the role of dopamine incorporate these observations? To address this question, we review recent anatomical and functional evidence for action-related dopamine signaling. We conclude by proposing a framework in which dopaminergic neurons encode subjective signals of action initiation to solve an internal credit assignment problem.

View Publication Page
10/04/20 | Learning guided electron microscopy with active acquisition.
Mi L, Wang H, Meirovitch Y, Schalek R, Turaga SC, Lichtman JW, Samuel AD, Shavit N, Martel AL, Abolmaesumi P, Stoyanov D, Mateus D, Zuluaga MA, Zhou SK, Racoceanu D, Joskowicz L
Medical Image Computing and Computer Assisted Intervention – MICCAI 2020. 2020 Oct:

Single-beam scanning electron microscopes (SEM) are widely used to acquire massive datasets for biomedical study, material analysis, and fabrication inspection. Datasets are typically acquired with uniform acquisition: applying the electron beam with the same power and duration to all image pixels, even if there is great variety in the pixels' importance for eventual use. Many SEMs are now able to move the beam to any pixel in the field of view without delay, enabling them, in principle, to invest their time budget more effectively with non-uniform imaging.

View Publication Page
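The non-uniform imaging idea in the abstract above can be sketched as a budgeting problem: given a per-pixel importance map, spend a fixed total beam time in proportion to importance rather than uniformly. This is a hedged illustration of the general principle, not the paper's learned acquisition method; the function name and parameters are invented for the example.

```python
# Illustrative non-uniform dwell-time allocation (not the paper's method):
# distribute a fixed beam-time budget over pixels by importance.
import numpy as np

def allocate_dwell_times(importance, total_budget, min_dwell=0.0):
    """importance: non-negative (h, w) importance map.
    total_budget: total beam time to spend over the field of view.
    min_dwell: guaranteed dwell time per pixel (hypothetical parameter).
    Returns an (h, w) array of per-pixel dwell times summing to total_budget."""
    imp = np.asarray(importance, dtype=float)
    base = np.full(imp.shape, float(min_dwell))
    remaining = total_budget - base.sum()
    if remaining < 0:
        raise ValueError("budget too small for the minimum dwell time")
    if imp.sum() > 0:
        weights = imp / imp.sum()
    else:
        weights = np.full(imp.shape, 1.0 / imp.size)  # fall back to uniform
    return base + remaining * weights
```

A uniform raster scan is the special case of a constant importance map; the point of the paper is that the importance map itself can be predicted from cheap, low-dose imagery.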
09/14/22 | Learning of probabilistic punishment as a model of anxiety produces changes in action but not punisher encoding in the dmPFC and VTA.
Jacobs DS, Allen MC, Park J, Moghaddam B
eLife. 2022 Sep 14;11. doi: 10.7554/eLife.78912

Previously, we developed a novel model of anxiety during motivated behavior by training rats to perform a task in which actions executed to obtain a reward were probabilistically punished. We observed that, after learning, neuronal activity in the ventral tegmental area (VTA) and dorsomedial prefrontal cortex (dmPFC) represents the relationship between action and punishment risk (Park & Moghaddam, 2017). Here we used male and female rats to expand on the previous work by focusing on neural changes in the dmPFC and VTA that were associated with the learning of probabilistic punishment, and with anxiolytic treatment with diazepam after learning. We find that adaptive neural responses of dmPFC and VTA during the learning of anxiogenic contingencies are independent of the punisher experience and occur primarily during the peri-action and reward period. Our results also identify peri-action ramping of VTA neural calcium activity, and VTA-dmPFC correlated activity, as potential markers for the anxiolytic properties of diazepam.

View Publication Page
08/07/23 | Learning produces a hippocampal cognitive map in the form of an orthogonalized state machine.
Sun W, Winnubst J, Natrajan M, Lai C, Kajikawa K, Michaelos M, Gattoni R, Stringer C, Flickinger D, Fitzgerald JE, Spruston N
bioRxiv. 2023 Aug 07:. doi: 10.1101/2023.08.03.551900

Cognitive maps confer animals with flexible intelligence by representing spatial, temporal, and abstract relationships that can be used to shape thought, planning, and behavior. Cognitive maps have been observed in the hippocampus, but their algorithmic form and the processes by which they are learned remain obscure. Here, we employed large-scale, longitudinal two-photon calcium imaging to record activity from thousands of neurons in the CA1 region of the hippocampus while mice learned to efficiently collect rewards from two subtly different versions of linear tracks in virtual reality. The results provide a detailed view of the formation of a cognitive map in the hippocampus. Throughout learning, both the animal behavior and hippocampal neural activity progressed through multiple intermediate stages, gradually revealing improved task understanding and behavioral efficiency. The learning process led to progressive decorrelations in initially similar hippocampal neural activity within and across tracks, ultimately resulting in orthogonalized representations resembling a state machine capturing the inherent structure of the task. We show that a Hidden Markov Model (HMM) and a biologically plausible recurrent neural network trained using Hebbian learning can both capture core aspects of the learning dynamics and the orthogonalized representational structure in neural activity. In contrast, we show that gradient-based learning of sequence models such as Long Short-Term Memory networks (LSTMs) and Transformers does not naturally produce such representations. We further demonstrate that mice exhibited adaptive behavior in novel task settings, with neural activity reflecting flexible deployment of the state machine. These findings shed light on the mathematical form of cognitive maps, the learning rules that sculpt them, and the algorithms that promote adaptive behavior in animals. The work thus charts a course toward a deeper understanding of biological intelligence and offers insights toward developing more robust learning algorithms in artificial intelligence.

View Publication Page
11/01/16 | Learning recurrent representations for hierarchical behavior modeling.
Eyjolfsdottir E, Branson K, Yue Y, Perona P
arXiv. 2016 Nov 1;arXiv:1611.00094:

We propose a framework for detecting action patterns from motion sequences and modeling the sensory-motor relationship of animals, using a generative recurrent neural network. The network has a discriminative part (classifying actions) and a generative part (predicting motion), whose recurrent cells are laterally connected, allowing higher levels of the network to represent high level phenomena. We test our framework on two types of data, fruit fly behavior and online handwriting. Our results show that 1) taking advantage of unlabeled sequences, by predicting future motion, significantly improves action detection performance when training labels are scarce, 2) the network learns to represent high level phenomena such as writer identity and fly gender, without supervision, and 3) simulated motion trajectories, generated by treating motion prediction as input to the network, look realistic and may be used to qualitatively evaluate whether the model has learnt generative control rules.

View Publication Page
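The two-headed architecture described in the abstract above, a discriminative part that classifies actions and a generative part that predicts motion, can be sketched as a single recurrent cell feeding two output heads. This is a minimal forward-pass illustration, not the paper's architecture; all sizes, names, and weight initializations here are hypothetical.

```python
# Minimal sketch of a recurrent model with a discriminative head (action
# scores) and a generative head (next-motion prediction). Illustrative only;
# the paper's network is deeper and has laterally connected recurrent cells.
import numpy as np

rng = np.random.default_rng(1)

def init_params(d_in, d_hid, n_actions):
    s = lambda *shape: rng.normal(0.0, 0.1, shape)
    return {
        "W_x": s(d_hid, d_in), "W_h": s(d_hid, d_hid), "b_h": np.zeros(d_hid),
        "W_cls": s(n_actions, d_hid),  # discriminative head: action scores
        "W_gen": s(d_in, d_hid),       # generative head: predicts next motion
    }

def forward(params, motion_seq):
    """motion_seq: (T, d_in) sequence of motion features.
    Returns per-step action scores (T, n_actions) and next-motion
    predictions (T, d_in)."""
    h = np.zeros(params["b_h"].shape)
    action_scores, motion_preds = [], []
    for x in motion_seq:
        # Shared recurrent state drives both heads.
        h = np.tanh(params["W_x"] @ x + params["W_h"] @ h + params["b_h"])
        action_scores.append(params["W_cls"] @ h)  # classify current action
        motion_preds.append(params["W_gen"] @ h)   # predict next motion frame
    return np.stack(action_scores), np.stack(motion_preds)
```

The semi-supervised benefit reported in the abstract comes from training the generative head on unlabeled sequences while the discriminative head is trained only where labels exist; feeding the motion predictions back in as input yields the simulated trajectories the authors describe.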