Hantman Lab / Publications

01/01/23 | Structured cerebellar connectivity supports resilient pattern separation.
Nguyen TM, Thomas LA, Rhoades JL, Ricchi I, Yuan XC, Sheridan A, Hildebrand DG, Funke J, Regehr WG, Lee WA
Nature. 2023 Jan 01;613(7944):543-549. doi: 10.1038/s41586-022-05471-w

The cerebellum is thought to help detect and correct errors between intended and executed commands and is critical for social behaviours, cognition and emotion. Computations for motor control must be performed quickly to correct errors in real time and should be sensitive to small differences between patterns for fine error correction while being resilient to noise. Influential theories of cerebellar information processing have largely assumed random network connectivity, which increases the encoding capacity of the network's first layer. However, maximizing encoding capacity reduces the resilience to noise. To understand how neuronal circuits address this fundamental trade-off, we mapped the feedforward connectivity in the mouse cerebellar cortex using automated large-scale transmission electron microscopy and convolutional neural network-based image segmentation. We found that both the input and output layers of the circuit exhibit redundant and selective connectivity motifs, which contrast with prevailing models. Numerical simulations suggest that these redundant, non-random connectivity motifs increase the resilience to noise at a negligible cost to the overall encoding capacity. This work reveals how neuronal network structure can support a trade-off between encoding capacity and redundancy, unveiling principles of biological network architecture with implications for the design of artificial neural networks.

02/18/16 | Structured dendritic inhibition supports branch-selective integration in CA1 pyramidal cells.
Bloss EB, Cembrowski MS, Karsh B, Colonell J, Fetter RD, Spruston N
Neuron. 2016 Feb 18. doi: 10.1016/j.neuron.2016.01.029

Neuronal circuit function is governed by precise patterns of connectivity between specialized groups of neurons. The diversity of GABAergic interneurons is a hallmark of cortical circuits, yet little is known about their targeting to individual postsynaptic dendrites. We examined synaptic connectivity between molecularly defined inhibitory interneurons and CA1 pyramidal cell dendrites using correlative light-electron microscopy and large-volume array tomography. We show that interneurons can be highly selective in their connectivity to specific dendritic branch types and, furthermore, exhibit precisely targeted connectivity to the origin or end of individual branches. Computational simulations indicate that the observed subcellular targeting enables control over the nonlinear integration of synaptic input or the initiation and backpropagation of action potentials in a branch-selective manner. Our results demonstrate that connectivity between interneurons and pyramidal cell dendrites is more precise and spatially segregated than previously appreciated, which may be a critical determinant of how inhibition shapes dendritic computation.

07/15/08 | Structured illumination in total internal reflection fluorescence microscopy using a spatial light modulator.
Fiolka R, Beck M, Stemmer A
Optics Letters. 2008 Jul 15;33(14):1629-31

In wide-field fluorescence microscopy, illuminating the specimen with evanescent standing waves increases lateral resolution more than twofold. We report a versatile setup for standing-wave illumination in total internal reflection fluorescence microscopy. An adjustable diffraction grating written on a phase-only spatial light modulator controls the illumination field. Selecting appropriate diffraction orders and displaying a sheared (tilted) diffraction grating allows one to tune the penetration depth in very fine steps. The setup achieves 91 nm lateral resolution for green emission.
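
For reference (standard total-internal-reflection optics, not a result reported in the paper): the penetration depth of the evanescent illumination field depends on the incidence angle θ, which the period of the grating displayed on the spatial light modulator sets, so shearing the grating steps θ and hence the depth in fine increments.

```latex
d(\theta) = \frac{\lambda_0}{4\pi\sqrt{n_1^2 \sin^2\theta - n_2^2}},
\qquad \theta > \theta_c = \arcsin\!\left(\frac{n_2}{n_1}\right),
```

where \lambda_0 is the vacuum wavelength, n_1 the refractive index on the coverslip side, and n_2 that of the sample medium.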

03/06/14 | Structured illumination microscopy (Chapter 15).
Shao L, Rego EH
Fluorescence Microscopy: Super-resolution and other novel techniques: 213–225. doi: 10.1016/B978-0-12-409513-7.00015-4

Cardona Lab / Funke Lab
04/13/16 | Structured learning of assignment models for neuron reconstruction to minimize topological errors.
Funke J, Klein J, Moreno-Noguer F, Cardona A, Cook M
IEEE 13th International Symposium on Biomedical Imaging (ISBI). 2016 Apr 13:607-11. doi: 10.1109/ISBI.2016.7493341

Structured learning provides a powerful framework for empirical risk minimization on the predictions of structured models. It allows end-to-end learning of model parameters to minimize an application specific loss function. This framework is particularly well suited for discrete optimization models that are used for neuron reconstruction from anisotropic electron microscopy (EM) volumes. However, current methods are still learning unary potentials by training a classifier that is agnostic about the model it is used in. We believe the reason for that lies in the difficulties of (1) finding a representative training sample, and (2) designing an application specific loss function that captures the quality of a proposed solution. In this paper, we show how to find a representative training sample from human generated ground truth, and propose a loss function that is suitable to minimize topological errors in the reconstruction. We compare different training methods on two challenging EM-datasets. Our structured learning approach shows consistently higher reconstruction accuracy than other current learning methods.
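
For orientation (the generic formulation this framework builds on, not the paper's exact objective): structured learning of the model parameters w can be written as regularized empirical risk minimization with a margin-rescaled structured hinge loss, where the application-specific loss Δ — here, a count of topological errors such as erroneous splits and merges — rescales the margin:

```latex
\min_{\mathbf{w}} \; \frac{\lambda}{2}\lVert\mathbf{w}\rVert^2 \;+\;
\frac{1}{N}\sum_{i=1}^{N} \max_{y \in \mathcal{Y}}
\Big[\, \Delta(y_i, y) \;+\; \langle \mathbf{w}, \phi(x_i, y)\rangle
\;-\; \langle \mathbf{w}, \phi(x_i, y_i)\rangle \,\Big]
```

Here \phi(x_i, y) is the joint feature vector of volume x_i and candidate assignment y, and y_i is the ground-truth assignment; the particular features and topological loss are the paper's contribution and are only named symbolically.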

08/11/21 | Structured patterns of activity in pulse-coupled oscillator networks with varied connectivity.
Kadhim KL, Hermundstad AM, Brown KS
PLoS One. 2021 Aug 11;16(8):e0256034. doi: 10.1371/journal.pone.0256034

Identifying coordinated activity within complex systems is essential to linking their structure and function. We study collective activity in networks of pulse-coupled oscillators that have variable network connectivity and integrate-and-fire dynamics. Starting from random initial conditions, we see the emergence of three broad classes of behaviors that differ in their collective spiking statistics. In the first class ("temporally-irregular"), all nodes have variable inter-spike intervals, and the resulting firing patterns are irregular. In the second ("temporally-regular"), the network generates a coherent, repeating pattern of activity in which all nodes fire with the same constant inter-spike interval. In the third ("chimeric"), subgroups of coherently-firing nodes coexist with temporally-irregular nodes. Chimera states have previously been observed in networks of oscillators; here, we find that the notions of temporally-regular and chimeric states encompass a much richer set of dynamical patterns than has yet been described. We also find that degree heterogeneity and connection density have a strong effect on the resulting state: in binomial random networks, high degree variance and intermediate connection density tend to produce temporally-irregular dynamics, while low degree variance and high connection density tend to produce temporally-regular dynamics. Chimera states arise with more frequency in networks with intermediate degree variance and either high or low connection densities. Finally, we demonstrate that a normalized compression distance, computed via the Lempel-Ziv complexity of nodal spike trains, can be used to distinguish these three classes of behavior even when the phase relationship between nodes is arbitrary.
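
As context for the distance measure mentioned at the end of the abstract, the following is a minimal Python sketch of a normalized compression distance between binarized spike trains; zlib's DEFLATE (LZ77-based) compressor stands in for the Lempel-Ziv complexity, and the binarization scheme is an illustrative assumption rather than the paper's exact pipeline.

```python
import zlib

def compressed_size(spikes: str) -> int:
    """Length of the DEFLATE (LZ77-based) compression of a binarized spike train,
    used as a proxy for its Lempel-Ziv complexity."""
    return len(zlib.compress(spikes.encode()))

def normalized_compression_distance(x: str, y: str) -> float:
    """Standard NCD: close to 0 for sequences with shared structure, near 1 otherwise."""
    cx, cy = compressed_size(x), compressed_size(y)
    cxy = compressed_size(x + y)
    return (cxy - min(cx, cy)) / max(cx, cy)

# Two nodes' spike trains binarized into '0'/'1' per time bin (toy data)
a = "0010010010010010" * 8   # regular firing pattern
b = "0010010010010010" * 8   # same pattern -> small distance
c = "0110100010011101" * 8   # different pattern -> larger distance
print(normalized_compression_distance(a, b))
print(normalized_compression_distance(a, c))
```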

02/13/22 | Structured random receptive fields enable informative sensory encodings.
Pandey B, Pachitariu M, Brunton BW, Harris KD
bioRxiv. 2022 Feb 13. doi: 10.1101/2021.09.09.459651

Brains must represent the outside world so that animals survive and thrive. In early sensory systems, neural populations have diverse receptive fields structured to detect important features in inputs, yet significant variability has been ignored in classical models of sensory neurons. We model neuronal receptive fields as random, variable samples from parametrized distributions in two sensory modalities, using data from insect mechanosensors and neurons of mammalian primary visual cortex. We show that these random feature neurons perform a randomized wavelet transform on inputs which removes high frequency noise and boosts the signal. Our result makes a significant theoretical connection between the foundational concepts of receptive fields in neuroscience and random features in artificial neural networks. Further, these random feature neurons enable learning from fewer training samples and with smaller networks in artificial tasks. This structured random model of receptive fields provides a unifying, mathematically tractable framework to understand sensory encodings across both spatial and temporal domains.

10/10/22 | Structured random receptive fields enable informative sensory encodings.
Pandey B, Pachitariu M, Brunton BW, Harris KD
PLoS Computational Biology. 2022 Oct 10;18(10):e1010484. doi: 10.1371/journal.pcbi.1010484

Brains must represent the outside world so that animals survive and thrive. In early sensory systems, neural populations have diverse receptive fields structured to detect important features in inputs, yet significant variability has been ignored in classical models of sensory neurons. We model neuronal receptive fields as random, variable samples from parameterized distributions and demonstrate this model in two sensory modalities using data from insect mechanosensors and mammalian primary visual cortex. Our approach leads to a significant theoretical connection between the foundational concepts of receptive fields and random features, a leading theory for understanding artificial neural networks. The modeled neurons perform a randomized wavelet transform on inputs, which removes high frequency noise and boosts the signal. Further, these random feature neurons enable learning from fewer training samples and with smaller networks in artificial tasks. This structured random model of receptive fields provides a unifying, mathematically tractable framework to understand sensory encodings across both spatial and temporal domains.
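
To make the "random feature" reading of receptive fields concrete, here is a minimal numpy sketch under assumed parameters: receptive fields are drawn from a zero-mean Gaussian process over input position (a stand-in for the distributions the paper fits to mechanosensor and V1 data), stimuli are projected onto them through a rectifying nonlinearity, and a linear readout is fit to the encoding.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_receptive_fields(n_neurons, n_inputs, length_scale=5.0):
    """Sample receptive fields from a parameterized distribution: a zero-mean
    Gaussian process over input position whose covariance sets their smoothness."""
    idx = np.arange(n_inputs)
    C = np.exp(-0.5 * ((idx[:, None] - idx[None, :]) / length_scale) ** 2)
    C += 1e-9 * np.eye(n_inputs)  # jitter for numerical positive-definiteness
    return rng.multivariate_normal(np.zeros(n_inputs), C, size=n_neurons)

def encode(X, W):
    """Random-feature encoding: project stimuli onto receptive fields, then rectify."""
    return np.maximum(X @ W.T, 0.0)

# Tiny artificial task with a linear readout trained on few samples
X = rng.normal(size=(40, 100))             # 40 stimuli, 100 input dimensions
y = np.sin(X[:, :10].sum(axis=1))          # arbitrary smooth target
H = encode(X, random_receptive_fields(200, 100))
w, *_ = np.linalg.lstsq(H, y, rcond=None)  # least-squares readout
print("training-set correlation:", np.corrcoef(H @ w, y)[0, 1])
```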

08/08/22 | Structured sampling of olfactory input by the fly mushroom body.
Zheng Z, Li F, Fisher C, Ali IJ, Sharifi N, Calle-Schuler S, Hsu J, Masoodpanah N, Kmecova L, Kazimiers T, Perlman E, Nichols M, Li PH, Jain V, Bock DD
Current Biology. 2022 Aug 08;32(15):3334-3349.e6. doi: 10.1016/j.cub.2022.06.031

Associative memory formation and recall in the fruit fly Drosophila melanogaster is subserved by the mushroom body (MB). Upon arrival in the MB, sensory information undergoes a profound transformation from broadly tuned and stereotyped odorant responses in the olfactory projection neuron (PN) layer to narrowly tuned and nonstereotyped responses in the Kenyon cells (KCs). Theory and experiment suggest that this transformation is implemented by random connectivity between KCs and PNs. However, this hypothesis has been challenging to test, given the difficulty of mapping synaptic connections between large numbers of brain-spanning neurons. Here, we used a recent whole-brain electron microscopy volume of the adult fruit fly to map PN-to-KC connectivity at synaptic resolution. The PN-KC connectome revealed unexpected structure, with preponderantly food-responsive PN types converging at above-chance levels on downstream KCs. Axons of the overconvergent PN types tended to arborize near one another in the MB main calyx, making local KC dendrites more likely to receive input from those types. Overconvergent PN types preferentially co-arborize and connect with dendrites of αβ and α'β' KC subtypes. Computational simulation of the observed network showed degraded discrimination performance compared with a random network, except when all signal flowed through the overconvergent, primarily food-responsive PN types. Additional theory and experiment will be needed to fully characterize the impact of the observed non-random network structure on associative memory formation and recall.
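
The "random connectivity" null hypothesis the connectome is compared against can be made concrete with a small simulation; the counts below (numbers of PN types and KCs, inputs per KC, size of the food-responsive group) are illustrative assumptions rather than values taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
N_PN, N_KC, INPUTS_PER_KC = 110, 2000, 6   # assumed rough scale of the circuit

def random_pn_kc_network():
    """Null model: each Kenyon cell samples its PN inputs uniformly at random,
    independently of PN type identity."""
    return np.array([rng.choice(N_PN, size=INPUTS_PER_KC, replace=False)
                     for _ in range(N_KC)])

def convergence(conn, pn_group):
    """Fraction of KCs receiving at least two inputs from a given group of PN
    types -- the kind of above-chance convergence the connectome analysis tests."""
    counts = np.isin(conn, pn_group).sum(axis=1)
    return (counts >= 2).mean()

food_pns = np.arange(20)                   # hypothetical food-responsive PN group
null = [convergence(random_pn_kc_network(), food_pns) for _ in range(100)]
print("chance-level convergence: %.3f +/- %.3f" % (np.mean(null), np.std(null)))
```

An observed convergence well above this null distribution is what the abstract describes as above-chance structure.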

Druckmann Lab / Magee Lab
02/05/14 | Structured synaptic connectivity between hippocampal regions.
Druckmann S, Feng L, Lee B, Yook C, Zhao T, Magee JC, Kim J
Neuron. 2014 Feb 5;81:629-40. doi: 10.1016/j.neuron.2013.11.026

The organization of synaptic connectivity within a neuronal circuit is a prime determinant of circuit function. We performed a comprehensive fine-scale circuit mapping of hippocampal regions (CA3-CA1) using the newly developed synapse labeling method, mGRASP. This mapping revealed spatially nonuniform and clustered synaptic connectivity patterns. Furthermore, synaptic clustering was enhanced between groups of neurons that shared a similar developmental/migration time window, suggesting a mechanism for establishing the spatial structure of synaptic connectivity. Such connectivity patterns are thought to effectively engage active dendritic processing and storage mechanisms, thereby potentially enhancing neuronal feature selectivity.
