3947 Publications

Showing 3301-3310 of 3947 results
Cardona Lab / Funke Lab
04/13/16 | Structured learning of assignment models for neuron reconstruction to minimize topological errors.
Funke J, Klein J, Moreno-Noguer F, Cardona A, Cook M
IEEE 13th International Symposium on Biomedical Imaging (ISBI). 2016 Apr 13:607-11. doi: 10.1109/ISBI.2016.7493341

Structured learning provides a powerful framework for empirical risk minimization on the predictions of structured models. It allows end-to-end learning of model parameters to minimize an application-specific loss function. This framework is particularly well suited for discrete optimization models that are used for neuron reconstruction from anisotropic electron microscopy (EM) volumes. However, current methods still learn unary potentials by training a classifier that is agnostic about the model it is used in. We believe the reason for that lies in the difficulties of (1) finding a representative training sample, and (2) designing an application-specific loss function that captures the quality of a proposed solution. In this paper, we show how to find a representative training sample from human-generated ground truth, and propose a loss function that is suitable to minimize topological errors in the reconstruction. We compare different training methods on two challenging EM datasets. Our structured learning approach shows consistently higher reconstruction accuracy than other current learning methods.
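
As a rough illustration of the training setup this abstract describes (not the authors' pipeline), the sketch below applies a structured-perceptron-style update with loss-augmented inference to a toy set of independent candidate assignments; the features, the hidden ground-truth scorer, and the merge/split penalties are all illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)
n_candidates, n_features = 20, 5
X = rng.normal(size=(n_candidates, n_features))       # one feature vector per candidate link
w_true = rng.normal(size=n_features)                  # hidden "ground-truth" scorer (toy data only)
y_true = (X @ w_true > 0).astype(int)                 # accept/reject label per candidate

def topological_loss(y_pred, y_true):
    # Assumed surrogate: penalize false merges (accepting a wrong link) more than false splits.
    false_merge = np.sum((y_pred == 1) & (y_true == 0))
    false_split = np.sum((y_pred == 0) & (y_true == 1))
    return 2.0 * false_merge + 1.0 * false_split

def loss_augmented_inference(w, X, y_true):
    # For independent binary candidates, maximize score(y) + loss(y, y_true) per candidate.
    scores = X @ w
    gain_accept = scores + 2.0 * (y_true == 0)         # accepting a wrong link adds merge loss
    gain_reject = 1.0 * (y_true == 1)                  # rejecting a true link adds split loss
    return (gain_accept > gain_reject).astype(int)

w = np.zeros(n_features)
for epoch in range(100):
    y_hat = loss_augmented_inference(w, X, y_true)
    # Structured-perceptron update: move weights toward the ground-truth assignment's features.
    w += 0.1 * (X.T @ (y_true - y_hat))

y_pred = (X @ w > 0).astype(int)
print("topological loss after training:", topological_loss(y_pred, y_true))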

View Publication Page
08/11/21 | Structured patterns of activity in pulse-coupled oscillator networks with varied connectivity.
Kadhim KL, Hermundstad AM, Brown KS
PLoS One. 2021 Aug 11;16(8):e0256034. doi: 10.1371/journal.pone.0256034

Identifying coordinated activity within complex systems is essential to linking their structure and function. We study collective activity in networks of pulse-coupled oscillators that have variable network connectivity and integrate-and-fire dynamics. Starting from random initial conditions, we see the emergence of three broad classes of behaviors that differ in their collective spiking statistics. In the first class ("temporally-irregular"), all nodes have variable inter-spike intervals, and the resulting firing patterns are irregular. In the second ("temporally-regular"), the network generates a coherent, repeating pattern of activity in which all nodes fire with the same constant inter-spike interval. In the third ("chimeric"), subgroups of coherently-firing nodes coexist with temporally-irregular nodes. Chimera states have previously been observed in networks of oscillators; here, we find that the notions of temporally-regular and chimeric states encompass a much richer set of dynamical patterns than has yet been described. We also find that degree heterogeneity and connection density have a strong effect on the resulting state: in binomial random networks, high degree variance and intermediate connection density tend to produce temporally-irregular dynamics, while low degree variance and high connection density tend to produce temporally-regular dynamics. Chimera states arise with more frequency in networks with intermediate degree variance and either high or low connection densities. Finally, we demonstrate that a normalized compression distance, computed via the Lempel-Ziv complexity of nodal spike trains, can be used to distinguish these three classes of behavior even when the phase relationship between nodes is arbitrary.
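
As a small, self-contained illustration of the distance mentioned at the end of this abstract (not the authors' code), the sketch below estimates a normalized compression distance between binary spike trains, using zlib's LZ77-based compressor as a stand-in for a Lempel-Ziv complexity estimate; the regular and irregular spike trains are random placeholders.

import zlib
import numpy as np

def compressed_size(bits):
    # Compressed length of the spike train, a proxy for its Lempel-Ziv complexity.
    return len(zlib.compress(bits.astype(np.uint8).tobytes(), level=9))

def ncd(x, y):
    # Normalized compression distance between two spike trains.
    cx, cy = compressed_size(x), compressed_size(y)
    cxy = compressed_size(np.concatenate([x, y]))
    return (cxy - min(cx, cy)) / max(cx, cy)

rng = np.random.default_rng(1)
regular = np.tile([1, 0, 0, 0], 250)                  # perfectly periodic spike train
irregular = rng.integers(0, 2, size=1000)             # temporally-irregular spike train
print("NCD(regular, regular shifted):", ncd(regular, np.roll(regular, 1)))
print("NCD(regular, irregular):      ", ncd(regular, irregular))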

View Publication Page
02/13/22 | Structured random receptive fields enable informative sensory encodings.
Pandey B, Pachitariu M, Brunton BW, Harris KD
bioRxiv. 2022 Feb 13. doi: 10.1101/2021.09.09.459651

Brains must represent the outside world so that animals survive and thrive. In early sensory systems, neural populations have diverse receptive fields structured to detect important features in inputs, yet significant variability has been ignored in classical models of sensory neurons. We model neuronal receptive fields as random, variable samples from parametrized distributions in two sensory modalities, using data from insect mechanosensors and neurons of mammalian primary visual cortex. We show that these random feature neurons perform a randomized wavelet transform on inputs which removes high frequency noise and boosts the signal. Our result makes a significant theoretical connection between the foundational concepts of receptive fields in neuroscience and random features in artificial neural networks. Further, these random feature neurons enable learning from fewer training samples and with smaller networks in artificial tasks. This structured random model of receptive fields provides a unifying, mathematically tractable framework to understand sensory encodings across both spatial and temporal domains.
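
A minimal sketch of the random-feature-neuron idea described above, assuming Gabor-like receptive fields drawn from a parameterized distribution and applied linearly to an input patch; the distribution parameters and the random input are illustrative, not fitted to the insect or V1 data used in the paper.

import numpy as np

rng = np.random.default_rng(2)
size = 16                                             # patch is size x size pixels

def random_gabor(rng, size):
    # One receptive field sampled from an assumed parameterized distribution.
    yy, xx = np.mgrid[0:size, 0:size] - size / 2
    theta = rng.uniform(0, np.pi)                     # random orientation
    freq = rng.uniform(0.1, 0.3)                      # random spatial frequency (cycles/pixel)
    phase = rng.uniform(0, 2 * np.pi)
    sigma = rng.uniform(2.0, 4.0)                     # random envelope width
    u = xx * np.cos(theta) + yy * np.sin(theta)
    envelope = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
    return envelope * np.cos(2 * np.pi * freq * u + phase)

n_neurons = 200
fields = np.stack([random_gabor(rng, size) for _ in range(n_neurons)])

patch = rng.normal(size=(size, size))                 # placeholder input
responses = fields.reshape(n_neurons, -1) @ patch.ravel()
print(responses.shape)                                # one scalar response per random-feature neuron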

View Publication Page
10/10/22 | Structured random receptive fields enable informative sensory encodings.
Pandey B, Pachitariu M, Brunton BW, Harris KD
PLoS Computational Biology. 2022 Oct 10;18(10):e1010484. doi: 10.1371/journal.pcbi.1010484

Brains must represent the outside world so that animals survive and thrive. In early sensory systems, neural populations have diverse receptive fields structured to detect important features in inputs, yet significant variability has been ignored in classical models of sensory neurons. We model neuronal receptive fields as random, variable samples from parameterized distributions and demonstrate this model in two sensory modalities using data from insect mechanosensors and mammalian primary visual cortex. Our approach leads to a significant theoretical connection between the foundational concepts of receptive fields and random features, a leading theory for understanding artificial neural networks. The modeled neurons perform a randomized wavelet transform on inputs, which removes high frequency noise and boosts the signal. Further, these random feature neurons enable learning from fewer training samples and with smaller networks in artificial tasks. This structured random model of receptive fields provides a unifying, mathematically tractable framework to understand sensory encodings across both spatial and temporal domains.

View Publication Page
08/08/22 | Structured sampling of olfactory input by the fly mushroom body.
Zheng Z, Li F, Fisher C, Ali IJ, Sharifi N, Calle-Schuler S, Hsu J, Masoodpanah N, Kmecova L, Kazimiers T, Perlman E, Nichols M, Li PH, Jain V, Bock DD
Current Biology. 2022 Aug 08;32(15):3334-3349.e6. doi: 10.1016/j.cub.2022.06.031

Associative memory formation and recall in the fruit fly Drosophila melanogaster is subserved by the mushroom body (MB). Upon arrival in the MB, sensory information undergoes a profound transformation from broadly tuned and stereotyped odorant responses in the olfactory projection neuron (PN) layer to narrowly tuned and nonstereotyped responses in the Kenyon cells (KCs). Theory and experiment suggest that this transformation is implemented by random connectivity between KCs and PNs. However, this hypothesis has been challenging to test, given the difficulty of mapping synaptic connections between large numbers of brain-spanning neurons. Here, we used a recent whole-brain electron microscopy volume of the adult fruit fly to map PN-to-KC connectivity at synaptic resolution. The PN-KC connectome revealed unexpected structure, with preponderantly food-responsive PN types converging at above-chance levels on downstream KCs. Axons of the overconvergent PN types tended to arborize near one another in the MB main calyx, making local KC dendrites more likely to receive input from those types. Overconvergent PN types preferentially co-arborize and connect with dendrites of αβ and α'β' KC subtypes. Computational simulation of the observed network showed degraded discrimination performance compared with a random network, except when all signal flowed through the overconvergent, primarily food-responsive PN types. Additional theory and experiment will be needed to fully characterize the impact of the observed non-random network structure on associative memory formation and recall.
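
To illustrate the kind of above-chance convergence test this abstract refers to (not the authors' analysis), the sketch below compares the number of KCs co-targeted by two PN types against a degree-preserving shuffle null; the connectivity matrix and the PN-type indices are random placeholders rather than the EM connectome.

import numpy as np

rng = np.random.default_rng(3)
n_pns, n_kcs = 50, 400
conn = rng.random((n_pns, n_kcs)) < 0.05              # conn[i, j]: PN i synapses onto KC j (placeholder)

type_a, type_b = 3, 7                                 # indices of two hypothetical PN types
observed = np.sum(conn[type_a] & conn[type_b])        # KCs receiving input from both types

null = []
for _ in range(1000):
    # Shuffle which KCs each PN contacts, preserving each PN's number of targets.
    shuf_a = rng.permutation(conn[type_a])
    shuf_b = rng.permutation(conn[type_b])
    null.append(np.sum(shuf_a & shuf_b))

p_value = (np.sum(np.array(null) >= observed) + 1) / (len(null) + 1)
print("observed co-targeted KCs:", observed, " permutation p-value:", p_value)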

View Publication Page
Druckmann Lab / Magee Lab
02/05/14 | Structured synaptic connectivity between hippocampal regions.
Druckmann S, Feng L, Lee B, Yook C, Zhao T, Magee JC, Kim J
Neuron. 2014 Feb 5;81:629-40. doi: 10.1016/j.neuron.2013.11.026

The organization of synaptic connectivity within a neuronal circuit is a prime determinant of circuit function. We performed a comprehensive fine-scale circuit mapping of hippocampal regions (CA3-CA1) using the newly developed synapse labeling method, mGRASP. This mapping revealed spatially nonuniform and clustered synaptic connectivity patterns. Furthermore, synaptic clustering was enhanced between groups of neurons that shared a similar developmental/migration time window, suggesting a mechanism for establishing the spatial structure of synaptic connectivity. Such connectivity patterns are thought to effectively engage active dendritic processing and storage mechanisms, thereby potentially enhancing neuronal feature selectivity.
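
As a rough sketch of how a link between shared developmental timing and connectivity might be quantified (this is not the authors' mGRASP analysis), the code below runs a simple permutation test on a placeholder CA3-to-CA1 connectivity matrix with randomly assigned birthdate bins.

import numpy as np

rng = np.random.default_rng(4)
n_pre, n_post = 60, 60
conn = rng.random((n_pre, n_post)) < 0.1              # binary CA3 -> CA1 connectivity (placeholder)
birth_pre = rng.integers(0, 3, size=n_pre)            # coarse birthdate bin per presynaptic neuron
birth_post = rng.integers(0, 3, size=n_post)          # coarse birthdate bin per postsynaptic neuron

same_window = birth_pre[:, None] == birth_post[None, :]
observed = conn[same_window].mean() - conn[~same_window].mean()

null = []
for _ in range(1000):
    perm = rng.permutation(birth_post)                # shuffle postsynaptic birthdate labels
    mask = birth_pre[:, None] == perm[None, :]
    null.append(conn[mask].mean() - conn[~mask].mean())

p_value = (np.sum(np.array(null) >= observed) + 1) / (len(null) + 1)
print("connection-rate difference (same - different window):", observed, " p =", p_value)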

View Publication Page
Tjian Lab
07/01/09 | Structures of three distinct activator-TFIID complexes.
Liu W, Coleman RA, Ma E, Grob P, Yang JL, Zhang Y, Dailey G, Nogales E, Tjian R
Genes & Development. 2009 Jul 1;23(13):1510-21. doi: 10.1073/pnas.1100640108

Sequence-specific DNA-binding activators, key regulators of gene expression, stimulate transcription in part by targeting the core promoter recognition TFIID complex and aiding in its recruitment to promoter DNA. Although it has been established that activators can interact with multiple components of TFIID, it is unknown whether common or distinct surfaces within TFIID are targeted by activators and what changes, if any, in the structure of TFIID may occur upon binding activators. As a first step toward structurally dissecting activator/TFIID interactions, we determined the three-dimensional structures of TFIID bound to three distinct activators (i.e., the tumor suppressor p53 protein, glutamine-rich Sp1, and the oncoprotein c-Jun) and compared their structures as determined by electron microscopy and single-particle reconstruction. By a combination of EM and biochemical mapping analysis, our results uncover distinct contact regions within TFIID bound by each activator. Unlike the coactivator CRSP/Mediator complex, which undergoes drastic and global structural changes upon activator binding, holo-TFIID showed only a rather confined set of local, conserved structural changes when bound by each activator. These results suggest that activator contact may induce unique structural features of TFIID, thus providing nanoscale information on activator-dependent TFIID assembly and transcription initiation.

View Publication Page
06/24/11 | Studying sensorimotor integration in insects.
Huston SJ, Jayaraman V
Current Opinion in Neurobiology. 2011 Jun 24;21(4):527-34. doi: 10.1016/j.conb.2011.05.030

Sensorimotor integration is a field rich in theory backed by a large body of psychophysical evidence. Relating the underlying neural circuitry to these theories has, however, been more challenging. With a wide array of complex behaviors coordinated by their small brains, insects provide powerful model systems to study key features of sensorimotor integration at a mechanistic level. Insect neural circuits perform both hard-wired and learned sensorimotor transformations. They modulate their neural processing based on both internal variables, such as the animal’s behavioral state, and external ones, such as the time of day. Here we present some studies using insect model systems that have produced insights, at the level of individual neurons, about sensorimotor integration and the various ways in which it can be modified by context.

View Publication Page
01/01/11 | Studying sensorimotor processing with physiology in behaving Drosophila.
Seelig JD, Jayaraman V
International Review of Neurobiology. 2011;99:169-89. doi: 10.1016/B978-0-12-387003-2.00007-0

The neural underpinnings of sensorimotor integration are best studied in the context of well-characterized behavior. A rich trove of Drosophila behavioral genetics research offers a variety of well-studied behaviors and candidate brain regions that can form the bases of such studies. The development of tools to perform in vivo physiology from the Drosophila brain has made it possible to monitor activity in defined neurons in response to sensory stimuli. More recently still, it has become possible to perform recordings from identified neurons in the brain of head-fixed flies during walking or flight behaviors. In this chapter, we discuss how experiments that simultaneously monitor behavior and physiology in Drosophila can be combined with other techniques to produce testable models of sensorimotor circuit function.

View Publication Page
01/28/16 | Studying small brains to understand the building blocks of cognition.
Haberkern H, Jayaraman V
Current Opinion in Neurobiology. 2016 Jan 28;37:59-65. doi: 10.1016/j.conb.2016.01.007

Cognition encompasses a range of higher-order mental processes, such as attention, working memory, and model-based decision-making. These processes are thought to involve the dynamic interaction of multiple central brain regions. A mechanistic understanding of such computations requires not only monitoring and manipulating specific neural populations during behavior, but also knowing the connectivity of the underlying circuitry. These goals are experimentally challenging in mammals, but are feasible in numerically simpler insect brains. In Drosophila melanogaster in particular, genetic tools enable precisely targeted physiology and optogenetics in actively behaving animals. In this article we discuss how these advantages are increasingly being leveraged to study abstract neural representations and sensorimotor computations that may be relevant for cognition in both insects and mammals.

View Publication Page