A powerful approach for understanding neural population dynamics is to extract low-dimensional trajectories from population recordings using dimensionality reduction methods. Current approaches for dimensionality reduction on neural data are limited to single population recordings and cannot identify dynamics embedded across multiple measurements. We propose an approach for extracting low-dimensional dynamics from multiple, sequential recordings. Our algorithm scales to data comprising millions of observed dimensions, making it possible to access dynamics distributed across large populations or multiple brain areas. Building on subspace-identification approaches for dynamical systems, we perform parameter estimation by minimizing a moment-matching objective using a scalable stochastic gradient descent algorithm: the model is optimized to predict temporal covariations across neurons and across time. We show how this approach naturally handles missing data and multiple partial recordings, and can identify dynamics and predict correlations even in the presence of severe subsampling and small overlap between recordings. We demonstrate the effectiveness of the approach both on simulated data and on a whole-brain larval zebrafish imaging dataset.
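To make the stitching idea concrete, the sketch below fits a latent linear dynamical system to two partially overlapping recordings by matching time-lagged covariances only on neuron pairs that were recorded together. This is an illustrative toy, not the authors' implementation: the paper optimizes a moment-matching objective of this kind with scalable stochastic gradients, whereas the sketch uses a small dense problem and SciPy's L-BFGS, and all names, sizes, and noise levels are made up.

```python
# Toy sketch (not the paper's code): fit x_{t+1} = A x_t + noise, y_t = C x_t + noise
# to two partially overlapping recordings by matching time-lagged covariances on
# neuron pairs that were observed together. All sizes and names are illustrative.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
T, n_neurons, n_latent, max_lag = 2000, 40, 3, 3

# simulate ground truth and two sequential, partially overlapping recordings
A_true = 0.95 * np.linalg.qr(rng.standard_normal((n_latent, n_latent)))[0]
C_true = rng.standard_normal((n_neurons, n_latent))
x = np.zeros(n_latent)
Y = np.zeros((T, n_neurons))
for t in range(T):
    x = A_true @ x + 0.3 * rng.standard_normal(n_latent)
    Y[t] = C_true @ x + 0.1 * rng.standard_normal(n_neurons)
observed = np.zeros((T, n_neurons), dtype=bool)
observed[: T // 2, :25] = True        # recording 1: neurons 0-24
observed[T // 2 :, 15:] = True        # recording 2: neurons 15-39 (overlap: 15-24)

def empirical_lagged_covs(Y, observed, max_lag):
    """Lagged second moments averaged over times when both neurons were observed."""
    covs, seen = [], []
    for s in range(max_lag + 1):
        joint = observed[s:, :, None] & observed[: Y.shape[0] - s, None, :]
        num = (Y[s:, :, None] * Y[: Y.shape[0] - s, None, :] * joint).sum(0)
        cnt = joint.sum(0)
        covs.append(num / np.maximum(cnt, 1))
        seen.append(cnt > 0)
    return covs, seen

emp, seen = empirical_lagged_covs(Y, observed, max_lag)

def unpack(theta):
    A = theta[: n_latent * n_latent].reshape(n_latent, n_latent)
    C = theta[n_latent * n_latent :].reshape(n_neurons, n_latent)
    return A, C

def loss(theta):
    A, C = unpack(theta)
    rho = np.max(np.abs(np.linalg.eigvals(A)))
    if rho > 0.99:
        A = A * (0.99 / rho)               # keep latent dynamics stable during optimization
    Pi = np.eye(n_latent)                  # stationary latent covariance: Pi = A Pi A' + I
    for _ in range(50):
        Pi = A @ Pi @ A.T + np.eye(n_latent)
    total = 0.0
    for s in range(max_lag + 1):
        model_cov = C @ np.linalg.matrix_power(A, s) @ Pi @ C.T
        total += np.sum(seen[s] * (model_cov - emp[s]) ** 2)   # only jointly observed pairs
    return total

theta0 = 0.1 * rng.standard_normal(n_latent * (n_latent + n_neurons))
fit = minimize(loss, theta0, method="L-BFGS-B", options={"maxiter": 200})
A_hat, C_hat = unpack(fit.x)               # recovered up to a linear change of latent basis
```

Because the loss only ever compares covariance entries for neuron pairs that were jointly observed, the same objective applies unchanged when the two recordings overlap on only a handful of neurons, which is the regime the abstract describes.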
Biological tissue is often composed of cells with similar morphologies replicated throughout large volumes, and many biological applications rely on the accurate identification of these cells and their locations from image data. Here we develop a generative model that captures the regularities present in images composed of repeating elements of a few different types. Formally, the model can be described as convolutional sparse block coding. For inference we use a variant of convolutional matching pursuit adapted to block-based representations. We extend the K-SVD learning algorithm to subspaces by retaining several principal vectors from the SVD decomposition instead of just one. Good models with little cross-talk between subspaces can be obtained by learning the blocks incrementally. We perform extensive experiments on simulated images, and the inference algorithm consistently recovers a large proportion of the cells with a small number of false positives. We fit the convolutional model to noisy GCaMP6 two-photon images of spiking neurons and to Nissl-stained slices of cortical tissue and show that it recovers cell body locations without supervision. The flexibility of the block-based representation is reflected in the variability of the recovered cell shapes.
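As a rough illustration of the inference step, the sketch below implements plain greedy convolutional matching pursuit with a fixed dictionary of single templates. It omits the block/subspace structure and the incremental K-SVD-style dictionary learning described in the abstract, and all names, templates, and sizes are invented for the example.

```python
# Illustrative greedy convolutional matching pursuit with a fixed dictionary of
# single templates (the paper's model additionally uses subspace "blocks" and
# learns the dictionary with a K-SVD variant; that part is not shown here).
import numpy as np
from scipy.signal import correlate2d

def conv_matching_pursuit(image, templates, n_atoms=20):
    """Greedily explain `image` as a sum of shifted, scaled templates."""
    templates = [t / np.linalg.norm(t) for t in templates]   # unit-norm atoms
    residual = image.astype(float).copy()
    detections = []
    for _ in range(n_atoms):
        best = None
        for k, t in enumerate(templates):
            corr = correlate2d(residual, t, mode="valid")
            i, j = np.unravel_index(np.argmax(corr), corr.shape)
            if best is None or corr[i, j] > best[0]:
                best = (corr[i, j], k, i, j)
        coeff, k, i, j = best
        if coeff <= 0:                        # nothing left worth explaining
            break
        h, w = templates[k].shape
        residual[i : i + h, j : j + w] -= coeff * templates[k]
        detections.append((k, i, j, coeff))   # template id, top-left corner, amplitude
    return detections, residual

# toy usage: two Gaussian-blob "cell" templates of different widths
yy, xx = np.mgrid[-5:6, -5:6]
templates = [np.exp(-(xx**2 + yy**2) / (2 * s**2)) for s in (1.5, 3.0)]
image = np.zeros((64, 64))
image[10:21, 10:21] += 3 * templates[0]
image[30:41, 35:46] += 2 * templates[1]
image += 0.05 * np.random.default_rng(0).standard_normal(image.shape)
detections, residual = conv_matching_pursuit(image, templates, n_atoms=5)
```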
An often-overlooked aspect of neural plasticity is the plasticity of neuronal composition, in which the numbers of neurons of particular classes are altered in response to environment and experience. The Drosophila brain features several well-characterized lineages in which a single neuroblast gives rise to multiple neuronal classes in a stereotyped sequence during development [1]. We find that in the intrinsic mushroom body neuron lineage, the numbers for each class are highly plastic, depending on the timing of temporal fate transitions and the rate of neuroblast proliferation. For example, mushroom body neuroblast cycling can continue under starvation conditions, uncoupled from temporal fate transitions that depend on extrinsic cues reflecting organismal growth and development. In contrast, the proliferation rates of antennal lobe lineages are closely associated with organismal development, and their temporal fate changes appear to be cell cycle-dependent, such that the same numbers and types of uniglomerular projection neurons innervate the antennal lobe following various perturbations. We propose that this surprising difference in plasticity for these brain lineages is adaptive, given their respective roles as parallel processors versus discrete carriers of olfactory information.
Many animals rely on vision to navigate through their environment. The pattern of changes in the visual scene induced by self-motion is the optic flow [1], which is first estimated in local patches by directionally selective (DS) neurons [2–4]. But how should the arrays of DS neurons, each responsive to motion in a preferred direction at a specific retinal position, be organized to support robust decoding of optic flow by downstream circuits? Understanding this global organization is challenging because it requires mapping fine, local features of neurons across the animal’s field of view [3]. In Drosophila, the asymmetric dendrites of the T4 and T5 DS neurons establish their preferred direction, making it possible to predict DS responses from anatomy [4,5]. Here we report that the preferred directions of fly DS neurons vary at different retinal positions and show that this spatial variation is established by the anatomy of the compound eye. To estimate the preferred directions across the visual field, we reconstructed hundreds of T4 neurons in a full brain EM volume [6] and discovered unexpectedly stereotypical dendritic arborizations that are independent of location. We then used whole-head μCT scans to map the viewing directions of all compound eye facets and found a non-uniform sampling of visual space that explains the spatial variation in preferred directions. Our findings show that the organization of preferred directions in the fly is largely determined by the compound eye, exposing an intimate and unexpected connection between the peripheral structure of the eye, functional properties of neurons deep in the brain, and the control of body movements.
In Drosophila, pattern formation at multiple stages of embryonic and imaginal development depends on the same intercellular signaling pathways. We have identified a novel gene, eyelid (eld), which is required for embryonic segmentation, development of the notum and wing margin, and photoreceptor differentiation. In these tissues, eld mutations have effects opposite to those caused by wingless (wg) mutations. eld encodes a widely expressed nuclear protein with a region homologous to a novel family of DNA-binding domains. Based on this homology and on the phenotypic analysis, we suggest that Eld could act as a transcription factor antagonistic to the Wg pathway.
Visualizing the formation of multinucleated giant cells (MGCs) from living specimens has been challenging because most live imaging techniques require propagation of light through glass, yet macrophage fusion on glass is a rare event. This protocol presents the fabrication of several optical-quality glass surfaces on which adsorption of compounds containing long-chain hydrocarbons transforms glass into a fusogenic surface. First, preparation of clean glass surfaces as starting material for surface modification is described. Second, a method is provided for the adsorption of compounds containing long-chain hydrocarbons to convert non-fusogenic glass into a fusogenic substrate. Third, this protocol describes fabrication of surface micropatterns that promote a high degree of spatiotemporal control over MGC formation. Finally, fabrication of glass-bottom dishes is described. Examples of the use of this in vitro cell system as a model to study macrophage fusion and MGC formation are shown.
Recent studies in mice have shown that orofacial behaviors drive a large fraction of neural activity across the brain. To understand the nature and function of these signals, we need better computational models to characterize the behaviors and relate them to neural activity. Here we developed Facemap, a framework consisting of a keypoint tracking algorithm and a deep neural network encoder for predicting neural activity. We used the Facemap keypoints as input to the deep neural network to predict the activity of ∼50,000 simultaneously recorded neurons, and in visual cortex we doubled the amount of explained variance compared to previous methods. Our keypoint tracking algorithm was more accurate than existing pose estimation tools, while the inference speed was several times faster, making it a powerful tool for closed-loop behavioral experiments. The Facemap tracker was easy to adapt to data from new labs, requiring as few as 10 annotated frames for near-optimal performance. We used Facemap to find that the neuronal activity clusters that were highly driven by behaviors were more spatially spread out across cortex. We also found that the deep keypoint features inferred by the model had time-asymmetrical state dynamics that were not apparent in the raw keypoint data. In summary, Facemap provides a stepping stone towards understanding the function of the brainwide neural signals and their relation to behavior.
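The sketch below is only meant to make the prediction problem concrete: regress neural activity onto time-lagged behavioral keypoint traces and report cross-validated variance explained. It is not the Facemap API or its deep network encoder; a plain ridge regression stands in for the encoder, and the data, shapes, and names are made up.

```python
# Illustrative stand-in for the keypoints -> neural activity prediction problem
# (not the Facemap package itself, which uses a deep network encoder).
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
T, n_keypoints, n_neurons, n_lags = 5000, 15, 200, 4
keypoints = rng.standard_normal((T, 2 * n_keypoints))    # x, y per tracked keypoint
neural = rng.standard_normal((T, n_neurons))              # e.g. deconvolved calcium traces

# design matrix of time-lagged keypoint coordinates (lags 0 .. n_lags-1)
lagged = np.concatenate([np.roll(keypoints, s, axis=0) for s in range(n_lags)], axis=1)
X, y = lagged[n_lags:], neural[n_lags:]                    # drop wrapped-around rows

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, shuffle=False)
encoder = Ridge(alpha=10.0).fit(X_train, y_train)          # linear stand-in for the deep encoder
variance_explained = encoder.score(X_test, y_test)         # ~0 here, since the toy data are random
```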
Despite the apparent simplicity of the xanthene fluorophores, the preparation of caged derivatives with free carboxy groups remains a synthetic challenge. A straightforward and flexible strategy for preparing rhodamine and fluorescein derivatives was developed using reduced, “leuco” intermediates.
Efforts to map neural circuits have been galvanized by the development of genetic technologies that permit the manipulation of targeted sets of neurons in the brains of freely behaving animals. The success of these efforts relies on the experimenter's ability to target arbitrarily small subsets of neurons for manipulation, but such specificity of targeting cannot routinely be achieved using existing methods. In Drosophila melanogaster, a widely used technique for refined cell-type specific manipulation is the Split GAL4 system, which augments the targeting specificity of the binary GAL4-UAS system by making GAL4 transcriptional activity contingent upon two enhancers, rather than one. To permit more refined targeting, we introduce here the "Killer Zipper" (KZip(+)), a suppressor that makes Split GAL4 targeting contingent upon a third enhancer. KZip(+) acts by disrupting both the formation and activity of Split GAL4 heterodimers, and we show how this added layer of control can be used to selectively remove unwanted cells from a Split GAL4 expression pattern or to subtract neurons of interest from a pattern to determine their requirement in generating a given phenotype. To facilitate application of the KZip(+) technology, we have developed a versatile set of LexAop-KZip(+) fly lines that can be used directly with the large number of LexA driver lines with known expression patterns. The Killer Zipper significantly sharpens the precision of neuronal genetic control available in Drosophila and may be extended to other organisms where Split GAL4-like systems are used.
