Filter
Associated Lab
- Aguilera Castrejon Lab (15)
- Ahrens Lab (56)
- Aso Lab (39)
- Baker Lab (38)
- Betzig Lab (110)
- Beyene Lab (10)
- Bock Lab (17)
- Branson Lab (48)
- Card Lab (40)
- Cardona Lab (63)
- Chklovskii Lab (13)
- Clapham Lab (12)
- Cui Lab (19)
- Darshan Lab (12)
- Dennis Lab (1)
- Dickson Lab (46)
- Druckmann Lab (25)
- Dudman Lab (46)
- Eddy/Rivas Lab (30)
- Egnor Lab (11)
- Espinosa Medina Lab (16)
- Feliciano Lab (6)
- Fetter Lab (41)
- Fitzgerald Lab (28)
- Freeman Lab (15)
- Funke Lab (34)
- Gonen Lab (91)
- Grigorieff Lab (62)
- Harris Lab (58)
- Heberlein Lab (94)
- Hermundstad Lab (22)
- Hess Lab (71)
- Ilanges Lab (1)
- Jayaraman Lab (44)
- Ji Lab (33)
- Johnson Lab (6)
- Kainmueller Lab (19)
- Karpova Lab (14)
- Keleman Lab (13)
- Keller Lab (75)
- Koay Lab (16)
- Lavis Lab (136)
- Lee (Albert) Lab (34)
- Leonardo Lab (23)
- Li Lab (25)
- Lippincott-Schwartz Lab (161)
- Liu (Yin) Lab (5)
- Liu (Zhe) Lab (58)
- Looger Lab (137)
- Magee Lab (49)
- Menon Lab (18)
- Murphy Lab (13)
- O'Shea Lab (4)
- Otopalik Lab (13)
- Pachitariu Lab (41)
- Pastalkova Lab (18)
- Pavlopoulos Lab (19)
- Pedram Lab (14)
- Podgorski Lab (16)
- Reiser Lab (49)
- Riddiford Lab (44)
- Romani Lab (40)
- Rubin Lab (139)
- Saalfeld Lab (60)
- Satou Lab (16)
- Scheffer Lab (36)
- Schreiter Lab (62)
- Sgro Lab (20)
- Shroff Lab (23)
- Simpson Lab (23)
- Singer Lab (80)
- Spruston Lab (91)
- Stern Lab (152)
- Sternson Lab (54)
- Stringer Lab (29)
- Svoboda Lab (135)
- Tebo Lab (31)
- Tervo Lab (9)
- Tillberg Lab (17)
- Tjian Lab (64)
- Truman Lab (88)
- Turaga Lab (46)
- Turner Lab (35)
- Vale Lab (6)
- Voigts Lab (2)
- Wang (Meng) Lab (9)
- Wang (Shaohe) Lab (24)
- Wu Lab (9)
- Zlatic Lab (28)
- Zuker Lab (25)
Associated Project Team
- CellMap (5)
- COSEM (3)
- Fly Descending Interneuron (10)
- Fly Functional Connectome (14)
- Fly Olympiad (5)
- FlyEM (51)
- FlyLight (46)
- GENIE (40)
- Integrative Imaging (1)
- Larval Olympiad (2)
- MouseLight (16)
- NeuroSeq (1)
- ThalamoSeq (1)
- Tool Translation Team (T3) (24)
- Transcription Imaging (49)
Publication Date
- 2024 (141)
- 2023 (175)
- 2022 (192)
- 2021 (193)
- 2020 (196)
- 2019 (202)
- 2018 (232)
- 2017 (217)
- 2016 (209)
- 2015 (252)
- 2014 (236)
- 2013 (194)
- 2012 (190)
- 2011 (190)
- 2010 (161)
- 2009 (158)
- 2008 (140)
- 2007 (106)
- 2006 (92)
- 2005 (67)
- 2004 (57)
- 2003 (58)
- 2002 (39)
- 2001 (28)
- 2000 (29)
- 1999 (14)
- 1998 (18)
- 1997 (16)
- 1996 (10)
- 1995 (18)
- 1994 (12)
- 1993 (10)
- 1992 (6)
- 1991 (11)
- 1990 (11)
- 1989 (6)
- 1988 (1)
- 1987 (7)
- 1986 (4)
- 1985 (5)
- 1984 (2)
- 1983 (2)
- 1982 (3)
- 1981 (3)
- 1980 (1)
- 1979 (1)
- 1976 (2)
- 1973 (1)
- 1970 (1)
- 1967 (1)
Type of Publication
3920 Publications
Showing 721-730 of 3920 results

Adaptive movements are critical to animal survival. To guide future actions, the brain monitors different outcomes, including achievement of movement and appetitive goals. The nature of outcome signals and their neuronal and network realization in motor cortex (M1), which commands the performance of skilled movements, is largely unknown. Using a dexterity task, calcium imaging, optogenetic perturbations, and behavioral manipulations, we studied outcome signals in murine M1. We find two populations of layer 2/3 neurons, “success”- and “failure”-related neurons, that develop with training and report the end result of each trial. In these neurons, prolonged responses were recorded after success or failure trials, independent of reward and kinematics. In contrast, the initial state of layer-5 pyramidal tract neurons contains a memory trace of the previous trial’s outcome. Inter-trial cortical activity was needed to learn new task requirements. These layer-specific performance-outcome signals in M1 can support reinforcement-based motor learning of skilled behavior.
Spatial patterns of gene expression in the vertebrate brain are not independent, as pairs of genes can exhibit complex patterns of coexpression. Two genes may be similarly expressed in one region, but differentially expressed in other regions. These correlations have been studied quantitatively, particularly for the Allen Atlas of the adult mouse brain, but their biological meaning remains obscure. We propose a simple model of the coexpression patterns in terms of spatial distributions of underlying cell types and establish its plausibility using independently measured cell-type-specific transcriptomes. The model allows us to predict the spatial distribution of cell types in the mouse brain.
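The "simple model" in this abstract is not spelled out here, so the sketch below is only a plausible illustration under an assumed linear-mixture formulation: bulk expression at each voxel is modeled as cell-type-specific transcriptomes weighted by local cell-type densities, and the densities are recovered by non-negative least squares. All array names, shapes, and the synthetic data are hypothetical.

```python
import numpy as np
from scipy.optimize import nnls

# Hypothetical shapes: G genes, V voxels, T cell types.
# E[g, v]  -- atlas-style expression of gene g at voxel v (measured)
# S[g, t]  -- cell-type-specific expression of gene g in type t (measured independently)
# D[t, v]  -- unknown density of cell type t at voxel v (to be estimated)
rng = np.random.default_rng(0)
G, V, T = 200, 1000, 10
S = rng.gamma(2.0, 1.0, size=(G, T))                   # stand-in cell-type transcriptomes
D_true = rng.gamma(1.0, 1.0, size=(T, V))              # stand-in "true" densities
E = S @ D_true + 0.01 * rng.standard_normal((G, V))    # assumed linear-mixture model

# Non-negative least-squares unmixing, one voxel at a time,
# to recover the spatial distribution of cell types.
D_hat = np.stack([nnls(S, E[:, v])[0] for v in range(V)], axis=1)

# Gene-gene coexpression across space is then inherited from shared cell types.
coexpr = np.corrcoef(E)   # (G x G) spatial correlation matrix
```

Under this formulation, two genes enriched in the same cell type covary wherever that type is present, while genes split across types can be correlated in one region and anticorrelated in another.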
Cell-type-selective expression of the TFIID subunit TAF(II)105 (renamed TAF4b) in the ovary is essential for proper follicle development. Although a multitude of signaling pathways required for folliculogenesis have been identified, downstream transcriptional integrators of these signals remain largely unknown. Here, we show that TAF4b controls the granulosa-cell-specific expression of the proto-oncogene c-jun, and together they regulate transcription of ovary-selective promoters. Instead of using cell-type-specific activators, our findings suggest that the coactivator TAF4b regulates the expression of tissue-specific genes, at least in part, through the cell-type-specific induction of c-jun, a ubiquitous activator. Importantly, the loss of TAF4b in ovarian granulosa cells disrupts cellular morphologies and interactions during follicle growth that likely contribute to the infertility observed in TAF4b-null female mice. These data highlight a mechanism for potentiating tissue-selective functions of the basal transcription machinery and reveal intricate networks of gene expression that orchestrate ovarian-specific functions and cell morphology.
The study of synaptic specificity and plasticity in the CNS is limited by the inability to efficiently visualize synapses in identified neurons using light microscopy. Here, we describe synaptic tagging with recombination (STaR), a method for labeling endogenous presynaptic and postsynaptic proteins in a cell-type-specific fashion. We modified genomic loci encoding synaptic proteins within bacterial artificial chromosomes such that these proteins, expressed at endogenous levels and with normal spatiotemporal patterns, were labeled in an inducible fashion in specific neurons through targeted expression of site-specific recombinases. Within the Drosophila visual system, the number and distribution of synapses correlate with electron microscopy studies. Using two different recombination systems, presynaptic and postsynaptic specializations of synaptic pairs can be colabeled. STaR also allows synapses within the CNS to be studied in live animals noninvasively. In principle, STaR can be adapted to the mammalian nervous system.
Neocortical spiking dynamics control aspects of behavior, yet how these dynamics emerge during motor learning remains elusive. Activity-dependent synaptic plasticity is likely a key mechanism, as it reconfigures network architectures that govern neural dynamics. Here, we examined how the mouse premotor cortex acquires its well-characterized neural dynamics that control movement timing, specifically lick timing. To probe the role of synaptic plasticity, we have genetically manipulated proteins essential for major forms of synaptic plasticity, Ca2+/calmodulin-dependent protein kinase II (CaMKII) and Cofilin, in a region and cell-type-specific manner. Transient inactivation of CaMKII in the premotor cortex blocked learning of new lick timing without affecting the execution of learned action or ongoing spiking activity. Furthermore, among the major glutamatergic neurons in the premotor cortex, CaMKII and Cofilin activity in pyramidal tract (PT) neurons, but not intratelencephalic (IT) neurons, is necessary for learning. High-density electrophysiology in the premotor cortex uncovered that neural dynamics anticipating licks are progressively shaped during learning, which explains the change in lick timing. Such reconfiguration in behaviorally relevant dynamics is impeded by CaMKII manipulation in PT neurons. Altogether, the activity of plasticity-related proteins in PT neurons plays a central role in sculpting neocortical dynamics to learn new behavior.
Mutations in methyl-CpG-binding protein 2 (MeCP2) cause Rett syndrome and related autism spectrum disorders (Amir et al., 1999). MeCP2 is believed to be required for proper regulation of brain gene expression, but prior microarray studies in Mecp2 knock-out mice using brain tissue homogenates have revealed only subtle changes in gene expression (Tudor et al., 2002; Nuber et al., 2005; Jordan et al., 2007; Chahrour et al., 2008). Here, by profiling discrete subtypes of neurons we uncovered more dramatic effects of MeCP2 on gene expression, overcoming the "dilution problem" associated with assaying homogenates of complex tissues. The results reveal misregulation of genes involved in neuronal connectivity and communication. Importantly, genes upregulated following loss of MeCP2 are biased toward longer genes but this is not true for downregulated genes, suggesting MeCP2 may selectively repress long genes. Because genes involved in neuronal connectivity and communication, such as cell adhesion and cell-cell signaling genes, are enriched among longer genes, their misregulation following loss of MeCP2 suggests a possible etiology for altered circuit function in Rett syndrome.
Pretrained neural network models for biological segmentation can provide good out-of-the-box results for many image types. However, such models do not allow users to adapt the segmentation style to their specific needs and can perform suboptimally for test images that are very different from the training images. Here we introduce Cellpose 2.0, a new package that includes an ensemble of diverse pretrained models as well as a human-in-the-loop pipeline for rapid prototyping of new custom models. We show that models pretrained on the Cellpose dataset can be fine-tuned with only 500-1,000 user-annotated regions of interest (ROI) to perform nearly as well as models trained on entire datasets with up to 200,000 ROI. A human-in-the-loop approach further reduced the required user annotation to 100-200 ROI, while maintaining high-quality segmentations. We provide software tools such as an annotation graphical user interface, a model zoo and a human-in-the-loop pipeline to facilitate the adoption of Cellpose 2.0.
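As a rough sketch of the fine-tuning workflow described above, the code below uses the cellpose Python package's CellposeModel API. The file paths, epoch count, and model name are placeholders, and the location and signature of the training call vary between Cellpose releases (newer versions move training into a separate module), so treat the exact calls as assumptions to check against the installed version.

```python
from cellpose import models, io

# Load a small set of user-annotated images and masks (hypothetical paths).
train_imgs = [io.imread(f"train/img_{i:03d}.tif") for i in range(50)]
train_masks = [io.imread(f"train/mask_{i:03d}.tif") for i in range(50)]

# Start from a pretrained generalist model and fine-tune on the user-annotated ROIs.
model = models.CellposeModel(gpu=True, model_type="cyto2")
model.train(
    train_data=train_imgs,
    train_labels=train_masks,
    channels=[0, 0],          # grayscale images
    n_epochs=100,
    model_name="my_custom_model",
)

# Segment new images with the fine-tuned model.
test_img = io.imread("test/img_000.tif")
masks, flows, styles = model.eval(test_img, diameter=None, channels=[0, 0])
```

In practice, the human-in-the-loop variant of this loop is driven from the Cellpose GUI: segment, correct a few ROIs, retrain, and repeat.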
Generalist methods for cellular segmentation have good out-of-the-box performance on a variety of image types. However, existing methods struggle for images that are degraded by noise, blurred or undersampled, all of which are common in microscopy. We focused the development of Cellpose3 on addressing these cases, and here we demonstrate substantial out-of-the-box gains in segmentation and image quality for noisy, blurry or undersampled images. Unlike previous approaches, which train models to restore pixel values, we trained Cellpose3 to output images that are well-segmented by a generalist segmentation model, while maintaining perceptual similarity to the target images. Furthermore, we trained the restoration models on a large, varied collection of datasets, thus ensuring good generalization to user images. We provide these tools as “one-click” buttons inside the graphical interface of Cellpose as well as in the Cellpose API.
Many biological applications require the segmentation of cell bodies, membranes and nuclei from microscopy images. Deep learning has enabled great progress on this problem, but current methods are specialized for images that have large training datasets. Here we introduce a generalist, deep learning-based segmentation algorithm called Cellpose, which can very precisely segment a wide range of image types out-of-the-box and does not require model retraining or parameter adjustments. We trained Cellpose on a new dataset of highly-varied images of cells, containing over 70,000 segmented objects. To support community contributions to the training data, we developed software for manual labelling and for curation of the automated results, with optional direct upload to our data repository. Periodically retraining the model on the community-contributed data will ensure that Cellpose improves constantly.
Many biological applications require the segmentation of cell bodies, membranes and nuclei from microscopy images. Deep learning has enabled great progress on this problem, but current methods are specialized for images that have large training datasets. Here we introduce a generalist, deep learning-based segmentation method called Cellpose, which can precisely segment cells from a wide range of image types and does not require model retraining or parameter adjustments. Cellpose was trained on a new dataset of highly varied images of cells, containing over 70,000 segmented objects. We also demonstrate a three-dimensional (3D) extension of Cellpose that reuses the two-dimensional (2D) model and does not require 3D-labeled data. To support community contributions to the training data, we developed software for manual labeling and for curation of the automated results. Periodically retraining the model on the community-contributed data will ensure that Cellpose improves constantly.
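As a usage note, out-of-the-box segmentation with the cellpose Python package typically reduces to a single eval() call; a minimal sketch is below, with a placeholder file name and arguments that may differ slightly across versions.

```python
import numpy as np
from cellpose import models, io

# Read one image (placeholder path) and run the generalist 'cyto' model.
img = io.imread("example_cells.tif")
model = models.Cellpose(gpu=False, model_type="cyto")

# diameter=None lets Cellpose estimate object size; channels=[0, 0] means grayscale.
masks, flows, styles, diams = model.eval(img, diameter=None, channels=[0, 0])

print(f"segmented {int(masks.max())} objects")
np.save("example_cells_masks.npy", masks)
```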