3947 Publications
Showing 2761-2770 of 3947 results

Compared to the dorsal hippocampus, relatively few studies have characterized neuronal responses in the ventral hippocampus. In particular, it is unclear whether and how cells in the ventral region represent space and/or respond to contextual changes. We recorded from dorsal and ventral CA1 neurons in freely moving mice exposed to manipulations of visuospatial and olfactory contexts. We found that ventral cells respond to alterations of the visuospatial environment such as exposure to novel local cues, cue rotations, and contextual expansion in similar ways to dorsal cells, with the exception of cue rotations. Furthermore, we found that ventral cells responded to odors much more strongly than dorsal cells, particularly to odors of high valence. Similar to earlier studies recording from the ventral hippocampus in CA3, we also found increased scaling of place cell field size along the longitudinal hippocampal axis. Although the increase in place field size observed toward the ventral pole has previously been taken to suggest a decrease in spatial information coded by ventral place cells, we hypothesized that a change in spatial scaling could instead signal a shift in representational coding that preserves the resolution of spatial information. To explore this possibility, we examined population activity using principal component analysis (PCA) and neural location reconstruction techniques. Our results suggest that ventral populations encode a distributed representation of space, and that the resolution of spatial information at the population level is comparable to that of dorsal populations of similar size. Finally, through the use of neural network modeling, we suggest that the redundancy in spatial representation along the longitudinal hippocampal axis may allow the hippocampus to overcome the conflict between memory interference and generalization inherent in neural network memory.
Our results suggest that ventral population activity is well suited for generalization across locations and contexts. © 2014 Wiley Periodicals, Inc.
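The population-level analysis described above (PCA followed by neural location reconstruction) can be illustrated with a minimal numpy sketch. Everything here — cell counts, Gaussian tuning curves, noise levels, the nearest-template decoder — is a hypothetical stand-in for the paper's actual methods, not a reproduction of them.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical population: 100 place cells with Gaussian tuning on a 1D track.
n_cells, n_bins = 100, 50
positions = np.linspace(0.0, 1.0, n_bins)
centers = rng.uniform(0.0, 1.0, n_cells)
width = 0.1

# Template matrix: expected firing rate of each cell at each position bin.
templates = np.exp(-((positions[None, :] - centers[:, None]) ** 2) / (2 * width ** 2))

# One noisy population vector observed at each true position bin.
activity = templates + 0.1 * rng.standard_normal((n_cells, n_bins))

# PCA via SVD on the mean-centred activity; keep the top k components.
mean = activity.mean(axis=1, keepdims=True)
U, S, Vt = np.linalg.svd(activity - mean, full_matrices=False)
k = 10
proj = U[:, :k].T @ (activity - mean)            # observations in PC space
proj_templates = U[:, :k].T @ (templates - mean)

# Reconstruct location: nearest template in PC space.
dists = ((proj[:, :, None] - proj_templates[:, None, :]) ** 2).sum(axis=0)
decoded = dists.argmin(axis=1)                   # decoded bin per observation
mean_abs_err = np.abs(decoded - np.arange(n_bins)).mean()
```

With this toy signal-to-noise ratio the decoder recovers position to within a bin or two, illustrating how a low-dimensional population projection can still carry fine spatial information.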
Methods for one-photon fluorescent imaging of calcium dynamics can capture the activity of hundreds of neurons across large fields of view at a low equipment complexity and cost. In contrast to two-photon methods, however, one-photon methods suffer from higher levels of crosstalk from neuropil, resulting in a decreased signal-to-noise ratio and artifactual correlations of neural activity. We address this problem by engineering cell-body-targeted variants of the fluorescent calcium indicators GCaMP6f and GCaMP7f. We screened fusions of GCaMP to natural, as well as artificial, peptides and identified fusions that localized GCaMP to within 50 μm of the cell body of neurons in mice and larval zebrafish. One-photon imaging of soma-targeted GCaMP in dense neural circuits reported fewer artifactual spikes from neuropil, an increased signal-to-noise ratio, and decreased artifactual correlation across neurons. Thus, soma-targeting of fluorescent calcium indicators facilitates usage of simple, powerful, one-photon methods for imaging neural calcium dynamics.
COVID-19 has severely impacted socioeconomically disadvantaged populations. To support pandemic control strategies, geographically weighted negative binomial regression (GWNBR) mapped COVID-19 risk related to epidemiological and socioeconomic risk factors using South Korean incidence data (January 20, 2020 to July 1, 2020). We constructed COVID-19-specific socioeconomic and epidemiological themes using established social theoretical frameworks and created composite indexes through principal component analysis. The risk of COVID-19 increased with higher area morbidity, risky health behaviours, crowding, and population mobility, and with lower social distancing, healthcare access, and education. Falling COVID-19 risks and spatial shifts over three consecutive time periods reflected effective public health interventions. This study provides a globally replicable methodological framework and precision mapping for COVID-19 and future pandemics.
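The composite indexes mentioned above are built by principal component analysis over standardized area-level indicators. A minimal sketch of that step, on invented data (the indicator matrix, district count, and correlation structure are all hypothetical):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical area-level indicators for one theme (rows = districts).
X = rng.standard_normal((200, 4))
X[:, 1] += 0.8 * X[:, 0]            # correlated indicators, as in a real theme

# Standardize, then take the first principal component as the composite index.
Z = (X - X.mean(axis=0)) / X.std(axis=0)
cov = np.cov(Z, rowvar=False)
vals, vecs = np.linalg.eigh(cov)
pc1 = vecs[:, np.argmax(vals)]      # loading vector of the leading component
index = Z @ pc1                     # one composite score per district
```

The index is zero-mean by construction, and its variance equals the leading eigenvalue — i.e., the single score retains as much of the theme's joint variation as any one linear combination can.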
Determining cell identities in imaging sequences is an important yet challenging task. The conventional method for cell identification is via cell tracking, which is complex and can be time-consuming. In this study, we propose an innovative approach to cell identification during early C. elegans embryogenesis using machine learning. We employed random forest, MLP, and LSTM models, and tested cell classification accuracy on 3D time-lapse confocal datasets spanning the first 4 hours of embryogenesis. By leveraging a small number of spatio-temporal features of individual cells, including cell trajectory and cell fate information, our models achieve an accuracy of over 90%, even with limited data. We also determine the most important feature contributions and can interpret these features in the context of biological knowledge. Our research demonstrates the success of predicting cell identities in 4D imaging sequences directly from simple spatio-temporal features.
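The study above trained random forest, MLP, and LSTM models; as a deliberately simpler stand-in, the numpy-only k-nearest-neighbour sketch below shows the shape of the task — classifying cells into identities from a handful of spatio-temporal features. The feature dimensions, identity count, and data are all invented.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical training set: per-cell spatio-temporal feature vectors
# (e.g. position over time, division timing), with known identities 0..4.
n_train, n_feat = 250, 6
labels = rng.integers(0, 5, n_train)
class_means = rng.standard_normal((5, n_feat)) * 3.0
X_train = class_means[labels] + rng.standard_normal((n_train, n_feat))

def predict(x, X, y, k=5):
    """Classify one feature vector by majority vote of its k nearest neighbours."""
    d = ((X - x) ** 2).sum(axis=1)
    nearest = y[np.argsort(d)[:k]]
    return np.bincount(nearest).argmax()

# Held-out cells drawn from the same hypothetical identities.
X_test = class_means[np.arange(5)] + 0.5 * rng.standard_normal((5, n_feat))
preds = [predict(x, X_train, labels) for x in X_test]
acc = np.mean(np.array(preds) == np.arange(5))
```

Even this crude classifier separates well-clustered identities; the paper's models additionally exploit trajectory and fate features and report which contribute most.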
Perceptual decision making is an active process where animals move their sense organs to extract task-relevant information. To investigate how the brain translates sensory input into decisions during active sensation, we developed a mouse active touch task where the mechanosensory input can be precisely measured and that challenges animals to use multiple mechanosensory cues. Male mice were trained to localise a pole using a single whisker and to report their decision by selecting one of three choices. Using high-speed imaging and machine vision we estimated whisker-object mechanical forces at millisecond resolution. Mice solved the task by a sensory-motor strategy where both the strength and direction of whisker bending were informative cues to pole location. We found competing influences of immediate sensory input and choice memory on mouse choice. On correct trials, choice could be predicted from the direction and strength of whisker bending, but not from previous choice. In contrast, on error trials, choice could be predicted from previous choice but not from whisker bending. This study shows that animal choices during active tactile decision making can be predicted from mechanosensory and choice-memory signals, and provides a new task well-suited for future study of the neural basis of active perceptual decisions.

Due to the difficulty of measuring the sensory input to moving sense organs, active perceptual decision making remains poorly understood. The whisker system provides a way forward since it is now possible to measure the mechanical forces due to whisker-object contact during behaviour. Here we train mice in a novel behavioural task that challenges them to use rich mechanosensory cues, but can be performed using one whisker and enables task-relevant mechanical forces to be precisely estimated. This approach enables rigorous study of how sensory cues translate into action during active, perceptual decision making.
Our findings provide new insight into active touch and how sensory/internal signals interact to determine behavioural choices.
A prominent trend in single-cell transcriptomics is providing spatial context alongside a characterization of each cell's molecular state. This typically requires targeting an a priori selection of genes, often covering less than 1% of the genome, and a key question is how to optimally determine the small gene panel. We address this challenge by introducing a flexible deep learning framework, PERSIST, to identify informative gene targets for spatial transcriptomics studies by leveraging reference scRNA-seq data. Using datasets spanning different brain regions, species, and scRNA-seq technologies, we show that PERSIST reliably identifies panels that provide more accurate prediction of the genome-wide expression profile, thereby capturing more information with fewer genes. PERSIST can be adapted to specific biological goals, and we demonstrate that PERSIST's binarization of gene expression levels enables models trained on scRNA-seq data to generalize to spatial transcriptomics data, despite the complex shift between these technologies.
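PERSIST itself is a deep learning framework; the sketch below illustrates only the underlying objective — choose a small gene panel whose expression best predicts the genome-wide profile — using greedy forward selection with linear reconstruction on synthetic low-rank data. The data, panel size, and selection rule are hypothetical simplifications.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical scRNA-seq reference: 300 cells x 40 genes with low-rank structure.
n_cells, n_genes, rank = 300, 40, 3
E = rng.standard_normal((n_cells, rank)) @ rng.standard_normal((rank, n_genes))

def reconstruction_error(panel):
    """Mean squared error of predicting all genes linearly from a panel."""
    X = E[:, panel]
    beta, *_ = np.linalg.lstsq(X, E, rcond=None)
    return ((X @ beta - E) ** 2).mean()

# Greedy forward selection: grow the panel one gene at a time, always adding
# the gene that most reduces genome-wide reconstruction error.
panel = []
for _ in range(4):
    rest = [g for g in range(n_genes) if g not in panel]
    best = min(rest, key=lambda g: reconstruction_error(panel + [g]))
    panel.append(best)
```

On rank-3 data, a 4-gene panel reconstructs the full matrix almost exactly — the toy analogue of "capturing more information with fewer genes."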
To explore theories of predictive coding, we presented mice with repeated sequences of images with novel images sparsely substituted. Under these conditions, mice could be rapidly trained to lick in response to a novel image, demonstrating a high level of performance on the first day of testing. Using 2-photon calcium imaging to record from layer 2/3 neurons in the primary visual cortex, we found that novel images evoked excess activity in the majority of neurons. When a new stimulus sequence was repeatedly presented, a majority of neurons had similarly elevated activity for the first few presentations, which then decayed to almost zero activity. The decay time of these transient responses was not fixed, but instead scaled with the length of the stimulus sequence. However, at the same time, we also found a small fraction of the neurons within the population (~2%) that continued to respond strongly and periodically to the repeated stimulus. Decoding analysis demonstrated that both the transient and sustained responses encoded information about stimulus identity. We conclude that the layer 2/3 population uses a two-channel predictive code: a dense transient code for novel stimuli and a sparse sustained code for familiar stimuli. These results extend and unify existing theories about the nature of predictive neural codes.
Alcohol addiction is a common affliction with a strong genetic component [1]. Although mammalian studies have provided significant insight into the molecular mechanisms underlying ethanol consumption [2], other organisms such as Drosophila melanogaster are better suited for unbiased, forward genetic approaches to identify novel genes. Behavioral responses to ethanol, such as hyperactivity, sedation, and tolerance, are conserved between flies and mammals [3, 4], as are the underlying molecular pathways [5-9]. However, few studies have investigated ethanol self-administration in flies [10]. Here we characterize ethanol consumption and preference in Drosophila. Flies prefer to consume ethanol-containing food over regular food, and this preference increases over time. Flies are attracted to the smell of ethanol, which partially mediates ethanol preference, but are averse to its taste. Preference for consuming ethanol is not entirely explained by attraction to either its sensory or caloric properties. We demonstrate that flies can exhibit features of alcohol addiction. First, flies self-administer ethanol to pharmacologically relevant concentrations. Second, flies will overcome an aversive stimulus in order to consume ethanol. Third, flies rapidly return to high levels of ethanol consumption after a period of imposed abstinence. Thus, ethanol preference in Drosophila provides a new model for studying aspects of addiction.
Motivation: A significant focus of biological research is to understand the development, organization and function of tissues. A particularly productive area of study is on single layer epithelial tissues in which the adherence junctions of cells form a 2D manifold that is fluorescently labeled. Given the size of the tissue, a microscope must collect a mosaic of overlapping 3D stacks encompassing the stained surface. Downstream interpretation is greatly simplified by preprocessing such a dataset as follows: (a) extracting and mapping the stained manifold in each stack into a single 2D projection plane, (b) correcting uneven illumination artifacts, (c) stitching the mosaic planes into a single, large 2D image, and (d) adjusting the contrast.

Results: We have developed PreMosa, an efficient, fully automatic pipeline to perform the four preprocessing tasks above resulting in a single 2D image of the stained manifold across which contrast is optimized and illumination is even. Notable features are as follows. First, the 2D projection step employs a specially developed algorithm that actually finds the manifold in the stack based on maximizing contrast, intensity and smoothness. Second, the projection step comes first, implying all subsequent tasks are more rapidly solved in 2D. And last, the mosaic melding employs an algorithm that globally adjusts contrasts amongst the 2D tiles so as to produce a seamless, high-contrast image. We conclude with an evaluation using ground-truth datasets and present results on datasets from Drosophila melanogaster wings and Schmidtea mediterranea ciliary components.

Availability: PreMosa is available under https://cblasse.github.io/premosa.

Contact: blasse@mpi-cbg.de, myers@mpi-cbg.de.
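Step (a) of the pipeline — extracting the stained manifold from a 3D stack into a 2D projection — can be caricatured as a per-pixel argmax over z. PreMosa's actual algorithm additionally enforces contrast, intensity, and smoothness constraints; the version below is a deliberately simplified sketch on synthetic data.

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical 3D stack (z, y, x): a bright "manifold" at a smoothly varying depth.
nz, ny, nx = 12, 32, 32
yy, xx = np.meshgrid(np.arange(ny), np.arange(nx), indexing="ij")
true_z = (4 + 3 * np.sin(xx / 10.0)).astype(int)   # depth of the stained surface
stack = 0.05 * rng.random((nz, ny, nx))            # dim background
stack[true_z, yy, xx] = 1.0                        # surface voxels are brightest

# Crude projection: for each (y, x), keep the z-slice of maximal intensity.
zmap = stack.argmax(axis=0)                        # recovered depth map
projection = np.take_along_axis(stack, zmap[None], axis=0)[0]
```

Because the manifold is much brighter than background here, the naive argmax recovers the surface exactly; on real data the smoothness and contrast terms the paper describes are what make the extraction robust.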
Induced pluripotent stem cell (iPSC)-based models are powerful tools to study neurodegenerative diseases such as Parkinson's disease. The differentiation of patient-derived neurons and astrocytes allows investigation of the molecular mechanisms responsible for disease onset and development. In particular, these two cell types can be mono- or co-cultured to study the influence of cell-autonomous and non-cell-autonomous contributors to neurodegenerative diseases. We developed a streamlined procedure to produce high-quality/high-purity cultures of dopaminergic neurons and astrocytes that originate from the same population of midbrain floor-plate progenitors. This unit describes differentiation, quality control, culture parameters, and troubleshooting tips to ensure the highest quality and reproducibility of research results. © 2019 The Authors.

Basic Protocol 1: Differentiation of iPSCs into midbrain-patterned neural progenitor cells
Support Protocol: Quality control of neural progenitor cells
Basic Protocol 2: Differentiation of neural progenitor cells into astrocytes
Basic Protocol 3: Differentiation of neural progenitor cells into dopaminergic neurons
Basic Protocol 4: Co-culture of iPSC-derived neurons and astrocytes.