New technologies for monitoring and manipulating the nervous system promise exciting biology but pose challenges for analysis and computation. Solutions can be found in the form of modern approaches to distributed computing, machine learning, and interactive visualization. But embracing these new technologies will require a cultural shift: away from independent efforts and proprietary methods and toward an open source and collaborative neuroscience.
Laboratory behavioural tasks are an essential research tool. As questions asked of behaviour and brain activity become more sophisticated, the ability to specify and run richly structured tasks becomes more important. An increasing focus on reproducibility also necessitates accurate communication of task logic to other researchers. To these ends, we developed pyControl, a system of open-source hardware and software for controlling behavioural experiments comprising a simple yet flexible Python-based syntax for specifying tasks as extended state machines, hardware modules for building behavioural setups, and a graphical user interface designed for efficiently running high-throughput experiments on many setups in parallel, all with extensive online documentation. These tools make it quicker, easier, and cheaper to implement rich behavioural tasks at scale. As important, pyControl facilitates communication and reproducibility of behavioural experiments through a highly readable task definition syntax and self-documenting features. Here, we outline the system's design and rationale, present validation experiments characterising system performance, and demonstrate example applications in freely moving and head-fixed mouse behaviour.
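The "tasks as extended state machines" idea described above can be sketched in a few lines of plain Python. This is an illustrative toy, not pyControl's actual task-definition API: the `TaskStateMachine` class and the `'lick'`/`'reward'` names are hypothetical stand-ins for whatever states and events a real task would define.

```python
# Minimal extended-state-machine sketch for a two-state behavioural task.
# Illustrative only -- class and event names are hypothetical, and this is
# NOT pyControl's actual task-definition syntax.

class TaskStateMachine:
    def __init__(self, initial_state, transitions):
        # transitions maps (current_state, event) -> next_state
        self.state = initial_state
        self.transitions = transitions
        self.log = []  # record of (event, resulting state) for reproducibility

    def process_event(self, event):
        key = (self.state, event)
        if key in self.transitions:
            self.state = self.transitions[key]
        self.log.append((event, self.state))
        return self.state

# A trivial lick-for-reward task: wait for a lick, deliver reward, return to waiting.
task = TaskStateMachine(
    initial_state='wait_for_lick',
    transitions={
        ('wait_for_lick', 'lick'): 'reward',
        ('reward', 'reward_done'): 'wait_for_lick',
    },
)
```

Because the whole task reduces to a readable table of (state, event) → state transitions plus an event log, definitions like this are easy to communicate to other researchers, which is the reproducibility point the abstract makes.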
To smooth the academic-to-industry transition, one institution is experimenting with offering biomedical researchers pre-commercial open access to new optical imaging systems still under development. The approach, the authors of this case study suggest, can be a win on both sides.
Recent progress in intracellular calcium sensors and other fluorophores has promoted the widespread adoption of functional optical imaging in the life sciences. Home-built multiphoton microscopes are easy to build, highly customizable, and cost-effective. For many imaging applications a 3-axis motorized stage is critical, but commercially available motorization hardware (motorized translators, controller boxes, etc.) is often very expensive. Furthermore, the firmware on commercial motor controllers cannot easily be altered and is not usually designed with a microscope stage in mind. Here we describe an open-source motorization solution that is simple to construct, yet far cheaper and more customizable than commercial offerings. The cost of the controller and motorization hardware is under $1000. Hardware costs are kept low by replacing linear actuators with high-quality stepper motors. Electronics are assembled from commonly available hobby components, which are easy to work with. Here we describe assembly of the system and quantify the positioning accuracy of all three axes. We obtain positioning repeatability on the order of 1 μm in X/Y and 0.1 μm in Z. A hand-held control-pad allows the user to direct stage motion precisely over a wide range of speeds (10⁻¹ to 10² μm·s⁻¹), rapidly store and return to different locations, and execute "jumps" of a fixed size. In addition, the system can be controlled from a PC serial port. Our "OpenStage" controller is sufficiently flexible that it could be used to drive other devices, such as micro-manipulators, with minimal modifications.
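To make concrete how a stepper-based design trades expensive linear actuators for cheap motors, here is a sketch of the micrometre-to-microstep conversion any such controller must perform. All constants (steps per revolution, microstepping factor, lead-screw pitch) are illustrative assumptions, not OpenStage's actual hardware parameters.

```python
# Sketch: converting a requested stage move in micrometres into stepper
# microsteps. The constants below are illustrative assumptions only.

FULL_STEPS_PER_REV = 200   # a typical 1.8-degree stepper motor
MICROSTEPS = 16            # assumed driver microstepping factor
PITCH_UM = 500.0           # assumed lead-screw travel per revolution (um)

def um_to_microsteps(distance_um):
    """Return the whole number of microsteps closest to the requested move."""
    steps_per_um = FULL_STEPS_PER_REV * MICROSTEPS / PITCH_UM  # 6.4 steps/um here
    return round(distance_um * steps_per_um)
```

With these assumed numbers one microstep corresponds to 500/3200 ≈ 0.16 μm of travel, which illustrates how microstepping alone can reach the micron-scale repeatability the abstract reports without precision linear actuators.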
The present study describes a task testing the ability of rats to trigger operant behavior by their relative spatial position to inaccessible rotating objects. Rats were placed in a Skinner box with a transparent front wall through which they could observe one or two adjacent objects fixed on a slowly rotating arena (d = 1 m) surrounded by an immobile black cylinder. The direction of arena rotation was alternated at a sequence of different time intervals. Rats were reinforced for the first bar-press that was emitted when a radius separating the two adjacent objects or dividing a single object into two halves (pointing radius) entered a 60° sector of its circular trajectory defined with respect to the stationary Skinner box (reward sector). Well-trained rats emitted 62.1 ± 3.6% of responses in a 60° sector preceding the reward sector and in the first 30° of the reward sector. Response rate increased only when the pointing radius was approaching the reward sector, regardless of the time elapsed from the last reward. In the extinction session, when no reward was delivered, rats responded during the whole passage of the pointing radius through the former reward sector and spontaneously decreased responding after the pointing radius left this area. This finding suggests that rats perceived the reward sector as a continuous single region. The same results were obtained when the Skinner box with the rat was orbiting around the immobile scene. It is concluded that rats can recognize and anticipate their position relative to movable objects.
For goal-directed behaviour it is critical that we can both select the appropriate action and learn to modify the underlying movements (for example, the pitch of a note or velocity of a reach) to improve outcomes. The basal ganglia are a critical nexus where circuits necessary for the production of behaviour, such as the neocortex and thalamus, are integrated with reward signalling to reinforce successful, purposive actions. The dorsal striatum, a major input structure of basal ganglia, is composed of two opponent pathways, direct and indirect, thought to select actions that elicit positive outcomes and suppress actions that do not, respectively. Activity-dependent plasticity modulated by reward is thought to be sufficient for selecting actions in the striatum. Although perturbations of basal ganglia function produce profound changes in movement, it remains unknown whether activity-dependent plasticity is sufficient to produce learned changes in movement kinematics, such as velocity. Here we use cell-type-specific stimulation in mice delivered in closed loop during movement to demonstrate that activity in either the direct or indirect pathway is sufficient to produce specific and sustained increases or decreases in velocity, without affecting action selection or motivation. These behavioural changes were a form of learning that accumulated over trials, persisted after the cessation of stimulation, and were abolished in the presence of dopamine antagonists. Our results reveal that the direct and indirect pathways can each bidirectionally control movement velocity, demonstrating unprecedented specificity and flexibility in the control of volition by the basal ganglia.
Deep learning describes a class of machine learning algorithms that are capable of combining raw inputs into layers of intermediate features. These algorithms have recently shown impressive results across a variety of domains. Biology and medicine are data-rich disciplines, but the data are complex and often ill-understood. Hence, deep learning techniques may be particularly well suited to solve problems in these fields. We examine applications of deep learning to a variety of biomedical problems—patient classification, fundamental biological processes and treatment of patients—and discuss whether deep learning will be able to transform these tasks or if the biomedical sphere poses unique challenges. Following from an extensive literature review, we find that deep learning has yet to revolutionize biomedicine or definitively resolve any of the most pressing challenges in the field, but promising advances have been made on the prior state of the art. Even though improvements over previous baselines have been modest in general, the recent progress indicates that deep learning methods will provide valuable means for speeding up or aiding human investigation. Though progress has been made linking a specific neural network's prediction to input features, understanding how users should interpret these models to make testable hypotheses about the system under study remains an open challenge. Furthermore, the limited amount of labelled data for training presents problems in some domains, as do legal and privacy constraints on work with sensitive health records. Nonetheless, we foresee deep learning enabling changes at both bench and bedside with the potential to transform several areas of biology and medicine.
Major resources are now available to develop tools and technologies aimed at dissecting the circuitry and computations underlying behavior, unraveling the underpinnings of brain disorders, and understanding the neural substrates of cognition. Scientists from around the world shared their views around new tools and technologies to drive advances in neuroscience.
Neural stem cells show age-dependent developmental potentials, as evidenced by their production of distinct neuron types at different developmental times. Drosophila neuroblasts produce long, stereotyped lineages of neurons. We searched for factors that could regulate neural temporal fate by RNA-sequencing lineage-specific neuroblasts at various developmental times. We found that two RNA-binding proteins, IGF-II mRNA-binding protein (Imp) and Syncrip (Syp), display opposing high-to-low and low-to-high temporal gradients with lineage-specific temporal dynamics. Imp and Syp promote early and late fates, respectively, in both a slowly progressing and a rapidly changing lineage. Imp and Syp control neuronal fates in the mushroom body lineages by regulating translation of the temporal transcription factor Chinmo. Together, the opposing Imp/Syp gradients encode stem cell age, specifying multiple cell fates within a lineage.
Two-photon excitation fluorescence microscopy has revolutionized our understanding of brain structure and function through the high resolution and large penetration depth it offers. Investigating neural structures in vivo requires gaining optical access to the brain, which is typically achieved by replacing a part of the skull with one or several layers of cover glass windows. To compensate for the spherical aberrations caused by the presence of these layers of glass, collar-correction objectives are typically used. However, the efficiency of this correction has been shown to depend significantly on the tilt angle between the glass window surface and the optical axis of the imaging system. Here, we first expand these observations and characterize the effect of the tilt angle on the collected fluorescence signal with thicker windows (double cover slide) and compare these results with an objective devoid of collar-correction. Second, we present a simple optical alignment device designed to rapidly minimize the tilt angle in vivo and align the optical axis of the microscope perpendicularly to the glass window to an angle below 0.25°, thereby significantly improving the imaging quality. Finally, we describe a tilt-correction procedure for users in an in vivo setting, enabling accurate alignment with a resolution of <0.2° in only a few iterations.