The representation of acoustic stimuli in the brainstem forms the basis for higher auditory processing. While some characteristics of this representation (e.g., tuning curves) are widely accepted, it remains a challenge to predict the firing rate at high temporal resolution in response to complex stimuli. In this study we explore models for in vivo, single-cell responses in the medial nucleus of the trapezoid body (MNTB) under complex sound stimulation. We estimate a family of models, the multilinear models, encompassing the classical spectrotemporal receptive field and allowing arbitrary input nonlinearities and certain multiplicative interactions between sound energy and its short-term auditory context. We compare these to models of more traditional type, and also evaluate their performance under various stimulus representations. Using the context model, 75% of the explainable variance could be predicted based on a cochlear-like, gamma-tone stimulus representation. The presence of multiplicative contextual interactions strongly reduces certain inhibitory/suppressive regions of the linear kernels, suggesting an underlying nonlinear mechanism, e.g., cochlear or synaptic suppression, as the source of the suppression in MNTB neuronal responses. In conclusion, the context model provides a rich and still interpretable extension of many previous phenomenological models for describing responses in the auditory brainstem at submillisecond resolution.
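As a rough illustration of the kind of model meant here (our notation, not necessarily the paper's exact parameterization), a multilinear context model predicts the firing rate as a principal STRF-like kernel applied to nonlinearly transformed sound energy, multiplicatively modulated by the recent spectrotemporal context:

\hat{r}(t) \;=\; c \;+\; \sum_{f,\tau} w^{\mathrm{prin}}_{f\tau}\, g\!\big(s_{f,\,t-\tau}\big)\,\Big(1 \;+\; \sum_{(\phi,\theta)\neq(0,0)} w^{\mathrm{ctx}}_{\phi\theta}\, g\!\big(s_{f+\phi,\;t-\tau-\theta}\big)\Big)

where $s_{f,t}$ is the stimulus energy in frequency channel $f$ at time $t$ (e.g., from a gamma-tone filter bank), $g(\cdot)$ is a learned input nonlinearity, $w^{\mathrm{prin}}$ is the principal spectrotemporal kernel, and $w^{\mathrm{ctx}}$ captures the multiplicative interactions with sound energy in the short-term context. Setting $w^{\mathrm{ctx}} = 0$ recovers a generalized STRF model.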
Escape behaviors deliver organisms away from imminent catastrophe. Here, we characterize behavioral responses of freely swimming larval zebrafish to looming visual stimuli simulating predators. We report that the visual system alone can recruit lateralized, rapid escape motor programs, similar to those elicited by mechanosensory modalities. Two-photon calcium imaging of retino-recipient midbrain regions isolated the optic tectum as an important center processing looming stimuli, with ensemble activity encoding the critical image size determining escape latency. Furthermore, we describe activity in retinal ganglion cell terminals and superficial inhibitory interneurons in the tectum during looming and propose a model for how temporal dynamics in tectal periventricular neurons might arise from computations between these two fundamental constituents. Finally, laser ablations of hindbrain circuitry confirmed that visual and mechanosensory modalities share the same premotor output network. We establish a circuit for the processing of aversive stimuli in the context of an innate visual behavior.
The relationship between a sound and its neural representation in the auditory cortex remains elusive. Simple measures such as the frequency response area or frequency tuning curve provide little insight into the function of the auditory cortex in complex sound environments. Spectrotemporal receptive field (STRF) models, despite their descriptive potential, perform poorly when used to predict auditory cortical responses, showing that nonlinear features of cortical response functions, which are not captured by STRFs, are functionally important. We introduce a new approach to the description of auditory cortical responses, using multilinear modeling methods. These descriptions simultaneously account for several nonlinearities in the stimulus-response functions of auditory cortical neurons, including adaptation, spectral interactions, and nonlinear sensitivity to sound level. The models reveal multiple inseparabilities in cortical processing of time lag, frequency, and sound level, and suggest functional mechanisms by which auditory cortical neurons are sensitive to stimulus context. By explicitly modeling these contextual influences, the models are able to predict auditory cortical responses more accurately than are STRF models. In addition, they can explain some forms of stimulus dependence in STRFs that were previously poorly understood.
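To make "multilinear" concrete, the sketch below (ours, with hypothetical helper names; not the authors' code) fits the simplest member of this family, a separable spectrotemporal kernel that is the outer product of a temporal and a spectral weight vector, by alternating least squares. Each subproblem is an ordinary linear regression, which is what keeps these models tractable; the models in the paper add input nonlinearities, sound-level tuning, and contextual terms on top of this structure.

```python
# Minimal sketch: fit a separable (rank-1) spectrotemporal kernel by
# alternating least squares. Illustrative only, not the paper's method.
import numpy as np

def make_design(S, n_lags):
    """Stack lagged copies of the spectrogram S (freqs x time) into a
    tensor X of shape (time, n_lags, freqs); lag 0 is the current frame."""
    n_freq, n_time = S.shape
    X = np.zeros((n_time, n_lags, n_freq))
    for lag in range(n_lags):
        X[lag:, lag, :] = S[:, :n_time - lag].T
    return X

def fit_separable_strf(S, r, n_lags=15, n_iter=50):
    """Alternate between solving for the temporal kernel u (with the
    spectral kernel v fixed) and for v (with u fixed)."""
    X = make_design(S, n_lags)                  # (T, n_lags, n_freq)
    u = np.ones(n_lags) / n_lags
    v = np.ones(S.shape[0]) / S.shape[0]
    for _ in range(n_iter):
        Xv = X @ v                              # (T, n_lags): collapse frequency
        u, *_ = np.linalg.lstsq(Xv, r, rcond=None)
        Xu = np.einsum('tlf,l->tf', X, u)       # (T, n_freq): collapse lag
        v, *_ = np.linalg.lstsq(Xu, r, rcond=None)
    return u, v

# Toy usage: recover a known separable kernel from noisy simulated responses.
rng = np.random.default_rng(0)
S = rng.standard_normal((30, 5000))             # 30 freq channels, 5000 time bins
u_true = np.exp(-np.arange(15) / 4.0)
v_true = np.exp(-0.5 * ((np.arange(30) - 12) / 3.0) ** 2)
X = make_design(S, 15)
r = np.einsum('tlf,l,f->t', X, u_true, v_true) + 0.1 * rng.standard_normal(5000)
u_hat, v_hat = fit_separable_strf(S, r)
```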
Both neurons and glia communicate via diffusible neuromodulatory substances, but the substrates of computation in such neuromodulatory networks are unclear. During behavioral transitions in the larval zebrafish, the neuromodulator norepinephrine drives fast excitation and delayed inhibition of behavior and circuit activity. We find that the inhibitory arm of this feedforward motif is implemented by astroglial purinergic signaling. Neuromodulator imaging, behavioral pharmacology, and perturbations of neurons and astroglia reveal that norepinephrine triggers astroglial release of adenosine triphosphate, extracellular conversion into adenosine, and behavioral suppression through activation of hindbrain neuronal adenosine receptors. This work, along with a companion piece by Lefton and colleagues demonstrating an analogous pathway mediating the effect of norepinephrine on synaptic connectivity in mice, identifies a computational and behavioral role for an evolutionarily conserved astroglial purinergic signaling axis in norepinephrine-mediated behavioral and brain state transitions.
Sensory stimulation can systematically bias the perceived passage of time, but why and how this happens is mysterious. In this report, we provide evidence that such biases may ultimately derive from an innate and adaptive use of stochastically evolving dynamic stimuli to help refine estimates derived from internal timekeeping mechanisms. A simplified statistical model based on probabilistic expectations of stimulus change derived from the second-order temporal statistics of the natural environment makes three predictions. First, random noise-like stimuli whose statistics violate natural expectations should induce timing bias. Second, a previously unexplored obverse of this effect is that similar noise stimuli with natural statistics should reduce the variability of timing estimates. Finally, this reduction in variability should scale with the interval being timed, so as to preserve the overall Weber law of interval timing. All three predictions are borne out experimentally. Thus, in the context of our novel theoretical framework, these results suggest that observers routinely rely on sensory input to augment their sense of the passage of time, through a process of Bayesian inference based on expectations of change in the natural environment.
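A minimal worked example of the kind of precision-weighted combination this framework implies (our illustration; the paper's generative model is more detailed): if an internal clock yields an interval estimate $t_{\mathrm{clock}}$ with variance $\sigma^2_{\mathrm{clock}}$, and the observed stimulus change yields an independent estimate $t_{\mathrm{stim}}$ with variance $\sigma^2_{\mathrm{stim}}$, the Bayesian (minimum-variance) combination is

\hat{t} \;=\; \frac{\sigma^{-2}_{\mathrm{clock}}\, t_{\mathrm{clock}} + \sigma^{-2}_{\mathrm{stim}}\, t_{\mathrm{stim}}}{\sigma^{-2}_{\mathrm{clock}} + \sigma^{-2}_{\mathrm{stim}}}, \qquad \sigma^2_{\hat{t}} \;=\; \Big(\sigma^{-2}_{\mathrm{clock}} + \sigma^{-2}_{\mathrm{stim}}\Big)^{-1} \;<\; \sigma^2_{\mathrm{clock}}.

On this account, a noise stimulus obeying natural temporal statistics reduces the variability of the combined estimate; a stimulus whose statistics violate those expectations miscalibrates $t_{\mathrm{stim}}$ and biases $\hat{t}$; and if both component standard deviations scale linearly with the timed interval (scalar variability), the combined estimate preserves Weber's law.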
Optogenetic tools can be used to manipulate neuronal activity in a reversible and specific manner. In recent years, such methods have been applied to uncover causal relationships between activity in specified neuronal circuits and behavior in the larval zebrafish. In this small, transparent, genetic model organism, noninvasive manipulation and monitoring of neuronal activity with light is possible throughout the nervous system. Here we review recent work in which these new tools have been applied to zebrafish, and discuss some of the existing challenges of these approaches.
Methods for one-photon fluorescent imaging of calcium dynamics can capture the activity of hundreds of neurons across large fields of view, with low equipment complexity and cost. In contrast to two-photon methods, however, one-photon methods suffer from higher levels of crosstalk from neuropil, resulting in a decreased signal-to-noise ratio and artifactual correlations of neural activity. We address this problem by engineering cell-body-targeted variants of the fluorescent calcium indicators GCaMP6f and GCaMP7f. We screened fusions of GCaMP to natural, as well as artificial, peptides and identified fusions that localized GCaMP to within 50 μm of the cell body of neurons in mice and larval zebrafish. One-photon imaging of soma-targeted GCaMP in dense neural circuits reported fewer artifactual spikes from neuropil, an increased signal-to-noise ratio, and decreased artifactual correlation across neurons. Thus, soma-targeting of fluorescent calcium indicators facilitates the use of simple, powerful, one-photon methods for imaging neural calcium dynamics.
3D snapshot microscopy enables fast volumetric imaging by capturing a 3D volume in a single 2D camera image and performing computational reconstruction. Fast volumetric imaging has a variety of biological applications, such as whole-brain imaging of rapid neural activity in larval zebrafish. The optimal microscope design for this optical 3D-to-2D encoding is both sample- and task-dependent, with no general solution known. Deep-learning-based decoders can be combined with a differentiable simulation of an optical encoder for end-to-end optimization of both the deep learning decoder and the optical encoder. This technique has been used to engineer local optical encoders for other problems such as depth estimation, 3D particle localization, and lensless photography. However, 3D snapshot microscopy is known to require a highly non-local optical encoder, which existing UNet-based decoders are not able to engineer. We show that a neural network architecture based on global-kernel Fourier convolutional neural networks can efficiently decode information from multiple depths in a volume, globally encoded across a 3D snapshot image. We show in simulation that our proposed networks succeed in engineering and reconstructing optical encoders for 3D snapshot microscopy where the existing state-of-the-art UNet architecture fails. We also show that our networks outperform state-of-the-art learned reconstruction algorithms on a computational photography dataset collected with a prototype lensless camera, which also uses a highly non-local optical encoding.
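The core idea behind a global-kernel Fourier convolution can be sketched in a few lines (our illustration, not the paper's architecture): a circular convolution whose kernel spans the entire image, computed as a pointwise product in the Fourier domain, so every output pixel can depend on every input pixel, which is what decoding a highly non-local optical encoding requires.

```python
# Minimal sketch of one global-kernel Fourier convolution; illustrative only.
import numpy as np

def fourier_global_conv(x, kernel):
    """Circularly convolve a 2D feature map with a learned kernel of the
    same spatial size, in O(N log N) via the FFT."""
    X = np.fft.fft2(x)
    K = np.fft.fft2(kernel)
    return np.real(np.fft.ifft2(X * K))

# Toy usage: a random "snapshot" image and a random global kernel.
rng = np.random.default_rng(0)
snapshot = rng.standard_normal((256, 256))
kernel = rng.standard_normal((256, 256)) / 256.0
feature_map = fourier_global_conv(snapshot, kernel)
```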
Genetically encoded calcium indicators (GECIs) allow measurement of activity in large populations of neurons and in small neuronal compartments, over times of milliseconds to months. Although GFP-based GECIs are widely used for in vivo neurophysiology, GECIs with red-shifted excitation and emission spectra have advantages for in vivo imaging because of reduced scattering and absorption in tissue, and a consequent reduction in phototoxicity. However, current red GECIs are inferior to the state-of-the-art GFP-based GCaMP6 indicators for detecting and quantifying neural activity. Here we present improved red GECIs based on mRuby (jRCaMP1a, b) and mApple (jRGECO1a), with sensitivity comparable to GCaMP6. We characterized the performance of the new red GECIs in cultured neurons and in mouse, Drosophila, zebrafish and C. elegans in vivo. Red GECIs facilitate deep-tissue imaging, dual-color imaging together with GFP-based reporters, and the use of optogenetics in combination with calcium imaging.
We present a modular approach for analyzing calcium imaging recordings of large neuronal ensembles. Our goal is to simultaneously identify the locations of the neurons, demix spatially overlapping components, and denoise and deconvolve the spiking activity from the slow dynamics of the calcium indicator. Our approach relies on a constrained nonnegative matrix factorization that expresses the spatiotemporal fluorescence activity as the product of a spatial matrix that encodes the spatial footprint of each neuron in the optical field and a temporal matrix that characterizes the calcium concentration of each neuron over time. This framework is combined with a novel constrained deconvolution approach that extracts estimates of neural activity from fluorescence traces, to create a spatiotemporal processing algorithm that requires minimal parameter tuning. We demonstrate the general applicability of our method by applying it to in vitro and in vivo multi-neuronal imaging data, whole-brain light-sheet imaging data, and dendritic imaging data.
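The factorization at the heart of this approach can be illustrated with a bare-bones nonnegative matrix factorization, Y ≈ A·C with A ≥ 0 (spatial footprints) and C ≥ 0 (temporal traces). The sketch below is ours, not the authors' implementation; their method adds spatial and sparsity constraints, background terms, and deconvolution of calcium dynamics on top of this structure.

```python
# Minimal sketch: plain NMF of a (pixels x frames) movie with
# Lee-Seung multiplicative updates, which preserve nonnegativity.
import numpy as np

def nmf(Y, n_components, n_iter=200, eps=1e-9):
    """Factor Y (pixels x frames) into spatial footprints A
    (pixels x components) and temporal traces C (components x frames)."""
    rng = np.random.default_rng(0)
    n_pixels, n_frames = Y.shape
    A = rng.random((n_pixels, n_components))
    C = rng.random((n_components, n_frames))
    for _ in range(n_iter):
        C *= (A.T @ Y) / (A.T @ A @ C + eps)
        A *= (Y @ C.T) / (A @ C @ C.T + eps)
    return A, C

# Toy usage: a movie built from two overlapping "neurons" plus noise.
rng = np.random.default_rng(1)
footprints = np.abs(rng.standard_normal((400, 2)))   # 20x20 pixels, flattened
traces = np.abs(rng.standard_normal((2, 1000)))      # 1000 frames
movie = footprints @ traces + 0.01 * rng.random((400, 1000))
A_hat, C_hat = nmf(movie, n_components=2)
```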