Publications
To survive, animals must be able to quickly infer the state of their surroundings. For example, to successfully escape an approaching predator, prey must quickly estimate the direction of approach from incoming sensory stimuli and guide their behavior accordingly. Such rapid inferences are particularly challenging because the animal has only a brief window of time to gather sensory stimuli, and yet the accuracy of inference is critical for survival. Due to evolutionary pressures, nervous systems have likely evolved effective computational strategies that enable accurate inferences under strong time limitations. Traditionally, the relationship between the speed and accuracy of inference has been described by the “speed-accuracy tradeoff” (SAT), which quantifies how the average performance of an ideal observer improves as the observer has more time to collect incoming stimuli. While this trial-averaged description can reasonably account for individual inferences made over long timescales, it does not capture individual inferences on short timescales, when trial-to-trial variability gives rise to diverse patterns of error dynamics. We show that an ideal observer can exploit this single-trial structure by adaptively tracking the dynamics of its belief about the state of the environment, which enables it to speed its own inferences and more reliably track its own error, but also causes it to violate the SAT. We show that these features can be used to improve overall performance during rapid escape. The resulting behavior qualitatively reproduces features of escape behavior in the fruit fly Drosophila melanogaster, whose escapes have presumably been highly optimized by natural selection.
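The abstract describes the observer's computation only in general terms. As a minimal sketch of the idea, assuming Gaussian stimulus noise and a discrete set of candidate directions (our simplifications, not the paper's model), an ideal observer can update a belief over directions sample by sample and commit as soon as that belief is strong enough, rather than after a fixed observation time:

```python
import numpy as np

def ideal_observer(stimuli, directions, noise_sd, threshold=0.95):
    """Sequential Bayesian inference of approach direction: update a
    belief over candidate directions from noisy samples, and commit as
    soon as one candidate dominates. Illustrative sketch only."""
    n = len(directions)
    log_belief = np.zeros(n)            # uniform prior (log space)
    belief = np.full(n, 1.0 / n)
    t = 0
    for t, s in enumerate(stimuli, start=1):
        # Gaussian log-likelihood of sample s under each candidate
        log_belief += -0.5 * ((s - directions) / noise_sd) ** 2
        belief = np.exp(log_belief - log_belief.max())
        belief /= belief.sum()
        # Adaptive stopping: single-trial belief dynamics, not a fixed
        # observation time, determine when to commit
        if belief.max() > threshold:
            break
    return directions[belief.argmax()], t, belief
```

On trials where early samples happen to be unambiguous, such an observer commits quickly; on ambiguous trials it keeps sampling. This is exactly the single-trial structure that a fixed-time, trial-averaged SAT analysis averages away.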
In natural environments, animals must efficiently allocate their choices across multiple concurrently available resources when foraging, a complex decision-making process not fully captured by existing models. To understand how rodents learn to navigate this challenge, we developed a novel paradigm in which untrained, water-restricted mice were free to sample from six options rewarded at a range of deterministic intervals and positioned around the walls of a large (~2 m) arena. Mice exhibited rapid learning, matching their choices to integrated reward ratios across the six options within the first session. A reinforcement learning model with separate states for staying at or leaving an option and a dynamic, global learning rate was able to accurately reproduce mouse learning and decision-making. Fiber photometry recordings revealed that dopamine in the nucleus accumbens core (NAcC), but not the dorsomedial striatum (DMS), more closely reflected the global learning rate than local error-based updating. Altogether, our results provide insight into the neural substrate of a learning algorithm that allows mice to rapidly exploit multiple options when foraging in large spatial environments.
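The abstract names two model ingredients, separate stay/leave values and a dynamic global learning rate, without giving equations. The sketch below is one plausible reading of those ingredients; the update rules, decay form, and parameter values are all our assumptions, not the authors' implementation:

```python
import numpy as np

N_OPTIONS = 6

# Separate value estimates for "stay" and "leave" at each option
q_stay = np.zeros(N_OPTIONS)
q_leave = np.zeros(N_OPTIONS)
alpha = 0.5          # global learning rate, shared across all options
ALPHA_DECAY = 0.999  # slow relaxation toward exploitation (assumed form)

def update(option, stayed, reward):
    """One value update after visiting `option`; `stayed` indicates
    whether the animal remained there or left for another option."""
    global alpha
    q = q_stay if stayed else q_leave
    delta = reward - q[option]     # local reward prediction error
    q[option] += alpha * delta     # error-based value update
    alpha *= ALPHA_DECAY           # dynamic global rate: every option's
                                   # values update more slowly over time
```

The key contrast in the dopamine result maps onto the last two lines: `delta` is the local, error-based signal, while `alpha` is the global quantity that NAcC dopamine more closely tracked.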
Many animals rely on persistent internal representations of continuous variables for working memory, navigation, and motor control. Existing theories typically assume that large networks of neurons are required to maintain such representations accurately; networks with few neurons are thought to generate discrete representations. However, analysis of two-photon calcium imaging data from tethered flies walking in darkness suggests that their small head-direction system can maintain a surprisingly continuous and accurate representation. We thus ask whether it is possible for a small network to generate a continuous, rather than discrete, representation of such a variable. We show analytically that even very small networks can be tuned to maintain continuous internal representations, but this comes at the cost of sensitivity to noise and variations in tuning. This work expands the computational repertoire of small networks, and raises the possibility that larger networks could represent more and higher-dimensional variables than previously thought.
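For intuition about why continuity is hard in small networks, here is a toy rate-model sketch, assuming cosine recurrent connectivity and a rectified-linear nonlinearity (our assumptions; the paper's analysis is more general). In an N = 8 ring network, a bump of activity persists at arbitrary positions only if the weights are tuned precisely; mistuning lets it drift to a handful of discrete resting positions:

```python
import numpy as np

N = 8                                   # deliberately small network
theta = 2 * np.pi * np.arange(N) / N    # preferred head directions

# Cosine recurrent connectivity; in small networks, J0 and J1 must be
# tuned with unusual precision for the bump to rest anywhere on the ring
J0, J1 = -0.5, 1.2
W = (J0 + J1 * np.cos(theta[:, None] - theta[None, :])) / N

def simulate(r0, steps=2000, dt=0.01, tau=0.1):
    """Relax firing rates r under dr/dt = (-r + relu(W @ r)) / tau."""
    r = r0.copy()
    for _ in range(steps):
        r += dt / tau * (-r + np.maximum(W @ r, 0.0))
    return r

# Start with a bump at one direction and test whether it persists there;
# in an imperfectly tuned small network, it typically drifts to one of a
# few discrete attractors instead of staying put
bump = np.maximum(np.cos(theta - theta[2]), 0.0)
print(simulate(bump))
```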
After finding food, a foraging animal must decide whether to continue feeding or to explore the environment for potentially better options. One strategy to negotiate this tradeoff is to perform local searches around the food but repeatedly return to feed. We studied this behavior in flies and used genetic tools to uncover the underlying mechanisms. Over time, flies gradually expand their search, shifting from primarily exploiting food sources to exploring the environment, a change that is likely driven by increases in satiety. We found that flies’ search patterns preserve these dynamics even as the overall scale of the search is modulated by starvation-induced changes in metabolic state. In contrast, search induced by optogenetic activation of sugar-sensing neurons does not show these dynamics. We then asked what navigational strategies underlie local search. Using a generative model, we found that a change in locomotor pattern after food consumption could account for repeated returns to the food, but it failed to capture relatively direct, long return trajectories. Alternative strategies, such as path integration or sensory taxis, could allow flies to return from larger distances. We tested this by individually silencing the fly’s head direction system, olfaction, and hygrosensation, and found that the only substantial effect came from perturbing hygrosensation, which reduced the number of long exploratory trips. Our study illustrates that local search is composed of multiple behavioral features that evolve over time based on both internal and external factors, providing a path towards uncovering the underlying neural mechanisms.
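The generative model is not specified in the abstract; the sketch below shows one generic form such a model could take: a correlated random walk whose heading variability increases after food consumption, which tends to keep trajectories looping near the food site. All parameters are illustrative assumptions:

```python
import numpy as np

def random_walk(n_steps, turn_sd, speed=1.0, rng=None):
    """Correlated random walk in 2D: the heading diffuses with standard
    deviation `turn_sd` per step. Larger turn_sd yields tighter, loopier
    paths that stay near the origin (the food site). Illustrative only."""
    rng = rng or np.random.default_rng(0)
    heading = rng.uniform(0, 2 * np.pi)
    xy = np.zeros((n_steps, 2))
    for t in range(1, n_steps):
        heading += turn_sd * rng.standard_normal()
        xy[t] = xy[t - 1] + speed * np.array([np.cos(heading), np.sin(heading)])
    return xy

before = random_walk(500, turn_sd=0.1)  # pre-feeding: relatively straight
after = random_walk(500, turn_sd=0.8)   # post-feeding: frequent returns
```

A pure locomotor-pattern change like this produces incidental returns but not the direct, long-range return trajectories described above, which is why goal-directed strategies such as path integration or sensory taxis were tested.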
Anchoring goals to spatial representations enables flexible navigation but is challenging in novel environments when both representations must be acquired simultaneously. We propose a framework for how Drosophila uses internal representations of head direction (HD) to build goal representations upon selective thermal reinforcement. We show that flies use stochastically generated fixations and directed saccades to express heading preferences in an operant visual learning paradigm and that HD neurons are required to modify these preferences based on reinforcement. We used a symmetric visual setting to expose how flies' HD and goal representations co-evolve and how the reliability of these interacting representations impacts behavior. Finally, we describe how rapid learning of new goal headings may rest on a behavioral policy whose parameters are flexible but whose form is genetically encoded in circuit architecture. Such evolutionarily structured architectures, which enable rapidly adaptive behavior driven by internal representations, may be relevant across species.
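As a toy illustration of a policy built from stochastic fixations and directed saccades (the abstract gives no equations; the functional form and all parameters below are our assumptions), consider an agent that fixates for random durations and then saccades part of the way toward a goal heading with angular noise:

```python
import numpy as np

def heading_policy(goal, n_events=200, gain=0.7, noise_sd=0.3,
                   fix_rate=1.0, rng=None):
    """Toy policy: fixate for a random duration, then saccade toward
    `goal` by a fraction `gain` of the angular error, plus noise.
    All parameter names and values are hypothetical."""
    rng = rng or np.random.default_rng(0)
    hd = rng.uniform(0, 2 * np.pi)       # current head direction
    events = []
    for _ in range(n_events):
        events.append((hd, rng.exponential(1.0 / fix_rate)))  # (heading, dwell)
        err = (goal - hd + np.pi) % (2 * np.pi) - np.pi  # signed error to goal
        hd = (hd + gain * err + noise_sd * rng.standard_normal()) % (2 * np.pi)
    return events
```

In this framing, learning changes the policy's parameters (here, which heading serves as `goal` and how strongly saccades are drawn to it), while the form of the policy itself is fixed, mirroring the abstract's distinction between flexible parameters and a genetically encoded policy structure.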
Neurons throughout the sensory pathway adapt their responses depending on the statistical structure of the sensory environment. Contrast gain control is a form of adaptation in the auditory cortex, but it is unclear whether the dynamics of gain control reflect efficient adaptation, and whether they shape behavioral perception. Here, we trained mice to detect a target presented in background noise shortly after a change in the contrast of the background. The observed changes in cortical gain and behavioral detection followed the dynamics of a normative model of efficient contrast gain control; specifically, target detection and sensitivity improved slowly in low contrast, but degraded rapidly in high contrast. Auditory cortex was required for this task, and cortical responses were not only similarly affected by contrast but also predicted variability in behavioral performance. Combined, our results demonstrate that dynamic gain adaptation supports efficient coding in auditory cortex and predicts the perception of sounds in noise.
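The normative model is characterized here by its signature asymmetry: adaptation should be fast after contrast increases but slow after decreases, because a single large sample is strong evidence that contrast went up, whereas many small samples are needed to conclude it went down. A generic sketch of that logic, assuming just two known contrast levels and Gaussian samples (our simplification, not the paper's model):

```python
import numpy as np

def contrast_belief(x, sd_low=1.0, sd_high=4.0, p_switch=0.01):
    """Online posterior that the background is currently high-contrast,
    assuming zero-mean Gaussian samples at one of two contrast levels
    with occasional switches. One large sample drives the belief up
    quickly; only a long run of small samples drives it back down,
    reproducing the asymmetric dynamics. Illustrative sketch only."""
    p_high = 0.5
    out = np.empty(len(x))
    for t, s in enumerate(x):
        # prediction step: allow for a context switch between samples
        p = p_high * (1 - p_switch) + (1 - p_high) * p_switch
        l_high = np.exp(-0.5 * (s / sd_high) ** 2) / sd_high
        l_low = np.exp(-0.5 * (s / sd_low) ** 2) / sd_low
        p_high = p * l_high / (p * l_high + (1 - p) * l_low)
        out[t] = p_high
    return out
```

Setting the gain inversely to the inferred contrast then yields gain that drops rapidly after a step to high contrast and recovers slowly after a step to low contrast, matching the behavioral dynamics described above.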
To interpret the sensory environment, the brain combines ambiguous sensory measurements with knowledge that reflects context-specific prior experience. But environmental contexts can change abruptly and unpredictably, resulting in uncertainty about the current context. Here we address two questions: how should context-specific prior knowledge optimally guide the interpretation of sensory stimuli in changing environments, and do human decision-making strategies resemble this optimum? We probe these questions with a task in which subjects report the orientation of ambiguous visual stimuli that were drawn from three dynamically switching distributions, representing different environmental contexts. We derive predictions for an ideal Bayesian observer that leverages knowledge about the statistical structure of the task to maximize decision accuracy, including knowledge about the dynamics of the environment. We show that its decisions are biased by the dynamically changing task context. The magnitude of this decision bias depends on the observer's continually evolving belief about the current context. The model therefore not only predicts that decision bias will grow as the context is indicated more reliably, but also as the stability of the environment increases, and as the number of trials since the last context switch grows. Analysis of human choice data validates all three predictions, suggesting that the brain leverages knowledge of the statistical structure of environmental change when interpreting ambiguous sensory signals.
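As a concrete sketch of this computation, assuming Gaussian context priors, Gaussian measurement noise, and a symmetric context-switch probability (all illustrative choices, not the paper's fitted model), the observer can track contexts with a hidden-Markov update and bias each orientation estimate toward the prior mean of whichever context it currently believes in:

```python
import numpy as np

MU = np.array([-30.0, 0.0, 30.0])   # context-specific prior means (deg)
SD_PRIOR, SD_MEAS = 10.0, 15.0      # prior and measurement noise (assumed)
STAY = 0.9                          # probability the context repeats

belief = np.ones(3) / 3             # belief over the three contexts

def observe(m):
    """Update the context belief from measurement m, then report the
    posterior-mean orientation: a precision-weighted mix of m and each
    context's prior mean, averaged under the context belief."""
    global belief
    # HMM prediction step: the context may switch between trials
    pred = STAY * belief + (1 - STAY) * (belief.sum() - belief) / 2
    # likelihood of m under each context (prior and noise both Gaussian)
    var = SD_PRIOR**2 + SD_MEAS**2
    like = np.exp(-0.5 * (m - MU) ** 2 / var)
    belief = pred * like
    belief /= belief.sum()
    w = SD_PRIOR**2 / var            # weight on the measurement
    est = w * m + (1 - w) * MU       # per-context estimate, biased to MU
    return float(belief @ est)
```

Because the bias scales with the strength of the context belief, it grows as the context is indicated more reliably, as the environment becomes more stable (larger STAY), and as trials since the last switch accumulate, which are the three predictions tested against the human choice data.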
Internal representations are thought to support the generation of flexible, long-timescale behavioral patterns in both animals and artificial agents. Here, we present a novel conceptual framework for how Drosophila use their internal representation of head direction to maintain preferred headings in their surroundings, and how they learn to modify these preferences in the presence of selective thermal reinforcement. To develop the framework, we analyzed flies’ behavior in a classical operant visual learning paradigm and found that they use stochastically generated fixations and directed turns to express their heading preferences. Symmetries in the visual scene used in the paradigm allowed us to expose how flies’ probabilistic behavior in this setting is tethered to their head direction representation. We describe how flies’ ability to quickly adapt their behavior to the rules of their environment may rest on a behavioral policy whose parameters are flexible but whose form is genetically encoded in the structure of their circuits. Many of the mechanisms we outline may also be relevant for rapidly adaptive behavior driven by internal representations in other animals, including mammals.
Inference-based decision-making, which underlies a broad range of behavioral tasks, is typically studied using a small number of handcrafted models. We instead enumerate a complete ensemble of strategies that could be used to effectively, but not necessarily optimally, solve a dynamic foraging task. Each strategy is expressed as a behavioral "program" that uses a limited number of internal states to specify actions conditioned on past observations. We show that the ensemble of strategies is enormous, comprising a quarter million programs with up to five internal states, but can nevertheless be understood in terms of algorithmic "mutations" that alter the structure of individual programs. We devise embedding algorithms that reveal how mutations away from a Bayesian-like strategy can diversify behavior while preserving performance, and we construct a compositional description to link low-dimensional changes in algorithmic structure with high-dimensional changes in behavior. Together, this work provides an alternative approach for understanding individual variability in behavior across animals and tasks.
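Such behavioral "programs" can be pictured as small finite-state machines. Below is a hypothetical two-state example for a two-armed foraging task (a win-stay/lose-shift-like strategy), just to fix ideas about what a limited number of internal states specifying actions conditioned on past observations means; the encoding is ours, not the paper's:

```python
# A behavioral "program": each internal state maps to an action, with
# transitions conditioned on the last observation (rewarded or not).
# Hypothetical two-state example: win-stay/lose-shift on two options.
PROGRAM = {
    # state: (action, next_state_if_rewarded, next_state_if_unrewarded)
    0: ("left", 0, 1),
    1: ("right", 1, 0),
}

def run(program, rewards):
    """Execute the program against a sequence of reward outcomes."""
    state, actions = 0, []
    for rewarded in rewards:
        action, s_win, s_lose = program[state]
        actions.append(action)
        state = s_win if rewarded else s_lose
    return actions

print(run(PROGRAM, [True, True, False, False, True]))
# ['left', 'left', 'left', 'right', 'left']
```

An algorithmic "mutation" in this picture is a small edit to the transition table (for example, rerouting one arrow or adding a state), which makes it possible to enumerate and compare the full space of programs rather than a few handcrafted models.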