Filter
Associated Lab
- Dudman Lab (1)
- Hermundstad Lab (26)
- Jayaraman Lab (9)
- Looger Lab (1)
- Romani Lab (3)
- Rubin Lab (2)
- Schreiter Lab (1)
- Sternson Lab (1)
- Svoboda Lab (1)
Associated Project Team
Publication Date
- 2025 (1)
- 2024 (5)
- 2023 (2)
- 2022 (6)
- 2021 (3)
- 2020 (2)
- 2019 (1)
- 2018 (1)
- 2017 (1)
- 2014 (2)
- 2013 (1)
- 2011 (1)
Type of Publication
26 Publications
Showing 11-20 of 26 results

The ability to adapt to changes in stimulus statistics is a hallmark of sensory systems. Here, we developed a theoretical framework that can account for the dynamics of adaptation from an information processing perspective. We use this framework to optimize and analyze adaptive sensory codes, and we show that codes optimized for stationary environments can suffer from prolonged periods of poor performance when the environment changes. To mitigate the adverse effects of these environmental changes, sensory systems must navigate tradeoffs between the ability to accurately encode incoming stimuli and the ability to rapidly detect and adapt to changes in the distribution of these stimuli. We derive families of codes that balance these objectives, and we demonstrate their close match to experimentally observed neural dynamics during mean and variance adaptation. Our results provide a unifying perspective on adaptation across a range of sensory systems, environments, and sensory tasks.
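The tradeoff between encoding accuracy and adaptation speed described in this abstract can be illustrated with a toy estimator (a minimal sketch, not the paper's framework; all function names and parameter values here are hypothetical): an exponential-moving-average variance tracker whose smoothing constant `alpha` sets the adaptation timescale.

```python
import random

def ema_variance(samples, alpha):
    """Track stimulus variance with an exponential moving average.
    Small alpha -> long timescale: accurate when the environment is
    stationary, but slow to adapt after a change. Large alpha -> short
    timescale: adapts quickly, at the cost of noisier estimates."""
    mean, var = 0.0, 1.0
    estimates = []
    for x in samples:
        mean += alpha * (x - mean)
        var += alpha * ((x - mean) ** 2 - var)
        estimates.append(var)
    return estimates

def steps_to_detect(estimates, threshold, start):
    """Steps after `start` until the estimate first crosses `threshold`."""
    for i in range(start, len(estimates)):
        if estimates[i] > threshold:
            return i - start
    return None

random.seed(0)
# Stationary segment (sd = 1) followed by an abrupt switch (sd = 4).
samples = [random.gauss(0, 1) for _ in range(2000)] + \
          [random.gauss(0, 4) for _ in range(2000)]

slow = ema_variance(samples, alpha=0.005)
fast = ema_variance(samples, alpha=0.2)

lag_fast = steps_to_detect(fast, 8.0, 2000)
lag_slow = steps_to_detect(slow, 8.0, 2000)
print(lag_fast, lag_slow)  # the fast code detects the switch far sooner
```

The detection threshold (8.0) and the two timescales are arbitrary choices for illustration; the paper derives families of codes that optimally balance these objectives rather than fixing a single timescale.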
Previously, in (Hermundstad et al., 2014), we showed that when sampling is limiting, the efficient coding principle leads to a 'variance is salience' hypothesis, and that this hypothesis accounts for visual sensitivity to binary image statistics. Here, using extensive new psychophysical data and image analysis, we show that this hypothesis accounts for visual sensitivity to a large set of grayscale image statistics at a striking level of detail, and also identify the limits of the prediction. We define a 66-dimensional space of local grayscale light-intensity correlations, and measure the relevance of each direction to natural scenes. The 'variance is salience' hypothesis predicts that two-point correlations are most salient, and predicts their relative salience. We tested these predictions in a texture-segregation task using unnatural, synthetic textures. As predicted, correlations beyond second order are not salient, and predicted thresholds for over 300 second-order correlations match psychophysical thresholds closely (median fractional error < 0.13).
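As a concrete (hypothetical) illustration of the kind of two-point statistic this line of work ranks, the sketch below measures horizontal nearest-neighbor correlation in binary textures; the study itself works in a 66-dimensional space of grayscale statistics, so this is only a cartoon of one such direction.

```python
import random

def horizontal_pair_correlation(img):
    """Fraction of horizontally adjacent pixel pairs that agree,
    rescaled to [-1, 1] (0 = no two-point correlation)."""
    agree = total = 0
    for row in img:
        for a, b in zip(row, row[1:]):
            agree += (a == b)
            total += 1
    return 2.0 * agree / total - 1.0

random.seed(1)
# Uncorrelated binary noise vs a texture biased toward matching neighbors.
noise = [[random.randint(0, 1) for _ in range(64)] for _ in range(64)]
biased = [[0] * 64 for _ in range(64)]
for i in range(64):
    biased[i][0] = random.randint(0, 1)
    for j in range(1, 64):
        # copy the left neighbor 80% of the time
        biased[i][j] = biased[i][j - 1] if random.random() < 0.8 \
            else 1 - biased[i][j - 1]

c_noise = horizontal_pair_correlation(noise)
c_biased = horizontal_pair_correlation(biased)
print(round(c_noise, 2), round(c_biased, 2))  # near 0 vs near 0.6
```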
To interpret the sensory environment, the brain combines ambiguous sensory measurements with knowledge that reflects context-specific prior experience. But environmental contexts can change abruptly and unpredictably, resulting in uncertainty about the current context. Here we address two questions: how should context-specific prior knowledge optimally guide the interpretation of sensory stimuli in changing environments, and do human decision-making strategies resemble this optimum? We probe these questions with a task in which subjects report the orientation of ambiguous visual stimuli drawn from three dynamically switching distributions, representing different environmental contexts. We derive predictions for an ideal Bayesian observer that leverages knowledge about the statistical structure of the task to maximize decision accuracy, including knowledge about the dynamics of the environment. We show that its decisions are biased by the dynamically changing task context. The magnitude of this decision bias depends on the observer's continually evolving belief about the current context. The model therefore not only predicts that decision bias will grow as the context is indicated more reliably, but also as the stability of the environment increases, and as the number of trials since the last context switch grows. Analysis of human choice data validates all three predictions, suggesting that the brain leverages knowledge of the statistical structure of environmental change when interpreting ambiguous sensory signals.
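An illustrative version of such a context belief update is a hidden Markov filter: the observer carries a posterior over discrete contexts, mixes in a "stay" probability against occasional switches, then applies Bayes' rule to each new stimulus. This is a generic sketch, not the paper's observer model; the context means, the stay probability, and the tuning width below are all made-up values.

```python
import math

def update_belief(belief, likelihoods, p_stay):
    """One step: transition (stay vs. switch), then Bayes update."""
    n = len(belief)
    p_switch = (1.0 - p_stay) / (n - 1)
    predicted = [
        p_stay * belief[i] + p_switch * (sum(belief) - belief[i])
        for i in range(n)
    ]
    posterior = [p * l for p, l in zip(predicted, likelihoods)]
    z = sum(posterior)
    return [p / z for p in posterior]

def gauss_like(x, mu, sigma):
    """Gaussian likelihood of observation x under context mean mu."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) \
        / (sigma * math.sqrt(2 * math.pi))

# Three contexts = three stimulus distributions over orientation (deg).
mus = [-30.0, 0.0, 30.0]
belief = [1 / 3] * 3

# A run of stimuli drawn near the third context (mu = 30): the belief
# concentrates there, and the longer the run, the sharper the belief.
for x in [28.0, 33.0, 29.0, 31.0]:
    likes = [gauss_like(x, mu, 10.0) for mu in mus]
    belief = update_belief(belief, likes, p_stay=0.95)

print([round(b, 3) for b in belief])
```

A higher `p_stay` (a more stable environment) lets the belief, and hence the decision bias, grow larger between switches, which is the qualitative effect the abstract describes.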
After finding food, a foraging animal must decide whether to continue feeding, or to explore the environment for potentially better options. One strategy to negotiate this tradeoff is to perform local searches around the food but repeatedly return to feed. We studied this behavior in flies and used genetic tools to uncover the underlying mechanisms. Over time, flies gradually expand their search, shifting from primarily exploiting food sources to exploring the environment, a change that is likely driven by increases in satiety. We found that flies’ search patterns preserve these dynamics even as the overall scale of the search is modulated by starvation-induced changes in metabolic state. In contrast, search induced by optogenetic activation of sugar sensing neurons does not show these dynamics. We asked what navigational strategies underlie local search. Using a generative model, we found that a change in locomotor pattern after food consumption could account for repeated returns to the food, but failed to capture relatively direct, long return trajectories. Alternative strategies, such as path integration or sensory taxis, could allow flies to return from larger distances. We tested this by individually silencing the fly’s head direction system, olfaction, and hygrosensation, and found that the only substantial effect was from perturbing hygrosensation, which reduced the number of long exploratory trips. Our study illustrates that local search is composed of multiple behavioral features that evolve over time based on both internal and external factors, providing a path towards uncovering the underlying neural mechanisms.
Internal representations are thought to support the generation of flexible, long-timescale behavioral patterns in both animals and artificial agents. Here, we present a novel conceptual framework for how Drosophila use their internal representation of head direction to maintain preferred headings in their surroundings, and how they learn to modify these preferences in the presence of selective thermal reinforcement. To develop the framework, we analyzed flies’ behavior in a classical operant visual learning paradigm and found that they use stochastically generated fixations and directed turns to express their heading preferences. Symmetries in the visual scene used in the paradigm allowed us to expose how flies’ probabilistic behavior in this setting is tethered to their head direction representation. We describe how flies’ ability to quickly adapt their behavior to the rules of their environment may rest on a behavioral policy whose parameters are flexible but whose form is genetically encoded in the structure of their circuits. Many of the mechanisms we outline may also be relevant for rapidly adaptive behavior driven by internal representations in other animals, including mammals.
Many animals rely on an internal heading representation when navigating in varied environments. How this representation is linked to the sensory cues that define different surroundings is unclear. In the fly brain, heading is represented by 'compass' neurons that innervate a ring-shaped structure known as the ellipsoid body. Each compass neuron receives inputs from 'ring' neurons that are selective for particular visual features; this combination provides an ideal substrate for the extraction of directional information from a visual scene. Here we combine two-photon calcium imaging and optogenetics in tethered flying flies with circuit modelling, and show how the correlated activity of compass and visual neurons drives plasticity, which flexibly transforms two-dimensional visual cues into a stable heading representation. We also describe how this plasticity enables the fly to convert a partial heading representation, established from orienting within part of a novel setting, into a complete heading representation. Our results provide mechanistic insight into the memory-related computations that are essential for flexible navigation in varied surroundings.
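A cartoon of the correlation-driven tethering this abstract describes can be sketched as a Hebbian update linking co-active visual and heading cells. This is a hypothetical toy, not the paper's circuit model (the actual mechanism is thought to involve plasticity at inhibitory ring-to-compass synapses); the network sizes and learning rate are arbitrary.

```python
N_HEADING = 8   # discrete heading bins ("compass" neurons)
N_VISUAL = 8    # visual feature channels ("ring" neurons)

def hebbian_update(W, visual, compass, lr=0.1):
    """Strengthen connections between co-active visual and compass cells."""
    for i in range(N_HEADING):
        for j in range(N_VISUAL):
            W[i][j] += lr * compass[i] * visual[j]
    return W

# One-hot activity: the fly holds heading bin 3 while visual channel 5
# (some feature of the scene) is active.
W = [[0.0] * N_VISUAL for _ in range(N_HEADING)]
compass = [1.0 if i == 3 else 0.0 for i in range(N_HEADING)]
visual = [1.0 if j == 5 else 0.0 for j in range(N_VISUAL)]

for _ in range(10):          # repeated co-activation while orienting
    W = hebbian_update(W, visual, compass)

# Visual channel 5 now drives compass cell 3 most strongly: that part
# of the scene has been "tethered" to that heading.
drive = [W[i][5] for i in range(N_HEADING)]
print(drive.index(max(drive)))  # -> 3
```

Because the mapping is learned from whatever visual-heading pairings actually occur, orienting within only part of a scene builds only part of the mapping, which is the partial-to-complete transformation the abstract describes.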
Hunger and thirst have distinct goals but control similar ingestive behaviors, and little is known about neural processes that are shared between these behavioral states. We identify glutamatergic neurons in the peri-locus coeruleus (periLC neurons) as a polysynaptic convergence node from separate energy-sensitive and hydration-sensitive cell populations. We develop methods for stable hindbrain calcium imaging in free-moving mice, which show that periLC neurons are tuned to ingestive behaviors and respond similarly to food or water consumption. PeriLC neurons are scalably inhibited by palatability and homeostatic need during consumption. Inhibition of periLC neurons is rewarding and increases consumption by enhancing palatability and prolonging ingestion duration. These properties comprise a double-negative feedback relationship that sustains food or water consumption without affecting food- or water-seeking. PeriLC neurons are a hub between hunger and thirst that specifically controls motivation for food and water ingestion, which is a factor that contributes to hedonic overeating and obesity.
The performance of information processing systems, from artificial neural networks to natural neuronal ensembles, depends heavily on the underlying system architecture. In this study, we compare the performance of parallel and layered network architectures during sequential tasks that require both acquisition and retention of information, thereby identifying tradeoffs between learning and memory processes. During the task of supervised, sequential function approximation, networks produce and adapt representations of external information. Performance is evaluated by statistically analyzing the error in these representations while varying the initial network state, the structure of the external information, and the time given to learn the information. We link performance to complexity in network architecture by characterizing local error landscape curvature. We find that variations in error landscape structure give rise to tradeoffs in performance; these include the ability of the network to maximize accuracy versus minimize inaccuracy and produce specific versus generalizable representations of information. Parallel networks generate smooth error landscapes with deep, narrow minima, enabling them to find highly specific representations given sufficient time. While accurate, however, these representations are difficult to generalize. In contrast, layered networks generate rough error landscapes with a variety of local minima, allowing them to quickly find coarse representations. Although less accurate, these representations are easily adaptable. The presence of measurable performance tradeoffs in both layered and parallel networks has implications for understanding the behavior of a wide variety of natural and artificial learning systems.
Many animals rely on a representation of head direction for flexible, goal-directed navigation. In insects, a compass-like head direction representation is maintained in a conserved brain region called the central complex. This head direction representation is updated by self-motion information and by tethering to sensory cues in the surroundings through a plasticity mechanism. However, under natural settings, some of these sensory cues may temporarily disappear—for example, when clouds hide the sun—and prominent landmarks at different distances from the insect may move across the animal's field of view during translation, creating potential conflicts for a neural compass. We used two-photon calcium imaging in head-fixed Drosophila behaving in virtual reality to monitor the fly's compass during navigation in immersive naturalistic environments with approachable local landmarks. We found that the fly's compass remains stable even in these settings by tethering to available global cues, likely preserving the animal's ability to perform compass-driven behaviors such as maintaining a constant heading.
Many animals rely on persistent internal representations of continuous variables for working memory, navigation, and motor control. Existing theories typically assume that large networks of neurons are required to maintain such representations accurately; networks with few neurons are thought to generate discrete representations. However, analysis of two-photon calcium imaging data from tethered flies walking in darkness suggests that their small head-direction system can maintain a surprisingly continuous and accurate representation. We thus ask whether it is possible for a small network to generate a continuous, rather than discrete, representation of such a variable. We show analytically that even very small networks can be tuned to maintain continuous internal representations, but this comes at the cost of sensitivity to noise and variations in tuning. This work expands the computational repertoire of small networks, and raises the possibility that larger networks could represent more and higher-dimensional variables than previously thought.
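To make the idea concrete, here is a hypothetical eight-neuron ring network with tuned cosine weights (an illustrative sketch under simplified linear-threshold dynamics, not the analysis in the paper): with this tuning, an activity bump can sit at an angle between the neurons' preferred directions and remain there, i.e. the small network holds a continuous value.

```python
import math

N = 8
thetas = [2 * math.pi * i / N for i in range(N)]
# Tuned cosine connectivity between heading cells.
W = [[(2.0 / N) * math.cos(thetas[i] - thetas[j]) for j in range(N)]
     for i in range(N)]

def step(r):
    """Linear-threshold dynamics with divisive normalization."""
    out = [max(0.0, sum(W[i][j] * r[j] for j in range(N)))
           for i in range(N)]
    norm = math.sqrt(sum(x * x for x in out)) or 1.0
    return [x / norm for x in out]

def bump_angle(r):
    """Population-vector readout of the bump's position."""
    x = sum(ri * math.cos(t) for ri, t in zip(r, thetas))
    y = sum(ri * math.sin(t) for ri, t in zip(r, thetas))
    return math.atan2(y, x)

phi = 1.0  # an angle *between* the neurons' preferred directions
r = [max(0.0, math.cos(t - phi)) for t in thetas]
for _ in range(100):
    r = step(r)

print(round(bump_angle(r), 3))  # stays near the initial angle 1.0
```

As the abstract notes, this kind of tuning is fragile: perturbing the weights or adding noise causes the bump to drift toward a few discrete attractor positions, which is the cost of continuity in small networks.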