34 Janelia Publications

Showing 1-10 of 34 results
    01/10/25 | A critical initialization for biological neural networks
    Pachitariu M, Zhong L, Gracias A, Minisi A, Lopez C, Stringer C
    bioRxiv. 2025 Jan 10:. doi: 10.1101/2025.01.10.632397

    Artificial neural networks learn faster if they are initialized well. Good initializations can generate high-dimensional macroscopic dynamics with long timescales. It is not known if biological neural networks have similar properties. Here we show that the eigenvalue spectrum and dynamical properties of large-scale neural recordings in mice (two-photon and electrophysiology) are similar to those produced by linear dynamics governed by a random symmetric matrix that is critically normalized. An exception was hippocampal area CA1: population activity in this area resembled an efficient, uncorrelated neural code, which may be optimized for information storage capacity. Global emergent activity modes persisted in simulations with sparse, clustered or spatial connectivity. We hypothesize that the spontaneous neural activity reflects a critical initialization of whole-brain neural circuits that is optimized for learning time-dependent tasks.
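    The central object of this abstract, a critically normalized random symmetric matrix, can be sketched in a few lines of NumPy. This is an illustrative toy, not the paper's analysis code: a symmetric Gaussian matrix is rescaled so its spectral radius is 1, putting the linear dynamics x(t+1) = A x(t) at the edge of stability, where eigenvalues with magnitude near 1 correspond to slowly decaying modes with long timescales.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500

# Random symmetric (Wigner-style) matrix
J = rng.standard_normal((n, n))
J = (J + J.T) / 2

# Critical normalization: rescale so the spectral radius is ~1,
# placing the linear dynamics x(t+1) = A @ x(t) at the edge of stability
A = J / np.max(np.abs(np.linalg.eigvalsh(J)))

evals = np.linalg.eigvalsh(A)

# Modes with |eigenvalue| near 1 decay slowly, i.e. have long timescales
tau = -1.0 / np.log(np.abs(evals[np.abs(evals) < 1]))
print(tau.max())  # slowest subcritical mode persists for many time steps
```

    Because the spectral radius is pinned at 1 rather than below it, the slowest modes have timescales far longer than a single dynamical step, which is the sense in which criticality generates long-timescale macroscopic dynamics.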

    11/08/24 | Analysis methods for large-scale neuronal recordings.
    Stringer C, Pachitariu M
    Science. 2024 Nov 08;386(6722):eadp7429. doi: 10.1126/science.adp7429

    Simultaneous recordings from hundreds or thousands of neurons are becoming routine because of innovations in instrumentation, molecular tools, and data processing software. Such recordings can be analyzed with data science methods, but it is not immediately clear what methods to use or how to adapt them for neuroscience applications. We review, categorize, and illustrate diverse analysis methods for neural population recordings and describe how these methods have been used to make progress on longstanding questions in neuroscience. We review a variety of approaches, ranging from the mathematically simple to the complex, from exploratory to hypothesis-driven, and from recently developed to more established methods. We also illustrate some of the common statistical pitfalls in analyzing large-scale neural data.

    12/07/23 | Anatomically distributed neural representations of instincts in the hypothalamus.
    Stagkourakis S, Spigolon G, Marks M, Feyder M, Kim J, Perona P, Pachitariu M, Anderson DJ
    bioRxiv. 2023 Dec 07:. doi: 10.1101/2023.11.21.568163

    Artificial activation of anatomically localized, genetically defined hypothalamic neuron populations is known to trigger distinct innate behaviors, suggesting a hypothalamic nucleus-centered organization of behavior control. To assess whether the encoding of behavior is similarly anatomically confined, we performed simultaneous neuron recordings across twenty hypothalamic regions in freely moving animals. Here we show that distinct but anatomically distributed neuron ensembles encode the social and fear behavior classes, primarily through mixed selectivity. While behavior class-encoding ensembles were spatially distributed, individual ensembles exhibited strong localization bias. Encoding models identified that behavior actions, but not motion-related variables, explained a large fraction of hypothalamic neuron activity variance. These results identify unexpected complexity in the hypothalamic encoding of instincts and provide a foundation for understanding the role of distributed neural representations in the expression of behaviors driven by hardwired circuits.

    08/01/20 | Arousal modulates retinal output.
    Schröder S, Steinmetz NA, Krumin M, Pachitariu M, Rizzi M, Lagnado L, Harris KD, Carandini M
    Neuron. 2020 Aug 01;107(3):487. doi: 10.1016/j.neuron.2020.04.026

    At various stages of the visual system, visual responses are modulated by arousal. Here, we find that in mice this modulation operates as early as in the first synapse from the retina and even in retinal axons. To measure retinal activity in the awake, intact brain, we imaged the synaptic boutons of retinal axons in the superior colliculus. Their activity depended not only on vision but also on running speed and pupil size, regardless of retinal illumination. Arousal typically reduced their visual responses and selectivity for direction and orientation. Recordings from retinal axons in the optic tract revealed that arousal modulates the firing of some retinal ganglion cells. Arousal had similar effects postsynaptically in colliculus neurons, independent of activity in the other main source of visual inputs to the colliculus, the primary visual cortex. These results indicate that arousal modulates activity at every stage of the mouse visual system.

    10/14/24 | Assembloid model to study loop circuits of the human nervous system
    Miura Y, Kim J, Jurjut O, Kelley KW, Yang X, Chen X, Thete MV, Revah O, Cui B, Pachitariu M, Pasca SP
    bioRxiv. 2024 Oct 14:. doi: 10.1101/2024.10.13.617729

    Neural circuits connecting the cerebral cortex, the basal ganglia and the thalamus are fundamental networks for sensorimotor processing, and their dysfunction has been consistently implicated in neuropsychiatric disorders1-9. These recursive, loop circuits have been investigated in animal models and by clinical neuroimaging; however, direct functional access to developing human neurons forming these networks has been limited. Here, we use human pluripotent stem cells to reconstruct an in vitro cortico-striatal-thalamic-cortical circuit by creating a four-part loop assembloid. More specifically, we generate regionalized neural organoids that resemble the key elements of the cortico-striatal-thalamic-cortical circuit, and functionally integrate them into loop assembloids using custom 3D-printed biocompatible wells. Volumetric and mesoscale calcium imaging, as well as extracellular recordings from individual parts of these assembloids, reveal the emergence of synchronized patterns of neuronal activity. In addition, a multi-step rabies retrograde tracing approach demonstrates the formation of neuronal connectivity across the network in loop assembloids. Lastly, we apply this system to study heterozygous loss of the ASH1L gene, associated with autism spectrum disorder and Tourette syndrome, and discover aberrant synchronized activity in disease model assembloids. Taken together, this human multi-cellular platform will facilitate functional investigations of the cortico-striatal-thalamic-cortical circuit in the context of early human development and in disease conditions.

    08/02/24 | Bridging tuning and invariance with equivariant neuronal representations
    Hoeller J, Zhong L, Pachitariu M, Romani S
    bioRxiv. 2024 Aug 02:. doi: 10.1101/2024.08.02.606398

    As we move through the world, we see the same visual scenes from different perspectives. Although we experience perspective deformations, our perception of a scene remains stable. This raises the question of which neuronal representations in visual brain areas are perspective-tuned and which are invariant. Focusing on planar rotations, we introduce a mathematical framework based on the principle of equivariance, which asserts that an image rotation results in a corresponding rotation of neuronal representations, to explain how the same representation can range from being fully tuned to fully invariant. We applied this framework to large-scale simultaneous neuronal recordings from four visual cortical areas in mice, where we found that representations are both tuned and invariant but become more invariant across higher-order areas. While common deep convolutional neural networks show similar trends in orientation-invariance across layers, they are not rotation-equivariant. We propose that equivariance is a prevalent computation of populations of biological neurons to gradually achieve invariance through structured tuning.
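    The equivariance principle described in this abstract can be illustrated with a toy population (a minimal sketch, not the paper's framework): for neurons with cosine tuning and evenly spaced preferred orientations, rotating the stimulus by one preference step produces exactly a circular shift of the population response (equivariance), while a simple readout such as the population norm is unchanged by any rotation (invariance).

```python
import numpy as np

n = 8  # toy population with evenly spaced preferred orientations
prefs = 2 * np.pi * np.arange(n) / n

def response(theta):
    # Cosine tuning: each neuron's firing depends on the offset between
    # the stimulus orientation and that neuron's preferred orientation
    return np.cos(theta - prefs)

theta = 0.3
step = 2 * np.pi / n

# Equivariance: rotating the stimulus by one preference step
# circularly shifts the population representation
assert np.allclose(response(theta + step), np.roll(response(theta), 1))

# Invariance: the population norm is the same for every stimulus rotation
print(np.linalg.norm(response(theta)))
```

    The same population is thus "tuned" (individual responses change with rotation) and carries an invariant readout, which is the sense in which equivariant representations span the range from fully tuned to fully invariant.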

    11/07/22 | Cellpose 2.0: how to train your own model.
    Pachitariu M, Stringer C
    Nature Methods. 2022 Nov 07;19(12):1634-41. doi: 10.1038/s41592-022-01663-4

    Pretrained neural network models for biological segmentation can provide good out-of-the-box results for many image types. However, such models do not allow users to adapt the segmentation style to their specific needs and can perform suboptimally for test images that are very different from the training images. Here we introduce Cellpose 2.0, a new package that includes an ensemble of diverse pretrained models as well as a human-in-the-loop pipeline for rapid prototyping of new custom models. We show that models pretrained on the Cellpose dataset can be fine-tuned with only 500-1,000 user-annotated regions of interest (ROI) to perform nearly as well as models trained on entire datasets with up to 200,000 ROI. A human-in-the-loop approach further reduced the required user annotation to 100-200 ROI, while maintaining high-quality segmentations. We provide software tools such as an annotation graphical user interface, a model zoo and a human-in-the-loop pipeline to facilitate the adoption of Cellpose 2.0.

    02/12/25 | Cellpose3: one-click image restoration for improved cellular segmentation.
    Stringer C, Pachitariu M
    Nat Methods. 2025 Feb 12:. doi: 10.1038/s41592-025-02595-5

    Generalist methods for cellular segmentation have good out-of-the-box performance on a variety of image types; however, existing methods struggle for images that are degraded by noise, blurring or undersampling, all of which are common in microscopy. We focused the development of Cellpose3 on addressing these cases and here we demonstrate substantial out-of-the-box gains in segmentation and image quality for noisy, blurry and undersampled images. Unlike previous approaches that train models to restore pixel values, we trained Cellpose3 to output images that are well segmented by a generalist segmentation model, while maintaining perceptual similarity to the target images. Furthermore, we trained the restoration models on a large, varied collection of datasets, thus ensuring good generalization to user images. We provide these tools as 'one-click' buttons inside the graphical interface of Cellpose as well as in the Cellpose API.

    02/03/20 | Cellpose: a generalist algorithm for cellular segmentation
    Stringer C, Michaelos M, Pachitariu M
    bioRxiv. 2020 Feb 03:. doi: 10.1101/2020.02.02.931238

    Many biological applications require the segmentation of cell bodies, membranes and nuclei from microscopy images. Deep learning has enabled great progress on this problem, but current methods are specialized for images that have large training datasets. Here we introduce a generalist, deep learning-based segmentation algorithm called Cellpose, which can very precisely segment a wide range of image types out-of-the-box and does not require model retraining or parameter adjustments. We trained Cellpose on a new dataset of highly-varied images of cells, containing over 70,000 segmented objects. To support community contributions to the training data, we developed software for manual labelling and for curation of the automated results, with optional direct upload to our data repository. Periodically retraining the model on the community-contributed data will ensure that Cellpose improves constantly.

    01/07/21 | Cellpose: a generalist algorithm for cellular segmentation.
    Stringer C, Wang T, Michaelos M, Pachitariu M
    Nature Methods. 2021 Jan 07;18(1):100-106. doi: 10.1038/s41592-020-01018-x

    Many biological applications require the segmentation of cell bodies, membranes and nuclei from microscopy images. Deep learning has enabled great progress on this problem, but current methods are specialized for images that have large training datasets. Here we introduce a generalist, deep learning-based segmentation method called Cellpose, which can precisely segment cells from a wide range of image types and does not require model retraining or parameter adjustments. Cellpose was trained on a new dataset of highly varied images of cells, containing over 70,000 segmented objects. We also demonstrate a three-dimensional (3D) extension of Cellpose that reuses the two-dimensional (2D) model and does not require 3D-labeled data. To support community contributions to the training data, we developed software for manual labeling and for curation of the automated results. Periodically retraining the model on the community-contributed data will ensure that Cellpose improves constantly.
