Lippincott-Schwartz Lab / Publications

3945 Publications

Showing 1751-1760 of 3945 results
10/10/12 | Illuminating vertebrate olfactory processing.
Spors H, Albeanu DF, Murthy VN, Rinberg D, Uchida N, Wachowiak M, Friedrich RW
The Journal of Neuroscience: The Official Journal of the Society for Neuroscience. 2012 Oct 10;32(41):14102-8. doi: 10.1523/JNEUROSCI.3328-12.2012

The olfactory system encodes information about molecules by spatiotemporal patterns of activity across distributed populations of neurons and extracts information from these patterns to control specific behaviors. Recent studies used in vivo recordings, optogenetics, and other methods to analyze the mechanisms by which odor information is encoded and processed in the olfactory system, the functional connectivity within and between olfactory brain areas, and the impact of spatiotemporal patterning of neuronal activity on higher-order neurons and behavioral outputs. The results give rise to a faceted picture of olfactory processing and provide insights into fundamental mechanisms underlying neuronal computations. This review focuses on some of this work presented in a Mini-Symposium at the Annual Meeting of the Society for Neuroscience in 2012.

02/08/18 | Image co-localization - co-occurrence versus correlation.
Aaron JS, Taylor AB, Chew T
Journal of Cell Science. 2018 Feb 08;131(3). doi: 10.1242/jcs.211847

Fluorescence image co-localization analysis is widely utilized to suggest biomolecular interaction. However, there exists some confusion as to its correct implementation and interpretation. In reality, co-localization analysis consists of at least two distinct sets of methods, termed co-occurrence and correlation. Each approach has inherent and often contrasting strengths and weaknesses. Yet, neither one can be considered to always be preferable for any given application. Rather, each method is most appropriate for answering different types of biological question. This Review discusses the main factors affecting multicolor image co-occurrence and correlation analysis, while giving insight into the types of biological behavior that are better suited to one approach or the other. Further, the limits of pixel-based co-localization analysis are discussed in the context of increasingly popular super-resolution imaging techniques.
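
The distinction above is easy to see in code. As a minimal sketch (not taken from the paper; the arrays, thresholds, and synthetic data are placeholders), a correlation measure such as Pearson's coefficient asks whether two channels' intensities co-vary, while a co-occurrence measure such as the Manders coefficients asks what fraction of each channel's signal falls where the other channel is present:

```python
# Minimal sketch contrasting correlation-based and co-occurrence-based
# co-localization measures on two image channels (illustrative only).
import numpy as np

def pearson_correlation(ch1, ch2):
    """Pixel-wise Pearson correlation coefficient between two channels."""
    return np.corrcoef(ch1.astype(float).ravel(), ch2.astype(float).ravel())[0, 1]

def manders_coefficients(ch1, ch2, thr1, thr2):
    """Manders M1/M2 co-occurrence coefficients with user-supplied thresholds."""
    a, b = ch1.astype(float), ch2.astype(float)
    m1 = a[b > thr2].sum() / a.sum()   # fraction of ch1 intensity overlapping ch2
    m2 = b[a > thr1].sum() / b.sum()   # fraction of ch2 intensity overlapping ch1
    return m1, m2

# Synthetic two-channel example (assumed data, for illustration only)
rng = np.random.default_rng(0)
ch1 = rng.poisson(5, (256, 256)).astype(float)
ch2 = 0.7 * ch1 + rng.poisson(2, (256, 256))
print(pearson_correlation(ch1, ch2))
print(manders_coefficients(ch1, ch2, thr1=5, thr2=5))
```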

01/01/06 | Image diffusion using saliency bilateral filter.
Xie J, Heng P, Ho SS, Shah M
Medical Image Computing and Computer-Assisted Intervention (MICCAI). 2006;9:67-75

Image diffusion can smooth away noise and small-scale structures while retaining important features, thereby enhancing the performance of many image processing algorithms such as image compression, segmentation and recognition. In this paper, we present a novel diffusion algorithm for which the filtering kernels vary according to the perceptual saliency of boundaries in the input images. The boundary saliency is estimated through a saliency measure which is generally determined by curvature changes, intensity gradient and the interaction of neighboring vectors. The connection between filtering kernels and perceptual saliency makes it possible to remove small-scale structures and preserve significant boundaries adaptively. The effectiveness of the proposed approach is validated by experiments on various medical images including the color Chinese Visible Human data set and gray MRI brain images.
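
The paper's specific saliency measure is not reproduced here, but the general idea of modulating a bilateral-style kernel by boundary saliency can be sketched as follows (a simple gradient-magnitude proxy stands in for the saliency term; the function name and parameters are illustrative assumptions):

```python
# Bilateral-style diffusion whose range kernel narrows at salient boundaries,
# so strong edges are preserved while small-scale structure is smoothed away.
import numpy as np

def saliency_bilateral_filter(img, sigma_s=2.0, sigma_r=0.1, radius=3):
    img = img.astype(float)
    gy, gx = np.gradient(img)
    saliency = np.hypot(gx, gy)
    saliency /= saliency.max() + 1e-12            # normalize to [0, 1]
    out = np.zeros_like(img)
    norm = np.zeros_like(img)
    pad = np.pad(img, radius, mode="reflect")
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            shifted = pad[radius + dy: radius + dy + img.shape[0],
                          radius + dx: radius + dx + img.shape[1]]
            spatial = np.exp(-(dx * dx + dy * dy) / (2 * sigma_s ** 2))
            # Salient pixels get a narrower range kernel (less smoothing across edges).
            sigma_eff = sigma_r * (1.0 - 0.9 * saliency)
            range_w = np.exp(-((shifted - img) ** 2) / (2 * sigma_eff ** 2))
            out += spatial * range_w * shifted
            norm += spatial * range_w
    return out / norm
```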

01/01/24 | Image processing tools for petabyte-scale light sheet microscopy data.
Xiongtao Ruan, Matthew Mueller, Gaoxiang Liu, Frederik Görlitz, Tian-Ming Fu, Daniel E. Milkie, Joshua Lillvis, Alison Killilea, Eric Betzig, Srigokul Upadhyayula
bioRxiv. 2024 Jan 01. doi: 10.1101/2023.12.31.573734

Light sheet microscopy is a powerful technique for visualizing dynamic biological processes in 3D. Studying large specimens or recording time series with high spatial and temporal resolution generates large datasets, often exceeding terabytes and potentially reaching petabytes in size. Handling these massive datasets is challenging for conventional data processing tools with their memory and performance limitations. To overcome these issues, we developed LLSM5DTools, a software solution specifically designed for the efficient management of petabyte-scale light sheet microscopy data. This toolkit, optimized for memory and performance, features fast image readers and writers, efficient geometric transformations, high-performance Richardson-Lucy deconvolution, and scalable Zarr-based stitching. These advancements enable LLSM5DTools to perform over ten times faster than current state-of-the-art methods, facilitating real-time processing of large datasets and opening new avenues for biological discoveries in large-scale imaging experiments.
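
LLSM5DTools itself is a separate toolkit; as a point of reference only, the Richardson-Lucy iteration that such pipelines accelerate can be written compactly with FFT-based convolution (the image, PSF, and iteration count below are assumptions):

```python
# Classic Richardson-Lucy deconvolution for an image with a known PSF.
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(image, psf, num_iters=20, eps=1e-12):
    """Iteratively estimate the deblurred image given a known point spread function."""
    estimate = np.full_like(image, image.mean(), dtype=float)
    psf_mirror = np.flip(psf)                       # mirrored PSF for the correction step
    for _ in range(num_iters):
        blurred = fftconvolve(estimate, psf, mode="same")
        ratio = image / (blurred + eps)             # how far the current estimate is off
        estimate *= fftconvolve(ratio, psf_mirror, mode="same")
    return estimate
```

Pipelines for petabyte-scale data differ mainly in how they tile, stream, and parallelize such computations over chunked storage (e.g. Zarr), rather than in the update rule itself.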

12/23/16 | Image-based correction of continuous and discontinuous non-planar axial distortion in serial section microscopy.
Hanslovsky P, Bogovic JA, Saalfeld S
Bioinformatics (Oxford, England). 2016 Dec 23. doi: 10.1093/bioinformatics/btw794

MOTIVATION: Serial section microscopy is an established method for detailed anatomy reconstruction of biological specimens. During the last decade, high resolution electron microscopy (EM) of serial sections has become the de facto standard for reconstruction of neural connectivity at ever increasing scales (EM connectomics). In serial section microscopy, the axial dimension of the volume is sampled by physically removing thin sections from the embedded specimen and subsequently imaging either the block-face or the section series. This process has limited precision, leading to inhomogeneous, non-planar sampling of the axial dimension of the volume which, in turn, results in distorted image volumes. In addition, section series may be collected and imaged in unknown order.

RESULTS: We developed methods to identify and correct these distortions through image-based signal analysis without any additional physical apparatus or measurements. We demonstrate the efficacy of our methods in proof of principle experiments and application to real world problems.

AVAILABILITY AND IMPLEMENTATION: We made our work available as libraries for the ImageJ distribution Fiji and for deployment in a high performance parallel computing environment. Our sources are open and available at http://github.com/saalfeldlab/section-sort, http://github.com/saalfeldlab/z-spacing and http://github.com/saalfeldlab/z-spacing-spark. CONTACT: saalfelds@janelia.hhmi.org. SUPPLEMENTARY INFORMATION: Supplementary data are available at Bioinformatics online.
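
As an illustration of the image-based signal involved (a toy sketch, not the published method), pairwise similarity between sections tends to decay with axial distance, so a similarity matrix already carries information about section order and spacing:

```python
# Toy example: pairwise section similarity and a greedy reordering of sections.
# The published section-sort / z-spacing methods are substantially more robust.
import numpy as np

def similarity_matrix(sections):
    """Pairwise Pearson similarity between flattened section images."""
    flat = np.array([s.ravel() for s in sections], dtype=float)
    flat -= flat.mean(axis=1, keepdims=True)
    flat /= np.linalg.norm(flat, axis=1, keepdims=True) + 1e-12
    return flat @ flat.T

def greedy_order(sim):
    """Reorder sections by repeatedly appending the most similar unused section."""
    n = sim.shape[0]
    order, used = [0], {0}
    while len(order) < n:
        _, best = max((sim[order[-1], j], j) for j in range(n) if j not in used)
        order.append(best)
        used.add(best)
    return order
```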

02/01/21 | Image-based pooled whole-genome CRISPRi screening for subcellular phenotypes.
Kanfer G, Sarraf SA, Maman Y, Baldwin H, Dominguez-Martin E, Johnson KR, Ward ME, Kampmann M, Lippincott-Schwartz J, Youle RJ
Journal of Cell Biology. 2021 Feb 01;220(2). doi: 10.1083/jcb.202006180

Genome-wide CRISPR screens have transformed our ability to systematically interrogate human gene function, but are currently limited to a subset of cellular phenotypes. We report a novel pooled screening approach for a wider range of cellular and subtle subcellular phenotypes. Machine learning and convolutional neural network models are trained on the subcellular phenotype to be queried. Genome-wide screening then utilizes cells stably expressing dCas9-KRAB (CRISPRi), photoactivatable fluorescent protein (PA-mCherry), and a lentiviral guide RNA (gRNA) pool. Cells are screened by using microscopy and classified by artificial intelligence (AI) algorithms, which precisely identify the genetically altered phenotype. Cells with the phenotype of interest are photoactivated and isolated via flow cytometry, and the gRNAs are identified by sequencing. A proof-of-concept screen accurately identified PINK1 as essential for Parkin recruitment to mitochondria. A genome-wide screen identified factors mediating TFEB relocation from the nucleus to the cytosol upon prolonged starvation. Twenty-one of the 64 hits called by the neural network model were independently validated, revealing new effectors of TFEB subcellular localization. This approach, AI-photoswitchable screening (AI-PS), offers a novel screening platform capable of classifying a broad range of mammalian subcellular morphologies, an approach largely unattainable with current methodologies at genome-wide scale.
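
The classifier itself can be quite small; as a hedged sketch (not the authors' architecture, and with an assumed binary phenotype and crop size), a convolutional network of the following form could be trained to call a subcellular phenotype from single-cell image crops:

```python
# Small CNN that maps single-channel cell crops to phenotype class logits.
import torch
import torch.nn as nn

class PhenotypeCNN(nn.Module):
    def __init__(self, num_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(64, num_classes)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

# Forward pass on random 64x64 crops (placeholder data, for illustration only)
model = PhenotypeCNN()
logits = model(torch.randn(8, 1, 64, 64))   # shape: (8, 2), one score pair per cell
```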

12/07/21 | Image-based representation of massive spatial transcriptomics datasets.
Stephan Preibisch, Nikos Karaiskos, Nikolaus Rajewsky
bioRxiv. 2021 Dec 07. doi: 10.1101/2021.12.07.471629

We present STIM, an imaging-based computational framework for exploring, visualizing, and processing high-throughput spatial sequencing datasets. STIM is built on the powerful ImgLib2, N5 and BigDataViewer (BDV) frameworks enabling transfer of computer vision techniques to datasets with irregular measurement-spacing and arbitrary spatial resolution, such as spatial transcriptomics data generated by multiplexed targeted hybridization or spatial sequencing technologies. We illustrate STIM’s capabilities by representing, visualizing, and automatically registering publicly available spatial sequencing data from 14 serial sections of mouse brain tissue.
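
STIM is built on ImgLib2/N5/BDV in Java; purely to illustrate the underlying idea (with assumed coordinates, values, and kernel width), irregularly spaced expression measurements can be rendered onto a regular pixel grid so that standard image-processing tools apply:

```python
# Brute-force rasterization of scattered (x, y) expression values into an image
# using Gaussian kernel smoothing (illustrative only; not STIM's implementation).
import numpy as np

def render_gene(coords, values, shape=(512, 512), sigma=2.0):
    ys, xs = np.mgrid[0:shape[0], 0:shape[1]]
    image = np.zeros(shape)
    weight = np.zeros(shape)
    for (x, y), v in zip(coords, values):
        k = np.exp(-((xs - x) ** 2 + (ys - y) ** 2) / (2 * sigma ** 2))
        image += v * k                   # intensity-weighted splat of each measurement
        weight += k
    return image / (weight + 1e-12)      # kernel-weighted mean expression per pixel
```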

06/19/13 | Imaging a population code for odor identity in the Drosophila mushroom body.
Campbell RA, Honegger KS, Qin H, Li W, Demir E, Turner GC
The Journal of Neuroscience: The Official Journal of the Society for Neuroscience. 2013 Jun 19;33(25):10568-81. doi: 10.1523/JNEUROSCI.0682-12.2013

The brain represents sensory information in the coordinated activity of neuronal ensembles. Although the microcircuits underlying olfactory processing are well characterized in Drosophila, no studies to date have examined the encoding of odor identity by populations of neurons and related it to the odor specificity of olfactory behavior. Here we used two-photon Ca(2+) imaging to record odor-evoked responses from >100 neurons simultaneously in the Drosophila mushroom body (MB). For the first time, we demonstrate quantitatively that MB population responses contain substantial information on odor identity. Using a series of increasingly similar odor blends, we identified conditions in which odor discrimination is difficult behaviorally. We found that MB ensemble responses accounted well for olfactory acuity in this task. Kenyon cell ensembles with as few as 25 cells were sufficient to match behavioral discrimination accuracy. Using a generalization task, we demonstrated that the MB population code could predict the flies' responses to novel odors. The degree to which flies generalized a learned aversive association to unfamiliar test odors depended upon the relative similarity between the odors' evoked MB activity patterns. Discrimination and generalization place different demands on the animal, yet the flies' choices in these tasks were reliably predicted based on the amount of overlap between MB activity patterns. Therefore, these different behaviors can be understood in the context of a single physiological framework.
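
A simple way to quantify such a population code (a hedged sketch with an assumed data layout, not the authors' analysis) is template matching: classify each trial's Kenyon cell response vector by its correlation to the mean response of each odor, holding the tested trial out of the template:

```python
# Leave-one-trial-out template-matching decoder for population response vectors.
import numpy as np

def decode_odors(responses, labels):
    """responses: (n_trials, n_cells) array; labels: odor identity per trial."""
    responses = np.asarray(responses, dtype=float)
    labels = np.asarray(labels)
    correct = 0
    for i in range(len(labels)):
        best_odor, best_corr = None, -np.inf
        for odor in np.unique(labels):
            mask = labels == odor
            mask[i] = False                          # hold out the tested trial
            if not mask.any():
                continue
            template = responses[mask].mean(axis=0)
            c = np.corrcoef(responses[i], template)[0, 1]
            if c > best_corr:
                best_odor, best_corr = odor, c
        correct += int(best_odor == labels[i])
    return correct / len(labels)                     # decoding accuracy
```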

08/20/21 | Imaging Africa: a strategic approach to optical microscopy training in Africa.
Reiche MA, Warner DF, Aaron J, Khuon S, Fletcher DA, Hahn K, Rogers KL, Mhlanga M, Koch A, Quaye W, Chew T
Nature Methods. 2021 Aug 20;18(8):847-855. doi: 10.1038/s41592-021-01227-y

06/27/14 | Imaging ATUM ultrathin section libraries with WaferMapper: a multi-scale approach to EM reconstruction of neural circuits.
Hayworth KJ, Morgan JL, Schalek R, Berger DR, Hildebrand DG, Lichtman JW
Frontiers in Neural Circuits. 2014 Jun 27;8:68. doi: 10.3389/fncir.2014.00068

The automated tape-collecting ultramicrotome (ATUM) makes it possible to collect large numbers of ultrathin sections quickly: the equivalent of a petabyte of high resolution images each day. However, even high throughput image acquisition strategies generate images far more slowly (at present ~1 terabyte per day). We therefore developed WaferMapper, a software package that takes a multi-resolution approach to mapping and imaging select regions within a library of ultrathin sections. This automated method selects and directs imaging of corresponding regions within each section of an ultrathin section library (UTSL) that may contain many thousands of sections. Using WaferMapper, it is possible to map thousands of tissue sections at low resolution and target multiple points of interest for high resolution imaging based on anatomical landmarks. The program can also be used to expand previously imaged regions, acquire data under different imaging conditions, or re-image after additional tissue treatments.
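
The multi-resolution workflow boils down to picking regions on a low-resolution overview of each section and mapping them into the coordinate frame used for high-resolution imaging. A minimal sketch of that coordinate mapping is shown below (the affine matrix and units are assumptions; in practice they would come from landmark-based alignment per section):

```python
# Map an ROI picked on a low-resolution overview (pixels) into stage coordinates.
import numpy as np

def roi_to_stage(roi_center_px, overview_affine):
    """overview_affine: 3x3 homogeneous matrix, overview pixels -> stage microns."""
    x, y = roi_center_px
    sx, sy, w = overview_affine @ np.array([x, y, 1.0])
    return sx / w, sy / w

# Example: 10 microns per overview pixel plus a stage offset (assumed values)
affine = np.array([[10.0,  0.0, 500.0],
                   [ 0.0, 10.0, 250.0],
                   [ 0.0,  0.0,   1.0]])
print(roi_to_stage((128, 64), affine))   # -> (1780.0, 890.0)
```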
