Automatic Analysis of High-Frequency Ultrasound Images of Mouse Embryos
Project Summary:
This project aims to phenotype early- to mid-gestational mouse embryos by segmenting select organ systems in 3-D data sets acquired in utero with high-frequency ultrasound (HFU). Around 20,000 NIH Knockout (KO) mouse strains will be generated, 30% of which are expected to be embryonic or perinatal lethal, including many important models of human structural birth defects and congenital diseases. Phenotyping methods for embryonic-lethal mice that enable efficient pipeline analyses of defects in embryonic growth across the KO mouse strains are therefore in high demand.
We propose to develop and validate in-utero 3-D HFU image acquisition protocols and image processing methods that permit noninvasive, longitudinal studies of embryonic development and, in particular, the characterization of mutant phenotypes. Volumetric HFU data will be collected in utero from mouse embryos staged between E9.5 and E15.5 to establish a database of normal development. Using these HFU images of mouse embryos, the NYU Video Lab team focuses on developing advanced image analysis and machine learning methods for analyzing brain development in mouse embryos and characterizing defects caused by mutations.
Achievements:
A deep learning approach for segmentation, classification, and visualization of 3-D HFU images of mouse embryos was developed. Segmentation and mutant classification of HFU mouse embryo brain ventricle (BV) and body images can provide valuable information for developmental biologists. However, manual segmentation and identification of the BV and body require substantial time and expertise. We propose an accurate, efficient, and explainable deep learning pipeline for automatic segmentation and classification of the BV and body. For segmentation, a two-stage framework is implemented. The first stage produces a low-resolution segmentation map, which is then used to crop a region of interest (ROI) around the target object and serves as the probability map of the auto-context input for the second-stage fine-resolution refinement network. Segmentation thus becomes tractable on high-resolution 3-D images without time-consuming sliding windows. The proposed segmentation method reduces inference time from 102.36 s to 0.09 s per volume (≈1000× faster) while maintaining accuracy comparable to previous sliding-window approaches. Based on the BV and body segmentation map, a volumetric convolutional neural network (CNN) is trained to perform a mutant classification task. By backpropagating the gradients of the predictions to the input BV and body segmentation map, the trained classifier is found to focus largely on the region where the Engrailed-1 (En1) mutation phenotype is known to manifest itself. This suggests that gradient backpropagation of deep learning classifiers may provide a powerful tool for automatically detecting unknown phenotypes associated with a known genetic mutation.
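The two-stage inference described above can be sketched as follows. Here `coarse_net` and `fine_net` are hypothetical placeholders for the trained low-resolution and fine-resolution networks, not the actual models; the key steps are the coarse pass, the ROI crop, and stacking the coarse probability map with the image as the auto-context input.

```python
import numpy as np

def two_stage_segment(volume, coarse_net, fine_net, scale=4, margin=8):
    """Sketch of two-stage inference: a coarse pass localizes the object,
    then a fine pass refines it inside a cropped ROI (auto-context input)."""
    # Stage 1: coarse segmentation on a downsampled volume.
    low = volume[::scale, ::scale, ::scale]
    prob_low = coarse_net(low)                       # probabilities in [0, 1]
    # Upsample the probability map back to full resolution (nearest neighbor).
    prob = prob_low.repeat(scale, 0).repeat(scale, 1).repeat(scale, 2)
    prob = prob[:volume.shape[0], :volume.shape[1], :volume.shape[2]]
    # Crop an ROI around the detected object, padded by a safety margin.
    idx = np.argwhere(prob > 0.5)
    lo = np.maximum(idx.min(0) - margin, 0)
    hi = np.minimum(idx.max(0) + margin + 1, volume.shape)
    sl = tuple(slice(int(l), int(h)) for l, h in zip(lo, hi))
    # Stage 2: refine inside the ROI; the coarse probability map is stacked
    # with the image as a second channel (the auto-context input).
    roi = np.stack([volume[sl], prob[sl]], axis=0)
    fine = fine_net(roi)
    # Paste the refined mask back into a full-resolution output volume.
    out = np.zeros(volume.shape, dtype=fine.dtype)
    out[sl] = fine
    return out
```

Because only the ROI is processed at fine resolution, the expensive network runs on a small sub-volume instead of sliding a window over the whole image.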
Challenges:
The developed algorithm must overcome five primary challenges related to the image data: 1) extreme imbalance between background and foreground (i.e., the BV makes up only 0.367% of the whole volume, on average, while the body is around 10.6%); 2) differing shapes and locations of the body and BV due to various embryonic stages; 3) large variation in embryo posture and orientation; 4) the presence of missing or ambiguous boundaries (see Fig. 1(a), (e), and (f)) and motion artifacts (see Fig. 1(b)–(d)); and 5) large variation in image size, from 150 × 161 × 81 to 210 × 281 × 282 voxels.
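Challenge 1), the extreme class imbalance, is commonly addressed with an overlap-based loss such as soft Dice, which depends only on the overlap between prediction and target and is therefore insensitive to the background/foreground ratio. A minimal sketch (illustrative, not necessarily the exact loss used in this work):

```python
import numpy as np

def soft_dice_loss(prob, target, eps=1e-6):
    """Soft Dice loss on a predicted probability volume vs. a binary target.
    Unlike voxel-wise cross-entropy, an all-background prediction scores
    badly here even when the foreground (e.g., the BV at ~0.367% of voxels)
    is tiny."""
    inter = np.sum(prob * target)
    denom = np.sum(prob) + np.sum(target)
    return 1.0 - (2.0 * inter + eps) / (denom + eps)
```

A perfect prediction yields a loss near 0, while predicting all background against any nonempty target yields a loss near 1, regardless of how small the foreground is.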
Fig. 1 shows some of the challenges of segmenting the BV and body from HFU images. (a)–(f): Six HFU volumes of embryonic mice are shown with three views each: a B-mode image slice from the 3-D volume, a manual BV (green) and body (red) segmentation, and a 3-D rendering (visualized in natural orientation relative to the HFU probe). The numbers below each 3-D rendering indicate the corresponding image size in voxels. The arrow in (a) indicates an ambiguous boundary due to contact between the body and the uterine wall. The arrows in (b)–(d) indicate motion artifacts caused by irregular physiological movements of the anesthetized pregnant mice. The arrows in (e) and (f) indicate missing head boundaries due to either specular reflections or shadowing from overlying tissues.
Segmentation Algorithm:
Fig. 2 illustrates the end-to-end two-stage segmentation framework developed for accurate, real-time segmentation of the BV and body in 3-D, in vivo, in utero HFU images of mouse embryos.
Mutant Classification Algorithm:
Fig. 3 shows a pictorial representation of the mutant classification network.
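The volumetric-CNN idea can be illustrated with a toy stand-in (not the actual network architecture): one 3-D convolution layer with ReLU, global average pooling, and a sigmoid logit mapping the BV/body segmentation map to a mutant probability. The kernels, weights, and bias here are hypothetical placeholders for learned parameters.

```python
import numpy as np

def conv3d_valid(vol, kernel):
    """Naive 'valid' 3-D convolution (cross-correlation) in pure NumPy."""
    kd, kh, kw = kernel.shape
    out = np.zeros((vol.shape[0] - kd + 1,
                    vol.shape[1] - kh + 1,
                    vol.shape[2] - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            for k in range(out.shape[2]):
                out[i, j, k] = np.sum(vol[i:i+kd, j:j+kh, k:k+kw] * kernel)
    return out

def tiny_mutant_classifier(seg_map, kernels, weights, bias):
    """Toy volumetric CNN: one 3-D conv layer + ReLU, global average
    pooling over each feature map, and a sigmoid logit giving P(mutant)."""
    feats = [np.maximum(conv3d_valid(seg_map, k), 0.0).mean() for k in kernels]
    logit = float(np.dot(weights, feats) + bias)
    return 1.0 / (1.0 + np.exp(-logit))      # probability in (0, 1)
```

The real classifier stacks several such conv layers and is trained on the segmentation maps, but the structure (conv features → pooling → logit) is the same.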
Saliency Map Visualization:
Fig. 4 shows saliency images of the trained mutant classification network. The first row is the normal mouse embryo BV (green) and body (red) segmentation, whereas the second row is mutant. Two images are presented for each sample. The blue arrow in the first image indicates the known structural differences between En1 mutant and normal BVs, while the blue dots in the second image (salient points) indicate where the trained network focused when making the prediction.
Had we not known a priori where the difference in BV between normal and En1 mutant mouse embryos manifests, the visualization results of the trained network would still have highlighted these relevant regions. This observation indicates that gradient backpropagation through trained deep-learning classifiers has the potential to automatically detect unknown phenotypes associated with a known genetic mutation.
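The saliency idea can be illustrated with a finite-difference stand-in for gradient backpropagation: perturb each input voxel and measure how much the mutant score changes. A trained network computes the same quantity far more efficiently in a single backward pass; `score_fn` here is a hypothetical scoring function, not the actual classifier.

```python
import numpy as np

def saliency_map(score_fn, x, eps=1e-3):
    """Gradient-magnitude saliency by finite differences: |d score / d voxel|.
    High values mark voxels the scorer relies on for its prediction."""
    base = score_fn(x)
    sal = np.zeros_like(x)
    it = np.nditer(x, flags=['multi_index'])
    for _ in it:
        i = it.multi_index
        xp = x.copy()
        xp[i] += eps                     # perturb one voxel
        sal[i] = (score_fn(xp) - base) / eps
    return np.abs(sal)
```

Voxels that do not influence the score get zero saliency; in the En1 experiments, the analogous backpropagated gradients concentrated in the region where the mutant phenotype is known to appear.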
Participants:
- Yao Wang, Professor, NYU Video Lab, NYU Tandon School of Engineering.
- Dr. Jeffrey Ketterling, Lizzi Center for Biomedical Engineering, Riverside Research, New York.
- Dr. Jonathan Mamou, Lizzi Center for Biomedical Engineering, Riverside Research, New York.
- Daniel Turnbull, Professor, Skirball Institute of Biomolecular Medicine, NYU School of Medicine.
- Orlando Aristizabal, Skirball Institute of Biomolecular Medicine, NYU School of Medicine.
- Ziming Qiu, Ph.D. student, NYU Tandon School of Engineering.
- Jen-wei Kuo, Ph.D. student, NYU Tandon School of Engineering.
- Nitin Nair, Master's student, NYU Tandon School of Engineering.
- Tongda Xu, Master's student, NYU Tandon School of Engineering.
- Jack Langerman, B.S. student, NYU Tandon School of Engineering.
Related Publications:
- A Deep Learning Approach for Segmentation, Classification, and Visualization of 3-D High-Frequency Ultrasound Images of Mouse Embryos.
- Scanner Independent Deep Learning-Based Segmentation Framework Applied to Mouse Embryos.
- Deep Mouse: An End-to-End Auto-Context Refinement Framework for Brain Ventricle & Body Segmentation in Embryonic Mice Ultrasound Volumes.
- Automatic Mouse Embryo Brain Ventricle & Body Segmentation and Mutant Classification from Ultrasound Data Using Deep Learning.
- Deep BV: A Fully Automated System for Brain Ventricle Localization and Segmentation in 3D Ultrasound Images of Embryonic Mice.
- Automatic Body Localization and Brain Ventricle Segmentation in 3D High Frequency Ultrasound Images of Mouse Embryos.
- Nested Graph Cut for Automatic Segmentation of High-Frequency Ultrasound Images of the Mouse Embryo.
- Automatic Mouse Embryo Brain Ventricle Segmentation, Gestation Stage Estimation, and Mutant Detection from 3D 40-MHz Ultrasound Data.
- A Novel Nested Graph Cuts Method for Segmenting Human Lymph Nodes in 3D High Frequency Ultrasound Images.
- Segmentation of 3-D High-Frequency Ultrasound Images of Human Lymph Nodes Using Graph Cut With Energy Functional Adapted to Local Intensity Distribution.
Dataset:
The dataset used in the paper “A Deep Learning Approach for Segmentation, Classification, and Visualization of 3-D High-Frequency Ultrasound Images of Mouse Embryos” is shared on IEEE DataPort (link).
This work was supported in part by the NIH under Grant EB022950 and Grant HD097485.