1. Introduction
Precise segmentation of target anatomical structures from tomographic image datasets is an essential prerequisite for quantitative analysis and computer-assisted diagnostics. If all voxels of an anatomical structure are labelled, measurements of extent and volume become feasible, for instance facilitating the monitoring of disease progression. Furthermore, 3D surface models can be derived from available segmentations and utilized for surgery planning (Zwettler, Backfrieder, Swoboda, & Pfeifer, 2009) or surgical training (Fürst & Schrempf, 2012). User interaction and subsequent analysis can then be performed on the computer model or in a virtual reality environment, enriched by haptic patient models that are derived from the anatomical segmentations and produced with emerging 3D printing devices. Precise segmentations are not only required for computer-based analysis, but also for registering multi-modal image data of the same patient to combine high-resolution morphological imaging (CT, MRI) with image data from the functional imaging domain (PET, SPECT). Only with segmentation masks available can the measured metabolic activity be restricted to organ borders and quantitatively evaluated with respect to anatomical classifications (Beyer, Schwenzer, Bisdas et al., 2010).
In the last decades, there has been intensive research in the field of medical image processing to achieve preferably fully-automated segmentation approaches in specific diagnostic domains. Utilizing deformable models (McInerney & Terzopoulos, 1996) and incorporating a priori knowledge of the target anatomical structure, morphologies with low variability in shape can be robustly segmented. Nevertheless, generic application of deformable models to arbitrary segmentation domains is not feasible, as proper adjustment of the parameters and the a priori model is required. In contrast, statistical shape models (Cootes, Taylor, Cooper, & Graham, 1992) can be trained rather autonomously if a large set of reference segmentations covering all relevant anatomical variations is available. Active appearance models (Cootes, Edwards, & Taylor, 1998) introduce statistical properties of the target structure's expected intensity profile in addition to geometric features, and level sets (Osher & Sethian, 1988) can handle changes in topology and anatomical variability, but their complex parameterization needs adjustment to the particular segmentation task. Furthermore, all of these sophisticated models are limited to the segmentation of particular anatomical shapes, as border areas and overlapping segments cannot be handled when multiple classes, together covering at most all of the voxels, are segmented from an input volume.
Segmentation of the entire dataset from arbitrary imaging domains, i.e. assigning a class label to every available voxel, has so far not been feasible with fully-automated model-based approaches. However, such entire segmentations can be achieved in a semi-automated way by utilizing conventional segmentation approaches such as region growing (Gonzalez & Wintz, 1987; Handels, 2009) or live wire contour detection (Barrett & Mortensen, 1997; Schenk & Prause, 2001; Yoo, 2004) together with appropriate filtering and morphological post-processing in a rapid prototyping image processing pipeline (Sonka, Hlavac, & Boyle, 1993). A standardized process model for the segmentation of arbitrary anatomical structures from variable tomographic image data has been presented in (Zwettler & Backfrieder, 2013). While this approach is by far too user-intensive for practical application, the reference segmentations achievable in a semi-automated way are perfectly suited for training a priori models of specific segmentation domains.
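To illustrate the kind of conventional, seed-based technique referred to above, the following is a minimal sketch of intensity-based region growing on a 2D slice. It is an illustrative implementation, not the pipeline described in the cited works: the function name `region_grow`, the 4-connected neighbourhood, and the fixed intensity tolerance `tol` relative to the seed value are all simplifying assumptions made here for clarity.

```python
import numpy as np
from collections import deque

def region_grow(image, seed, tol):
    """Breadth-first region growing on a 2D slice (illustrative sketch).

    Starting from the seed pixel, 4-connected neighbours are added to the
    region as long as their intensity differs from the seed intensity by
    at most `tol`. A full 3D variant would simply use a 6-connected
    voxel neighbourhood instead.
    """
    seed_val = int(image[seed])
    mask = np.zeros(image.shape, dtype=bool)
    mask[seed] = True
    queue = deque([seed])
    while queue:
        y, x = queue.popleft()
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if (0 <= ny < image.shape[0] and 0 <= nx < image.shape[1]
                    and not mask[ny, nx]
                    and abs(int(image[ny, nx]) - seed_val) <= tol):
                mask[ny, nx] = True
                queue.append((ny, nx))
    return mask

# Toy example: a bright 4x4 square on a dark background
img = np.zeros((8, 8), dtype=np.uint8)
img[2:6, 2:6] = 200
mask = region_grow(img, seed=(3, 3), tol=10)
print(mask.sum())  # → 16, the bright square only
```

In practice, such a result would then be refined by the filtering and morphological post-processing steps mentioned above (e.g. closing holes or smoothing the region border) before serving as a reference segmentation.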