 Research
 Open Access
A fully-automatic caudate nucleus segmentation of brain MRI: Application in volumetric analysis of pediatric attention-deficit/hyperactivity disorder
BioMedical Engineering OnLine, volume 10, Article number: 105 (2011)
Abstract
Background
Accurate automatic segmentation of the caudate nucleus in magnetic resonance images (MRI) of the brain is of great interest in the analysis of developmental disorders. Segmentation methods based on a single atlas or on multiple atlases have been shown to localize the caudate structure suitably. However, the atlas prior information may not represent the structure of interest correctly. It may therefore be useful to introduce a more flexible technique to obtain accurate segmentations.
Method
We present CaudateCut: a new fully-automatic method for segmenting the caudate nucleus in MRI. CaudateCut combines an atlas-based segmentation strategy with the Graph Cut energy-minimization framework. We adapt the Graph Cut model to make it suitable for segmenting small, low-contrast structures, such as the caudate nucleus, by defining new data and boundary potentials for the energy function. In particular, we exploit intensity and geometry information, and we add supervised energies based on contextual brain structures. Furthermore, we reinforce boundary detection using a new multiscale edgeness measure.
Results
We apply the novel CaudateCut method to segment the caudate nucleus in a new set of 39 pediatric attention-deficit/hyperactivity disorder (ADHD) patients and 40 control children, as well as in a public database of 18 subjects. We evaluate the quality of the segmentation using several volumetric and voxel-by-voxel measures. Our results show improved segmentation performance compared to state-of-the-art approaches, obtaining a mean overlap of 80.75%. Moreover, we present a quantitative volumetric analysis of caudate abnormalities in pediatric ADHD, the results of which show strong correlation with expert manual analysis.
Conclusion
CaudateCut generates segmentation results that are comparable to gold-standard segmentations and that are reliable for analyzing the neuroanatomical abnormalities that differentiate pediatric ADHD patients from healthy controls.
1 Introduction
Studies of volumetric brain magnetic resonance imaging (MRI) show neuroanatomical abnormalities in pediatric attention-deficit/hyperactivity disorder (ADHD) [1–3]. ADHD is a developmental disorder characterized by inattentiveness, motor hyperactivity and impulsiveness, and it is the most prevalent childhood psychiatric disorder. It is also estimated that half of the children with ADHD will continue to display the disorder in adulthood. As stated in several reviews and meta-analyses, diminished right caudate volume is one of the most replicated findings among ADHD samples in morphometric MRI studies [4]. As a result of these studies, the authors of [5] proposed a diagnostic test based on the ratio between the right caudate volume and the total bilateral caudate volume.
Most analyses of ADHD via MRI, as well as much research in neuroscience, lack an appropriate automated segmentation system, and therefore require physicians to manually segment brain structures, such as the caudate, on a slice-by-slice basis. This process is extremely time-consuming, tedious, and prone to inter-rater discrepancies, limiting the statistical power of the analysis. An automated approach would accelerate the analysis and make the procedure feasible for large amounts of data. Automatic segmentation of subcortical structures in the brain is currently an active research area. In contrast to the problem of tissue segmentation (GM, WM, and CSF) in brain MRI, for which acceptable solutions exist, the issue of subcortical structure segmentation has yet to be satisfactorily addressed. Structures such as the putamen and caudate nucleus are difficult to segment correctly even manually, since they are small and their intensity is non-uniform and poorly contrasted. Figure 1 shows some brain MRI transversal planes with the caudate nucleus indicated.
Semi-automatic methods for segmenting subcortical structures have been proposed, such as the method developed specifically for neuroanatomical segmentation [6], in which the user specifies two coordinates of the AC-PC line for the segmentation of the caudate. This method is a knowledge-driven two-step algorithm. In the first step, lateral ventricles are extracted to help position a bounding box that contains the caudate nucleus. Region growing of gray matter seed points is performed inside the box to estimate an initial segmentation. A set of anatomical constraints is also defined, based on previous knowledge, and subsequently imposed on the first result. In the second step, the caudate boundaries are refined outside the bounding box by imposing new anatomical constraints. In [1], the authors use an SPM tool to segment and compute voxel-based morphometry measures. Significant effort has been put into automated segmentation of different structures in brain MRI (see reviews [7, 8]). A good example of these efforts can be found in the Caudate Segmentation Evaluation challenge (CAUSE07) [9]. In this competition, different algorithms designed to segment the caudate nucleus from brain MRI scans were compared. Among the methods adopted, the atlas-based segmentation approaches stand out as a powerful generic technique for automatic delineation of structures in volumetric images. This approach uses data obtained from different subjects to construct an atlas, which acts as a common anatomy for the area imaged (brain), and applies it to further segmentations. The results of the CAUSE07 competition show that multi-atlas segmentation methods can outperform schemes based on a single atlas. However, running multiple registrations on volumetric data requires a lot of time, and it is difficult to determine the optimum number of atlases to be considered [10].
Moreover, an important disadvantage of atlas-based methods is that the target object is not necessarily represented correctly by the atlas shapes. In such cases, a more flexible and adaptive technique can be useful to ensure accurate segmentation results.
In this work, we combine the power of atlas-based segmentation with an adaptive energy-based scheme built on the Graph Cut (GC) framework, to obtain a globally optimal segmentation of the caudate structure in MRI. GC theory has been used in many computer vision problems [11]. In particular, it has been successfully applied to binary segmentation of images, yielding a solution that corresponds to the global minimum of an energy function [12, 13]. The quality of the solution depends on the suitability of the unary and boundary energy terms and their reliable computation. The original GC definition is limited to image information, and can fail when the caudate structure in MRI is subtle and contrast is low. In order to overcome this problem, we add supervised contextual information of the caudate nucleus and reinforce boundary detection using a new multiscale edgeness measure.
Our method, CaudateCut, starts with an initialization step based on a standard atlas-based method, and defines a new GC energy function that is specially adapted to caudate nucleus segmentation. In particular, CaudateCut involves several stages. The first stage defines the initial region of the caudate nucleus by taking advantage of a priori brain structure information. Subsequent steps define the novel GC energy function appropriate for segmenting the caudate nucleus from brain MRI scans. More specifically, we propose a novel energy function that combines local and contextual image information by modeling foreground and background properties, as well as relations between neighboring pixels. In contrast to the classical GC model, where unary energy terms are based only on pixel intensity values, we also exploit previously learned shape relations. In particular, our unary term is defined as the weighted sum of two terms: one based on the intensity model, and the other on the confidence of the output of a binary classifier. The new supervised unary term uses a correlogram structure as a pixel description in order to capture contextual intensity relations around the pixel analyzed. Moreover, in the case of the boundary term, we propose that information from the first and second intensity derivatives be considered, and include a measure of edgeness based on a new multiscale version of the adaptive regularization parameter [14]. With this new term, we obtain a more accurate segmentation in the presence of boundary artifacts and improve the influence of the boundary term at each pixel.
We present results from two different datasets. The first consists of an MRI dataset of thirty-nine children/adolescents with ADHD (ages 6-18) and forty healthy control subjects matched for age, gender, and handedness. The second is a public dataset of 18 healthy controls from the Internet Brain Segmentation Repository provided by the Center for Morphometric Analysis at Massachusetts General Hospital. We show that our method, CaudateCut, improves segmentation performance with respect to a classical atlas-based approach and a recently proposed multi-atlas approach. Moreover, we provide a quantitative volumetric analysis of pediatric ADHD, and obtain specifications and results that are comparable to manual analysis based on caudate nucleus appearance.
The rest of the paper is organized as follows: Section 2 reviews the related work. Section 3 introduces the CaudateCut algorithm. Section 4 reports and discusses the results of experiments on caudate nucleus segmentation, as well as an ADHD quantitative volumetric analysis. Finally, Section 5 concludes the paper and describes future lines of research.
2 Related work
Different strategies can be adopted for fully-automatic segmentation of subcortical structures. Recent techniques can be summarized in four groups: a) anatomical atlas-based and multi-atlas-based algorithms, b) supervised learning techniques, c) statistical model approaches, and d) energy-based segmentation techniques.

a)
Anatomical atlas-based methods rely on comparing the image under study with a precomputed anatomical atlas of the brain. After the comparison, atlas label propagation is performed to give an estimation of the segmentation in the subject being studied [15–17]. Thus, these methods use knowledge about the structure of the brain directly. [15] develops ANIMAL, a fully-automatic procedure for segmenting any structure in an anatomical image in a predefined native space from an anatomical atlas in a normalized space. The authors observed that, since the deformation field is band-limited, irregular structures could not be accurately segmented. It was in their next work, ANIMAL+INSECT [18], that the problem was addressed, by introducing post-processing that required tissue classification of the subject in order to refine the final segmentation of any labeled structure. Other authors have exploited the benefit of generative models with the aim of reaching optimal solutions. [19] and [20] combine tissue classification, bias correction, and nonlinear warping within the same framework. Version 8 of SPM [21] includes the unified approach of [20]. An important disadvantage of these methods is the computational cost necessary to build an atlas from different subjects. Moreover, the training set selection required to build the atlas is a difficult issue, and most of the methods in the CAUSE07 Challenge [9] select different training sets manually to segment the different groups of test data. In practice, this makes these methods semi-automatic. In [17], the influence of atlas selection is analyzed by comparing the segmentation of tissue from brain MRI of young children using different atlases. In this case, a standard expectation-maximization algorithm with registration-based segmentation was used [22]. In [23], the authors incorporate structure-specific models using Markov random fields, and [24] improves the results produced by [23] using diffeomorphic warps.
Atlas-based algorithms were first based on a single mean atlas and progressively evolved into multi-atlas strategies, where decision fusion strategies are involved [10, 25, 26] together with label propagation. Classifier fusion, based on the majority vote rule, has been shown to be accurate for segmenting brain structures. This strategy can become more robust and increasingly accurate as the number of classifiers grows. However, it suffers from problems of scale when the number of atlases is large. [26] compares different classifier selection strategies, which are applied to a group of 275 subjects with manually labeled brain MRI. An adaptive multi-atlas segmentation method (AMAS) is presented in [10]. AMAS includes an automated decision to select the most appropriate atlases for a target image and a stopping criterion for registering atlases when no further improvement is expected. This method achieved the best score in the CAUSE07 challenge.

b)
Supervised learning has been exploited in segmentation methods in different ways. In [27], the atlas-based segmentation method presented uses segmentation confidence maps, which are learned from a small manually segmented training set and incorporated into the cost term. This cost is responsible for weighting the influence of initial segmentations in the multi-structure registration. Moreover, multiple atlases are used both in a supervised atlas-correction step and in multiple atlas propagation. In [28], a two-stage method is presented, which benefits from the capabilities of mathematical feature extractors and artificial neural networks. In the first stage, geometric moment invariants (GMIs) are applied at different scales to extract features that represent the shapes of the structures. Next, multidimensional feature vectors are constructed that contain the GMIs along with image intensity values, probability atlas values (PAVs), and voxel coordinates. These feature vectors are used to estimate signed distance maps (SDMs) of the desired structures. To this end, multilayer perceptron neural networks (MLPNNs) are designed to approximate the SDMs of the target structures. In the second stage, the estimated SDM of each structure is used to design another MLPNN to classify the image voxels into two classes: inside and outside the structure.

c)
Shape and appearance models involve establishing correspondence across a training set and learning the statistics of shape and intensity variation using PCA models. To segment an image under study, the model parameters that best approximate the structures have to be computed. [29] applies an active appearance model (AAM)-based method to segment the caudate nucleus. A "composite" 3D profile AAM is constructed from the surfaces of several subcortical structures using a training set, and individual AAMs of the left and right caudate are constructed from a different training set. Segmentation starts with affine registration to initialize the composite model within the image. Then, a search is performed using the composite model. This provides a reliable but coarse segmentation, used to initialize a search with the individual caudate models. [30] uses a statistical shape model with elastic deformations to segment the hippocampus, thalamus, putamen, and pallidum. In [31], a comparison of four different strategies for brain subcortical structure segmentation is presented: two of them are atlas-based strategies ([26] and [17]) and the other two are based on statistical models of shape and appearance ([29] and [32]). The best results are achieved by the multi-atlas classifier fusion and labeling approach [26], which treats atlases as classifiers and combines them using a majority voting rule.

d)
With reference to energy-minimization methods, [33] uses a deformable mesh followed by a normalized cuts criterion to segment the caudate and the putamen from PET images. [34] proposes a multiphase level set framework for image segmentation using the Mumford-Shah model, as a generalization of an active contour model. In [35], a method is presented for the segmentation of anatomical structures, which incorporates prior information about the intensity and curvature profile of the structure from a training set of images and boundaries. In [36], the GC strategy is adapted for segmenting anatomical brain regions of interest in diffusion tensor MRI (DT-MRI). An open-source application called ITK-SNAP was developed [37] for level set segmentation.
Finally, there exist some libraries, such as Freesurfer [38], Slicer [39], and SPM [21], which have been developed to address the MRI segmentation problem. However, all of them are limited to atlas-based algorithms, which lack robustness when dealing with different types of subjects. Hence, constructing a hybrid approach that combines atlas-based and energy-based strategies is a natural extension of state-of-the-art algorithms. The combination presented in this paper exploits atlas structure information and an adaptive ad hoc energy model. Moreover, the proposed energy model also takes advantage of supervised learning techniques.
3 CaudateCut
In this section, we review the GC framework and describe the novel CaudateCut segmentation algorithm. Table 1 summarizes the terminology used in the next sections.
3.1 Graph Cut Framework
In this section, we introduce the GC framework used in the CaudateCut segmentation algorithm. Let us define $\mathcal{X}=({\mathsf{x}}_{1},\ldots,{\mathsf{x}}_{p},\ldots,{\mathsf{x}}_{|\mathcal{P}|})$ as the set of pixels for a given grayscale image $I$; $\mathcal{P}=(1,\ldots,p,\ldots,|\mathcal{P}|)$ as the set of indexes for $I$; $\mathcal{N}$ as the set of unordered pairs $\{p, q\}$ of neighboring pixels of $\mathcal{P}$ under a 4- (or 8-) neighborhood system; and $L=({L}_{1},\ldots,{L}_{p},\ldots,{L}_{|\mathcal{P}|})$ as a binary vector whose components $L_p$ specify assignments to pixels $p\in \mathcal{P}$. Each $L_p$ can be either "foreground" or "background", or equivalently "cau" or "back" for our problem (abbreviations for caudate and background), indicating whether pixel $p$ belongs to the caudate or the background, respectively. Thus, the array $L$ defines a segmentation of image $I$. The GC formulation defines the cost function $E(L)$, which describes soft constraints imposed on boundary and region properties of $L$:
$E(L) = \delta\, U(L) + B(L), \quad (1)$
where $U(L)$ is the unary term (or region properties term),
$U(L) = \sum_{p\in\mathcal{P}} U_p(L_p),$
and $B(L)$ is the boundary property term,
$B(L) = \sum_{\{p,q\}\in\mathcal{N}} B_{\{p,q\}}\, \mathbb{1}[L_p \neq L_q],$
where $\mathbb{1}[L_p \neq L_q]$ equals $1$ if $L_p \neq L_q$ and $0$ otherwise.
The coefficient $\delta \in \mathbb{R}^{+}$ in Eq. (1) specifies the relative importance of the unary term $U(L)$ compared to the boundary term $B(L)$. The unary term $U(L)$ assumes that the individual penalties for assigning pixel $p$ to "cau" and "back", correspondingly $U_p(\text{"cau"})$ and $U_p(\text{"back"})$, are given. The term $B(L)$ comprises the boundary properties of segmentation $L$. Coefficients $B_{\{p,q\}} \geq 0$ should be interpreted as a penalty for a discontinuity between $p$ and $q$.
The GC method imposes hard constraints on the segmentation results by means of the definition of seed points whose labels are predefined and cannot be modified. The subsets $\mathcal{C}\subset \mathcal{P}$ and $\mathcal{B}\subset \mathcal{P}$, with $\mathcal{C}\cap \mathcal{B}=\varnothing$, denote the subsets of caudate and background seeds, respectively. The goal of GC is to compute the global minimum of Eq. (1) over all segmentations $L$ satisfying the hard constraints $\forall p\in \mathcal{C}$, $L_p =$ "cau", and $\forall p\in \mathcal{B}$, $L_p =$ "back".
Let us describe the details of the graph created to segment an MRI image. A graph $\mathcal{G}=\langle\mathcal{V},\mathcal{E}\rangle$ is created with nodes $\mathcal{V}$ corresponding to pixels $p\in \mathcal{P}$ of the image, plus two additional nodes: the caudate terminal (a source $S$) and the background terminal (a sink $T$); therefore, $\mathcal{V}=\mathcal{P}\cup \left\{S,T\right\}$. The set of edges $\mathcal{E}$ consists of two types of undirected edges: n-links (neighborhood links) and t-links (terminal links). Each pixel $p$ has two t-links $\{p, S\}$ and $\{p, T\}$ connecting it to each terminal. Each pair of neighboring pixels $\{p, q\}$ in $\mathcal{N}$ is connected by an n-link. Without introducing any ambiguity, an n-link connecting a pair of neighbors $p$ and $q$ will be denoted by $\{p, q\}$, giving $\mathcal{E}=\mathcal{N}\cup{\bigcup}_{p\in \mathcal{P}}\left\{\left\{p,S\right\},\left\{p,T\right\}\right\}$. The final segmentation is then computed over the defined graph using the min-cut algorithm to minimize $E(L)$.
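To illustrate how the min-cut over such a graph assigns labels, the sketch below builds a tiny graph of four pixels with hand-picked t-link and n-link capacities (all values are illustrative), and uses a simple Edmonds-Karp max-flow as a stand-in for the faster min-cut solvers used in practice:

```python
from collections import deque

def max_flow_min_cut(capacity, s, t):
    """Edmonds-Karp max-flow; returns the set of nodes on the source side of the min cut."""
    n = len(capacity)
    flow = [[0] * n for _ in range(n)]

    def bfs():
        # breadth-first search in the residual graph, recording parents
        parent = [-1] * n
        parent[s] = s
        queue = deque([s])
        while queue:
            u = queue.popleft()
            for v in range(n):
                if parent[v] == -1 and capacity[u][v] - flow[u][v] > 0:
                    parent[v] = u
                    queue.append(v)
        return parent

    while True:
        parent = bfs()
        if parent[t] == -1:
            break  # no augmenting path left: flow is maximal
        path, v = [], t
        while v != s:
            path.append((parent[v], v))
            v = parent[v]
        bottleneck = min(capacity[u][v] - flow[u][v] for u, v in path)
        for u, v in path:
            flow[u][v] += bottleneck
            flow[v][u] -= bottleneck
    # nodes still reachable from s in the residual graph form the source side of the cut
    parent = bfs()
    return {v for v in range(n) if parent[v] != -1}

# Toy graph: 4 pixels in a row; node 4 is the caudate terminal S, node 5 the background terminal T.
# Pixels 0,1 are "dark" (background-like), pixels 2,3 are "bright" (caudate-like).
S, T = 4, 5
cap = [[0] * 6 for _ in range(6)]
for p, (c_s, c_t) in {0: (1, 9), 1: (1, 9), 2: (9, 1), 3: (9, 1)}.items():
    cap[S][p] = c_s   # t-link to the caudate terminal
    cap[p][T] = c_t   # t-link to the background terminal
for p, q, w in [(0, 1, 5), (1, 2, 1), (2, 3, 5)]:
    cap[p][q] = cap[q][p] = w  # n-links: large when neighboring intensities are similar

source_side = max_flow_min_cut(cap, S, T)
caudate_pixels = sorted(v for v in source_side if v < 4)
print(caudate_pixels)  # pixels labeled "cau" by the cut -> [2, 3]
```

The cheapest cut severs the weak t-links plus the weak n-link between pixels 1 and 2, so the two bright pixels stay attached to the caudate terminal.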
3.2 CaudateCut Segmentation Algorithm
In this section, we describe the steps in the automatic caudate segmentation algorithm in detail. The CaudateCut algorithm is summarized in Table 2.
3.2.1 Atlas-based Segmentation
In this work, the atlas-based segmentation of the caudate largely follows the strategy proposed by [18]. The main steps of the algorithm are illustrated in Figure 2 and described below:

1.
First, a nonuniformity image intensity correction is computed. Then, the corrected image is classified into WM, GM, and CSF.

2.
In the next step, the GM image is elastically registered from its original geometrical space to match a template image (which represents the expected distribution of gray matter in the subjects under study) in the so-called normalized space. The deformation field obtained is inverted to map the normalized space onto the original space.

3.
This inverted deformation is applied to the caudate segmentation in the normalized space, thus yielding a first segmentation of the caudate nucleus of the subject.

4.
Finally, in order to refine this first segmentation, the GM mask of the subject under study is combined with the mask obtained by unwarping the normalized caudate segmentation. They are combined as follows: the GM and caudate probability maps are multiplied and a threshold $T_p$ is imposed on the result: we consider that a voxel belongs to the caudate only where the product map is larger than $T_p$.
This atlas-based (AB) segmentation method depends strongly on the atlas definition. In some situations, this can result in a solution that does not fit the target structure well, and further refinement may be necessary. However, the segmentation obtained may be useful for roughly locating the region of interest, and thus it can be used to define the seeds for the GC application. Figure 3 (b) shows the result of AB segmentation for the input image in Figure 3 (a).
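The refinement in step 4 above reduces to an element-wise product and a threshold. A minimal numpy sketch, with random arrays standing in for the subject's GM probability map and the unwarped caudate probability map, and $T_p = 0.1$ as in the experimental section:

```python
import numpy as np

T_p = 0.1  # threshold value used in the experimental section
rng = np.random.default_rng(0)
gm_prob = rng.random((4, 4))       # stand-in GM probability map of the subject
caudate_prob = rng.random((4, 4))  # stand-in unwarped caudate probability map

# A voxel is kept as caudate only where the product of the two maps exceeds T_p
caudate_mask = (gm_prob * caudate_prob) > T_p
```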
3.2.2 Seed Initialization
GC is a semi-automatic interactive method, since the seeds are manually defined. In order to achieve a fully automatic method, we use the result of the atlas-based method to define an initial segmentation, taking advantage of the atlas caudate shape. We define caudate and background seeds by performing morphological operations on the region ${\mathcal{R}}_{0}$ obtained from the atlas-based mask. To define the caudate seeds $\mathcal{C}$, we compute $\mathcal{C}=\mathsf{Erode}_{k_e}({\mathcal{R}}_{0})$, where $\mathsf{Erode}_{k_e}$ denotes an erosion with a structural element of $k_e$ pixels. In the case of the background seeds, we dilate the region ${\mathcal{R}}_{0}$ and keep the complementary set, $\mathcal{B}=\mathcal{P}\backslash \mathsf{Dilate}_{k_d}({\mathcal{R}}_{0})$, where $\mathsf{Dilate}_{k_d}$ denotes a dilation with a structural element of $k_d$ pixels. In the example shown in Figure 3 (a), the selection of the $\mathcal{C}$ and $\mathcal{B}$ seeds is obtained from erosion and dilation of the AB segmentation shown in Figure 3 (b).
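The seed construction can be sketched with scipy's morphological operators; the mask ${\mathcal{R}}_{0}$ and the structuring-element sizes $k_e$, $k_d$ below are illustrative stand-ins (the paper leaves them as parameters):

```python
import numpy as np
from scipy.ndimage import binary_dilation, binary_erosion

k_e, k_d = 2, 3  # structuring-element sizes (illustrative values)

R0 = np.zeros((12, 12), dtype=bool)
R0[3:9, 3:9] = True  # stand-in for the atlas-based caudate mask

caudate_seeds = binary_erosion(R0, iterations=k_e)        # C = Erode_{k_e}(R0)
background_seeds = ~binary_dilation(R0, iterations=k_d)   # B = P \ Dilate_{k_d}(R0)
```

By construction, the caudate seeds lie strictly inside the atlas mask and never overlap the background seeds, which is what the GC hard constraints require.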
3.2.3 Unary Energy Term
In this section, we describe how to compute the unary energy term for the GC energy function. This energy term is divided into two parts: an unsupervised part and a supervised part. The unsupervised part is computed in an image-dependent way, based on the gray-level distribution of the seed pixels. The supervised part is computed from a support vector machine (SVM) classifier based on the contextual learning of caudate derivatives. Next, we describe both parts of the unary term in detail, as well as their final combination.
Unsupervised unary term
We define the unsupervised unary term using caudate and background models based on gray-level information pertaining to the seeds. We initialize the unary potentials at each pixel $p$ as the negative log-likelihoods
$U_p^{u}(\text{"cau"}) = -\ln P_u(L_p = \text{"cau"}), \qquad U_p^{u}(\text{"back"}) = -\ln P_u(L_p = \text{"back"}).$
The probability of a pixel $p$ being marked as "cau", $P_u(L_p = \text{"cau"})$, is computed using the histogram of gray levels of the caudate seeds. The probability of a pixel being marked as "back" is computed using the inverse probability, $P_u(L_p = \text{"back"}) = 1 - P_u(L_p = \text{"cau"})$, since background seeds contain GM, WM and CSF, and it is difficult to extract a model directly from them. Figure 3 (c) shows the unsupervised probability values $P_u(L_p = \text{"cau"})$ for the image in Figure 3 (a).
The unsupervised unary term estimates image-dependent caudate pixel probabilities based on the caudate seeds. However, given the noisy information in MRI images and the small number of caudate seed pixels, good generalization from this term alone is not always guaranteed. In this context, we propose combining the unsupervised energy with a supervised one, which is based on learning contextual caudate derivatives from Ground Truth (GT) data.
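The unsupervised probabilities amount to a per-pixel histogram look-up; a minimal sketch, where the seed region below is a stand-in for the eroded atlas mask:

```python
import numpy as np

rng = np.random.default_rng(1)
image = rng.integers(0, 256, size=(8, 8))
seed_values = image[2:5, 2:5].ravel()  # gray levels of the caudate seed pixels (stand-in region)

# Normalized gray-level histogram of the caudate seeds
hist = np.bincount(seed_values, minlength=256).astype(float)
hist /= hist.sum()

p_cau = hist[image]    # P_u(L_p = "cau"): histogram look-up per pixel
p_back = 1.0 - p_cau   # P_u(L_p = "back") = 1 - P_u(L_p = "cau")
```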
Supervised unary term
In order to define the supervised unary term, we train a binary classifier using a set of MRI slices as a training set. In particular, we extract a pixel descriptor using a correlogram structure. The correlogram structure captures contextual intensity relations from circular bins around the pixel analyzed [40].
Given a pixel $p$, a correlogram $C_{c\times r}$ is defined, where $c$ and $r$ define the number of circles and the radius of the structure. Then, each bin $b$ from the set of $n$ bins, with $n = c \cdot r$, is defined as the area delimited by two consecutive circles of the given radius. Given the pixel $p$ and its correlogram structure ${C}_{c\times r}^{p}$, its supervised caudate descriptor is defined as
$D_p = (\partial_1, \ldots, \partial_k, \ldots, \partial_{n(n-1)/2}),$
where $\partial_k$ is the signed subtraction of gray-level information within a pair of bins in $C_{c\times r}$. In this sense, the descriptor contains the $n \cdot (n-1)/2$ gray-level derivatives of all pairs of bins within $C_{c\times r}$, which captures all spatial relations of gray-level intensities in the neighborhood of $p$. An example of a correlogram structure estimated for a caudate pixel is shown in Figure 3 (d).
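One possible reading of this descriptor is sketched below. The exact bin layout is not fully specified here, so this sketch uses concentric rings divided into angular sectors (radii and sector count are illustrative), and takes the mean gray level per bin before forming all pairwise signed differences:

```python
import numpy as np
from itertools import combinations

def correlogram_descriptor(image, p, radii=(2, 4, 6), n_sectors=4):
    """Pairwise signed differences of mean gray level between correlogram bins around pixel p.
    The ring/sector bin layout is an illustrative interpretation of the c x r structure."""
    py, px = p
    ys, xs = np.indices(image.shape)
    dist = np.hypot(ys - py, xs - px)
    angle = np.arctan2(ys - py, xs - px) % (2 * np.pi)
    bin_means, inner = [], 0.0
    for outer in radii:
        for k in range(n_sectors):
            lo, hi = 2 * np.pi * k / n_sectors, 2 * np.pi * (k + 1) / n_sectors
            mask = (dist >= inner) & (dist < outer) & (angle >= lo) & (angle < hi)
            bin_means.append(image[mask].mean() if mask.any() else 0.0)
        inner = outer
    # signed subtraction over all pairs of bins: n*(n-1)/2 gray-level derivatives
    return np.array([bin_means[i] - bin_means[j]
                     for i, j in combinations(range(len(bin_means)), 2)])

rng = np.random.default_rng(2)
img = rng.integers(0, 256, size=(16, 16)).astype(float)
descriptor = correlogram_descriptor(img, (8, 8))  # 12 bins -> 66 features
```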
We extract the descriptors for a subset of pixels in $\mathcal{C}$ and $\mathcal{B}$ from the training set data. Given the set of descriptors, a linear SVM classifier is trained in order to predict caudate confidence on image pixels from new test data. In our case, we use the output confidence of the classifier as a measure of the "probability" of a pixel belonging to the caudate. The supervised unary potentials at each pixel $p$ are then defined from the following probabilities.
The probability of a pixel being marked as "cau" is computed using the confidence of the SVM classifier over its correlogram descriptor, $P_s(L_p = \text{"cau"}) = \mathrm{SVM}(p)$. The probability of a pixel being marked as "back" is computed as the negative of the output margin of the classifier, $P_s(L_p = \text{"back"}) = -\mathrm{SVM}(p)$. Figure 3 (d) shows the supervised caudate probability values $P_s(L_p = \text{"cau"})$ for the image in Figure 3 (b).
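Assuming an already trained linear SVM with weight vector $w$ and bias $b$ (the values below are made up for illustration), the signed decision value gives the confidence used for both potentials:

```python
import numpy as np

# Hypothetical trained linear SVM on 3-D descriptors: f(x) = w . x + b
w = np.array([0.8, -0.3, 0.5])  # learned weights (made up for illustration)
b = -0.1                        # learned bias (made up)

def svm_confidence(x):
    """Signed distance to the separating hyperplane, used as caudate confidence."""
    return float(np.dot(w, x) + b)

x = np.array([1.0, 0.2, 0.4])   # correlogram descriptor of one pixel (stand-in)
p_cau = svm_confidence(x)       # P_s(L_p = "cau") = SVM(p)
p_back = -p_cau                 # P_s(L_p = "back") = -SVM(p)
```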
Combined unary term
The final unary term at pixel $p$ is defined as the addition of the unsupervised and supervised potentials defined above.
3.2.4 Boundary Energy Term
To define the boundary potentials, we use the first and second intensity derivatives of the image, in order to exploit both intensity and geometric information. Moreover, given the high variability in contrast between the caudate and the background in different parts of the images, we propose weighting the boundary term using an image-dependent multiscale edgeness measure.
Specifically, we define the boundary potentials as the following convex linear combination, weighted by the edgeness measure:
$B_{\{p,q\}} = J_p^{*}\left(\alpha N_{\{p,q\}} + (1-\alpha) O_{\{p,q\}}\right).$
First, we define $N_{\{p,q\}}$ and $O_{\{p,q\}}$ as:
$N_{\{p,q\}} = \exp\left(-\frac{(I_p - I_q)^2}{2\sigma^2}\right), \qquad O_{\{p,q\}} = \exp\left(-\frac{\theta_{\{p,q\}}^2}{2\beta^2}\right). \quad (2)$
The term $\theta_{\{p,q\}}$ denotes the angle between two unitary vectors codifying the directions of minimum gradient variation at pixels $p$ and $q$, based on the Hessian eigenvectors. In particular, we choose the direction of the eigenvector of the Hessian matrix with the smallest eigenvalue, which gives the direction of smallest variation at each pixel. The parameter $\alpha$ is set empirically by cross-validation, while $\sigma$ and $\beta$ are computed by adapting the image distribution to $I_p$ and $\theta_{\{p,q\}}$, respectively. Intuitively, the function $N_{\{p,q\}}$ penalizes discontinuities between pixels of similar intensities, and $O_{\{p,q\}}$ penalizes discontinuities between pixels of similar gradient variations.
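Assuming $N$ and $O$ take Gaussian forms of this kind, the boundary potential for a pixel pair might be computed as follows ($\alpha$, $\sigma$, $\beta$ values are illustrative, not those selected by cross-validation):

```python
import numpy as np

def boundary_potential(I_p, I_q, theta_pq, J_p, alpha=0.5, sigma=10.0, beta=0.3):
    """Convex combination of an intensity penalty N and an orientation penalty O,
    weighted by the edgeness measure J. Gaussian forms and parameter values are illustrative."""
    N = np.exp(-(I_p - I_q) ** 2 / (2 * sigma ** 2))   # high for similar intensities
    O = np.exp(-theta_pq ** 2 / (2 * beta ** 2))       # high for similar gradient directions
    return J_p * (alpha * N + (1 - alpha) * O)

similar = boundary_potential(100.0, 101.0, theta_pq=0.0, J_p=1.0)
different = boundary_potential(100.0, 200.0, theta_pq=1.0, J_p=1.0)
```

Cutting between two similar, aligned pixels is penalized heavily, while cutting across a strong intensity and orientation discontinuity is cheap, which is exactly where the min-cut should place the caudate boundary.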
The differential operators involved in the previous definition (Eq. 2) are well-posed concepts of linear scale-space theory, defined as convolutions with derivatives of Gaussians: $\frac{\partial}{\partial x}I(x,s)={s}^{\ell}I(x)*\frac{\partial}{\partial x}G(x,s)$, where $G$ is the 2-dimensional Gaussian function and $\ell$ is the Lindeberg parameter.
The selection of the Gaussian scale parameter is crucial for obtaining a satisfactory result. For a given pixel $p$, we consider an $s \times s$ neighborhood $R_p(s)$ and measure its entropy value:
$H(R_p(s)) = -\sum_{i=1}^{r} P(i \mid R_p(s)) \ln P(i \mid R_p(s)),$
where $P(i \mid R_p(s))$ is the probability of taking the value $i$ in the local region $R_p(s)$, with $r$ being the number of possible discrete values. The scale chosen is defined by the maxima of the function $H$ in the space of scales ${\mathcal{S}}_{p}=\left\{s : \partial H\left({R}_{p}\left(s\right)\right)/\partial s=0,\ {\partial}^{2}H\left({R}_{p}\left(s\right)\right)/\partial {s}^{2}<0\right\}$.
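The entropy-based scale selection can be sketched by evaluating $H$ over a discrete set of scales and keeping the maximizer (the image and scale set below are stand-ins):

```python
import numpy as np

def local_entropy(image, p, s):
    """Shannon entropy of the gray levels in the s x s neighborhood R_p(s)."""
    py, px = p
    h = s // 2
    region = image[max(py - h, 0):py + h + 1, max(px - h, 0):px + h + 1]
    counts = np.bincount(region.ravel(), minlength=256).astype(float)
    probs = counts[counts > 0] / counts.sum()
    return float(-(probs * np.log(probs)).sum())

rng = np.random.default_rng(3)
image = rng.integers(0, 256, size=(32, 32))
scales = [3, 5, 7, 9, 11]
H = [local_entropy(image, (16, 16), s) for s in scales]
best_scale = scales[int(np.argmax(H))]  # maximizer of H over the discrete scale space
```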
Second, we define the $J$ term as the multiscale edgeness measure at each pixel: $J=\left({J}_{1}^{*},\ldots,{J}_{p}^{*},\ldots,{J}_{|\mathcal{P}|}^{*}\right)$. In order to compute ${J}_{p}^{*}$, we first run the Canny edge detector on the observed image at different threshold levels. Then, we compute the edge probability at each pixel by linearly averaging the binary edge maps over the thresholds at the different scales:
$J_p^{*} = \frac{1}{KM}\sum_{j=1}^{M}\sum_{k=1}^{K} J_{p,\gamma_k,s_j},$
where $J_{p,\gamma_k,s_j}$ is the binary edge map using threshold $\gamma_k$ and scale $s_j$ for pixel $p$, and $K$ and $M$ are the numbers of thresholds and scales considered. If pixel $p$ is labeled as an edge pixel for most of the threshold levels at a significant scale, it has a high probability of being an edge pixel. In order to decrease the smoothing effect in the regions near a boundary, we convolve the probability map with a Gaussian kernel. Figure 3 (e) shows the boundary potential values $B(L)$ for the image in Figure 3 (a). Intuitively, the term $J$ adaptively changes the influence of the boundary term for the pixels in the image, since boundary regions should be less regularized than the rest of the image.
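A rough sketch of this averaging follows; a simple normalized gradient-magnitude detector stands in for the Canny edge maps, and the thresholds and scales are illustrative:

```python
import numpy as np

def multiscale_edgeness(image, thresholds=(0.1, 0.2, 0.3), scales=(1, 2)):
    """Average of binary edge maps over thresholds and scales. A normalized
    gradient-magnitude detector stands in for the Canny maps J_{p, gamma_k, s_j}."""
    edge_maps = []
    height, width = image.shape
    for s in scales:
        # crude smoothing at scale s with a (2s+1) x (2s+1) box filter
        k = 2 * s + 1
        padded = np.pad(image.astype(float), s, mode="edge")
        smooth = sum(padded[dy:dy + height, dx:dx + width]
                     for dy in range(k) for dx in range(k)) / k ** 2
        gy, gx = np.gradient(smooth)
        mag = np.hypot(gy, gx)
        if mag.max() > 0:
            mag = mag / mag.max()
        for gamma in thresholds:
            edge_maps.append((mag > gamma).astype(float))
    return sum(edge_maps) / len(edge_maps)  # J*_p: edge probability per pixel

step = np.zeros((8, 8))
step[:, 4:] = 255.0  # a vertical step edge between columns 3 and 4
J = multiscale_edgeness(step)
```

On the step image, pixels near the vertical edge survive most thresholds and scales and so receive a high edgeness value, while flat regions stay near zero.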
Finally, by applying the min-cut algorithm to the defined energy function and image graph, we obtain the final caudate segmentation. Figure 3 (f) shows the segmentation resulting from applying the CaudateCut algorithm.
4 Experimental Section
Before presenting our results, we first describe the material and methods of comparison, and also the validation protocol for the experiments.
4.1 Material
We considered two different databases, referred to as the URNC database and the IBSR database, in order to validate the proposed CaudateCut method.

URNC database. This is a new database, which includes 39 children (35 boys and 4 girls) with ADHD according to DSM-IV, referred from the Unit of Child Psychiatry at the Vall d'Hebron Hospital in Barcelona, Spain, and coordinated by the Unit of Research in Cognitive Neuroscience (URNC) at the IMIM Foundation, together with 39 control subjects (27 boys and 12 girls) recruited from the community. The mean age of the groups was 10.8 (S.D.: 2.9) and 11.7 (S.D.: 3), respectively. The groups were matched for handedness and IQ. A 1.5T system was used to acquire the brain MRI scans. The resolution of the scans is 256 × 256 × 60 pixels, with 2 mm thick slices. Expert segmentations of the 79 individual caudate nuclei were obtained. MRIcro software^{1} was used for volume labeling and manipulation.

IBSR database. This dataset is part of a public database released by the CAUSE07 Challenge [9]. It comprises 18 T1-weighted MRI scans from the Internet Brain Segmentation Repository (IBSR), together with expert segmentations of the caudate structure. The MRI scans have a slice thickness of 1.5 mm. Originally, the data size was 256 × 128 × 256 pixels, but in order to prepare the data for the later application of the CaudateCut algorithm, we reoriented it by an X-axis rotation and converted it into 256 × 256 × 128 pixels. For more details of the acquisition, visit the CAUSE07 Challenge website^{2}, from where the data was downloaded.
Figure 4 displays a sample control (a) and ADHD (b) MRI from the URNC database and a sample MRI from the IBSR database, (c). As can be appreciated, the quality of the ADHD image is worse than that of the control image, probably due to the movement of the children during image acquisition. Anisotropic filtering [41] was performed on all the slices before CaudateCut was applied.
4.2 Methods
We compared the CaudateCut method with two state-of-the-art methods: a classical atlas-based (AB) method and an adaptive multi-atlas segmentation (AMAS) method. We also compared the results with the inter-observer (IO) variability of the expert GT.
AB method
We implemented atlas-based segmentation of the caudate following the strategy presented in [18]. To this end, we used the SPM toolbox implementation of the unified nonlinear normalization and tissue segmentation. The parameters of the method were set to the SPM8 defaults, except for the threshold T _{ p }, which was estimated using a subset of 5 control subjects from the URNC database and set to T _{ p }= 0.1. The method was implemented in Matlab 2008.
AMAS method
An adaptive multi-atlas segmentation method (AMAS) was implemented as presented in [10]. For the atlas selection strategy, we computed the absolute voxel-wise difference between the target image and the registered images from the atlases and ordered the atlases from smallest to largest difference. Then, the atlas information was propagated until the stopping criterion was reached. The stopping criterion was defined by the percentage of voxels that change their segmentation label after a new atlas propagation; this threshold was set to 0.05 for all the experiments. The rest of the parameters of the AMAS method were set as described in [10]. The method was implemented in Matlab 2008, and elastix version 3.9^{3} was used for volume registration, as suggested in [10].
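The atlas-selection and stopping logic described above can be sketched as follows. This is a simplified stand-in for AMAS: the atlases are assumed to be already registered to the target, fusion is a plain majority vote, and all names are illustrative:

```python
import numpy as np

def amas_segment(target, atlas_images, atlas_labels, stop_frac=0.05):
    """Simplified adaptive multi-atlas propagation: rank atlases by
    mean absolute intensity difference to the target, fuse their label
    maps by majority vote, and stop once fewer than `stop_frac` of the
    voxels change label after adding one more atlas."""
    diffs = [np.mean(np.abs(target - a)) for a in atlas_images]
    order = np.argsort(diffs)                  # best-matching atlases first
    votes = np.zeros(target.shape)
    seg = np.zeros(target.shape, dtype=bool)
    for n, idx in enumerate(order, start=1):
        votes += atlas_labels[idx]
        new_seg = votes / n > 0.5              # majority vote so far
        changed = np.mean(new_seg != seg)      # fraction of flipped voxels
        seg = new_seg
        if n > 1 and changed < stop_frac:
            break
    return seg, n
```

When the best-ranked atlases agree, the stopping criterion fires after very few propagations, which is what keeps AMAS from registering the whole atlas set for every target.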
CaudateCut method
The CaudateCut method was implemented in Matlab 2008 and the SPM toolbox. In all the experiments the parameters were set to k _{ e }= 4, k _{ d }= 10, c = 3, r = 5, α = 0.5, ${\mathcal{S}}_{p}=\left[1,1.5,...,6\right]$, ℓ = 0, γ _{ k }∈ [0.02,0.03,..., 0.3] and s _{ j }∈ [0.5,1,..., 5]. The parameters σ and β were estimated for each image, as explained above. The parameter δ was tuned by cross-validation and was set to 50 for the URNC dataset and 100 for the IBSR database. In order to train the SVM classifiers for computation of the supervised unary term, we performed a subsampling of pixels from each slice. In particular, we took all the pixels labeled as caudate in the GT, and the same number of background pixels. The background pixels were subsampled in a stratified way, trying to select pixels from all parts of the background.
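The balanced, stratified subsampling used to build the SVM training set can be sketched like this. Stratifying over horizontal bands is an assumption made for illustration; the paper only states that background pixels were drawn from all parts of the background:

```python
import numpy as np

def balanced_training_pixels(gt_slice, n_strata=4, rng=None):
    """Subsample training pixels for one slice: keep every caudate
    (foreground) pixel from the GT and draw the same number of
    background pixels, stratified over horizontal bands so the sample
    covers all parts of the background."""
    rng = np.random.default_rng(rng)
    fg = np.argwhere(gt_slice)                  # all caudate pixels
    bg = np.argwhere(~gt_slice)
    per_stratum = max(1, len(fg) // n_strata)
    bands = np.array_split(np.arange(gt_slice.shape[0]), n_strata)
    picked = []
    for band in bands:
        cand = bg[np.isin(bg[:, 0], band)]      # background rows in this band
        k = min(per_stratum, len(cand))
        if k > 0:
            picked.append(cand[rng.choice(len(cand), size=k, replace=False)])
    bg_sample = np.vstack(picked)[: len(fg)]    # equal class sizes
    return fg, bg_sample
```

The two returned pixel lists have (at most) equal sizes, so the SVM sees a balanced two-class problem instead of the heavily background-dominated raw slice.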
Manual method
Experts used MRIcro [42] to manually delineate the caudate boundaries slice by slice. See [1] for more details of the procedure.
4.3 Validation
The quality of a segmentation can be evaluated in many different ways, and plausible evaluation criteria depend on the purpose of the segmentation procedure. In order to be sufficiently general, we evaluated several volumetric measures, as well as voxel-by-voxel comparison measures. We focused on the six metrics detailed below, as proposed in [9]. In all of them, R corresponds to the estimated segmentation, G to the GT segmentation, and |·| denotes the cardinality of a set.

1.
Volumetric similarity index (or mean overlap), in percent:
$$\mathrm{SI}=\frac{2\,|R\cap G|}{|R|+|G|}\cdot 100.$$
2.
Volumetric union overlap, in percent:
$$\mathrm{VO}=\frac{|R\cap G|}{|R\cup G|}\cdot 100.$$
3.
Relative absolute volume difference, in percent:
$$\mathrm{VD}=\frac{|\mathrm{VOL}_{R}-\mathrm{VOL}_{G}|}{\mathrm{VOL}_{G}}\cdot 100,$$
where VOL_{R} and VOL_{G} correspond to the total volume of the R and G segmentations, respectively.

4.
Average symmetric surface distance, in millimeters:
$$\mathrm{AD}=\frac{\sum_{i=1}^{N} d\left(B_{S_i}, B_R\right)^2 + \sum_{i=1}^{M} d\left(B_{R_i}, B_S\right)^2}{|B_S| + |B_R|},$$
where B _{ S }and B _{ R }correspond to the sets of border voxels of the estimated segmentation R and of the GT segmentation G, respectively; N = |B _{ S }| and M = |B _{ R }|; and d(v, B) returns the minimum Euclidean distance from a voxel v to the set of voxels B.

5.
Root Mean Square (RMS) symmetric surface distance, in millimeters:
$$\mathrm{RMSD}=\sqrt{\mathrm{AD}}.$$
6.
Maximum symmetric surface distance, in millimeters:
$$\mathrm{MD}=\max_{i,j}\left( d\left(B_{S_i}, B_R\right),\; d\left(B_{R_j}, B_S\right)\right).$$
Note that for the volumetric measures VO and SI, 100 corresponds to a perfect segmentation and 0 is the lowest possible value, obtained when there is no overlap at all between the estimated segmentation and the GT. In the case of VD, the perfect value is 0, which can also be obtained for a non-perfect segmentation, as long as the volume of that segmentation equals the volume of the reference. For the voxel comparison measures AD, RMSD and MD, the perfect value is 0.
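The three volumetric measures can be computed directly from boolean masks, as in this sketch (the surface-distance measures additionally require extracting border voxels, which is omitted here):

```python
import numpy as np

def volumetric_metrics(R, G):
    """SI (mean overlap), VO (union overlap) and VD (relative absolute
    volume difference), all in percent, for boolean volumes R and G."""
    R, G = np.asarray(R, dtype=bool), np.asarray(G, dtype=bool)
    inter = np.logical_and(R, G).sum()
    union = np.logical_or(R, G).sum()
    si = 2.0 * inter / (R.sum() + G.sum()) * 100    # mean overlap
    vo = inter / union * 100                        # union overlap
    vd = abs(R.sum() - G.sum()) / G.sum() * 100     # rel. volume difference
    return si, vo, vd
```

Note how VD can be 0 for two equal-volume but shifted masks, exactly the caveat mentioned above.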
In order to validate the AMAS and CaudateCut methods (the SVM classifiers for supervised unary term computation), we followed a leave-one-out strategy. Finally, Student's paired t-test [43] was used to evaluate the statistical significance of the differences between pairs of segmentation algorithms on a particular dataset (threshold of p < 0.05). The null hypothesis, H _{0}, is that the two groups of results belong to the same distribution. Matlab 2008 was used to perform this test.
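A minimal sketch of the paired test on per-subject scores, with synthetic numbers standing in for the real per-subject VO values used in the paper:

```python
import numpy as np
from scipy import stats

# Paired t-test on per-subject overlap (VO) scores of two methods.
# The values are synthetic: one method is constructed to be clearly
# worse, so the test rejects H0 at the paper's p < 0.05 threshold.
rng = np.random.default_rng(0)
vo_caudatecut = 68.0 + rng.normal(0.0, 3.0, size=18)
vo_other = vo_caudatecut - 5.0 + rng.normal(0.0, 1.0, size=18)
t_stat, p_value = stats.ttest_rel(vo_caudatecut, vo_other)
reject_h0 = p_value < 0.05   # True -> significant difference between methods
```

A paired (rather than independent) test is appropriate here because both methods are scored on the same subjects.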
4.4 Results and Discussion
We divide the results into two sections corresponding to two related experiments: segmentation evaluation and ADHD volumetric quantitative analysis.
4.4.1 Segmentation Evaluation
A) Quantitative segmentation results
We compared the performance of the CaudateCut, AMAS and AB methods. Table 3 shows the results obtained in the experiments on both the URNC and IBSR datasets. For all six validation measures, our proposed CaudateCut produced better results than both AB and AMAS on both databases. With regard to the volumetric measures, CaudateCut achieved good mean rates of 80.75% for SI and 68.02% for VO, and a mean VD of 16.22%. The voxel-by-voxel mean measures are also acceptable, with 0.0024 mm for AD, 0.0733 mm for RMSD, and 35.70 mm for MD. The large MD values are due to the recurrent errors present at the internal boundary of the caudate, between the caudate head and body, as clarified in the visual results below. For the IBSR database, the AMAS method obtained larger VO and SI values than the AB method, whereas, on the URNC database, the AB method improved on the result of the AMAS method. This could be due to the fact that the AB parameters were tuned on the URNC database. In this sense, CaudateCut was able to overcome this inconvenience and improve on the AMAS results on the IBSR database. It is important to note that CaudateCut showed robustness to variations in AB performance.
B) Qualitative segmentation results
Figure 5 shows qualitative CaudateCut results for the MRI slices of a control subject. In most of the slices, the CaudateCut segmentation result (red line) is highly comparable to the GT (green line). However, segmentation differences occur in the first and last caudate frames, where some voxels are classified as caudate by CaudateCut, but not by the GT (false positives). The inherent ambiguity of the caudate boundaries makes the expert's task of manually defining the caudate start and end slices arduous. This introduces variability and produces errors in the MRI atlas information corresponding to the end slices. It is difficult for CaudateCut to rectify this kind of error: the AB method introduces fake seeds at these positions and CaudateCut propagates these errors, since it cannot remove the seeds. In the second column of the second row, some voxels are not classified as caudate, although they should be, according to the GT (false negatives). This particular sample slice corresponds to the transition between the caudate head and body, where the caudate shape changes abruptly from the rounded head to the elongated body [5]. Due to the intra-subject variability of the caudate shape at this internal transition, atlas priors are less reliable and introduce errors. This inconvenience, together with the lack of a well-contrasted boundary defining the caudate body in the first slices, makes these mistakes difficult to rectify.
Figure 6 compares qualitative results of left caudate segmentation on URNC database MRI slices using the AMAS (second column), AB (third column) and CaudateCut (fourth column) methods. Note that the best segmentation results were obtained by the novel CaudateCut segmentation method, followed by AB and, finally, by the AMAS strategy. In general, CaudateCut improves on the AB segmentation and obtains a better fit to the caudate boundaries. Only in a few cases (examples in rows 2 and 3) does CaudateCut simply agree with the AB segmentation, the GC strategy making no changes to the final segmentation. It can be seen that the registration strategy applied in the AMAS method was unable to correctly fit the caudate boundaries. At several locations, the caudate boundaries are not clearly defined: in the first-row example, for instance, the lower boundary mostly consists of partial volume effect voxels, so the caudate was over-segmented by all the methods.
C) Statistical analysis
We performed four different statistical t-tests based on the VO measure: CaudateCut vs. AB and CaudateCut vs. AMAS, for each of the two databases (URNC and IBSR). Table 4 presents the results of the tests. In the table, the t-test result is true (accept H _{0}) or false (reject H _{0}), t is Student's t statistic, p represents the p-value, and CI is the confidence interval of the differences. The results of the four tests were favorable to CaudateCut, showing that the differences in the overlap measures between CaudateCut and both AB and AMAS were significant.
D) Difference analysis
Figure 7 shows the SI values obtained using CaudateCut as a function of the area of the caudate nucleus in each slice. As can be seen, the SI values are lower for smaller areas and tend to increase for larger caudate regions. This corroborates the claim that smaller structures are more difficult to segment automatically.
E) Computational differences
Concerning computational time, AMAS was the most costly method in terms of testing time, since multiple registrations had to be performed for each subject segmentation. On average, 2-3 registrations were performed for each volume segmentation, and each registration took 7 minutes on a standard high-end PC, giving around 17.5 minutes for the whole volume segmentation. The AB method was the fastest, taking around 5 minutes on average for the whole volume segmentation. CaudateCut involves applying the AB method and then the GC minimization process; the total time was around 6 minutes for the whole volume segmentation.
F) Interobserver variability
Finally, the inter-observer variability was computed from manual left caudate segmentations of the URNC database performed by two different experts. Table 5 shows the validation measures computed using these two GT segmentations. As mentioned in the introduction, obtaining an accurate manual segmentation is difficult even for experts, because of the low contrast and resolution of the caudate regions. Note that the measure values are comparable to those obtained with CaudateCut.
4.4.2 ADHD Volumetric Quantitative Analysis
The a priori hypothesis that developmental anomalies exist in the caudate nucleus of people with ADHD is generally accepted. Previous imaging studies have analyzed this hypothesis [4, 2, 3].
In this work, we analyzed right and left caudate volumetric differences between the ADHD and control subjects in the URNC database. To this end, we compared mean volume values using Student's t-test for independent samples (with a threshold of p < 0.05). The aim of this experiment was to show that the analysis performed using automatic CaudateCut segmentation is coherent with the results of manual analysis. To carry out the manual and automated statistical analyses, we considered the GT and CaudateCut segmentations, respectively. ROI measures in voxels were transformed into cubic millimeters, mm^{3} (the total number of ROI voxels multiplied by the voxel dimensions).
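The volume computation and group comparison can be sketched as follows. The voxel dimensions and the group volume lists below are hypothetical placeholders, not the paper's measurements:

```python
import numpy as np
from scipy import stats

def roi_volume_mm3(seg, voxel_dims_mm):
    """ROI volume in cubic millimeters: total number of segmented
    voxels multiplied by the voxel dimensions."""
    return float(np.count_nonzero(seg) * np.prod(voxel_dims_mm))

# Independent-samples t-test between the two groups' volume lists,
# with synthetic, hypothetical volumes standing in for the real ones:
rng = np.random.default_rng(0)
control_volumes = rng.normal(4200.0, 300.0, size=40)   # mm^3, hypothetical
adhd_volumes = rng.normal(3900.0, 300.0, size=39)
t_stat, p_value = stats.ttest_ind(control_volumes, adhd_volumes)
```

An independent-samples test (rather than the paired test of Section 4.3) is used here because the control and ADHD groups contain different subjects.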
Table 6 and Table 7 show the results of the manual and automatic analyses, respectively. Both tables contain the mean and standard deviation of the volume measures of the control and ADHD groups, for the right and left caudate separately. Moreover, the results of Student's t-tests are presented: the t-test column reports a true (accept H _{0}) or false (reject H _{0}) result, t is Student's t statistic, p represents the p-value, and CI is the confidence interval of the differences. As can be observed, the ADHD group has lower right and left mean caudate volumes than the control group in both the manual and the automatic analysis. Moreover, the results of the statistical tests were the same in the manual and automatic analyses: the volume measure was found to be statistically different between the groups for the right caudate, but not for the left. Comparing volume values, it can be seen that the automatic CaudateCut segmentation method under-segments the caudate nucleus compared with the manual delineation. However, these discrepancies in the segmentations do not prevent coherent results between the two methods in the statistical analysis of the groups considered.
Finally, we qualitatively compared the manual and CaudateCut automatic analyses. Figure 8 shows both control and ADHD caudate volume distributions using the GT segmentation (a, b) and the CaudateCut segmentation (c, d). The first-column plots (a, c) correspond to right caudate volume measures and the second-column plots (b, d) to left caudate volume measures. The histograms of caudate volume for the ADHD and control groups are depicted in dashed black and solid red lines, respectively. Two Gaussian functions were fitted to the histograms. It can be appreciated that the differences between the ADHD and control distributions are larger for the right caudate in both the manual and the automatic analysis. The immediate conclusion is that CaudateCut generates results that are comparable to gold-standard analyses in differentiating neuroanatomical abnormalities between healthy controls and individuals with ADHD.
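The Gaussian fitting used for the distribution plots can be sketched like this; the volume samples are synthetic placeholders for the real group measurements:

```python
import numpy as np
from scipy import stats

# Fit a Gaussian to each group's caudate-volume sample, as done for
# the histogram overlays in Figure 8 (synthetic, hypothetical values).
rng = np.random.default_rng(1)
control_vols = rng.normal(4200.0, 400.0, size=40)
adhd_vols = rng.normal(3800.0, 400.0, size=39)
mu_c, sd_c = stats.norm.fit(control_vols)   # maximum-likelihood mean / std
mu_a, sd_a = stats.norm.fit(adhd_vols)
```

The fitted means and standard deviations give a compact summary of how far apart the two group distributions lie, which is what the visual comparison of the fitted curves conveys.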
5 Conclusion
In this work, we present a new method, CaudateCut, for caudate nucleus segmentation in brain MRI. CaudateCut combines the power of an atlas-based strategy with the adaptiveness of the defined energy function within the GC energy-minimization framework, in order to segment the small, low-contrast caudate structure. We define the new energy function with data potentials that use intensity and geometry information, while also making the most of the supervised learning of local brain structures. Boundary potentials are also redefined using a new multi-scale edgeness measure. CaudateCut has several advantages for neuroimaging researchers. First, it is fully automatic; second, the algorithm is reliable: the results are 100% reproducible in subsequent runs with the same data, avoiding the inaccuracies of intra-rater and inter-rater drift.
The method was tested on two different datasets. Although the method was tuned on the novel URNC database, it provided outstanding results on the IBSR dataset, showing the inherent robustness of the approach. Moreover, we obtained results comparable to manual volumetric analysis of children with ADHD based on automatic caudate nucleus volume measurements. Future lines of research include the use of multiple hypotheses for seed initialization, in order to increase robustness to possible errors in atlas application, and the incorporation of 3D information into the caudate segmentation. From the clinical point of view, new features based on caudate appearance could be added to analyze ADHD abnormalities automatically.
References
 1.
Carmona S, Vilarroya O, Bielsa A, Trèmols V, Soliva JC, Rovira M, Tomàs J, Raheb C, Gispert J, Batlle S, Bulbena A: Global and regional gray matter reductions in ADHD: A voxelbased morphometric study. Neuroscience Letters 2005, 389(2):88–93. 10.1016/j.neulet.2005.07.020
 2.
Filipek PA, SemrudClikeman M, Steingard RJ, Renshaw PF, Kennedy DN, Biederman J: Volumetric MRI analysis comparing subjects having attentiondeficit hyperactivity disorder with normal controls. Neurology 1997, 48(3):589–601.
 3.
Reiss A, Abrams M, Singer H, Ross J, Denckla M: Brain development, gender and IQ in children. A volumetric imaging study. Brain 1996, 119.
 4.
Tremols V, Bielsa A, Soliva JC, Raheb C, Carmona S, Tomas J, Gispert JD, Rovira M, Fauquet J, Tobeña A, Bulbena A, Vilarroya O: Differential abnormalities of the head and body of the caudate nucleus in attention deficithyperactivity disorder. Psychiatry Res 2008, 163(3):270–8. 10.1016/j.pscychresns.2007.04.017
 5.
Soliva JC, Fauquet J, Bielsa A, Rovira M, Carmona S, RamosQuiroga JA, Hilferty J, Bulbena A, Casas M, Vilarroya O: Quantitative MR analysis of caudate abnormalities in pediatric ADHD: Proposal for a diagnostic test. Psychiatry Research: Neuroimaging 2010, 182(3):238–243. 10.1016/j.pscychresns.2010.01.013
 6.
Xia Y, Bettinger K, Shen L, Reiss AL: Automatic Segmentation of the Caudate Nucleus From Human Brain MR Images. IEEE Transactions on Medical Imaging 2007, 26: 509–517.
 7.
Balafar M, Ramli A, Saripan M, Mashohor S: Review of brain MRI image segmentation methods. Artificial Intelligence Review 2010, 33: 261–274. 10.1007/s10462-010-9155-0
 8.
Duncan JS, Member S, Ayache N: Medical image analysis: progress over two decades and the challenges ahead. IEEE Transactions on Pattern Analysis and Machine Intelligence 2000, 22: 85–106. 10.1109/34.824822
 9.
Ginneken BV, Heimann T, Styner M: 3D segmentation in the clinic: A grand challenge. In: MICCAI Workshop on 3D Segmentation in the Clinic: A Grand Challenge 2007.
 10.
van Rikxoort E, Isgum I, Arzhaeva Y, Staring M, Klein S, Viergever M, Pluim J, van Ginneken B: Adaptive Local MultiAtlas Segmentation: Application to the Heart and the Caudate Nucleus. Medical Image Analysis 2010, 14: 39–49. 10.1016/j.media.2009.10.001
 11.
Kolmogorov V, Zabih R: What energy functions can be minimized via graph cuts. PAMI 2004, 26: 65–81.
 12.
Boykov Y, Funka-Lea G: Graph Cuts and Efficient N-D Image Segmentation. IJCV 2006, 70(2):109–131. 10.1007/s11263-006-7934-5
 13.
Boykov Y, Kolmogorov V: An Experimental Comparison of MinCut/MaxFlow Algorithms for Energy Minimization in Vision. IEEE Transactions on Pattern Analysis and Machine Intelligence 2001, 26: 359–374.
 14.
Candemir S, Akgul Y: Adaptive Regularization Parameter for Graph Cut Segmentation. 2010, 117–126.
 15.
Collins DL, Holmes CJ, Peters TM, Evans AC: Automatic 3D modelbased neuroanatomical segmentation. 1995.
 16.
Iosifescu DV, Shenton ME, Warfield SK, Kikinis R, Dengler J, Jolesz FA, Mccarley RW: An automated registration algorithm for measuring MRI subcortical brain structures. Neuroimage 1997, 6: 13–25. 10.1006/nimg.1997.0274
 17.
Murgasova M, Dyet L, Edwards D, Rutherford M, Hajnal J, Rueckert D: Segmentation of brain MRI in young children. Academic radiology 2007, 14(11):1350–1366. 10.1016/j.acra.2007.07.020
 18.
Collins DL, Zijdenbos AP, Baaré WFC, Evans AC: Animal+insect: Improved cortical structure segmentation. In IPMI. Springer; 1999:210–223.
 19.
Fischl B, Salat DH, van der Kouwe AJW, Makris N, Ségonne F, Quinn BT, Dale AM: Sequence-Independent Segmentation of Magnetic Resonance Images. Neuroimage 2004, 23(Supplement 1):S69–S84.
 20.
Ashburner J, Friston K: Unified segmentation. NeuroImage 2005, 26: 839–851. 10.1016/j.neuroimage.2005.02.018
 21.
Statistical Parametric Mapping (SPM) [http://www.fil.ion.ucl.ac.uk/spm/]
 22.
Van Leemput K, Maes F, Vandermeulen D, Suetens P: Automated modelbased tissue classification of MR images of the brain. Medical Imaging, IEEE Transactions on 1999, 18(10):897–908. 10.1109/42.811270
 23.
Fischl B, Salat DH, Busa E, Albert M, Dieterich M, Haselgrove C, van der Kouwe A, Killiany R, Kennedy D, Klaveness S, Montillo A, Makris N, Rosen B, Dale AM: Whole brain segmentation: automated labeling of neuroanatomical structures in the human brain. Neuron 2002, 33(3):341–355. 10.1016/S0896-6273(02)00569-X
 24.
Khan AR, Wang L, Beg MF: FreeSurferinitiated fullyautomated subcortical brain segmentation in MRI using Large Deformation Diffeomorphic Metric Mapping. NeuroImage 2008, 41(3):735–746. 10.1016/j.neuroimage.2008.03.024
 25.
Heckemann RA, Hajnal JV, Aljabar P, Rueckert D, Hammers A: Automatic anatomical brain MRI segmentation combining label propagation and decision fusion. Neuroimage 2006, 33: 115–126. 10.1016/j.neuroimage.2006.05.061
 26.
Aljabar P, Heckemann R, Hammers A, Hajnal J, Rueckert D: Classifier Selection Strategies for Label Fusion Using Large Atlas Databases. Neuroimage 2007, 523–531.
 27.
Khan AR, Chung MK, Beg MF: Robust AtlasBased Brain Segmentation Using Multistructure ConfidenceWeighted Registration. In Proceedings of the 12th International Conference on Medical Image Computing and ComputerAssisted Intervention: Part II. MICCAI '09, SpringerVerlag; 2009:549–557.
 28.
Jabarouti Moghaddam M, Soltanian Zadeh H: Automatic Segmentation of Brain Structures Using Geometric Moment Invariants and Artificial Neural Networks. In Information Processing in Medical Imaging, Volume 5636 of Lecture Notes in Computer Science. Edited by: Prince J, Pham D, Myers K. Springer Berlin/Heidelberg; 2009:326–337.
 29.
Babalola KO, Petrovic V, Cootes TF, Taylor CJ, Twining CJ, Mills A: Automatic Segmentation of the Caudate Nuclei using Active Appearance Models. 2007.
 30.
Kelemen A, Székely G, Gerig G: Elastic modelbased segmentation of 3D neuroradiological data sets. IEEE Trans Med Imaging 1999, 18(10):828–839. 10.1109/42.811260
 31.
Babalola KO, Patenaude B, Aljabar P, Schnabel J, Kennedy D, Crum W, Smith S, Cootes T, Jenkinson M, Rueckert D: An evaluation of four automatic methods of segmenting the subcortical structures in the brain. Neuroimage 2009, 47: 1435–1447. 10.1016/j.neuroimage.2009.05.029
 32.
Patenaude B, Smith SM, Kennedy DN, Jenkinson M: A Bayesian model of shape and appearance for subcortical brain segmentation. NeuroImage 2011, 56(3):907–922. 10.1016/j.neuroimage.2011.02.046
 33.
Tohka J, Wallius E, Hirvonen J, Hietala J, Ruotsalainen U: Automatic Extraction of Caudate and Putamen in [^{11} C] Raclopride PET Using Deformable Surface Models and Normalized Cuts. Nuclear Science, IEEE Transactions on 2006, 53: 220–227.
 34.
Vese LA, Chan TF, Tony , Chan F: A Multiphase Level Set Framework for Image Segmentation Using the Mumford and Shah Model. International Journal of Computer Vision 2002, 50: 271–293. 10.1023/A:1020874308076
 35.
Leventon ME, Grimson WEL, Faugeras O, Wells WM III: Level Set Based Segmentation with Intensity and Curvature Priors. Proceedings of the IEEE Workshop on Mathematical Methods in Biomedical Image Analysis (MMBIA '00) 2000, 4–12.
 36.
Weldeselassie YT, Hamarneh G: DTMRI Segmentation Using Graph Cuts. SPIE 2007.
 37.
Yushkevich PA, Piven J, Cody Hazlett H, Gimpel Smith R, Ho S, Gee JC, Gerig G: UserGuided 3D Active Contour Segmentation of Anatomical Structures: Significantly Improved Efficiency and Reliability. Neuroimage 2006, 31(3):1116–1128. 10.1016/j.neuroimage.2006.01.015
 38.
Freesurfer [http://surfer.nmr.mgh.harvard.edu/]
 39.
3D Slicer [http://www.slicer.org/]
 40.
Escalera S, Fornés A, Pujol O, Lladós J, Radeva P: Circular Blurred Shape Model for Multiclass Symbol Recognition. IEEE Transactions on Systems, Man, and Cybernetics 2010.
 41.
Weickert J: Anisotropic Diffusion in Image Processing. ECMI Series, Teubner; 1998.
 42.
MRIcro and MRIcron Medical Image Viewer Software [http://www.cabiatl.com/mricro/mricro/]
 43.
Dietterich TG: Approximate Statistical Tests for Comparing Supervised Classification Learning Algorithms. Neural Computation 1998, 10: 1895–1923. 10.1162/089976698300017197
Acknowledgements
This work was supported in part by the projects TIN2009-14404-C02, La Marató de TV3 082131, CONSOLIDER-INGENIO CSD 2007-00018, and MICINN SAF2009-10901.
Author information
Additional information
Competing interests
The authors declare that they have no competing interests.
Authors' contributions
LI led this research. She was involved in handling the medical images, the technical novelty of the proposal, its implementation and validation, as well as writing most of this paper. She also supervised and coordinated the team and the different parts of the project. JS was involved in the acquisition of the medical images, the definition of the ground truth, the validation of the method from a clinical point of view, and the writing of the proposal. AH collaborated in the GC part of the technical proposal and its implementation, as well as in the validation and implementation of the comparative method and the writing of the paper. SE collaborated in the GC part of the technical proposal and its implementation, as well as in the validation of the method from a technical point of view and the writing of the paper. XJ collaborated in the atlasbased part of the technical proposal and its implementation, as well as in the validation of the method from a clinical point of view. OV was involved in the acquisition of the medical images, the definition of the ground truth, the validation of the method from a clinical point of view, and the writing of the proposal. PR was involved in supervising the project together with LI, technical discussion of the contribution, validation of the method from both a technical and clinical point of view, and the writing of the proposal. All authors read and approved the final manuscript.
Rights and permissions
This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
Keywords
 Brain caudate nucleus
 segmentation
 MRI
 atlasbased strategy
 Graph Cut framework