 Research
 Open Access
A fully-automatic caudate nucleus segmentation of brain MRI: Application in volumetric analysis of pediatric attention-deficit/hyperactivity disorder
 Laura Igual^{1, 2},
 Joan Carles Soliva^{3, 4},
 Antonio Hernández-Vela^{1, 2},
 Sergio Escalera^{1, 2},
 Xavier Jiménez^{5},
 Oscar Vilarroya^{3, 4} and
 Petia Radeva^{1, 2}
https://doi.org/10.1186/1475-925X-10-105
© Igual et al; licensee BioMed Central Ltd. 2011
Received: 8 August 2011
Accepted: 5 December 2011
Published: 5 December 2011
Abstract
Background
Accurate automatic segmentation of the caudate nucleus in magnetic resonance images (MRI) of the brain is of great interest in the analysis of developmental disorders. Segmentation methods based on a single atlas or on multiple atlases have been shown to suitably localize caudate structure. However, the atlas prior information may not represent the structure of interest correctly. It may therefore be useful to introduce a more flexible technique for accurate segmentations.
Method
We present CaudateCut: a new fully-automatic method of segmenting the caudate nucleus in MRI. CaudateCut combines an atlas-based segmentation strategy with the Graph Cut energy-minimization framework. We adapt the Graph Cut model to make it suitable for segmenting small, low-contrast structures, such as the caudate nucleus, by defining new energy function data and boundary potentials. In particular, we exploit information concerning the intensity and geometry, and we add supervised energies based on contextual brain structures. Furthermore, we reinforce boundary detection using a new multiscale edgeness measure.
Results
We apply the novel CaudateCut method to the segmentation of the caudate nucleus to a new set of 39 pediatric attention-deficit/hyperactivity disorder (ADHD) patients and 40 control children, as well as to a public database of 18 subjects. We evaluate the quality of the segmentation using several volumetric and voxel-by-voxel measures. Our results show improved segmentation performance compared to state-of-the-art approaches, obtaining a mean overlap of 80.75%. Moreover, we present a quantitative volumetric analysis of caudate abnormalities in pediatric ADHD, the results of which show strong correlation with expert manual analysis.
Conclusion
CaudateCut generates segmentation results that are comparable to gold-standard segmentations and that are reliable for differentiating neuroanatomical abnormalities between healthy controls and pediatric ADHD patients.
1 Introduction
Studies of volumetric brain magnetic resonance imaging (MRI) show neuroanatomical abnormalities in pediatric attention-deficit/hyperactivity disorder (ADHD) [1–3]. ADHD is a developmental disorder characterized by inattentiveness, motor hyperactivity and impulsiveness, and it represents the most prevalent childhood psychiatric disorder. It is also estimated that half of the children with ADHD will still display the disorder in adulthood. As stated in several reviews and meta-analyses, diminished right caudate volume is one of the most replicated findings among ADHD samples in morphometric MRI studies [4]. As a result of these studies, in [5], the authors proposed a diagnostic test based on the ratio between right caudate volume and total bilateral caudate volume.
Semi-automatic methods for segmenting subcortical structures have been proposed, such as the method developed specifically for neuroanatomical segmentation [6], in which the user specifies two coordinates of the AC-PC line for the segmentation of the caudate. This method is a knowledge-driven two-step algorithm. In the first step, the lateral ventricles are extracted to help position a bounding box that contains the caudate nucleus. Region growing from gray matter seed points is performed inside the box to estimate an initial segmentation. A set of anatomical constraints, defined from prior knowledge, is then imposed on this first result. In the second step, the caudate boundaries are refined outside the bounding box by imposing new anatomical constraints. In [1], the authors use an SPM tool to segment and compute voxel-based morphometry measures. Significant effort has been put into automated segmentation of different structures in brain MRI (see reviews [7, 8]). A good example of these efforts can be found in the Caudate Segmentation Evaluation challenge (CAUSE07) [9]. In this competition, different algorithms designed to segment the caudate nucleus from brain MRI scans were compared. Among the methods adopted, atlas-based segmentation approaches stand out as a powerful generic technique for automatic delineation of structures in volumetric images. This approach uses data obtained from different subjects to construct an atlas, which acts as a common anatomy for the area imaged (the brain), and applies it to further segmentations. The results of the CAUSE07 competition show that multi-atlas segmentation methods can outperform schemes based on a single atlas. However, running multiple registrations on volumetric data is time-consuming, and it is difficult to determine the optimum number of atlases to consider [10].
Moreover, an important disadvantage of atlas-based methods is that the target object is not necessarily represented correctly by the atlas shapes. In this case, a more flexible and adaptive technique can be useful to ensure accurate segmentation results.
In this work, we combine the power of atlas-based segmentation with an adaptive energy-based scheme built on the Graph Cut (GC) framework to obtain a globally optimal segmentation of the caudate structure in MRI. GC theory has been used in many computer vision problems [11]. In particular, it has been successfully applied to binary image segmentation, yielding a solution that corresponds to the global minimum of an energy function [12, 13]. The quality of the solution depends on the suitability of the unary and boundary energy terms and on their reliable computation. The original GC definition is limited to image information, and can fail when the caudate structure in MRI is subtle and contrast is low. To overcome this problem, we add supervised contextual information of the caudate nucleus and reinforce boundary detection using a new multiscale edgeness measure.
Our method, CaudateCut, starts with an initialization step based on a standard atlas-based method, and defines a new GC energy function that is specially adapted to caudate nucleus segmentation. In particular, CaudateCut involves several stages. The first stage is devoted to defining the initial region of the caudate nucleus by taking advantage of a priori brain structure information. Later stages define the novel GC energy function appropriate for segmenting the caudate nucleus from brain MRI scans. More specifically, we propose a novel energy function that combines local and contextual image information by modeling foreground and background properties, as well as relations between neighboring pixels. In contrast to the classical GC model, where unary energy terms are based only on pixel intensity values, we also exploit previously learned shape relations. In particular, our unary term is defined as the weighted sum of two terms: one based on the intensity model, and the other on the confidence of the output of a binary classifier. The new supervised unary term uses a correlogram structure as a pixel descriptor in order to capture contextual intensity relations around the pixel analyzed. Moreover, for the boundary term, we propose considering information from the first and second intensity derivatives, and include a measure of edgeness based on a new multiscale version of the adaptive regularization parameter [14]. With this new term, we obtain a more accurate segmentation in the presence of boundary artifacts and improve the per-pixel influence of the boundary term.
We present results from two different datasets. The first consists of an MRI dataset of thirty-nine children/adolescents with ADHD (ages 6-18) and forty healthy control subjects matched for age, gender, and handedness. The second is a public dataset of 18 healthy controls from the Internet Brain Segmentation Repository provided by the Center for Morphometric Analysis at Massachusetts General Hospital. We show that our method, CaudateCut, improves segmentation performance with respect to a classical atlas-based approach and a recently proposed multi-atlas approach. Moreover, we provide a quantitative volumetric analysis of pediatric ADHD, and obtain specifications and results that are comparable to manual analysis based on caudate nucleus appearance.
The rest of the paper is organized as follows: Section 2 reviews the related work. Section 3 introduces the CaudateCut algorithm. Section 4 reports and discusses the results of experiments on caudate nucleus segmentation, as well as a quantitative ADHD volumetric analysis. Finally, Section 5 concludes the paper and outlines future lines of research.
2 Related work
 a) Atlas-based methods
Anatomical atlas-based methods rely on comparing the image under study with a precomputed anatomical atlas of the brain. After the comparison, atlas label propagation is performed to give an estimate of the segmentation in the subject being studied [15–17]. Thus, these methods directly use knowledge about the structure of the brain. [15] develops ANIMAL, a fully-automatic procedure for segmenting, in a predefined native space, any structure labeled in an anatomical atlas defined in a normalized space. The authors observed that, since the deformation field is band-limited, irregular structures could not be accurately segmented. In their next work, ANIMAL+INSECT [18], this problem was addressed by introducing post-processing that required tissue classification of the subject in order to refine the final segmentation of any labeled structure. Other authors have exploited the benefits of generative models with the aim of reaching optimal solutions. [19] and [20] combine tissue classification, bias correction, and nonlinear warping within the same framework. Version 8 of SPM [21] includes the unified approach of [20]. An important disadvantage of these methods is the computational cost of building an atlas from different subjects. Moreover, the training set selection required to build the atlas is a difficult issue, and most of the methods in the CAUSE07 challenge [9] select different training sets manually to segment the different groups of test data, which effectively makes these methods semi-automatic. In [17], the influence of atlas selection is analyzed by comparing the segmentation of tissue from brain MRI of young children using different atlases. In this case, a standard expectation-maximization algorithm with registration-based segmentation was used [22]. In [23], the authors incorporate structure-specific models using Markov random fields, and [24] improves the results of [23] using diffeomorphic warps.
Atlas-based algorithms were first based on a single mean atlas and progressively evolved into multi-atlas strategies, where decision fusion [10, 25, 26] is combined with label propagation. Classifier fusion, based on the majority vote rule, has been shown to be accurate for segmenting brain structures. This strategy can become more robust and increasingly accurate as the number of classifiers grows. However, it suffers from problems of scale when the number of atlases is large. [26] compares different classifier selection strategies, which are applied to a group of 275 subjects with manually labeled brain MRI. An adaptive multi-atlas segmentation method (AMAS) is presented in [10]. AMAS includes an automated decision to select the most appropriate atlases for a target image and a stopping criterion that halts atlas registration when no further improvement is expected. This method obtained the best score in the CAUSE07 challenge.
 b) Supervised learning methods
Supervised learning has been exploited in segmentation methods in different ways. In [27], the atlas-based segmentation method presented uses segmentation confidence maps, which are learned from a small manually-segmented training set and incorporated into the cost term. This cost is responsible for weighting the influence of initial segmentations in the multi-structure registration. Moreover, multiple atlases are used both in a supervised atlas-correction step and in multiple atlas propagation. In [28], a two-stage method is presented that benefits from the capabilities of mathematical feature extractors and artificial neural networks. In the first stage, geometric moment invariants (GMIs) are applied at different scales to extract features that represent the shapes of the structures. Next, multidimensional feature vectors are constructed that contain the GMIs along with image intensity values, probability atlas values (PAVs), and voxel coordinates. These feature vectors are used to estimate signed distance maps (SDMs) of the desired structures. To this end, multilayer perceptron neural networks (MLPNNs) are designed to approximate the SDMs of the target structures. In the second stage, the estimated SDM of each structure is used to design another MLPNN that classifies the image voxels into two classes: inside and outside the structure.
 c) Shape and appearance models
Shape and appearance models involve establishing correspondence across a training set and learning the statistics of shape and intensity variation using PCA models. To segment an image being studied, the model parameters that best approximate the structures have to be computed. [29] applies an active appearance model (AAM)-based method to segment the caudate nucleus. A "composite" 3D profile AAM is constructed from the surfaces of several subcortical structures using a training set, and individual AAMs of the left and right caudate are constructed from a different training set. Segmentation starts with an affine registration to initialize the composite model within the image. Then, a search is performed using the composite model. This provides a reliable but coarse segmentation, used to initialize a search with the individual caudate models. [30] uses a statistical shape model with elastic deformations to segment the hippocampus, thalamus, putamen, and pallidum. In [31], a comparison of four different strategies for brain subcortical structure segmentation is presented: two of them are atlas-based strategies ([26] and [17]) and the other two are based on statistical models of shape and appearance ([29] and [32]). The best results are achieved by the multi-atlas classifier fusion and labeling approach [26], which treats atlases as classifiers and combines them using a majority voting rule.
 d) Energy-minimization methods
With reference to energy-minimization methods, [33] uses a deformable mesh followed by a normalized cuts criterion to segment the caudate and the putamen from PET images. [34] proposes a multiphase level set framework for image segmentation using the Mumford-Shah model, as a generalization of an active contour model. In [35], a method is presented for the segmentation of anatomical structures that incorporates prior information about the intensity and curvature profile of the structure from a training set of images and boundaries. In [36], the GC strategy is adapted for segmenting anatomical brain regions of interest in diffusion tensor MRI (DT-MRI). An open-source application called ITK-SNAP was developed [37] for level set segmentation.
Finally, there exist some libraries, such as FreeSurfer [38], Slicer [39], and SPM [21], which have been developed to address the MRI segmentation problem. However, all of them are limited to atlas-based algorithms, which lack robustness when dealing with different types of subjects. Hence, constructing a hybrid approach that combines atlas-based and energy-based strategies is a natural extension of state-of-the-art algorithms. The combination presented in this paper exploits atlas structure information and an adaptive ad hoc energy model. Moreover, the proposed energy model also takes advantage of supervised learning techniques.
3 CaudateCut
Table of terms
Ω(L_p, L_q)  Pulse function.
δ  Trade-off coefficient between U and B.
∂_k  Subtraction of gray levels for a pair of bins in C_{c×r}.
$\frac{\partial}{\partial x},\ \frac{{\partial}^{2}}{\partial {x}^{2}}$  First and second derivatives w.r.t. x.
θ_{p, q}  Angle between minimum gradient variation vectors at pixels p and q.
α, σ and β  Weight parameters of the boundary term.
|·|  Cardinality of a set.
$\mathcal{C},\mathcal{B}$  Sets of caudate and background seeds.
C_{c×r}  Correlogram structure of c circles and radius r.
d_p  Correlogram descriptor for pixel p.
d(·,·)  Minimum Euclidean distance between two sets of voxels.
$\mathsf{\text{Dilat}}{\mathsf{\text{e}}}_{{k}_{d}}\left(\cdot\right)$  Dilation with a structuring element of k_d pixels.
E(·), U(·), B(·)  Cost function, unary term, and boundary term.
$\mathsf{\text{Erod}}{\mathsf{\text{e}}}_{{k}_{e}}\left(\cdot\right)$  Erosion with a structuring element of k_e pixels.
$\mathcal{G}=<\mathcal{V},\mathcal{E}>$  Graph with nodes $\mathcal{V}$ and edges $\mathcal{E}$.
G and ℓ  The 2-dimensional Gaussian function and the Lindeberg parameter.
H(·)  Entropy value.
I and I_p  Grayscale image and image intensity value at pixel p.
J_{p, γ, s}  Binary edge map of pixel p at scale s with edge-sensitivity threshold γ.
$L=\left({L}_{1},...,{L}_{p},...,{L}_{|\mathcal{P}|}\right)$  Binary vector of label assignments to pixels $p\in \mathcal{P}$.
$\mathcal{N}$  Set of unordered pairs {p, q} of neighboring pixels under a 4- (8-) neighborhood system.
N_{p, q}, O_{p, q}  Boundary terms based on first and second derivatives.
$\mathcal{P}=\left(1,...,p,...,|\mathcal{P}|\right)$  Set of indexes of I.
P_u(·), P_s(·)  Unsupervised and supervised probability functions.
P(·)  General frequency-based probability function.
{p, q}  n-link connecting a pair of neighbors p and q.
${\mathcal{R}}_{0}$  Region of interest (ROI).
R_p(s)  Neighborhood of size s × s.
S, T  Caudate and background terminal nodes.
SVM(·)  Support vector machine classifier function.
T_p  Atlas-based threshold over the probability map.
UU(·), SU(·)  Unsupervised and supervised unary terms.
$\mathcal{X}=\left({\mathsf{\text{x}}}_{1},...,{\mathsf{\text{x}}}_{p},...,{\mathsf{\text{x}}}_{|\mathcal{P}|}\right)$  Set of pixels of I.
3.1 Graph Cut Framework
The segmentation is obtained by minimizing an energy of the form E(L) = δ · U(L) + B(L) (Eq. (1)). The coefficient δ ∈ ℝ^{+} in Eq. (1) specifies the relative importance of the unary term U(L) compared to the boundary term B(L). The unary term U(L) assumes that the individual penalties for assigning pixel p to "cau" and "back", respectively U_p("cau") and U_p("back"), are given. The term B(L) comprises the boundary properties of segmentation L. The coefficients B_{p, q} ≥ 0 should be interpreted as a penalty for a discontinuity between p and q.
The GC method imposes hard constraints on the segmentation results by means of the definition of seed points where labels are predefined and cannot be modified. The subsets $\mathcal{C}\subset \mathcal{P},\mathcal{B}\subset \mathcal{P},\mathcal{C}\cap \mathcal{B}=\varnothing $ denote the subsets of caudate and background seeds, respectively. The goal of GC is to compute the global minimum of Eq. (1) from all segmentations L satisfying the hard constraints $\forall p\in \mathcal{C}$, L _{ p }= "cau", $\forall p\in \mathcal{B}$, L _{ p }= "back".
Let us describe the details of the graph created to segment an MRI image. A graph $\mathcal{G}=<\mathcal{V},\mathcal{E}>$ is created with nodes $\mathcal{V}$ corresponding to pixels $p\in \mathcal{P}$ of the image, plus two additional nodes: the caudate terminal (a source S) and the background terminal (a sink T); therefore, $\mathcal{V}=\mathcal{P}\cup \left\{S,T\right\}$. The set of edges $\mathcal{E}$ consists of two types of undirected edges: n-links (neighborhood links) and t-links (terminal links). Each pixel p has two t-links, {p, S} and {p, T}, connecting it to each terminal. Each pair of neighboring pixels {p, q} in $\mathcal{N}$ is connected by an n-link. Without introducing any ambiguity, an n-link connecting a pair of neighbors p and q is denoted by {p, q}, giving $\mathcal{E}=\mathcal{N}\cup {\bigcup }_{p\in \mathcal{P}}\left\{\left\{p,S\right\},\left\{p,T\right\}\right\}$. The final segmentation is then computed over the defined graph using the min-cut algorithm to minimize E(L).
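To make the construction concrete, the following sketch builds this graph with networkx and labels the source side of the minimum cut as caudate. It is a simplified illustration, not the paper's implementation: the unary inputs are treated as penalties (e.g. negative log-probabilities), seed hard constraints are omitted (they could be added as t-links of very large capacity), and the function name and 4-neighborhood layout are our own choices.

```python
import networkx as nx
import numpy as np

def graph_cut_segment(u_cau, u_back, boundary, delta=1.0):
    """Minimal GC construction: one node per pixel plus terminals S and T.
    u_cau / u_back are unary *penalties* (e.g. negative log-probabilities);
    boundary holds n-link capacities under a 4-neighborhood."""
    h, w = u_cau.shape
    G = nx.DiGraph()
    S, T = "S", "T"
    for p in range(h * w):
        i, j = divmod(p, w)
        # t-links: cutting {S,p} labels p "back" and costs u_back;
        # cutting {p,T} labels p "cau" and costs u_cau (scaled by delta)
        G.add_edge(S, p, capacity=float(delta * u_back[i, j]))
        G.add_edge(p, T, capacity=float(delta * u_cau[i, j]))
        # n-links to the right and bottom neighbors (both directions)
        for q in ([p + 1] if j + 1 < w else []) + ([p + w] if i + 1 < h else []):
            G.add_edge(p, q, capacity=float(boundary[i, j]))
            G.add_edge(q, p, capacity=float(boundary[i, j]))
    _, (source_side, _) = nx.minimum_cut(G, S, T)
    labels = np.zeros((h, w), dtype=bool)
    for p in source_side - {S}:
        labels[divmod(p, w)] = True  # source side of the cut = "cau"
    return labels
```

A production system would use an optimized max-flow library rather than networkx, but the graph layout is the same.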
3.2 CaudateCut Segmentation Algorithm
Automatic CaudateCut Segmentation Algorithm
1.  Initial segmentation using AB method. 

2.  Set background and caudate seeds by erosion and dilation of the AB mask. 
3.  Initialize unsupervised unary potentials UU _{ p }("cau") and UU _{ p }("back") based on local gray-level intensities. 
4.  Initialize supervised unary potentials SU _{ p }("cau") and SU _{ p }("back") based on SVM correlogram classifier. 
5.  Initialize unary term based on combined unary potentials. 
6.  Initialize boundary term B(L) based on first and second derivatives of intensities and multiscale edge map. 
7.  Estimate caudate segmentation using GC. 
3.2.1 Atlas-based Segmentation
 1.
First, a nonuniformity image intensity correction is computed. Then, the corrected image is classified into WM, GM, and CSF.
 2.
In the next step, the GM image is elastically registered from its original geometrical space to match a template image (which represents the expected distribution of gray matter in the subjects under study) in the so-called normalized space. The deformation field obtained is inverted to map the normalized space onto the original space.
 3.
This inverted deformation is applied to the caudate segmentation in the normalized space, thus yielding a first segmentation of the caudate nucleus of the subject.
 4.
Finally, in order to refine this first segmentation, the GM mask of the subject under study is combined with the mask obtained by unwarping the normalized caudate segmentation. They are combined as follows: the GM and caudate probability maps are multiplied, and a threshold T_p is imposed on the result: we consider that a voxel belongs to the caudate only where the product map is larger than T_p.
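Assuming the GM and caudate probability maps are already resampled into the same space, this refinement step reduces to an element-wise product and a threshold. A minimal sketch (the function name is hypothetical; T_p = 0.1 is the value estimated in the experimental section):

```python
import numpy as np

def refine_atlas_mask(gm_prob, caudate_prob, t_p=0.1):
    """Combine the subject's GM probability map with the unwarped caudate
    probability map: a voxel is kept only where the product of the two
    maps exceeds the atlas-based threshold T_p."""
    return gm_prob * caudate_prob > t_p
```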
3.2.2 Seed Initialization
GC is a semi-automatic, interactive method, since the seeds are manually defined. In order to achieve a fully automatic method, we use the result of the atlas-based method to define an initial segmentation, taking advantage of the atlas caudate shape. We define caudate and background seeds by performing morphological operations on the ROI ${\mathcal{R}}_{0}$ obtained from the atlas-based mask. To define the caudate seeds, $\mathcal{C}$, we compute $\mathcal{C}=\mathsf{\text{Erod}}{\mathsf{\text{e}}}_{{k}_{e}}\left({\mathcal{R}}_{0}\right)$, where $\mathsf{\text{Erod}}{\mathsf{\text{e}}}_{{k}_{e}}$ denotes an erosion with a structuring element of k_e pixels. In the case of background seeds, we dilate the region ${\mathcal{R}}_{0}$ and keep the complementary set, $\mathcal{B}=\mathcal{P}\backslash \mathsf{\text{Dilat}}{\mathsf{\text{e}}}_{{k}_{d}}\left({\mathcal{R}}_{0}\right)$, where $\mathsf{\text{Dilat}}{\mathsf{\text{e}}}_{{k}_{d}}$ denotes a dilation with a structuring element of k_d pixels. In the example shown in Figure 3 (a), the selection of $\mathcal{C}$ and $\mathcal{B}$ seeds is obtained by erosion and dilation of the AB segmentation shown in Figure 3 (b).
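The seed-initialization step can be sketched with standard morphological operators. Here scipy.ndimage's default cross-shaped structuring element iterated k times stands in for the paper's structuring element of k pixels (an assumption about the exact element), with k_e = 4 and k_d = 10 as in the experiments:

```python
import numpy as np
from scipy.ndimage import binary_erosion, binary_dilation

def init_seeds(roi_mask, k_e=4, k_d=10):
    """Caudate seeds C = Erode_{k_e}(R0); background seeds
    B = complement of Dilate_{k_d}(R0). Iterating the default
    3x3 cross element k times approximates a k-pixel element."""
    cau_seeds = binary_erosion(roi_mask, iterations=k_e)
    back_seeds = ~binary_dilation(roi_mask, iterations=k_d)
    return cau_seeds, back_seeds
```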
3.2.3 Unary Energy Term
In this section, we describe how to compute the unary energy term of the GC energy function. This term is divided into two parts: an unsupervised part and a supervised part. The unsupervised part is computed in an image-dependent way, based on the gray-level distribution of the seed pixels. The supervised part is computed from a support vector machine (SVM) classifier based on contextual learning of caudate derivatives. Next, we describe both parts of the unary term in detail, as well as their final combination.
Unsupervised unary term
The probability of a pixel p being marked as "cau", P_u(L_p = "cau"), is computed using the histogram of gray levels of the caudate seeds. The probability of a pixel being marked as "back" is computed as the complementary probability, P_u(L_p = "back") = 1 − P_u(L_p = "cau"), since the background seeds contain GM, WM and CSF, and it is difficult to extract a model directly from them. Figure 3 (c) shows the unsupervised probability values P_u(L_p = "cau") for the image in Figure 3 (a).
The unsupervised unary term estimates image-dependent caudate pixel probabilities based on the caudate seeds. However, given the noisy information of MRI images and the small number of caudate seed pixels, good generalization based on this term alone is not always guaranteed. In this context, we propose combining the unsupervised energy with a supervised one, which is based on learning contextual caudate derivatives from Ground Truth (GT) data.
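One simple instantiation of this histogram-based term is sketched below. The rescaling of the seed histogram to [0, 1] is our assumption, chosen so that the most frequent seed intensity maps to probability 1:

```python
import numpy as np

def unsupervised_probability(image, cau_seeds, n_bins=32):
    """P_u(L_p = "cau") from the normalized gray-level histogram of the
    caudate seeds; P_u(L_p = "back") = 1 - P_u(L_p = "cau")."""
    hist, edges = np.histogram(image[cau_seeds], bins=n_bins,
                               range=(image.min(), image.max()))
    hist = hist / hist.sum()   # empirical frequency of each gray-level bin
    hist = hist / hist.max()   # rescale so the modal seed intensity -> 1
    # look up each pixel's bin (clip handles the right histogram edge)
    idx = np.clip(np.digitize(image, edges) - 1, 0, n_bins - 1)
    p_cau = hist[idx]
    return p_cau, 1.0 - p_cau
```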
Supervised unary term
In order to define the supervised unary term, we train a binary classifier using a set of MRI slices as a training set. In particular, we extract a pixel descriptor using a correlogram structure. The correlogram structure captures contextual intensity relations from circular bins around the pixel analyzed [40].
where ∂_k is the signed subtraction of gray-level information within a pair of bins in C_{c×r}. In this sense, the descriptor contains the n · (n − 1)/2 gray-level derivatives of all pairs of bins within C_{c×r}, which captures all spatial relations of gray-level intensities in the neighborhood of p. An example of a correlogram structure estimated for a caudate pixel is shown in Figure 3 (d).
The probability of a pixel being marked as "cau" is computed using the confidence of the SVM classifier on its correlogram descriptor, P_s(L_p = "cau") = SVM(p). The probability of a pixel being marked as "back" is computed as the negative of the output margin of the classifier, P_s(L_p = "back") = −SVM(p). Figure 3 (d) shows the supervised caudate probability values P_s(L_p = "cau") for the image in Figure 3 (b).
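The pairwise-difference part of the descriptor can be sketched as follows. Extracting the correlogram bins themselves (c circles of radius r around the pixel) is omitted, so the function assumes the per-bin mean gray levels have already been computed:

```python
import numpy as np
from itertools import combinations

def correlogram_descriptor(bin_means):
    """Given the mean gray level of each of the n correlogram bins around
    a pixel, build the n*(n-1)/2 signed pairwise differences ∂_k that
    form the contextual descriptor d_p."""
    n = len(bin_means)
    return np.array([bin_means[a] - bin_means[b]
                     for a, b in combinations(range(n), 2)])
```

The resulting vector is what would be fed to the SVM during training and prediction.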
Combined unary term
3.2.4 Boundary Energy Term
To define the boundary potentials, we use the first and second intensity derivatives of the image in order to exploit both intensity and geometric information. Moreover, given the high variability in contrast between the caudate and the background in different parts of the images, we propose weighting the boundary term using an image-dependent multiscale edgeness measure.
The term θ_{p, q} denotes the angle between two unitary vectors codifying the directions of minimum gradient variation at pixels p and q, based on the Hessian eigenvectors. In particular, we choose the direction of the eigenvector of the Hessian matrix with the smallest eigenvalue, which gives the direction of smallest variation at each pixel. The parameter α is set empirically by cross-validation, while σ and β are computed by adapting the image distribution to I_p and θ_{p, q}, respectively. Intuitively, the function N_{p, q} penalizes discontinuities between pixels of similar intensities, and O_{p, q} penalizes discontinuities between pixels of similar gradient variations.
The differential operators involved in the previous definition (Eq. 2) are well-posed concepts of linear scale-space theory, defined as convolutions with derivatives of Gaussians: $\frac{\partial}{\partial x}I\left(x,s\right)={s}^{\ell}I\left(x\right)*\frac{\partial}{\partial x}G\left(x,s\right)$, where G is the 2-dimensional Gaussian function and ℓ is the Lindeberg parameter.
The scale of analysis is selected by maximizing the entropy of the local gray-level distribution, $H\left({R}_{p}\left(s\right)\right)=-{\sum }_{i=1}^{r}P\left(i|{R}_{p}\left(s\right)\right)\phantom{\rule{0.1em}{0ex}}\mathrm{log}\phantom{\rule{0.1em}{0ex}}P\left(i|{R}_{p}\left(s\right)\right)$, where P(i | R_p(s)) is the probability of the value i occurring in the local region R_p(s), with r being the number of possible discrete values. The scale chosen is defined by the maxima of the function H in the space of scales ${\mathcal{S}}_{p}=\left\{s:\partial H\left({R}_{p}\left(s\right)\right)/\partial s=0,{\partial }^{2}H\left({R}_{p}\left(s\right)\right)/\partial {s}^{2}<0\right\}$.
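A small sketch of this entropy-based scale selection; for simplicity, the local-maxima condition over scales is approximated by the argmax over the sampled scales, and square s × s windows are used:

```python
import numpy as np

def entropy(region, n_bins=16):
    """Shannon entropy H of the gray-level distribution in a local region."""
    hist, _ = np.histogram(region, bins=n_bins)
    p = hist / hist.sum()
    p = p[p > 0]
    return -(p * np.log2(p)).sum()

def select_scale(image, pixel, scales):
    """Return the window size around pixel (row, col) that maximizes the
    local entropy H over the sampled set of scales."""
    i, j = pixel
    hs = []
    for s in scales:
        half = s // 2
        region = image[max(0, i - half):i + half + 1,
                       max(0, j - half):j + half + 1]
        hs.append(entropy(region))
    return scales[int(np.argmax(hs))]
```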
where J_{p, γ_k, s_j} is the binary edge map using threshold γ_k and scale s_j for pixel p. If pixel p is labeled as an edge pixel for most of the threshold levels at a significant scale, it has a high probability of being an edge pixel. In order to decrease the smoothing effect in regions near a boundary, we convolve the probability map with a Gaussian kernel. Figure 3 (e) shows the boundary potential values B(L) for the image in Figure 3 (a). Intuitively, the term J adaptively changes the influence of the boundary term for each pixel in the image, since boundary regions should be less regularized than the rest of the image.
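The multiscale edgeness measure can be sketched as the average of thresholded edge maps over all (γ_k, s_j) pairs, followed by Gaussian smoothing. The use of the Gaussian gradient magnitude as the underlying edge detector and the per-scale normalization are our assumptions; the paper only specifies binary edge maps:

```python
import numpy as np
from scipy.ndimage import gaussian_filter, gaussian_gradient_magnitude

def edgeness(image, gammas, scales, smooth_sigma=1.0):
    """Average the binary edge maps J_{p, gamma, s} (gradient magnitude
    at scale s thresholded at gamma) over all thresholds and scales,
    then smooth the resulting probability map with a Gaussian kernel."""
    acc = np.zeros_like(image, dtype=float)
    for s in scales:
        mag = gaussian_gradient_magnitude(image, sigma=s)
        mag = mag / (mag.max() + 1e-12)   # normalize per scale to [0, 1]
        for g in gammas:
            acc += (mag > g)              # accumulate binary edge maps
    acc /= len(scales) * len(gammas)
    return gaussian_filter(acc, sigma=smooth_sigma)
```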
Finally, by applying the min-cut algorithm to the defined energy function and image graph, we obtain the final caudate segmentation. Figure 3 (f) shows the segmentation resulting from applying the CaudateCut algorithm.
4 Experimental Section
Before presenting our results, we first describe the material and methods of comparison, and also the validation protocol for the experiments.
4.1 Material
We considered two different databases, the URNC database and the IBSR database, in order to validate the proposed CaudateCut method.

URNC database. This is a new database, which includes 39 children (35 boys and 4 girls) with ADHD according to DSM-IV, referred from the Unit of Child Psychiatry at the Vall d'Hebron Hospital in Barcelona, Spain, and coordinated by the Unit of Research in Cognitive Neuroscience (URNC) at the IMIM Foundation, together with 39 control subjects (27 boys and 12 girls) recruited from the community. The mean age of the groups was 10.8 (S.D.: 2.9) and 11.7 (S.D.: 3), respectively. The groups were matched for handedness and IQ. A 1.5T system was used to acquire the brain MRI scans. The resolution of the scans is 256 × 256 × 60 pixels, with 2-mm-thick slices. Expert segmentations of the 79 individual caudate nuclei were obtained. MRIcro software^{1} was used for volume labeling and manipulation.

IBSR database. This dataset is part of a public database released by the CAUSE07 Challenge [9]. It is composed of 18 T1-weighted MRI scans from the Internet Brain Segmentation Repository (IBSR), together with expert segmentations of the caudate structure. The MRI scans have a slice thickness of 1.5 mm. Originally, the data size was 256 × 128 × 256 pixels, but in order to prepare the data for the later application of the CaudateCut algorithm, we reoriented it by an X-axis rotation and converted it into 256 × 256 × 128 pixels. For more details of the acquisition, visit the CAUSE07 Challenge website^{2}, from where the data was downloaded.
4.2 Methods
We compared the CaudateCut method to two state-of-the-art methods: a classical atlas-based method and a multi-atlas segmentation method. We also compared the results with the inter-observer (IO) variability of the expert GT.
AB method
We implemented the atlas-based segmentation of the caudate following the strategy presented in [18]. To this end, we used the SPM toolbox implementation of the unified nonlinear normalization and tissue segmentation. The parameters of the method were left at their defaults in the SPM8 implementation, except for the threshold T_p, which was estimated using a subset of 5 control subjects from the URNC database and set to T_p = 0.1. The method was implemented using Matlab2008.
AMAS method
An adaptive multi-atlas segmentation method (AMAS) was implemented as presented in [10]. For the atlas selection strategy, we computed the absolute voxel-wise difference between the target image and the registered images from the atlases, and ordered them from smallest to largest. Then, the atlas information was propagated until the stopping criterion was reached. The stopping criterion was defined by the percentage of voxels that change their segmentation label after a new atlas propagation; this threshold was set to 0.05 for all experiments. The rest of the parameters in the AMAS method were set as described in [10]. The method was implemented using Matlab2008, and elastix version 3.9^{3} was used for volume registration, as suggested in [10].
CaudateCut method
The CaudateCut method was implemented in Matlab 2008 with the SPM toolbox. In all the experiments the parameters were set to k _{ e }= 4, k _{ d }= 10, c = 3, r = 5, α = 0.5, ${\mathcal{S}}_{p}=\left[1,1.5,\ldots,6\right]$, ℓ = 0, γ _{ k }∈ [0.02, 0.03, ..., 0.3] and s _{ j }∈ [0.5, 1, ..., 5]. The parameters σ and β were estimated for each image, as explained above. The parameter δ was tuned by cross-validation and was set to 50 for the URNC dataset and 100 for the IBSR database. In order to train the SVM classifiers for computation of the supervised unary term, we subsampled pixels from each slice: we took all the pixels labeled as caudate in the GT, together with the same number of background pixels. The background pixels were subsampled in a stratified way, so as to select pixels from all parts of the background.
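The balanced, stratified pixel subsampling used to build the SVM training set can be sketched as follows. This is an illustrative Python/NumPy simplification, not the original Matlab code: the quadrant-based stratification scheme and all names are our own assumptions, standing in for "selecting pixels from all parts of the background":

```python
import numpy as np

def sample_training_pixels(gt_slice, rng=None):
    # Keep all caudate pixels; draw an equal number of background
    # pixels, stratified over the four image quadrants.
    rng = np.random.default_rng(rng)
    fg = np.argwhere(gt_slice == 1)          # all caudate pixels in the GT
    n_bg = len(fg)

    h, w = gt_slice.shape
    quads = [(slice(0, h // 2), slice(0, w // 2)),
             (slice(0, h // 2), slice(w // 2, w)),
             (slice(h // 2, h), slice(0, w // 2)),
             (slice(h // 2, h), slice(w // 2, w))]
    bg_samples = []
    per_quad = max(1, n_bg // 4)
    for rs, cs in quads:
        mask = np.zeros_like(gt_slice, dtype=bool)
        mask[rs, cs] = True
        cand = np.argwhere((gt_slice == 0) & mask)   # background in quadrant
        take = min(per_quad, len(cand))
        if take:
            bg_samples.append(cand[rng.choice(len(cand), take, replace=False)])
    bg = np.concatenate(bg_samples)[:n_bg]           # balance the classes
    return fg, bg
```

The returned coordinate lists would then index the per-pixel feature maps to form the balanced training set for each slice.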
Manual method
Experts used MRIcro [42] to manually delineate the caudate boundaries slice by slice; see [1] for further details of the procedure.
4.3 Validation
We evaluated segmentation quality with the following volumetric and voxel-by-voxel measures, where R denotes the estimated segmentation, G the ground truth (GT), VOL _{ R } and VOL _{ G } their volumes, B _{ S } and B _{ R } the sets of boundary voxels of the segmentation and the reference, and d(·,·) the distance from a voxel to the closest voxel of the other boundary:
 1. Volumetric similarity index (or mean overlap), in percent: $\mathrm{SI}=\frac{2\,|R\cap G|}{|R|+|G|}\cdot 100.$
 2. Volumetric union overlap, in percent: $\mathrm{VO}=\frac{|R\cap G|}{|R\cup G|}\cdot 100.$
 3. Relative absolute volume difference, in percent: $\mathrm{VD}=\frac{|\mathrm{VOL}_{R}-\mathrm{VOL}_{G}|}{\mathrm{VOL}_{G}}\cdot 100.$
 4. Average symmetric surface distance, in millimeters: $\mathrm{AD}=\frac{\sum_{i=1}^{N}d(B_{S,i},B_{R})+\sum_{j=1}^{M}d(B_{R,j},B_{S})}{|B_{S}|+|B_{R}|},$ where $N=|B_{S}|$ and $M=|B_{R}|$.
 5. Root Mean Square (RMS) symmetric surface distance, in millimeters: $\mathrm{RMSD}=\sqrt{\frac{\sum_{i=1}^{N}d(B_{S,i},B_{R})^{2}+\sum_{j=1}^{M}d(B_{R,j},B_{S})^{2}}{|B_{S}|+|B_{R}|}}.$
 6. Maximum symmetric surface distance, in millimeters: $\mathrm{MD}=\max\left(\max_{i}d(B_{S,i},B_{R}),\ \max_{j}d(B_{R,j},B_{S})\right).$
Note that for the volumetric measures VO and SI, 100 corresponds to a perfect segmentation and 0 is the lowest possible value, obtained when there is no overlap at all between the estimated segmentation and the GT. In the case of VD, the perfect value is 0, which can also be obtained for a non-perfect segmentation, as long as the volume of that segmentation equals the volume of the reference. For the voxel-comparison measures AD, RMSD and MD, the perfect value is 0.
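The three volumetric measures above can be computed directly from the binary volumes. A minimal sketch in Python/NumPy (for illustration; the surface-distance measures AD, RMSD and MD additionally require extracting the boundary voxels and are omitted here):

```python
import numpy as np

def overlap_measures(R, G):
    # R: estimated segmentation, G: ground truth (boolean volumes).
    R, G = np.asarray(R, bool), np.asarray(G, bool)
    inter = np.logical_and(R, G).sum()
    union = np.logical_or(R, G).sum()
    si = 2.0 * inter / (R.sum() + G.sum()) * 100   # similarity index, %
    vo = inter / union * 100                       # union overlap, %
    vd = abs(R.sum() - G.sum()) / G.sum() * 100    # rel. volume diff., %
    return si, vo, vd
```

Identical volumes give SI = VO = 100 and VD = 0; fully disjoint volumes of equal size give SI = VO = 0 but also VD = 0, illustrating why VD alone is not sufficient.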
In order to validate the AMAS and CaudateCut methods (the SVM classifiers for supervised unary term computation), we followed a leave-one-out strategy. Finally, Student's paired t-test [43] was used to evaluate the statistical significance of the differences between pairs of segmentation algorithms on a given dataset (threshold of p < 0.05). The null hypothesis H _{0} is that the two sets of results belong to the same distribution. Matlab 2008 was used to perform this test.
4.4 Results and Discussion
We divide the results into two sections corresponding to two related experiments: segmentation evaluation and ADHD volumetric quantitative analysis.
4.4.1 Segmentation Evaluation
A) Quantitative segmentation results
Quantitative results of AB, AMAS and CaudateCut (SI, VO and VD in percent; AD, RMSD and MD in millimeters)
Database  Method  SI  VO  VD  AD  RMSD  MD
URNC  AB  79.85  66.55  9.63  0.0029  0.0861  48.31
URNC  AMAS  66.67  51.45  24.81  0.0091  0.0913  48.11
URNC  CaudateCut  82.60  70.49  9.10  0.0028  0.0780  47.97
IBSR  AB  74.02  58.85  23.34  0.0030  0.0950  30.86
IBSR  AMAS  75.00  60.14  25.54  0.0024  0.0750  28.37
IBSR  CaudateCut  78.91  65.55  17.80  0.0019  0.0687  23.43
B) Qualitative segmentation results
C) Statistical analysis
Statistical test results
Database  Comparison  H _{0} accepted  t  p  CI(95%)
URNC  CaudateCut vs. AB  false  4.08  0.0001  0.02 to 0.0586
URNC  CaudateCut vs. AMAS  false  11.36  3.28·10^{−18}  0.24 to 0.34
IBSR  CaudateCut vs. AB  false  3.23  0.0028  0.0248 to 0.1092
IBSR  CaudateCut vs. AMAS  false  2.49  0.0177  0.01 to 0.0982
D) Difference analysis
E) Computational differences
Concerning computational time, AMAS was the most costly method in terms of testing time, since multiple registrations had to be performed for each subject segmentation. On average, 2.5 registrations were performed for each volume segmentation, and each registration took 7 minutes on a standard high-end PC, thus making 17.5 minutes for the whole volume segmentation. The AB method was the fastest, taking around 5 minutes on average for the whole volume segmentation. CaudateCut involves applying the AB method followed by the GC minimization process; the total time was around 6 minutes for the whole volume segmentation.
F) Interobserver variability
Quantitative Measures of IO Variability
Database  SI  VO  VD  AD  RMSD  MD 
IO on URNC  80.56  67.80  22.84  0.003  0.092  92.54 
4.4.2 ADHD Volumetric Quantitative Analysis
The a priori hypothesis that developmental anomalies exist in the caudate nucleus of people with ADHD is generally accepted. Previous imaging studies have analyzed this hypothesis [4, 2, 3].
In this work, we analyzed right and left caudate volumetric differences between ADHD and control subjects in the URNC database. To this end, we compared mean volume values using Student's t-test for independent samples (with a threshold of p < 0.05). The aim of this experiment was to show that the analysis performed using automatic CaudateCut segmentation is coherent with the results of manual analysis. For the manual and automated statistical analyses we used the GT and CaudateCut segmentations, respectively. ROI measures in voxels were transformed into cubic millimeters (mm^{3}) by multiplying the ROI's total number of voxels by the voxel dimensions.
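The voxel-to-mm^{3} conversion and the two-sample comparison above can be sketched as follows. This is an illustrative Python sketch (the analysis in the paper was performed in Matlab 2008, and the function names here are hypothetical); the pooled-variance form of the independent-samples t statistic is shown, and the p-value lookup is omitted:

```python
import math

def roi_volume_mm3(n_voxels, voxel_dims_mm):
    # ROI total number of voxels multiplied by the voxel dimensions.
    dx, dy, dz = voxel_dims_mm
    return n_voxels * dx * dy * dz

def independent_t_statistic(x, y):
    # Pooled-variance Student's t statistic for two independent samples
    # (e.g. control vs. ADHD caudate volumes).
    n1, n2 = len(x), len(y)
    m1, m2 = sum(x) / n1, sum(y) / n2
    s1 = sum((v - m1) ** 2 for v in x) / (n1 - 1)
    s2 = sum((v - m2) ** 2 for v in y) / (n2 - 1)
    sp2 = ((n1 - 1) * s1 + (n2 - 1) * s2) / (n1 + n2 - 2)
    return (m1 - m2) / math.sqrt(sp2 * (1 / n1 + 1 / n2))
```

For example, an ROI of 1000 voxels at 1 × 1 × 1.5 mm resolution corresponds to 1500 mm^{3}.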
Manual Control and ADHD statistical results (volumes in mm^{3})
Manual analysis  Group  N  Mean  Std  Mean diff.  H _{0} accepted  t  p  CI(95%)
R caudate  Control  39  5031.44  660.18  312.29  false  1.9983  0.0493  1.03 to 623.56
R caudate  ADHD  39  4719.15  718.81
L caudate  Control  39  4882.45  643.81  195.11  true  1.1946  0.2360  −130.19 to 520.42
L caudate  ADHD  39  4687.34  791.17
Automatic Control and ADHD statistical results (volumes in mm^{3})
Automatic analysis  Group  N  Mean  Std  Mean diff.  H _{0} accepted  t  p  CI(95%)
R caudate  Control  39  4636.72  596.66  430.19  false  2.74  0.0075  118.05 to 742.35
R caudate  ADHD  39  4206.52  775.86
L caudate  Control  39  4426.24  615.69  288.14  true  1.93  0.0571  −8.90 to 585.19
L caudate  ADHD  39  4138.10  698.89
5 Conclusion
In this work, we have presented CaudateCut, a new method for caudate nucleus segmentation in brain MRI. CaudateCut combines the power of an atlas-based strategy with the adaptiveness of the defined energy function within the GC energy-minimization framework, in order to segment the small, low-contrast caudate structure. We define the new energy function with data potentials that exploit intensity and geometry information, and that also make the most of the supervised learned local brain structures. Boundary potentials are also redefined using a new multi-scale edgeness measure. CaudateCut offers several advantages to neuroimaging researchers: it is fully automatic, and it is reliable, since its results are 100% reproducible in subsequent runs with the same data, avoiding the inaccuracies of intra-rater and inter-rater drift.
The method was tested on two different datasets. Although it was tuned on the novel URNC database, it provided outstanding results on the IBSR dataset, showing the inherent robustness of the approach. Moreover, the automatic caudate nucleus volume measurements yielded results comparable to the manual volumetric analysis of children with ADHD. Future lines of research include the use of multiple hypotheses for seed initialization, to increase robustness to possible errors in atlas application, and the incorporation of 3D information in the caudate segmentation. From a clinical point of view, new features based on caudate appearance could be added to analyze ADHD abnormalities automatically.
Declarations
Acknowledgements
This work was supported in part by the projects TIN2009-14404-C02, La Marató de TV3 082131, CONSOLIDER-INGENIO CSD 2007-00018, and MICINN SAF2009-10901.
References
 1. Carmona S, Vilarroya O, Bielsa A, Trèmols V, Soliva JC, Rovira M, Tomàs J, Raheb C, Gispert J, Batlle S, Bulbena A: Global and regional gray matter reductions in ADHD: A voxel-based morphometric study. Neuroscience Letters 2005, 389(2):88–93. doi:10.1016/j.neulet.2005.07.020
 2. Filipek PA, Semrud-Clikeman M, Steingard RJ, Renshaw PF, Kennedy DN, Biederman J: Volumetric MRI analysis comparing subjects having attention-deficit hyperactivity disorder with normal controls. Neurology 1997, 48(3):589–601.
 3. Reiss A, Abrams M, Singer H, Ross J, Denckla M: Brain development, gender and IQ in children. A volumetric imaging study. Brain 1996, 119.
 4. Tremols V, Bielsa A, Soliva JC, Raheb C, Carmona S, Tomas J, Gispert JD, Rovira M, Fauquet J, Tobeña A, Bulbena A, Vilarroya O: Differential abnormalities of the head and body of the caudate nucleus in attention deficit-hyperactivity disorder. Psychiatry Res 2008, 163(3):270–278. doi:10.1016/j.pscychresns.2007.04.017
 5. Soliva JC, Fauquet J, Bielsa A, Rovira M, Carmona S, Ramos-Quiroga JA, Hilferty J, Bulbena A, Casas M, Vilarroya O: Quantitative MR analysis of caudate abnormalities in pediatric ADHD: Proposal for a diagnostic test. Psychiatry Research: Neuroimaging 2010, 182(3):238–243. doi:10.1016/j.pscychresns.2010.01.013
 6. Xia Y, Bettinger K, Shen L, Reiss AL: Automatic Segmentation of the Caudate Nucleus From Human Brain MR Images. IEEE Transactions on Medical Imaging 2007, 26:509–517.
 7. Balafar M, Ramli A, Saripan M, Mashohor S: Review of brain MRI image segmentation methods. Artificial Intelligence Review 2010, 33:261–274. doi:10.1007/s10462-010-9155-0
 8. Duncan JS, Ayache N: Medical image analysis: progress over two decades and the challenges ahead. IEEE Transactions on Pattern Analysis and Machine Intelligence 2000, 22:85–106. doi:10.1109/34.824822
 9. van Ginneken B, Heimann T, Styner M: 3D segmentation in the clinic: A grand challenge. In MICCAI Workshop on 3D Segmentation in the Clinic: A Grand Challenge 2007.
 10. van Rikxoort E, Isgum I, Arzhaeva Y, Staring M, Klein S, Viergever M, Pluim J, van Ginneken B: Adaptive Local Multi-Atlas Segmentation: Application to the Heart and the Caudate Nucleus. Medical Image Analysis 2010, 14:39–49. doi:10.1016/j.media.2009.10.001
 11. Kolmogorov V, Zabih R: What energy functions can be minimized via graph cuts? PAMI 2004, 26:65–81.
 12. Boykov Y, Funka-Lea G: Graph Cuts and Efficient N-D Image Segmentation. IJCV 2006, 70(2):109–131. doi:10.1007/s11263-006-7934-5
 13. Boykov Y, Kolmogorov V: An Experimental Comparison of Min-Cut/Max-Flow Algorithms for Energy Minimization in Vision. IEEE Transactions on Pattern Analysis and Machine Intelligence 2004, 26:359–374.
 14. Candemir S, Akgul Y: Adaptive Regularization Parameter for Graph Cut Segmentation. 2010, 117–126.
 15. Collins DL, Holmes CJ, Peters TM, Evans AC: Automatic 3D model-based neuroanatomical segmentation. 1995.
 16. Iosifescu DV, Shenton ME, Warfield SK, Kikinis R, Dengler J, Jolesz FA, McCarley RW: An automated registration algorithm for measuring MRI subcortical brain structures. Neuroimage 1997, 6:13–25. doi:10.1006/nimg.1997.0274
 17. Murgasova M, Dyet L, Edwards D, Rutherford M, Hajnal J, Rueckert D: Segmentation of brain MRI in young children. Academic Radiology 2007, 14(11):1350–1366. doi:10.1016/j.acra.2007.07.020
 18. Collins DL, Zijdenbos AP, Baaré WFC, Evans AC: ANIMAL+INSECT: Improved cortical structure segmentation. In IPMI. Springer; 1999:210–223.
 19. Fischl B, Salat DH, van der Kouwe AJW, Makris N, Ségonne F, Quinn BT, Dale AM: Sequence-Independent Segmentation of Magnetic Resonance Images. Neuroimage 2004, 23(Supplement 1):S69–S84.
 20. Ashburner J, Friston K: Unified segmentation. NeuroImage 2005, 26:839–851. doi:10.1016/j.neuroimage.2005.02.018
 21. Statistical Parametric Mapping (SPM) [http://www.fil.ion.ucl.ac.uk/spm/]
 22. Van Leemput K, Maes F, Vandermeulen D, Suetens P: Automated model-based tissue classification of MR images of the brain. IEEE Transactions on Medical Imaging 1999, 18(10):897–908. doi:10.1109/42.811270
 23. Fischl B, Salat DH, Busa E, Albert M, Dieterich M, Haselgrove C, van der Kouwe A, Killiany R, Kennedy D, Klaveness S, Montillo A, Makris N, Rosen B, Dale AM: Whole brain segmentation: automated labeling of neuroanatomical structures in the human brain. Neuron 2002, 33(3):341–355. doi:10.1016/S0896-6273(02)00569-X
 24. Khan AR, Wang L, Beg MF: FreeSurfer-initiated fully-automated subcortical brain segmentation in MRI using Large Deformation Diffeomorphic Metric Mapping. NeuroImage 2008, 41(3):735–746. doi:10.1016/j.neuroimage.2008.03.024
 25. Heckemann RA, Hajnal JV, Aljabar P, Rueckert D, Hammers A: Automatic anatomical brain MRI segmentation combining label propagation and decision fusion. Neuroimage 2006, 33:115–126. doi:10.1016/j.neuroimage.2006.05.061
 26. Aljabar P, Heckemann R, Hammers A, Hajnal J, Rueckert D: Classifier Selection Strategies for Label Fusion Using Large Atlas Databases. Neuroimage 2007, 523–531.
 27. Khan AR, Chung MK, Beg MF: Robust Atlas-Based Brain Segmentation Using Multi-structure Confidence-Weighted Registration. In Proceedings of the 12th International Conference on Medical Image Computing and Computer-Assisted Intervention: Part II. MICCAI '09, Springer-Verlag; 2009:549–557.
 28. Jabarouti Moghaddam M, Soltanian-Zadeh H: Automatic Segmentation of Brain Structures Using Geometric Moment Invariants and Artificial Neural Networks. In Information Processing in Medical Imaging, Volume 5636 of Lecture Notes in Computer Science. Edited by Prince J, Pham D, Myers K. Springer Berlin/Heidelberg; 2009:326–337.
 29. Babalola KO, Petrovic V, Cootes TF, Taylor CJ, Twining CJ, Mills A: Automatic Segmentation of the Caudate Nuclei using Active Appearance Models. 2007.
 30. Kelemen A, Székely G, Gerig G: Elastic model-based segmentation of 3-D neuroradiological data sets. IEEE Trans Med Imaging 1999, 18(10):828–839. doi:10.1109/42.811260
 31. Babalola KO, Patenaude B, Aljabar P, Schnabel J, Kennedy D, Crum W, Smith S, Cootes T, Jenkinson M, Rueckert D: An evaluation of four automatic methods of segmenting the subcortical structures in the brain. Neuroimage 2009, 47:1435–1447. doi:10.1016/j.neuroimage.2009.05.029
 32. Patenaude B, Smith SM, Kennedy DN, Jenkinson M: A Bayesian model of shape and appearance for subcortical brain segmentation. NeuroImage 2011, 56(3):907–922. doi:10.1016/j.neuroimage.2011.02.046
 33. Tohka J, Wallius E, Hirvonen J, Hietala J, Ruotsalainen U: Automatic Extraction of Caudate and Putamen in [^{11}C] Raclopride PET Using Deformable Surface Models and Normalized Cuts. IEEE Transactions on Nuclear Science 2006, 53:220–227.
 34. Vese LA, Chan TF: A Multiphase Level Set Framework for Image Segmentation Using the Mumford and Shah Model. International Journal of Computer Vision 2002, 50:271–293. doi:10.1023/A:1020874308076
 35. Leventon ME, Grimson WEL, Faugeras O, Wells WM III: Level Set Based Segmentation with Intensity and Curvature Priors. In Proceedings of the IEEE Workshop on Mathematical Methods in Biomedical Image Analysis (MMBIA '00) 2000, 4–12.
 36. Weldeselassie YT, Hamarneh G: DT-MRI Segmentation Using Graph Cuts. SPIE 2007.
 37. Yushkevich PA, Piven J, Cody Hazlett H, Gimpel Smith R, Ho S, Gee JC, Gerig G: User-Guided 3D Active Contour Segmentation of Anatomical Structures: Significantly Improved Efficiency and Reliability. Neuroimage 2006, 31(3):1116–1128. doi:10.1016/j.neuroimage.2006.01.015
 38. FreeSurfer [http://surfer.nmr.mgh.harvard.edu/]
 39. 3D Slicer [http://www.slicer.org/]
 40. Escalera S, Fornés A, Pujol O, Lladós J, Radeva P: Circular Blurred Shape Model for Multiclass Symbol Recognition. IEEE Transactions on Systems, Man, and Cybernetics 2010.
 41. Weickert J: Anisotropic Diffusion in Image Processing. ECMI Series, Teubner; 1998.
 42. MRIcro and MRIcron Medical Image Viewer Software [http://www.cabiatl.com/mricro/mricro/]
 43. Dietterich TG: Approximate Statistical Tests for Comparing Supervised Classification Learning Algorithms. Neural Computation 1998, 10:1895–1923. doi:10.1162/089976698300017197
Copyright
This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.