- Open Access
An automatic segmentation and classification framework for anti-nuclear antibody images
© Cheng et al.; licensee BioMed Central Ltd. 2013
Published: 9 December 2013
Autoimmune disease is a disorder of the immune system caused by the over-reaction of lymphocytes against one's own body tissues. Anti-Nuclear Antibody (ANA) is an autoantibody produced by the immune system and directed against one's own tissues or cells; it plays an important role in the diagnosis of autoimmune diseases. The Indirect ImmunoFluorescence (IIF) method with HEp-2 cells provides the major screening method for detecting ANA in the diagnosis of autoimmune diseases. At present, fluorescence patterns are usually examined laboriously by experienced physicians through manual inspection of the slides under a microscope, a process that suffers from inter-observer variability and thus limited reproducibility. Previous studies provided only simple segmentation methods and criteria for cell segmentation and recognition; a fully automatic framework for the segmentation and recognition of HEp-2 cells had not been reported before. This study proposes a method based on the watershed algorithm to automatically detect HEp-2 cells with different patterns. The experimental results show that the segmentation performance of the proposed method is satisfactory when evaluated with percent volume overlap (PVO: 89%). The classification performance of an SVM classifier designed using features calculated from the segmented cells achieves an average accuracy of 96.90%, which outperforms the methods presented in previous studies. The proposed method can be used to develop a computer-aided system to assist physicians in the diagnosis of autoimmune diseases.
The immune system enables us to resist infections by counteracting invading organisms. Autoimmune disease is a disorder of the immune system caused by the over-reaction of lymphocytes against one's own body tissues. Common autoimmune diseases include Hashimoto's thyroiditis, rheumatoid arthritis, type 1 diabetes mellitus, and lupus erythematosus. Anti-Nuclear Antibody (ANA) is an autoantibody produced by the immune system and directed against one's own tissues or cells. The ANA test, widely used to detect antibodies in the blood, plays an important role in the diagnosis of autoimmune diseases. When a particular antibody pattern is detected, the patient may be at risk of certain autoimmune diseases.
The Indirect ImmunoFluorescence (IIF) technique applied to HEp-2 cell substrates provides the major screening method for detecting ANA patterns in the diagnosis of autoimmune diseases. It produces ANA images with distinct fluorescence intensities and staining patterns on IIF slides. Currently, ANA patterns are inspected by experienced physicians to identify abnormal cell patterns, which is a laborious task and may strain the physicians' eyes. It is not easy to train a qualified physician in a short time. Furthermore, manual inspection suffers from difficulties such as intra- and inter-observer variability, which limit the reproducibility of IIF readings [2–5].
Although previous studies have proposed several methods for automatic segmentation of ANA cells [6, 7] and criteria for the recognition of cell patterns [3, 6, 8–10], a fully automatic segmentation and recognition framework has not been developed so far. In this study, we propose a framework based on watershed approaches to automatically segment HEp-2 cells. Segmentation is a crucial preprocessing step for a computer-aided system that classifies cell patterns to provide information assisting physicians in disease diagnosis and treatment.
Since the cytoplasm of HEp-2 cells is invisible in the IIF images, in what follows, the term "cell" means cell nucleus, "foreground" indicates the cell region, and "background" denotes the rest of the image. The rest of this paper is organized as follows. Section "Related Works" reviews the techniques used for ANA image segmentation and cell recognition in previous studies. Section "Segmentation of ANA Cells" describes the methods proposed in this study for the segmentation of ANA cells. Classification of ANA cell patterns is demonstrated in section "Cell Classification of ANA Images". Finally, discussion, conclusions, and future work are presented in sections "Discussion" and "Conclusion and Future Work".
In this section, the methods proposed in previous investigations for the segmentation and classification of ANA cell images are presented.
ANA image segmentation
Perner et al. used image processing techniques, including image transformation, histogram equalization, Otsu thresholding, and morphological operations, to obtain a binary mask for segmenting the cells from ANA images. By modifying these methods, Huang et al. presented two adaptive automatic segmentation frameworks to precisely extract the ANA cells. In their studies, the first framework classified an image into two categories, i.e., sparse and mass cell regions, based on the number of connected regions. Depending on the category of the image, different color spaces and processing techniques were adopted for cell segmentation. Morphological operations were also used to obtain smooth segmentation results. The framework was demonstrated to be able to handle the segmentation of different patterns of IIF images. In the second framework, on the other hand, watershed segmentation was applied to the green channel of the RGB images, followed by region merging and elimination to obtain the cell boundaries. If the number of regions in the obtained image was larger than a pre-defined threshold, the framework converted the original image into the CMY color space and performed marker-controlled watershed segmentation on the cyan component. The segmentation performance was reported to achieve an overall sensitivity of 94.7%.
Creemers et al. proposed an unsupervised segmentation algorithm, based on iterative global Otsu thresholding and a morphological opening operation, to support IIF testing. It was reported to be capable of splitting connected regions into individual regions with an average accuracy of 89.57%.
ANA cell recognition
Perner presented the first study on fluorescent image analysis, feature extraction, and classification. An automatic cell recognition approach was then proposed, based on a variety of features, including size, color density, and number of cells, extracted from the segmented images. For cells with identical color density, additional features, including standard deviation, mean shape factor, mean perimeter, and standard deviation of the perimeter, were extracted. Data mining techniques, including the Boolean model and decision tree induction, were then used to label the cell regions. Finally, human experts tagged each labeled region with a semantic label. Based on the aforementioned methods, Sack et al. presented a system to automatically classify HEp-2 fluorescent patterns with a classification accuracy greater than 83%.
According to the fluorescence intensity, Soda and Iannello classified ANA images into a variety of patterns. They further proposed a framework consisting of hybrid rule-based multi-expert systems for the classification of ANA patterns with an overall error rate of 2.7-5.8%. The framework extracted features including the first, second, and fourth moments of the gray-level co-occurrence matrix, Zernike moments, and the coefficients of the discrete cosine transform (DCT) and discrete wavelet transform (DWT). Based on the efforts of previous research, Rigon et al. proposed a comprehensive system based on two approaches: the first discriminated positive cells from negative and weakly positive cells based on fluorescence intensity features, whereas the second recognized the staining pattern of the positive cells. The performance of positive/negative recognition ranged from 87% to more than 94%, whereas the staining pattern classification accuracy for the main classes, i.e., homogeneous cells, peripheral nuclear cells, speckled cells, nucleolar cells, and artefacts, ranged from 71% to 74%.
Elbischger et al. developed an iterative thresholding algorithm for processing HEp-2 cells and a cell classifier for detecting autoimmune diseases. Features including the area-to-perimeter ratio, variance, 30th and 60th normalized percentiles, percentile range, dent number, auto-covariance percentage, and roundness were extracted from the segmented cells and used for cell classification. The system was reported to be capable of distinguishing 5 different patterns with an overall accuracy of 93% on a dataset consisting of 982 ROIs extracted from 38 images.
Recently, Huang et al. employed the self-organizing map (SOM) to identify the fluorescence patterns of HEp-2 cells. Fourteen features, including the perimeter, area, and histogram uniformity of the cell; area and average intensity of the inside and perimeter areas of the cell; higher and lower intensity ratios of the inside area, perimeter area, and whole area of the cell; and standard deviation of the inside area of the cell, were used to design a classifier with an average accuracy of 92.4%. In a later study, the EUROPattern system, designed based on the k-nearest neighbor algorithm, was compared with conventional visual IIF evaluation, achieving a sensitivity of 100% and a specificity of 97.5%. In addition, it was shown that 94.0% of all the main antibody patterns, including the positive patterns, i.e., homogeneous, speckled, nucleolar, centromere, nuclear dotted, and cytoplasmic patterns, as well as the negative patterns, could be correctly recognized.
Segmentation of ANA cells
- 1) First, the Otsu thresholding algorithm is used to roughly separate the foreground regions from the background.
- 2) A morphological closing operation is employed to fill holes and to eliminate small regions in the foreground.
- 3) If the number of foreground regions in an image is larger than the threshold th_num, or its foreground regions contain staining noise with variance higher than the threshold th_fg_var, the image is segmented using "Cell detection 1"; otherwise, "Cell detection 2" is adopted. In this study, the thresholds th_num and th_fg_var are set to 200 and 1000, respectively.
- 4) For images segmented with "Cell detection 2", the staining noise in the background regions is removed according to its noise level, as defined in the following equation:
where i indicates the gray level of the image, and p(i) denotes the frequency of gray level i in the image. The threshold of noise level, th_noise, is set to 10.
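The rough Otsu separation and the background noise-level measure described above can be sketched in pure Python. Note that the noise-level equation itself is not reproduced in this version of the text, so reading it as the mean gray level, sum of i·p(i), is an assumption, not the authors' definition:

```python
def otsu_threshold(pixels):
    """Return the gray level maximizing between-class variance (Otsu)."""
    hist = [0] * 256
    for p in pixels:
        hist[p] += 1
    total = len(pixels)
    sum_all = sum(i * hist[i] for i in range(256))
    sum_bg, w_bg, best_t, best_var = 0.0, 0, 0, -1.0
    for t in range(256):
        w_bg += hist[t]            # background weight up to level t
        if w_bg == 0:
            continue
        w_fg = total - w_bg        # foreground weight
        if w_fg == 0:
            break
        sum_bg += t * hist[t]
        mean_bg = sum_bg / w_bg
        mean_fg = (sum_all - sum_bg) / w_fg
        var_between = w_bg * w_fg * (mean_bg - mean_fg) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t


def noise_level(pixels):
    """Hedged proxy for the paper's (missing) noise-level equation:
    the mean gray level sum(i * p(i)), with p(i) the frequency of level i."""
    hist = [0] * 256
    for p in pixels:
        hist[p] += 1
    n = len(pixels)
    return sum(i * hist[i] for i in range(256)) / n
```

A background region would then be discarded when `noise_level` falls below the threshold th_noise = 10, under the assumed reading above.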
Cell detection 1
As demonstrated in Figure 4(c), the original image is smoothed by the morphological opening operation using a disk-shaped structuring element with a radius of 15.
The initial markers are superimposed on the thresholded image shown in Figure 4(e), followed by applying the same morphological opening operation mentioned in the second step to obtain the marker image, Figure 4(f), used for marker-controlled watershed segmentation. The flowchart of marker extraction is depicted in Figure 5.
Furthermore, a Gaussian derivative filter with σ = 2 and h-minima suppression with threshold th_h1 = 0.12 are applied to the smoothed image before conducting watershed segmentation, yielding the foreground watershed segmentation image (f-ws). Figure 6(b) presents the "f-ws" image superimposed on the original image.
Similar to the foreground watershed segmentation, the smoothed image is first filtered by Gaussian differentiation and its minima are suppressed by the h-minima transform; the result is then combined with the marker image and the "b-ws" image to obtain the foreground marker-controlled watershed (fmc-ws) image used for marker-controlled watershed segmentation. Figure 6(c) shows the "fmc-ws" image.
These three types of watershed images are further used for cell segmentation. As demonstrated in Figure 6, the "b-ws" image is effective in splitting cells that are close to each other. The blobs in "fmc-ws" are mostly over-segmented with rough contours, and thus fail to delineate the cell contours effectively. On the other hand, the "f-ws" image is unable to detect some of the cell regions. Consequently, in the cell extraction stage of "Cell detection 1", the three types of watershed images and the marker image are combined to precisely extract cell boundaries.
The cell regions are first extracted from the "fmc-ws" image according to the perimeters of the connected regions, since it can potentially detect more cell regions than "f-ws". Regions whose areas are larger than the threshold th_area are examined by the "ellipse test" and considered cells after passing the test.
For regions with areas smaller than the threshold th_area, a morphological closing operation is conducted to merge smaller regions. The merged regions are then examined by the ellipse test for cell extraction. As demonstrated in Figure 9, the small inner regions of the remains are merged into larger regions.
Regions from the remains of "fmc-ws" at Step 3 and from "f-ws" that are not deemed ellipses, but have areas larger than th_area and contain markers at the corresponding locations, are treated as candidate cells. Because the blobs of "f-ws" resemble real cells more closely than those of "fmc-ws", "f-ws" is used for cell extraction before "fmc-ws" here.
Most of the cells in "f-ws" and "fmc-ws" should have been extracted in the previous four steps, but some regions may remain undetected because their markers are large enough to cover the edges of the regions. Figure 10(a) demonstrates the cells detected at Steps 1-4. However, as shown in Figure 10(b), watershed segmentation may fail to detect cells whose corresponding markers are large enough to cover the whole candidate cell. Hence, if a marker in the marker image is larger than the threshold th_area2, watershed segmentation (with an h-minima transform parameter th_h2) is performed on the corresponding region of the smoothed image. Here, only the corresponding region of "b-ws", as shown in Figure 10(c), is considered for extracting the cells.
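The "ellipse test" used in the steps above is not defined explicitly in this text. A common and plausible formulation, sketched below as an assumption rather than the authors' exact criterion, compares a region's pixel count with the area of the ellipse having the same second-order moments; a region passes when the two areas nearly agree:

```python
import math


def ellipse_test(points, tol=0.15):
    """Hedged sketch: pass a region if its pixel count is close to the area
    of the equivalent ellipse derived from its second-order central moments.
    `points` is a list of (x, y) pixel coordinates; `tol` is hypothetical."""
    n = len(points)
    cx = sum(x for x, _ in points) / n
    cy = sum(y for _, y in points) / n
    # central second-order moments (covariance of the pixel coordinates)
    mxx = sum((x - cx) ** 2 for x, _ in points) / n
    myy = sum((y - cy) ** 2 for _, y in points) / n
    mxy = sum((x - cx) * (y - cy) for x, y in points) / n
    det = mxx * myy - mxy ** 2
    if det <= 0:       # degenerate (e.g. collinear) region
        return False
    # the ellipse with the same moments has area 4*pi*sqrt(det)
    ellipse_area = 4.0 * math.pi * math.sqrt(det)
    return abs(n / ellipse_area - 1.0) <= tol
```

For a filled disk the ratio is close to 1, so it passes; an elongated cross-shaped blob has a much larger equivalent-ellipse area than pixel count, so it fails.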
Cell detection 2
Remove the background regions of the "f-ws" image.
Extract cell regions from "f-ws".
Because of the characteristics of watershed segmentation, adjacent regions form connected regions. The regions not extracted in step 2 may be falsely connected regions, which can be split using the information embedded in the "b-ws" image. As illustrated in Figure 13(a), a sub-region connecting two watershed regions is eliminated if a line in the "b-ws" image crosses it, resulting in the separation of two cell blobs, as shown in Figure 13(b). Subsequently, watershed segmentation (with a designated h-minima transform parameter, th_h1) is further performed on the individual cell regions appearing in "f-ws", Figure 13(c). The sub-regions in the refined cell regions are then merged and examined by the "ellipse test".
For connected cell regions that cannot be split at step 3, all possible combinations of sub-regions are tested to find combinations that are similar to ellipses. Once the best combination has been obtained, the cell regions can be well separated from the background. Figure 14 illustrates the procedure for splitting a region containing three candidate cell regions. A connected region r_i consisting of N_i sub-regions can be expressed as:
Because the foreground of the dataset images may contain inhomogeneous gray levels, some regions cannot be detected because they are darker than other regions, even though they can be discriminated by the human eye. To detect these regions, global Otsu thresholding is performed again on the remaining image after cell extraction. Detected regions with areas greater than th_area are considered cells.
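The combinatorial splitting of connected regions described above can be sketched as a greedy search: repeatedly pick the subset of remaining sub-regions whose union scores most ellipse-like, then remove it. Both the function name and the `ellipse_score` callback are hypothetical stand-ins for the paper's procedure; the exhaustive enumeration is exponential in the number of sub-regions, so it is only practical for the small counts shown in Figure 14:

```python
from itertools import combinations


def best_ellipse_grouping(subregions, ellipse_score):
    """Greedy sketch of the combination search. `subregions` is a list of
    point lists; `ellipse_score(points)` returns a value in [0, 1], with
    higher meaning more ellipse-like (both names are assumptions)."""
    remaining = list(range(len(subregions)))
    groups = []
    while remaining:
        best, best_score = None, -1.0
        # try every non-empty subset of the remaining sub-regions
        for k in range(1, len(remaining) + 1):
            for combo in combinations(remaining, k):
                pts = [p for i in combo for p in subregions[i]]
                s = ellipse_score(pts)
                if s > best_score:
                    best, best_score = combo, s
        groups.append(best)
        remaining = [i for i in remaining if i not in best]
    return groups
```

With a scorer that rewards joining two half-cell fragments, the first pass merges them into one candidate cell and leaves the rest to later passes.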
Parameters for cell segmentation
Parameters designated for different stages of cell segmentation.
Cell Detection 1
Cell Detection 2
Comparisons of cell segmentation performance
Cell classification of ANA images
Owing to astigmatism, the texture details of cells not located in the central field may be lost due to optical aberration. Hence, only the cells located in the central field, accounting for 50% of the area from the image center, are used for cell classification. A total of 3830 cells extracted from 196 images were classified into 6 different patterns, i.e., diffused (599), peripheral (529), nucleolar (94), coarse-speckled (1956), fine-speckled (56), and discrete-speckled (596), by an experienced physician, Dr. Hsieh. The classified cell patterns are adopted as the ground truth to verify the classification performance of the proposed method.
Features for cell classification
To find suitable features for representing the patterns of ANA images, both conventional and state-of-the-art features are investigated. The conventional features used to describe the patterns include intensity and texture statistics of the blobs. The blob intensity statistics include mean, variance, skewness, and entropy. Tamura features, including coarseness, contrast, and directionality, as well as Haralick features, including contrast, correlation, energy, and homogeneity, obtained from the gray-level co-occurrence matrix (GLCM) at 0, 45, 90, and 135 degrees, are also used to characterize the blobs. Furthermore, frequently used state-of-the-art features, such as the fuzzy texture spectrum (FTS) [27, 28] and the local binary pattern (LBP) [29–31], are adopted for cell classification in this study.
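Of the state-of-the-art features above, the basic LBP operator is simple enough to sketch in a few lines of pure Python (per pixel, 8 neighbors at distance one; the rotation-invariant and multi-scale variants cited in [29–31] build on this, and the neighbor ordering below is one of several conventions):

```python
def lbp_code(img, r, c):
    """Basic 8-neighbor, distance-1 local binary pattern code for pixel
    (r, c) of a 2-D gray-level image given as nested lists."""
    center = img[r][c]
    # clockwise starting from the top-left neighbor (a convention choice)
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    code = 0
    for bit, (dr, dc) in enumerate(offsets):
        # each neighbor >= center contributes one bit to the 8-bit code
        if img[r + dr][c + dc] >= center:
            code |= 1 << bit
    return code
```

A histogram of these codes over a blob is the LBP feature vector; a flat patch yields code 255 (all neighbors equal the center), while a bright isolated peak yields 0.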
The intensity difference between the perimeter and the central area of each blob is also extracted, where P_avg denotes the average intensity of pixels located on the perimeter of a blob, and C_avg indicates the average intensity of the central area with a size of 7×7 pixels.
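Since the defining equation is not reproduced here, the sketch below assumes the feature is the plain difference P_avg - C_avg (an assumption), and takes the perimeter to be the blob pixels having at least one 4-neighbor outside the blob:

```python
def perimeter_center_diff(img, blob, center):
    """Hedged sketch of the perimeter-vs-center feature, assumed to be
    P_avg - C_avg. `img` is a 2-D gray-level image (nested lists), `blob`
    a list of (row, col) pixels, `center` the blob center (row, col)."""
    blob_set = set(blob)
    # perimeter pixels: blob pixels with a 4-neighbor outside the blob
    perim = [p for p in blob
             if any((p[0] + dr, p[1] + dc) not in blob_set
                    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)))]
    p_avg = sum(img[r][c] for r, c in perim) / len(perim)
    cr, cc = center
    # 7x7 central window, per the feature description
    cpix = [img[r][c] for r in range(cr - 3, cr + 4)
            for c in range(cc - 3, cc + 4)]
    c_avg = sum(cpix) / 49.0
    return p_avg - c_avg
```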
By observing the images in Figure 1, it can be found that different cell patterns contain a variety of regions with different sizes and patterns. For example, although the nucleolar and discrete-speckled patterns both contain light regions, the number of light regions in cells with the discrete-speckled pattern is greater than in those with the nucleolar pattern. In contrast, some dark regions can be observed in the coarse-speckled and fine-speckled patterns. These are important and useful characteristics for reducing false cases when discriminating cells with different patterns. A total of 6 features derived from statistics of the light and dark regions inside the blobs, including the numbers of dark and light regions as well as the mean and variance of the intensity of the dark and light regions, are obtained for cell discrimination.
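Counting light or dark regions inside a blob amounts to connected-component labeling over thresholded pixels. A minimal sketch, where the light/dark thresholds passed via `predicate` are assumptions rather than values from the paper:

```python
from collections import deque


def count_regions(img, predicate):
    """Count 4-connected components of pixels satisfying `predicate`
    (e.g. lambda v: v > light_thr for light regions; the threshold is a
    hypothetical parameter). `img` is a 2-D gray-level nested list."""
    rows, cols = len(img), len(img[0])
    seen = [[False] * cols for _ in range(rows)]
    n = 0
    for r in range(rows):
        for c in range(cols):
            if seen[r][c] or not predicate(img[r][c]):
                continue
            n += 1                      # new component found
            q = deque([(r, c)])
            seen[r][c] = True
            while q:                    # BFS flood fill of the component
                y, x = q.popleft()
                for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < rows and 0 <= nx < cols \
                            and not seen[ny][nx] and predicate(img[ny][nx]):
                        seen[ny][nx] = True
                        q.append((ny, nx))
    return n
```

The per-region mean and variance features follow by accumulating pixel values during the same flood fill.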
Categories of features used for cell classification.
Intensity statistics of blobs: mean, variance, and skewness
Coarseness, contrast, and directionality
Contrast, correlation, energy, and homogeneity in co-occurrence matrix with degrees 0, 45, 90, and 135.
Fuzzy texture spectrum based on the relative intensity levels among pixels
Statistics of light regions: No. of regions as well as mean and variance of intensity
Statistics of dark regions: No. of regions as well as mean and variance of intensity
Calculated based on 8 neighbors at a distance of one
Calculated based on 16 neighbors at a distance of one
Calculated based on 24 neighbors at a distance of one
Intensity statistics of blobs
Intensity difference between perimeter and central area
Design and validation of cell classifier
Support vector machine (SVM) is a supervised learning method widely used for classification of data patterns [32, 33]. A special property of SVM is that it can simultaneously minimize the empirical classification error and maximize the geometric margin of a classifier. It is a powerful methodology for solving problems in nonlinear classification, function estimation, and density estimation, leading to many applications .
In this study, the SVM classifier was implemented with the LIBSVM tool, which supports multi-class classification. The radial basis function (RBF) was selected as the kernel because of its ability to map samples into a higher-dimensional space, allowing it to handle cases in which the relation between class labels and attributes is nonlinear. The optimal combination of the penalty parameter C and the RBF kernel parameter γ was determined by searching the range from 2^-10 to 2^+10 in 21 steps for each parameter, resulting in a total of 441 combinations.
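The 21-step exponential grid described above can be written down directly; the function name is ours, but the 21 × 21 = 441 combination count follows from the stated range:

```python
def rbf_param_grid():
    """Build the (C, gamma) search grid: powers of two from 2^-10 to 2^10,
    21 values per parameter, hence 441 combinations in total."""
    steps = [2.0 ** e for e in range(-10, 11)]      # 21 values
    return [(c, g) for c in steps for g in steps]   # 441 pairs
```

Each pair would then be scored by cross-validation accuracy (e.g. via LIBSVM's grid-search workflow) and the best-scoring pair retained.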
Two experiments were conducted to verify the classification performance of the SVM classifier: cross validation (CV) and independent training and testing (ITT). For the CV experiment, 5-fold cross-validation was conducted to obtain the optimal parameters C and γ in the training phase. For ITT, on the other hand, the image dataset was randomly divided into a training set and a testing set, each containing 50% of the randomly selected images. Again, in the training phase, 5-fold cross-validation was used to obtain the optimal combination of parameters C and γ based on the training set. The ITT experiment was repeated 10 times.
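The 5-fold partitioning used in both experiments can be sketched as follows (the seed and function name are illustrative; the ITT 50/50 image split works the same way with two groups instead of five):

```python
import random


def five_fold_indices(n, seed=0):
    """Shuffle sample indices 0..n-1 and deal them into five nearly
    equal folds, as in standard 5-fold cross-validation."""
    idx = list(range(n))
    random.Random(seed).shuffle(idx)   # fixed seed for reproducibility
    return [idx[k::5] for k in range(5)]
```

In each round, one fold serves as the validation set and the other four as the training set; the (C, γ) pair with the best average validation accuracy is kept.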
Classification accuracies (%) of different cell patterns
Cytology evaluation has been shown to be a safe, efficient, and well-established technique for the diagnosis of many diseases. Its ability to reduce the mortality and morbidity of cervical cancer through mass screening is its most famous success. Classical cytological diagnosis is based on microscopic observation of specialized cells and qualitative assessment with descriptive criteria, which may produce inconsistent results because of subjective variability among different observers. Recently, automatic or semi-automatic computerized systems developed for segmenting and analyzing stained cervical cells in Pap smear images have been demonstrated to be effective and efficient in assisting pathologists in the diagnosis of abnormal cells [34, 41–43] and in the discrimination of different types of cells [34, 44, 45] through accurate and objective measurements of cell texture and morphology.
Tracing cell migration, the cell cycle, and cell differentiation in fluorescent microscopic images through automatic segmentation, classification, and tracking of living and cultured cells has also been widely conducted [46–48]. However, an automated image analysis system developed to fit a specific cell type, assay, or image set is hardly applicable to different cells acquired from different modalities. Hence, techniques used for segmenting cells in visible-light microscopic images may not be directly applicable to extracting cells from fluorescent microscopic images, whereas techniques used for extracting cells of a living cell population from fluorescent microscopic images may not be effective for processing IIF images.
Comparison of error rate between this study and previous investigations.
No. of cells/images
Perner et al. 
Sack et al. 
Elbischger et al. 
Soda & Iannello 
Rigon et al. 
Huang et al. 
Voigt et al.
Comparisons of number of segmented cells and classification performance between proposed method and CellProfiler evaluated based on PVO, PVD, RAE, and MHD.
Detected Cell Number
RAE ± Std.
MHD ± Std.
0.6386 ± 0.42
0.5248 ± 0.36
250.90 ± 228.42
510.69 ± 250.56
0.3039 ± 0.35
0.5120 ± 0.31
87.15 ± 182.86
602.36 ± 267.78
0.4777 ± 0.41
0.6445 ± 0.33
157.99 ± 208.70
560.73 ± 225.77
0.3506 ± 0.36
0.7038 ± 0.21
101.90 ± 199.74
556.02 ± 261.51
0.5423 ± 0.41
0.8637 ± 0.13
173.20 ± 13.16
458.46 ± 21.41
0.5457 ± 0.37
0.7277 ± 0.18
143.99 ± 202.59
504.55 ± 242.68
Conclusion and future work
In this study, a segmentation method was proposed to automatically detect the boundaries of HEp-2 cells, and classification of the cell patterns was then performed based on the selected features. The results show that the proposed method can detect cells correctly in most images, with a PVO greater than 89% and a PVD less than 22%, whereas the best combination of selected features achieves an average accuracy as high as 96.90% in discriminating 6 different types of cell patterns.
More cell images will be included in the dataset to verify the segmentation and classification performance in the future. Furthermore, an automatic segmentation and classification system with a graphical user interface (GUI) will be developed for computer-aided diagnosis. In fact, several different ANA patterns can appear in a single image, but the segmentation method proposed here only considers images with a unique cell pattern. Future work will focus on developing a segmentation method to extract cells with different patterns appearing in one image.
Funding for this article came from the National Science Council of Taiwan under grant NSC100-2410-H-166-007-MY3.
This article has been published as part of BioMedical Engineering OnLine Volume 12 Supplement 1, 2013: Selected articles from the 35th Annual International Conference of the IEEE Engineering in Medicine and Biology Society: Workshop on Current Challenging Image Analysis and Information Processing in Life Sciences. The full contents of the supplement are available online at http://www.biomedical-engineering-online.com/supplement/12/S1
- Miller JF: Self-nonself discrimination and tolerance in T and B lymphocytes. Immunol Res 1993, 12(2):115–130. 10.1007/BF02918299
- Piazza A, Manoni F, Ghirardello A, Bassetti D, Villalta D, Pradella M, Rizzotti P: Variability between methods to determine ANA, anti-dsDNA and anti-ENA autoantibodies: A collaborative study with the biomedical industry. J Immunol Methods 1998, 219:99–107. 10.1016/S0022-1759(98)00140-9
- Sack U, Knoechner S, Warschkau H, Pigla U, Emmerich MKF: Computer-assisted classification of HEp-2 immunofluorescence patterns in autoimmune diagnostics. Autoimmunity Reviews 2003, 2:298–304. 10.1016/S1568-9972(03)00067-3
- Rigon A, Soda P, Zennaro D, Iannello G, Afeltra A: Indirect immunofluorescence (IIF) in autoimmune diseases: Assessment of digital images for diagnostic purpose. Cytometry Part B: Clinical Cytometry 2007, 72B:472–477. 10.1002/cyto.b.20356
- Soda P, Iannello G: Aggregation of classifiers for staining pattern recognition in antinuclear autoantibodies analysis. IEEE T Inf Technol Biomed 2009, 13:322–329.
- Perner P, Perner H, Müller B: Mining knowledge for HEp-2 cell image classification. J Artificial Intelligence in Medicine 2002, 26:161–173. 10.1016/S0933-3657(02)00057-X
- Huang YL, Chung CW, Hsieh TY, Jao YL: Adaptive automatic segmentation of HEp-2 cells in indirect immunofluorescence images. IEEE International Conference on Sensor Networks, Ubiquitous, and Trustworthy Computing 2008, 418–422.
- Perner P: Image analysis and classification of HEp-2 cells in fluorescent images. Proceedings of the 14th International Conference on Pattern Recognition 1998, 2:1677.
- Soda P, Iannello G: A multi-expert system to classify fluorescent intensity in antinuclear autoantibodies testing. The 19th IEEE International Symposium on Computer-Based Medical Systems 2006, 219–224.
- Soda P, Iannello G: A hybrid multi-expert system for HEp-2 staining pattern classification. The 14th International Conference on Image Analysis and Processing 2007, 685–69.
- Otsu N: A threshold selection method from gray-level histograms. IEEE Transactions on Systems Man and Cybernetics 1979, 9:62–66.
- Vincent L, Soille P: Watersheds in digital spaces: An efficient algorithm based on immersion simulations. IEEE Transactions on Pattern Analysis and Machine Intelligence 1991, 13(6):583–598. 10.1109/34.87344
- Lotufo R, Falcao A: The ordered queue and the optimality of the watershed approaches. In Mathematical Morphology and its Application to Image and Signal Processing. Edited by: Goutsias J, Vincent L, Bloomberg D. Dordrecht: Kluwer Academic Publishers; 2000:341–345.
- Creemers C, Guerti K, Geerts S, Cotthem KV, Ledda A, Spruyt V: HEp-2 cell pattern segmentation for the support of autoimmune disease diagnosis. Proceedings of the 4th International Symposium on Applied Sciences in Biomedical and Communication Technologies 2011, 28:1–5.
- Soda P, Iannello G: Aggregation of classifiers for staining pattern recognition in antinuclear autoantibodies analysis. IEEE Transactions on Information Technology in Biomedicine 2009, 13:322–329.
- Rigon A, Buzzulini F, Soda P, Onofri L, Arcarese L, Iannello G, Afeltra A: Novel opportunities in automated classification of antinuclear antibodies on HEp-2 cells. Autoimmunity Reviews 2011, 10(10):647–652. 10.1016/j.autrev.2011.04.022
- Elbischger P, Geerts S, Sander K, Ziervogel-Lukas G, Sinah P: Algorithmic framework for HEp-2 fluorescence pattern classification to aid auto-immune disease diagnosis. IEEE International Symposium on Biomedical Imaging: From Nano to Macro 2009, 562–565.
- Huang YC, Hsieh TY, Chang CY, Cheng WT, Lin YC, Huang YL: HEp-2 cell images classification based on textural and statistic features using self-organizing map. Lecture Notes in Computer Science 2012, 7197:529–538. 10.1007/978-3-642-28490-8_55
- Voigt J, Krause C, Rohwäder E, Saschenbrecker S, Hahn M, Danckwardt M, Feirer C, Ens K, Fechner K, Barth E, Martinetz T, Stöcker W: Automated indirect immunofluorescence evaluation of antinuclear autoantibodies on HEp-2 cells. Clinical and Developmental Immunology 2012, 2012:1–7. (Article ID 651058)
- Center for Disease Control: Quality assurance for the indirect immunofluorescence test for autoantibodies to nuclear antigen (IF-ANA): approved guideline. NCCLS I/LA2-A 1996, 16(11).
- Solomon DH, Kavanaugh AJ, Schur PH: Evidence-based guidelines for the use of immunologic tests: Antinuclear antibody testing. Arthritis Care Res 2002, 47:434–444. 10.1002/art.10561
- Soille P: Morphological image analysis: Principles and applications. New York: Springer-Verlag; 2003.
- Collins D, Dai W, Peters T, Evens A: Automatic 3D model-based neuroanatomical segmentation. Human Brain Mapping 1995, 3(3):190–205. 10.1002/hbm.460030304
- Fischl B, Salat DH, Busa E, Albert M, Dieterich M, Haselgrove C, van der Kouwe A, Killiany R, Kennedy D, Klaveness S, Montillo A, Makris N, Rosen B, Dale AM: Whole brain segmentation: automated labeling of neuroanatomical structures in the human brain. Neuron 2002, 33:341–355. 10.1016/S0896-6273(02)00569-X
- Bae MH, Pan R, Wu T, Badea A: Automated segmentation of mouse brain images using extended MRF. Neuroimage 2009, 46(3):717–725. 10.1016/j.neuroimage.2009.02.012
- Brown TT, Kuperman JM, Erhart M, White NS, Roddey JC, Shankaranarayanan A, Han ET, Rettmann D, Dale AM: Prospective motion correction of high-resolution magnetic resonance imaging data in children. Neuroimage 2010, 53(1):139–145. 10.1016/j.neuroimage.2010.06.017
- Taur JS, Tao CW: Texture classification using a fuzzy texture spectrum and neural networks. Journal of Electronic Imaging 1998, 7(1):29–35. 10.1117/1.482623
- Taur JS, Lee GH, Tao CW, Chen CC, Yang CW: Segmentation of psoriasis vulgaris images using multiresolution-based orthogonal subspace techniques. IEEE Trans System Man Cybern, Part-B 2006, 36(2):390–402.
- Ojala T, Pietikainen M, Maenpaa T: Multiresolution gray-scale and rotation invariant texture classification with local binary patterns. IEEE Transactions on Pattern Analysis and Machine Intelligence 2002, 24(7):971–987. doi:10.1109/TPAMI.2002.1017623
- Ahonen T, Hadid A, Pietikainen M: Face description with local binary patterns: application to face recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence 2006, 28(12):2037–2041.
- Zhao G, Pietikainen M: Dynamic texture recognition using local binary patterns with an application to facial expressions. IEEE Transactions on Pattern Analysis and Machine Intelligence 2007, 29(6):915–928.
- Vapnik VN: The Nature of Statistical Learning Theory. New York: Springer-Verlag; 1995.
- Chang CC, Lin CJ: Training ν-support vector classifiers: theory and algorithms. Neural Computation 2001, 13:2119–2147. doi:10.1162/089976601750399335
- Chen YF, Huang PC, Lin KC, Lin HH, Wang LE, Cheng CC, Chen TP, Chan YK, Chiang JY: Semi-automatic segmentation and classification of Pap smear cells. IEEE J Biomed Health Inform 2013, in press.
- Chang CC, Lin CJ: LIBSVM: a library for support vector machines. 2001. Software available at http://www.csie.ntu.edu.tw/~cjlin/libsvm
- Hsu CW, Chang CC, Lin CJ: A practical guide to support vector classification. 2003.
- Gutierrez-Osuna R: Introduction to Pattern Analysis. http://research.cs.tamu.edu/prism/lectures/pr/pr_l11.pdf Retrieved November 9, 2012.
- Zhao YM, Yang ZX: Improving MSVM-RFE for multiclass gene selection. The Fourth International Conference on Computational Systems Biology 2010, 43–50.
- Peng H, Long F, Ding C: Feature selection based on mutual information: criteria of max-dependency, max-relevance, and min-redundancy. IEEE Transactions on Pattern Analysis and Machine Intelligence 2005, 27:1226–1237.
- DeMay RM: Common problems in Papanicolaou smear interpretation. Archives of Pathology & Laboratory Medicine 1997, 121(3):229–238.
- Plissiti ME, Nikou C, Charchanti A: Automated detection of cell nuclei in Pap smear images using morphological reconstruction and clustering. IEEE Trans Inf Technol Biomed 2011, 15(2):233–241.
- Sulaiman SN, Isa NAM, Othman NH: Semi-automated pseudo colour features extraction technique for cervical cancer's Pap smear images. Int J Knowledge-Based Intell Eng Syst 2011, 15:131–143.
- Bergmeir C, García-Silvente M, Benítez JM: Segmentation of cervical cell nuclei in high-resolution microscopic images: a new algorithm and a web-based software framework. Comput Methods Prog Biomed 2012, 107(3):497–512. doi:10.1016/j.cmpb.2011.09.017
- Sokouti B, Haghipour S, Tabrizi AD: A framework for diagnosing cervical cancer disease based on feedforward MLP neural network and ThinPrep histopathological cell image features. Neural Comput Appl 2012.
- Gençtav A, Aksoy S, Önder S: Unsupervised segmentation and classification of cervical cell images. Pattern Recognition 2012, 45:4151–4168. doi:10.1016/j.patcog.2012.05.006
- Chen X, Zhou X, Wong ST: Automated segmentation, classification, and tracking of cancer cell nuclei in time-lapse microscopy. IEEE Trans Biomed Eng 2006, 53(4):762–766. doi:10.1109/TBME.2006.870201
- Du TH, Puah WC, Wasser M: Cell cycle phase classification in 3D in vivo microscopy of Drosophila embryogenesis. BMC Bioinformatics 2011, 12(S13):1–9.
- Jones TR, Carpenter AE, Lamprecht MR, Moffat J, Silver SJ, Grenier JK, Castoreno AB, Eggert US, Root DE, Golland P, Sabatini DM: Scoring diverse cellular morphologies in image-based screens with iterative feedback and machine learning. Proc Natl Acad Sci 2009, 106(6):1826–1831. doi:10.1073/pnas.0808843106
- Carpenter AE, Jones TR, Lamprecht MR, Clarke C, Kang IH, Friman O, Guertin DA, Chang JH, Lindquist RA, Moffat J, Golland P, Sabatini DM: CellProfiler: image analysis software for identifying and quantifying cell phenotypes. Genome Biology 2006, 7(10):R100. doi:10.1186/gb-2006-7-10-r100
- Sahoo PK, Soltani S, Wong AK, Chan YC: A survey of thresholding techniques. Computer Vision, Graphics, and Image Processing 1988, 41(2):233–260. doi:10.1016/0734-189X(88)90022-9
- Sezgin M, Sankur B: Survey over image thresholding techniques and quantitative performance evaluation. Journal of Electronic Imaging 2004, 13(1):146–165. doi:10.1117/1.1631315
This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated.