
Clustering gene expression data using a diffraction‐inspired framework

Abstract

Background

Recent developments in microarray technology have allowed for the simultaneous measurement of gene expression levels. The large amount of captured data challenges conventional statistical tools for analysing and finding inherent correlations between genes and samples. The unsupervised clustering approach is often used, and has resulted in a wide variety of algorithms. Typical clustering algorithms require certain parameters to operate, for instance the number of expected clusters, as well as a similarity measure to quantify the distance between data points. The diffraction‐based clustering algorithm, however, is designed to overcome this need for user‐defined parameters, as it automatically searches the data for any underlying structure.

Methods

The diffraction‐based clustering algorithm presented in this paper is tested on five well‐known expression datasets pertaining to cancerous tissue samples. The clustering results are compared to those obtained from conventional algorithms: k‐means, fuzzy c‐means, the self‐organising map, hierarchical clustering, the Gaussian mixture model and density‐based spatial clustering of applications with noise (DBSCAN). The performance of each algorithm is measured using an average external criterion and an average validity index.

Results

The diffraction‐based clustering algorithm is shown to be independent of the number of clusters as the algorithm searches the feature space and requires no form of parameter selection. The results show that the diffraction‐based clustering algorithm performs significantly better on the real biological datasets compared to the other existing algorithms.

Conclusion

The results of the diffraction‐based clustering algorithm presented in this paper suggest that the method can provide researchers with a new tool for successfully analysing microarray data.

Background

The rapid development of microarray, or DNA chip, technology has left the field of data analysis lagging behind the measurement technology [1]. The simultaneous monitoring of a large number of gene expressions produces noisy, high‐dimensional data and unpredictable results for which no definitive analysis has yet been developed. The main problem is the number of measured variables relative to the number of samples, a problem statisticians refer to as “p bigger than N”, for p features and N samples [1]. A consequence is that applying classical statistical tools can yield spurious results, as noted by Tibshirani in the review by [1].

Microarray technology is becoming a valuable diagnostic tool and has the potential to replace some conventional diagnostic modalities, which can be expensive in both resources and time owing to the expertise required for an accurate diagnosis [2]. The initial step is therefore to find patterns in the genome, assuming they exist, and to build classifiers from which more accurate and faster diagnoses can be achieved. The problem with supervised techniques is that they are vulnerable to sources of bias, such as the choice of which portion of the data forms the training or validation set.

The unsupervised field of clustering is the process of finding structure or groups in data, such that points within a cluster are more similar to each other than to points in different clusters. Clustering has been shown to be a potential solution for discovering and analysing information in gene expression measurements [3]. The similarity between points and clusters is often ambiguous, which has led to a wide spectrum of algorithms [4, 5]. The two main categories of clustering algorithms are hierarchical and partitional. In hierarchical clustering the data is grouped using a divisive or agglomerative procedure [6], whereas in partitional clustering the data points are separated into k groups [7].

In the context of microarray experiments, cluster analysis can be used to cluster genes, samples or both, the last a process known as bi‐clustering. Gene‐based clustering can uncover genes that are co‐regulated and share similar function. Sample‐based clustering is used to discover novel subtypes of cancer, an example being the five distinct subtypes found in the breast cancer study of [8]. It is generally accepted that the same algorithm can be applied to cluster samples and genes; bi‐clustering, however, often requires further analysis [4].

The performance of clustering algorithms generally degrades as the dimensionality of the feature space increases [9]. A common solution is to reduce the dimensionality of the data before cluster analysis using an appropriate mapping technique. The most recognised and widely used technique is principal component analysis (PCA), which has nonetheless been shown to perform inadequately for clustering gene expression data [10]. Non‐linear reduction techniques, such as isometric mapping (ISOMAP), have also been tested on expression data, with the surprising result that they outperform linear techniques like PCA [11].

Methodology

The diffraction‐based clustering algorithm and the resulting hierarchical agglomerative scheme are derived. The partitioning of the clusters and the corresponding lifetimes are used to determine the correct number of clusters. The data points are then assigned to each cluster using an exponential metric that includes both magnitude and direction.

Derivation of clustering algorithm

The properties of diffraction closely resemble the properties of clustering when examined in detail. The dispersion and overlapping of two light sources can be viewed as a similarity measure, and the data points can be treated as point light sources that diffract and interact with one another to form a cluster. The idea can be generalised for data points in the real d‐dimensional space $\mathbb{R}^d$ by first examining the Fresnel‐Kirchhoff diffraction equation [12]:

$$U(\mu,\nu) = \iint a(x,y)\, e^{i(\mu x + \nu y)}\, dx\, dy,$$
(1)

where $a(x,y)$ is the aperture function. The equation states an important fact about diffraction, namely that the diffraction pattern $U(\mu,\nu)$ is the Fourier transform of the aperture function. The diffraction pattern can be altered using a spatial filter $G(\mu,\nu)$ to obtain a new aperture function, given by

$$a'(x,y) = \int_{-\infty}^{+\infty}\!\int_{-\infty}^{+\infty} G(\mu,\nu)\, U(\mu,\nu)\, e^{i(\mu x + \nu y)}\, d\mu\, d\nu.$$
(2)

The next step is to define the aperture function for a given dataset $\{p_1, p_2, \dots, p_n\}$. The aperture function $a(x)$ for $x \in \mathbb{R}^d$ is defined as

$$a(x) = \sum_{i=1}^{n} \delta(x - p_i).$$
(3)

The aperture function is therefore a collection of impulses, one located at each datum $p_i$. The initial clusters are defined at each data point. The aperture function is then filtered, using the properties of Fourier transforms and applying a spatial filter $G(\xi)$, as shown by

$$Y(\xi) = A(\xi)\, G(\xi)$$
(4)
$$= \sum_{i=1}^{n} e^{2\pi i\, \xi \cdot p_i}\, G(\xi).$$
(5)

Using the inverse Fourier transform property for a shifted function, the filtered aperture function is obtained:

$$y(x) = \sum_{i=1}^{n} g(x - p_i).$$
(6)

The result is a set of filter functions $g(x)$, each centred at a separate data point $p_i$. The choice of filter is somewhat arbitrary; a Gaussian function, which satisfies the constraints set by [13], is used:

$$G(\xi) = e^{-\sigma \|\xi\|_2^2}.$$
(7)

The inverse Fourier transform of the Gaussian filter function is itself a Gaussian, as shown by

$$g(x,\sigma) = \frac{1}{(4\pi\sigma)^{d/2}}\, e^{-\frac{\|x\|_2^2}{4\sigma}},$$
(8)

with the width of the Gaussian determined by the free parameter σ. The final result is an aperture function specified by

$$y(x,\sigma) = \frac{1}{(4\pi\sigma)^{d/2}} \sum_{i=1}^{n} e^{-\frac{\|x - p_i\|_2^2}{4\sigma}}.$$
(9)

The spectral width of the Gaussian filter decreases as σ increases, resulting in the removal of higher frequencies in the measured data. The result is a family of aperture functions defining a hierarchical agglomerative scheme in which the data points merge as σ increases. Information pertaining to the aperture function, such as the slope, can then be used to determine which data points belong to a cluster.
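The aperture function (9) is straightforward to evaluate numerically. The following minimal NumPy sketch (the function name and array shapes are illustrative, not from the paper) computes y(x, σ) at a query point for a dataset stored as an n×d array:

```python
import numpy as np

def aperture(x, points, sigma):
    """Filtered aperture function y(x, sigma) of equation (9): a sum of
    Gaussian kernels, one per datum p_i, whose width grows with sigma."""
    d = points.shape[1]
    sq = np.sum((x - points) ** 2, axis=1)  # ||x - p_i||^2 for every i
    return np.exp(-sq / (4.0 * sigma)).sum() / (4.0 * np.pi * sigma) ** (d / 2.0)
```

Increasing σ widens every kernel, so neighbouring Gaussians overlap and their maxima merge, which is exactly the agglomerative behaviour described above.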

The derivative of the aperture function is used to locate the centres of the clusters which, under the condition for local maxima, satisfy

$$\frac{\partial y}{\partial x} = \frac{1}{2\sigma (4\pi\sigma)^{d/2}} \sum_{i=1}^{n} (p_i - x)\, e^{-\frac{\|x - p_i\|_2^2}{4\sigma}} = 0.$$
(10)

The centre points that satisfy (10) can be found using data points together with the dynamic gradient system equation

$$\frac{dx}{dt} = \nabla_x y(x,\sigma) = \frac{1}{2\sigma (4\pi\sigma)^{d/2}} \sum_{i=1}^{n} (p_i - x)\, e^{-\frac{\|x - p_i\|_2^2}{4\sigma}}.$$
(11)

This equation can then be approximated using the Euler difference method

$$x[n+1] = x[n] + h\, \nabla_x y(x[n], \sigma),$$
(12)

which is suitable for software implementation and for solving for the maxima of the aperture function. The solutions to (12) provide the cluster centres for the dataset. An iterate is considered to have reached a cluster centre when it satisfies $\|x[n+1] - x[n]\| < \varepsilon$ for an arbitrarily chosen small number ε. Cluster centres that satisfy $\|\bar{x}_1 - \bar{x}_2\| < \varepsilon$ are considered to have merged into a single cluster centre.
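The centre‐finding procedure can be sketched as follows: each datum is driven uphill by the Euler scheme (12) until the update falls below ε, and converged iterates that lie within a merge tolerance of one another are pooled into a single centre. The step size h and both tolerances below are illustrative choices, not values from the paper.

```python
import numpy as np

def grad_aperture(x, points, sigma):
    """Gradient of the aperture function, equation (11)."""
    d = points.shape[1]
    diff = points - x                                  # p_i - x for every i
    w = np.exp(-np.sum(diff ** 2, axis=1) / (4.0 * sigma))
    return diff.T @ w / (2.0 * sigma * (4.0 * np.pi * sigma) ** (d / 2.0))

def find_centres(points, sigma, h=0.1, eps=1e-5, merge_tol=1e-3, max_iter=500):
    """Euler scheme (12): iterate every datum to a local maximum of
    y(x, sigma), then merge maxima that lie within merge_tol."""
    centres = []
    for x in points.astype(float):
        for _ in range(max_iter):
            step = h * grad_aperture(x, points, sigma)
            x = x + step
            if np.linalg.norm(step) < eps:             # converged to a maximum
                break
        if not any(np.linalg.norm(x - c) < merge_tol for c in centres):
            centres.append(x)                          # a new cluster centre
    return np.asarray(centres)
```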

Cluster lifetime

The problem of determining the correct number of clusters is resolved by measuring the cluster lifetime, defined as follows: the σ‐lifetime of a cluster is the range of σ values over which the cluster remains the same and does not merge, i.e. the difference in σ between the point of cluster formation and the point of cluster merging.

An empirical study by [14] found that for uniformly distributed data the lifetime curve decays exponentially as $\Pi(\sigma) = \Pi(0)\, e^{-\beta\sigma}$, where Π(σ) is the number of clusters found using (10). The parameter β depends on the dimensionality of the problem and is usually unknown [14]. A logarithmic scale can be used to eliminate the unknown parameter and obtain a linear function of σ [14].

The cluster lifetime does not decay exponentially (or linearly, depending on the scale used) when there is structure in the data. Information about the data structure is then read from the lifetime plot at the corresponding σ value, which is used in the final cluster solution. The σ value at the beginning of the longest lifetime interval is often used [14]. The major steps in selecting the valid cluster structure in a dataset are summarised in the following four steps (a code sketch of the σ sweep follows the list):

  1. Observe the lifetime plot of Π(σ); if it is constant over a wide range of σ values then structure exists in the dataset, otherwise the dataset is uniformly distributed.

  2. If the data has an inherent structure then the lifetime can determine the correct number of clusters and the corresponding clustering.

  3. The validity of a cluster can be determined by its lifetime and other defined validity indices.

  4. If required, the measure of outlierness can be used to detect any spurious points in the dataset.
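The σ sweep referred to above can be sketched as follows, reusing find_centres from the earlier sketch; the log‐spaced grid in the usage comment is an arbitrary illustrative choice. The longest run of a constant cluster count is the longest lifetime, and the σ at the start of that run is selected.

```python
import numpy as np

def lifetime_curve(points, sigmas, **kwargs):
    """Cluster count Pi(sigma) over a grid of sigma values."""
    return np.array([len(find_centres(points, s, **kwargs)) for s in sigmas])

def select_sigma(sigmas, counts):
    """Return the sigma at the start of the longest constant run of the
    cluster count, together with that count."""
    best_start, best_len, start = 0, 1, 0
    for i in range(1, len(counts) + 1):
        if i == len(counts) or counts[i] != counts[start]:
            if i - start > best_len:
                best_start, best_len = start, i - start
            start = i
    return sigmas[best_start], counts[best_start]

# Example usage (the grid is an arbitrary choice for illustration):
# sigmas = np.logspace(-4, -1, 60)
# counts = lifetime_curve(data, sigmas)
# sigma_star, n_clusters = select_sigma(sigmas, counts)
```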

Classification

The assignment of the iterated data points $x_i$ to each partition Π(σ), for a chosen σ, is achieved using a metric similar to that of [13]:

$$P(x_i \mid C_k) = \frac{1}{2}\left(1 + \left\langle \frac{\nabla_x y(x_i)}{\|\nabla_x y(x_i)\|},\ \frac{d_{i,k}}{\|d_{i,k}\|} \right\rangle\right) e^{-\frac{\|d_{i,k}\|_2^2}{4\sigma}},$$
(13)

where $d_{i,k} = \bar{x}_k - x_i$.

Here, the exponential metric is used to classify each iterated data point $x_i$ to cluster $C_k$, as it has been shown by [15] to outperform the commonly used Euclidean metric. The exponential metric is maximised when a data point is closest to a cluster centre and lies in the same direction, the latter measured with the inner product $\langle \cdot, \cdot \rangle$.
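A direct transcription of (13), assuming grad_aperture from the earlier sketch, assigns each iterated point to the centre that maximises the product of the alignment term and the exponential distance term:

```python
import numpy as np

def classify(points, centres, sigma):
    """Assign each point x_i to the cluster C_k that maximises the
    exponential metric of equation (13)."""
    labels = np.empty(len(points), dtype=int)
    for i, x in enumerate(points):
        g = grad_aperture(x, points, sigma)
        g = g / (np.linalg.norm(g) + 1e-12)            # unit gradient direction
        scores = []
        for c in centres:
            d_ik = c - x                               # d_{i,k} = xbar_k - x_i
            r = np.linalg.norm(d_ik)
            align = 0.5 * (1.0 + g @ (d_ik / (r + 1e-12)))
            scores.append(align * np.exp(-r ** 2 / (4.0 * sigma)))
        labels[i] = int(np.argmax(scores))
    return labels
```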

Results

Algorithm properties and illustration

The properties of the diffraction‐based clustering algorithm are illustrated using the [16] dataset. The data was normalised using the range as the scale measure and the minimum as the location measure [17]. Figure 1 shows the a priori classification of the samples, with the circle‐markers indicating the acute lymphoblastic leukaemia (ALL) subtypes and the cross‐markers indicating the acute myeloid leukaemia (AML) subtypes. The first two dimensions of the dataset are labelled using X and Y as seen in Figure 1.

Figure 1

Scatter plot of the a priori classification for the Golub dataset. The two distinct classes of acute lymphoblastic leukaemia (ALL) and acute myeloid leukaemia (AML) are shown.

The diffraction‐based clustering algorithm was applied to the filtered two‐dimensional Golub dataset. The lifetime curve in Figure 2 shows that the inherent number of clusters matches the expected number. The value of σ was selected as the minimum of the range over which the cluster number remains constant for the longest time, in this case σ = 9.2×10−3.

Figure 2

Cluster lifetime plot for the two dimensional Golub dataset. The lifetime plot illustrates how long a certain cluster size exists under evolution of the σ parameter.

The evolution of the aperture function is shown in Figure 3 for increasing σ values. The data points are initially point sources that slowly diffuse and merge to form larger clusters, as depicted by the intensity change in Figure 3. The final result is that all the data points form a single cluster, a theme common to hierarchical clustering.

Figure 3

Sigma evolution of the aperture function on the Golub dataset. The analysis of the Golub dataset was performed using a final selected σ = 9.2×10−3.

The cluster centres shown in Figure 4 are tracked as σ increases. The centres of the clusters move and merge to form new cluster centres, eventually leading to a single cluster centre. The route of the centre points is determined by the gradient and maxima of the aperture function which evolves as the parameter σ changes.

Figure 4

Sigma evolution on the Golub dataset. The two dimensions of the Golub dataset are split into two separate graphs. The illustrations show that as σ increases the cluster centres merge and eventually become a single cluster.

The final aperture function in Figure 5, with σ determined from the cluster lifetime plot, is used to classify the points into clusters. The aperture function has two peaks, one relatively higher than the other as a result of the higher density of ALL data points. The number of clusters in the dataset therefore corresponds to the number of aperture peaks, in this case two.

Figure 5

Aperture function of the Golub dataset at σ = 9.2×10−3 . The shape of the aperture function is used to classify the data points into two clusters.

Dimensionality reduction

Clustering algorithms often perform poorly in a high‐dimensional space because the relative contrast between data points, $x \in \mathbb{R}^d$, diminishes, as expressed in the following equation:

$$\lim_{d \to \infty} \frac{\max \|x\|_p - \min \|x\|_p}{\min \|x\|_p} \to 0,$$
(14)

where $\|x\|_p$ is the $L_p$ norm [9]. The equation shows that the relative contrast between data points degrades and loses meaning in a high‐dimensional space. The high‐dimensional feature space therefore needs to be reduced using an appropriate reduction technique.

Principal component analysis (PCA), as well as singular value decomposition (SVD), are both commonly used linear methods for reducing the dimensionality of the feature space. The linear methods construct a lower dimensional space using a linear function of the original higher dimensional space. A recent article written by Shi and Luo explored the use of a non‐linear dimensionality technique on cancer tissue samples called isometric mapping (ISOMAP) [11]. ISOMAP replaces the usual Euclidean distance with a geodesic distance, which has the ability to capture and characterise the global geometric structure of the data [11]. The ISOMAP technique is also able to deal with non‐linear relationships between data points as it is based on manifold theory [11].

The ISOMAP algorithm is applied to the cancer datasets to reduce their dimensionality. The dimension selected for each dataset follows Tenenbaum et al., who show that the intrinsic dimensionality of a dataset can be estimated from the point of inflection, or “elbow” point, of the generated ISOMAP residual variance curve [18]. It should also be noted that Shi and Luo used a wide range of dimensions for the k‐means and hierarchical clustering algorithms on the same cancer datasets used in this paper [11]. Their results show that the performance of the clustering algorithms for most of the cancer datasets remains relatively constant as the dimension changes [11].
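This dimension‐selection step can be reproduced with scikit‐learn's Isomap; the residual variance of Tenenbaum et al. is 1 − R² between the geodesic distances and the pairwise Euclidean distances in the embedding. The neighbourhood size below is an assumption, as the paper does not report one.

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform
from sklearn.manifold import Isomap

def residual_variances(X, dims, n_neighbors=10):
    """Residual variance 1 - R^2 for each candidate embedding dimension;
    the 'elbow' of this curve estimates the intrinsic dimensionality."""
    out = []
    for d in dims:
        iso = Isomap(n_neighbors=n_neighbors, n_components=d)
        Y = iso.fit_transform(X)
        geo = squareform(iso.dist_matrix_, checks=False)   # geodesic distances
        emb = pdist(Y)                                     # embedding distances
        r = np.corrcoef(geo, emb)[0, 1]
        out.append(1.0 - r ** 2)
    return out
```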

Leukaemia dataset

The Golub et al. dataset is a well‐known and established dataset for testing classifiers and class‐discovery algorithms. The dataset comprises acute lymphoblastic leukaemia (ALL) samples and acute myeloid leukaemia (AML) samples. Patient samples are currently classified using techniques such as histochemistry, immunophenotyping and cytogenetic analysis [16]. The Golub dataset was classified using the conventional techniques of nuclear morphology, enzyme‐based histochemical analysis and antibody attachment to either a lymphoid or myeloid cell.

The dataset is divided into two subsets, one for training and one for testing the classifier. The initial training set contains 38 samples, of which 27 are ALL and 11 are AML. The independent testing set contains 34 samples, of which 20 are ALL and 14 are AML. The RNA was prepared from bone marrow mononuclear cells, with the samples hybridised to a high‐density oligonucleotide Affymetrix array containing 6 817 probes. The expression profile for each sample was then recorded and quantified using quality control standards [16].

The Golub data was first filtered using the call markers to find genes present in more than 1% of the samples. The dimensionality of the data was then reduced to a suitable dimension using the ISOMAP algorithm. The residual variance plot of the dataset, shown in Figure 6, reveals that the appropriate dimension is two, as this is where the curve begins to approximate linear decay.

Figure 6

Residual variance of the ISOMAP algorithm for the Golub dataset. The residual variance begins to decay approximately linearly at dimension 2, indicating that 2 is the correct dimension to use for the dataset.

The performance of the clustering algorithms is evaluated using three main measures: the average external criterion (AEC), the average validity index (AVI) and the accuracy (ACC). The average external criterion is the average of the three main external criteria [17], defined as

$$\text{Average External Criterion} = \frac{J + R + F}{3},$$
(15)

where

J = Jaccard coefficient,
R = Rand score,
F = Fowlkes‐Mallows index.

The average validity index is the average of four commonly used indices, analysed by [19], and is given by

$$\text{Average Validity Index} = \frac{1}{4}\left(\frac{1}{I_B} + I_D + I_C + I\right),$$
(16)

where

$I_B$ = Davies‐Bouldin index,
$I_D$ = Dunn's index,
$I_C$ = Calinski‐Harabasz index,
$I$ = I index.

It is noted in (16) that the Davies‐Bouldin index is inverted, since the index is minimised when there is a suitable clustering result. The average validity index should therefore be maximised for a good clustering result. It should be noted that other validation techniques exist which have been tested on numerous gene datasets [20, 21].

The accuracy of the clustering results is determined using a simple misclassification ratio:

$$\text{Accuracy} = \frac{N_s - N_m}{N_s} \times 100\%,$$
(17)

where $N_s$ is the total number of samples and $N_m$ is the total number of misclassified samples produced by the algorithm.
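The external criteria in (15) and the accuracy in (17) can be computed from pair counts; a sketch using scikit‐learn's pair confusion matrix (available in recent versions) follows. The validity indices in (16) would additionally need the Davies‐Bouldin and Calinski‐Harabasz scores from scikit‐learn plus hand‐written Dunn and I indices, which are omitted here.

```python
import numpy as np
from sklearn.metrics.cluster import pair_confusion_matrix

def average_external_criterion(labels_true, labels_pred):
    """AEC of equation (15): mean of the pair-counting Jaccard, Rand and
    Fowlkes-Mallows indices."""
    (tn, fp), (fn, tp) = pair_confusion_matrix(labels_true, labels_pred)
    J = tp / (tp + fp + fn)                            # Jaccard coefficient
    R = (tp + tn) / (tp + tn + fp + fn)                # Rand score
    F = tp / np.sqrt(float(tp + fp) * float(tp + fn))  # Fowlkes-Mallows index
    return (J + R + F) / 3.0

def accuracy(n_samples, n_misclassified):
    """Misclassification ratio of equation (17), as a percentage."""
    return (n_samples - n_misclassified) / n_samples * 100.0
```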

The results of the diffraction‐based clustering algorithm were compared to the other main clustering schemes, as shown in Table 1. The k‐means algorithm was run with the number of expected clusters set to 2, as was the fuzzy c‐means algorithm. The fuzzy c‐means exponent was set to 2, the most commonly used value, although the choice of this exponent has been critiqued [22]. The hierarchical clustering algorithm used the standard Euclidean distance with single linkage and centroid linkage as the merging measures. The topology of the self‐organising map was 2×1, so that two cluster centroids could be found [16]. The Gaussian mixture model and DBSCAN were likewise configured to produce two clusters.
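For reference, the scikit‐learn baselines can be configured roughly as described; fuzzy c‐means and the SOM are not in scikit‐learn and would come from third‐party packages such as scikit‐fuzzy and MiniSom, while centroid linkage is available in scipy.cluster.hierarchy rather than scikit‐learn. The DBSCAN density parameters below are placeholders, since DBSCAN takes no cluster count and must be tuned until two clusters emerge.

```python
import numpy as np
from sklearn.cluster import KMeans, DBSCAN, AgglomerativeClustering
from sklearn.mixture import GaussianMixture

X = np.random.default_rng(0).random((72, 2))   # stand-in for the reduced Golub data

predictions = {
    "k-means": KMeans(n_clusters=2, n_init=10).fit_predict(X),
    "hierarchical": AgglomerativeClustering(n_clusters=2,
                                            linkage="single").fit_predict(X),
    "GMM": GaussianMixture(n_components=2).fit_predict(X),
    "DBSCAN": DBSCAN(eps=0.1, min_samples=5).fit_predict(X),  # tune eps
}
```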

Table 1 Comparison of the clustering results for the Golub dataset

The results in Table 1 demonstrate that the diffraction‐based algorithm outperforms the other algorithms in terms of accuracy and validity. An accuracy of 94.4% for diffraction‐based clustering means that only 4 samples were misclassified, whereas fuzzy c‐means and k‐means each misclassified 8 samples, double the number of the diffraction‐based clustering algorithm.

The SOM and hierarchical clustering algorithms both perform relatively poorly compared to the other algorithms, perhaps because an unsuitable number of neurons and topology was used for the SOM, or because the cut‐off level for the hierarchical dendrogram was not optimal. The main problem with these algorithms is the choice of parameters and the need to determine the cluster number a priori. The diffraction‐based clustering algorithm bypasses these problems by finding the lifetime of the cluster numbers, which gives the optimal parameter choice for clustering the selected dataset.

MILEs dataset

The Microarray Innovations in LEukaemia (MILE) study is a collection of analyses from an international standardisation programme conducted in 11 countries [23]. The subtypes of acute lymphoblastic leukaemia (ALL) analysed follow those of the study by Li et al. and include: t(4;11) MLL‐rearrangement, t(9;22) BCR‐ABL, T‐ALL, t(12;21) TEL‐AML1, t(1;19) E2A‐PBX1 and Hyperdiploid > 50 [24]. The lymphoblastic leukaemias result from the failed differentiation of haematopoietic cells, specifically the lymphoid stem cells [23]. The samples were distributed as evenly as possible across the subtypes, resulting in 276 samples with a total of 54 675 genes.

The dimensionality of the MILEs dataset was reduced to three using the ISOMAP algorithm and the information provided by the residual variance curve shown in Figure 7. The figure reveals that the inherent dimensionality is three as the curve begins to approximate linear decay at that point.

Figure 7

Residual variance of the ISOMAP algorithm for the MILEs dataset. The residual variance begins to decay approximately linearly at dimension 3, indicating that 3 is the correct dimension to use for the dataset.

The data was background corrected and normalised using the robust multiarray average (RMA) technique. The data was then normalised again using the range, so that the gene expression values lay in [0,1]. The a priori classification of the dataset in Figure 8 shows each of the six subtypes designated by its own specific marker.

Figure 8

Scatter plot of the a priori classification for the MILEs dataset. The six relatively distinct classes of the acute lymphoblastic leukaemias are shown. t(4;11) MLL‐rearrangement, t(9;22) BCR‐ABL, T‐ALL, t(12;21) TEL‐AML1, t(1;19) E2A‐PBX1 and Hyperdiploid > 50.

The lifetime curve for the MILEs dataset is shown in Figure 9. The curve shows that the cluster number stays constant at six over a large range of σ. The plot does not extend further because the cluster number stays fixed at six for more than an order of magnitude in σ, which is therefore assumed to be the correct cluster number. The chosen value of σ is the minimum value at which the cluster number becomes constant, in this case 17.3×10−3.

Figure 9

Cluster lifetime plot for the three dimensional MILEs dataset. The lifetime plot illustrates how long a certain cluster size exists under evolution of the σ parameter, and in this specific instance the cluster lifetime persists at 6 for a large range of σ values. The persistence of the line therefore indicates the correct cluster number is 6.

The results of the diffraction‐based clustering algorithm were compared to the other clustering algorithms, as shown in Table 2. The number of expected clusters in the k‐means algorithm was set to 6, as it was for the fuzzy c‐means algorithm, whose exponent was set to 2. The hierarchical clustering algorithm used the Euclidean distance with single linkage and centroid linkage as the merging metrics at a maximum cluster level of six. The topology of the self‐organising map was set to 6×1, so that six cluster centroids could be found. The Gaussian mixture model and DBSCAN were likewise configured to produce six clusters.

Table 2 Comparison of the clustering results for the MILEs dataset

The results in Table 2 show that the diffraction‐based clustering algorithm outperforms the other algorithms in both validity and accuracy. The fuzzy c‐means algorithm is the closest to the diffraction‐based clustering algorithm in terms of accuracy, with a value of 71.4%. The accuracies are generally low compared to the Golub dataset, attributable to the large number of different subtypes and the high dimensionality of the feature space [9].

Other datasets

The cluster analysis of three other cancerous datasets was performed. The σ parameter for the diffraction clustering algorithm was determined using the cluster lifetime curve. The parameters of the other clustering algorithms were adjusted to the correct number of clusters for each dataset.

Khan dataset

The Khan dataset was obtained from a study performed on classifying small, round blue‐cell tumours (SRBCT) [25]. The tumours belong to four distinct diagnostic categories which present challenges for clinical diagnostics [25]. The four classes are Neuroblastoma (NB), Rhabdomyosarcoma (RMS), Burkitt lymphomas (BL) and the Ewing family of tumours (EWS). The correct class to which the tumour belongs is important since treatment options, responses to therapy and prognoses vary significantly depending on the diagnosis [25].

The Khan gene‐expression data was obtained from cDNA microarrays, each containing 6 567 genes, with a sample size of 83. The data was normalised to a range of [0,1] using the minimum as the location measure and the range as the scale measure. The dimensionality of the Khan dataset was originally reduced to ten using PCA to allow for well‐calibrated artificial neural network models [25]. Here, however, the dataset was reduced to three dimensions, its intrinsic dimensionality, using the ISOMAP algorithm. The predetermined classes of the SRBCT tumours are shown in Figure 10. The sigma value was chosen to be 0.0099, obtained from the longest cluster lifetime.

Figure 10

Scatter plot of the a priori classification for the Khan dataset. The four classes of the small, round blue‐cell tumours are shown. Neuroblastoma (NB), Rhabdomyosarcoma (RMS), Burkitt lymphomas (BL) and Ewing family of tumours (EWS).

The diffractive clustering algorithm was compared to the other clustering algorithms on the Khan dataset, as shown in Table 3. The results show that the diffractive clustering algorithm is able to accurately separate the data into four distinct classes. The average external index of the diffractive clustering algorithm is also relatively higher, indicating a suitable clustering solution.

Table 3 Comparison of the clustering results for the Khan dataset

The diffractive clustering algorithm correctly classified 58 of the 83 samples, whereas the fuzzy c‐means algorithm correctly classified only 54 of the 83 samples, probably because the fuzzy c‐means algorithm found a local rather than a global minimum of its cost function.

Shipp dataset

The Shipp dataset comes from a study of diffuse large B‐cell lymphoma (DLBCL), the most common lymphoid malignancy in adults, which is curable in less than 50% of cases [26]. The experiment performed by Shipp et al. identified tumours within a single B‐cell lineage, specifically distinguishing DLBCL from the related germinal‐centre B‐cell follicular lymphoma (FL) [26]. The clinical distinction between the two types of lymphoma is usually difficult, as FLs acquire the morphology and clinical characteristics of DLBCLs over time [26]. The microarray transcription study of the lymphomas, covering 6 817 genes, was performed on 77 patients, of whom 58 were diagnosed with DLBCL and the other 19 with FL.

The dimensionality of the Shipp dataset was reduced to two dimensions using the ISOMAP algorithm. The a priori classification of the 77 samples in two dimensions is shown in Figure 11. The similarity of the tumour lineage between DLBCLs and FLs is evident from the degree of mixing of the data points, as shown in Figure 11. The sigma value was chosen to be 0.0073, obtained from the longest cluster lifetime.

Figure 11

Scatter plot of the a priori classification for the Shipp dataset. The two single‐lineage classes, diffuse large B‐cell lymphoma (DLBCL) and germinal‐centre B‐cell follicular lymphoma (FL), are shown.

The diffractive clustering algorithm, applied to the Shipp dataset, was compared to the other main types of clustering algorithms. The clustering results are shown in Table 4, with the most accurate and valid results belonging to the diffractive clustering algorithm, which correctly classifies 49 of the 77 samples; the SOM and hierarchical clustering algorithms correctly classify only 41 of the 77 samples. The accuracy of the diffractive algorithm, although roughly 10% higher than the rest, is still relatively low.

Table 4 Comparison of the clustering results for the Shipp dataset

Pomeroy dataset

The Pomeroy dataset is from a study of embryonal tumours of the central nervous system (CNS) [27]. Medulloblastomas, highly malignant brain tumours that originate in the cerebellum or posterior fossa, are most common in paediatric patients, and little is known about their response to treatment and their pathogenesis [27]. The study performed by Pomeroy et al. analysed the transcription levels of 99 patients to identify expression differences between medulloblastomas (MED), primitive neuroectodermal tumours (PNETs), atypical teratoid/rhabdoid tumours (AT/RTs), malignant gliomas (MAL) and normal tissue [27].

The Pomeroy study analysed samples from 99 patients on oligonucleotide microarrays containing 6 817 genes. The data was split into three datasets of varying sample size. The dataset known as A2 is used in this clustering analysis and contains 90 samples, of which 60 are MED, 10 are MAL, 10 are AT/RTs, 6 are PNETs and 4 are normal [27]. The dimensionality of the Pomeroy dataset was reduced to two using the ISOMAP algorithm. The a priori classification of the samples using clinical methods is shown in Figure 12. The sigma value was chosen to be 0.00281, obtained from the longest cluster lifetime.

Figure 12

Scatter plot of the a priori classification for the Pomeroy dataset. The five embryonal tumours of the central nervous system are shown. Medulloblastomas (MED), Primitive neuroectodermal tumours (PNETs), Atypical teratoid/rhabdoid tumours (AT/RTs), Malignant gliomas (MAL) and Normal tissue.

A cluster analysis was performed on the Pomeroy dataset using the diffractive clustering algorithm and compared to the other main types of clustering algorithms. The results are shown in Table 5, with the most accurate and valid results belonging to the diffractive clustering algorithm, which correctly classifies 61 of the 90 samples; the hierarchical clustering algorithm classifies only 51 of the 90 samples correctly.

Table 5 Comparison of the clustering results for the Pomeroy dataset

The validity of the clustering solution produced by the diffraction algorithm is also remarkably high compared to the other algorithms. The main reason the accuracy of the diffraction algorithm is low, although still high relative to the other clustering results, is the unbalanced distribution of samples caused by the large number of medulloblastoma samples in the dataset.

Discussion

The k‐means and fuzzy c‐means algorithms minimise intra‐cluster variance, and as a result the global minimum is not always found. The other algorithms, such as hierarchical clustering, the self‐organising map, the Gaussian mixture model and DBSCAN, require the initialisation of parameters and layouts, which can often lead to poor results.

Another difficulty associated with conventional clustering algorithms is deciding on the correct number of clusters. There are, however, techniques such as the gap statistic that allow one to estimate the correct number of clusters [28]. The problem with the gap statistic is the choice of the reference distribution: if chosen to be uniform, it produces ambiguous results for elongated clusters [28]. Other, more robust techniques, such as robustness analysis and cluster ensembles, can be combined with the algorithms to determine the correct number of clusters [29, 30].

Resampling‐based methods can also be used in conjunction with conventional clustering algorithms to improve their performance [31]. By contrast, the algorithm presented in this paper inherently determines the correct cluster number using the cluster lifetime. The merit of the diffractive clustering algorithm is therefore due first to its ability to handle non‐spherical or arbitrarily shaped clusters, and second to the optimisation of its single parameter, σ, using the cluster lifetime.

Conclusion

The recent development of microarray technology has given rise to a large amount of data on the gene expression of cells. A solution to the problem of discovery and analysis in gene expression data is the application of unsupervised techniques such as cluster analysis. The clustering of samples allows one to find the inherent structure in the genome without filtering or representing the data with only a few selected genes.

The validation and performance of clustering results, although poorly defined, can be addressed successfully with relative and external criteria. The number of clustering algorithms is large, and so the choice of the correct or preferred algorithm remains ambiguous. The expedient approach is usually to choose the fastest algorithm, such as k‐means. The classical algorithms, although fast, lack insight into the clustering process and rely on a predetermined, usually biased, number of clusters. The diffractive clustering algorithm is independent of the number of clusters, as it searches the feature space and requires no other form of feedback. The results also show that the diffractive clustering algorithm outperforms the other classical clustering algorithms in terms of accuracy.

References

  1. Cobb K: Microarrays: The Search for Meaning in a Vast Sea of Data. Tech. rep., Biomedical Computation Review 2006.

  2. Statnikov A, et al.: A comprehensive evaluation of multicategory classification methods for microarray gene expression cancer diagnosis. Bioinformatics 2005, 21(5):631–634. 10.1093/bioinformatics/bti033

  3. Domany E: Cluster Analysis of Gene Expression Data. J Stat Phys 2003, 110(3–6):1117–1139.

  4. Jiang D, et al.: Cluster Analysis for Gene Expression Data: A Survey. IEEE Trans Knowledge Data Eng 2004, 16(11):1370–1386. 10.1109/TKDE.2004.68

  5. Frey BJ, Dueck D: Clustering by Passing Messages Between Data Points. Science 2007, 315:972–976. 10.1126/science.1136800

  6. Drăghici S: Data Analysis Tools for DNA Microarrays. London: Chapman & Hall/CRC Mathematical Biology and Medicine Series; 2003.

  7. Belacel N, et al.: Clustering Methods for Microarray Gene Expression Data. J Integrative Biol 2006, 10(4):507–531.

  8. Sorlie T, et al.: Gene Expression Patterns of Breast Carcinomas Distinguish Tumor Subclasses with Clinical Implications. Proc National Acad Sci 2001, 98(19):10869–10874. 10.1073/pnas.191367098

  9. Aggarwal CC, et al.: On the Surprising Behaviour of Distance Metrics in High Dimensional Space. Lecture notes, Institute of Computer Science, University of Halle 2001.

  10. Yeung KY, Ruzzo WL: Principal Component Analysis for Clustering Gene Expression Data. Bioinformatics 2001, 17(9):763–774. 10.1093/bioinformatics/17.9.763

  11. Shi J, Luo Z: Nonlinear dimensionality reduction of gene expression data for visualisation and clustering analysis of cancer tissue samples. Comput Biol Med 2010, 40:723–732. 10.1016/j.compbiomed.2010.06.007

  12. Fowles GR: Introduction to Modern Optics. New York: Dover Publications; 1975.

  13. Roberts SJ: Parametric and Non‐parametric Unsupervised Cluster Analysis. Technical Report 2, 1997.

  14. Leung Y, et al.: Clustering by Scale‐Space Filtering. IEEE Trans Pattern Anal Machine Intelligence 2000, 22(12):1396–1410. 10.1109/34.895974

  15. Yao Z, et al.: Quantum Clustering Algorithm based on Exponent Measuring Distance. International Symposium on Knowledge Acquisition and Modeling Workshop (KAM Workshop 2008), IEEE Catalog Number 10560087, IEEE 2008, 436–439. [ISBN 978‐1‐4244‐3530‐2]

  16. Golub TR, et al.: Molecular Classification of Cancer: Class Discovery and Class Prediction by Gene Expression Monitoring. Science 1999, 286:531–537. 10.1126/science.286.5439.531

  17. Gan G, et al.: Data Clustering: Theory, Algorithms, and Applications. Philadelphia: SIAM, Society for Industrial and Applied Mathematics; 2007.

  18. Tenenbaum JB, de Silva V, Langford JC: A global geometric framework for nonlinear dimensionality reduction. Science 2000, 290:2319–2323. 10.1126/science.290.5500.2319

  19. Maulik U, Bandyopadhyay S: Performance Evaluation of Some Clustering Algorithms and Validity Indices. IEEE Trans Pattern Anal Machine Intelligence 2002, 24(12):1650–1654. 10.1109/TPAMI.2002.1114856

  20. Datta S, Datta S: Comparisons and validation of statistical clustering techniques for microarray gene expression data. Bioinformatics 2003, 19(4):459–466. 10.1093/bioinformatics/btg025

  21. Handl J, Knowles J, Kell DB: Computational cluster validation in post‐genomic data analysis. Bioinformatics 2005, 21(15):3201–3212. 10.1093/bioinformatics/bti517

  22. Dembele D, Kastner P: Fuzzy c‐means method for clustering microarray data. Bioinformatics 2003, 19(8):973–980. 10.1093/bioinformatics/btg119

  23. Mills K, et al.: Microarray‐based classifiers and prognosis models identify subgroups with distinct clinical outcomes and high risk of AML transformation of myelodysplastic syndrome. Blood 2009, 114(5):1063–1072. 10.1182/blood-2008-10-187203

  24. Li Z, et al.: Gene Expression‐Based Classification and Regulatory Networks of Pediatric Acute Lymphoblastic Leukemia. Blood 2009, 114(20):4486–4493. 10.1182/blood-2009-04-218123

  25. Khan J, Wei JS, Ringnér M, et al.: Classification and Diagnostic Prediction of Cancers using Gene Expression Profiling and Artificial Neural Networks. Nat Medicine 2001, 7(6):673–679. 10.1038/89044

  26. Shipp MA, Ross KN, Tamayo P, et al.: Diffuse Large B‐cell Lymphoma Outcome Prediction by Gene Expression Profiling and Supervised Machine Learning. Nat Medicine 2002, 8:68–74. 10.1038/nm0102-68

  27. Pomeroy SL, Tamayo P, Gaasenbeek M, et al.: Prediction of Central Nervous System Embryonal Tumour Outcome Based on Gene Expression. Nature 2002, 415:436–442. 10.1038/415436a

  28. Tibshirani R, Walther G, Hastie T: Estimating the number of clusters in a data set via the gap statistic. J R Stat Soc Ser B 2001, 63(2):411–423. 10.1111/1467-9868.00293

  29. Gallegos MT, Ritter G: A robust method for cluster analysis. Ann Stat 2005, 33:347–380. 10.1214/009053604000000940

  30. Strehl A, Ghosh J: Cluster ensembles: a knowledge reuse framework for combining multiple partitions. J Machine Learning Res 2002, 3:583–617.

  31. Monti S, Tamayo P, Mesirov J, Golub T: Consensus clustering: a resampling‐based method for class discovery and visualization of gene expression microarray data. Machine Learning 2003, 52:91–118. 10.1023/A:1023949509487


Acknowledgements

This work is funded by Fogarty International Center grants for HIV‐related malignancies (D43TW000010‐21S1) and TB (5U2RTW007370‐05). The opinions expressed herein do not necessarily reflect those of the funders.

Author information


Corresponding author

Correspondence to Steven C Dinger.

Additional information

Competing interests

The authors declare that they have no competing interests.

Authors’ contributions

SD proposed the diffraction approach, developed the algorithm, performed its implementation and carried out the research. MV helped supervise the project and provided consultation on related pattern analysis and classification matters. SC helped with the medical aspects of the project and also with obtaining the microarray data. DR formulated the general project, supervised the research, coordinated the biomedical and engineering aspects, and contributed to editing the manuscript. All authors read and approved the final manuscript.


Rights and permissions

This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


About this article

Cite this article

Dinger, S.C., Van Wyk, M.A., Carmona, S. et al. Clustering gene expression data using a diffraction‐inspired framework. BioMed Eng OnLine 11, 85 (2012). https://doi.org/10.1186/1475-925X-11-85
