Heartbeat classification algorithms have been studied extensively over the past several decades. However, because biosignals vary considerably among individuals, many algorithms have also been investigated to achieve robust performance. Various methods have been proposed to reduce the differences arising from personal characteristics while amplifying those caused by arrhythmia.
In this paper, an arrhythmia classification algorithm using a dedicated wavelet adapted to each individual subject is proposed. We reduced the performance variation by using dedicated wavelets fitted to the ECG morphologies of the subjects. The proposed algorithm utilizes morphological filtering and a continuous wavelet transform with a dedicated wavelet. A principal component analysis and linear discriminant analysis were utilized to compress the morphological data transformed by the dedicated wavelets. An extreme learning machine was used as the classifier in the proposed algorithm.
A performance evaluation was conducted with the MIT-BIH arrhythmia database. The results showed a high sensitivity of 97.51%, specificity of 85.07%, accuracy of 97.94%, and a positive predictive value of 97.26%.
The proposed algorithm achieves better accuracy than other state-of-the-art algorithms with no intrasubject overlap between the training and evaluation datasets. It also significantly reduces the amount of intervention needed by physicians.
As the healthcare system becomes ubiquitous, the necessity of automatic diagnosis algorithms increases. In particular, research on automatic arrhythmia classification algorithms is especially active, as arrhythmia is diagnosed by reading long-term data. High-performance arrhythmia classification algorithms based on electrocardiography (ECG) [1–3] have been proposed in many areas over the last several decades. However, the results of these studies have not been applied widely in practice. This situation has arisen due to the differences among the biosignals of different individuals. In particular, ECG readings from different people show significant differences in their waveforms, which can even be used for biometric applications. There is no reliable algorithm capable of dealing with these differences thus far.
Most arrhythmia classification algorithms [1–3] have been evaluated with the same subjects (people) appearing in the training dataset. The results of the aforementioned research showed high performance (more than 95% accuracy), because the algorithms could learn the characteristics of normal and abnormal heartbeats from a subject who was included in both the training dataset and the evaluation dataset. (We subsequently refer to this assessment condition as the intrasubject condition; see Figure 1(a).) However, in practical use, it is impossible to incorporate the ECG data of all humans in a training dataset. Therefore, arrhythmia classification algorithms should analyse ECG data from subjects who are not included in the training dataset.
The intersubject condition is the contrary concept to the intrasubject condition (Figure 1(b)). On the surface, studies undertaken in an intersubject condition demonstrate significantly lower performance than those in an intrasubject condition. According to Christov et al. , premature ventricular contractions were detected with a specificity of 78.3% and a sensitivity of 81.6% in an intersubject condition, whereas the corresponding values were 96.7% and 96.6% in an intrasubject condition. However, studies in an intersubject condition represent more appropriate assessments for practical use. To improve performance in the intersubject condition, researchers [6–8] have recently proposed a number of compensation methods. Thus far, these typical compensation methods use a local training dataset.
Chazal et al.  evaluated various configurations of feature vectors extracted from ECG data to determine which configuration has robust performance in the intersubject condition. The best performance among the configurations was 75.9% and 77.7% in terms of sensitivity for supraventricular ectopic beat (SVEB) and ventricular ectopic beat (VEB) data, respectively. Chazal and Reilly  studied the effect of a local classifier in the intersubject condition to enhance the accuracy of their algorithm. In that study, a physician first verified the subject's data, which was then learned by a local classifier. Their study showed improved performances of 87.7% and 94.3% in terms of sensitivity for SVEB and VEB, respectively. Ince et al.  were also able to detect VEB and SVEB at 98.3% and 97.4% in terms of accuracy and 84.6% and 63.5% in terms of sensitivity, respectively, using a local training dataset. Regarding the use of a local training dataset, other studies [6, 7] did not reach a perfect intersubject condition but instead tried to reduce intervention by physicians by evaluating the effect of various sizes of the local training dataset. In this paper, we propose a new automatic arrhythmia classification method that compensates for the intersubject condition and improves performance using a dedicated wavelet.
Wavelets are advantageous because they can analyze a signal in the time-frequency domain using a variety of mother wavelets. Because many mother wavelets are available, numerous evaluation studies have sought the optimal wavelet for each application, including arrhythmia classification algorithms based on ECG. Using the Haar wavelet, Minhas  and Yu  achieved high performance by classifying heartbeats with a k-nearest-neighbor (KNN) classifier and a probabilistic neural network (PNN), respectively. Lin proposed an arrhythmia classification scheme which uses the Morlet wavelet as a feature extraction method and a PNN as a classifier . In addition, Froese captured features using the Daubechies 6 (db6) wavelet or a wavelet obtained by two proposed wavelet optimization strategies, and compared the performances of various classifiers with these features . In terms of using a deformable wavelet adaptively, such an approach is similar to the proposed algorithm. However, the purpose of their study was to configure the feature vectors by optimal compression of the morphological data, whereas the proposed algorithm aims to reduce the differences among subjects.
In this study, an approach utilizing the continuous wavelet transform (CWT) with a mother wavelet fitted to the target subject's ECG morphology is proposed. Through this process, we expect to reduce the impact of personal characteristics while retaining only the differences caused by abnormal heartbeats. A mother wavelet acquired from the signal to be processed is known as a dedicated wavelet in previous studies [13, 14], which proposed an optimal mother wavelet to perform voltammetric determination effectively. In this paper, we apply the same method to develop a reliable algorithm to diagnose arrhythmia in the intersubject condition.
A block diagram of the proposed algorithm is presented in Figure 2. The first required action in an ECG-based arrhythmia classification algorithm is to detect the QRS complex. However, the main purpose of the proposed algorithm focuses on the classification of arrhythmia; consequently, a QRS complex detection process is not included as a part of this research. There are many studies that seek to detect the QRS complex, and these studies have achieved high performance, such as a sensitivity of 99.5% [15, 16]. In the proposed algorithm, the annotation data from the MIT-BIH arrhythmia database  is used for the implementation and evaluation of the algorithm.
First, the preprocessing step consists of filtering and segmentation. The ECG signal is filtered to remove noise components, and then segmentation is performed to classify each heartbeat. The following subject adaptation stage generates a dedicated wavelet from the ECG signal of the target subject. The ECG signal from the target subject is transformed with the dedicated wavelet at the CWT stage. As CWT produces a large amount of transformed data, we utilize principal component analysis (PCA) and linear discriminant analysis (LDA) in the subsequent dimension reduction stage. In the next stage, feature vectors are composed of the compressed morphology data and other features. Lastly, classification is performed.
ECG signals are typically exposed to noise components such as power line interference, EMG, respiratory components, and motion artifacts, and various filtering techniques to remove these noise sources have been proposed. The goal of such filtering is to remove the noise components while preserving the characteristics of the original ECG signal. In this study, a morphological filtering technique is utilized to remove the noise components. Morphological filtering, which was proposed by Sun , has the virtue of removing high-frequency noise and baseline drift with less compromise of the original ECG signal and a low computational burden. When a general filter attenuating a specific frequency band is used, the QRS complex can be widened, because the frequency band of the EMG overlaps that of the QRS complex . This can be a serious weakness in an arrhythmia classification algorithm, because it increases the likelihood of misclassifying normal heartbeats as VEBs. When morphological filtering is used, this phenomenon does not occur, as no specific frequency band is attenuated.
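Sun's filter is not reproduced in detail here, but the general opening/closing scheme that morphological ECG filters follow can be sketched as below; the structuring-element widths are illustrative assumptions, not values from the paper.

```python
import numpy as np
from scipy.ndimage import grey_closing, grey_opening

FS = 360  # MIT-BIH sampling frequency (Hz)

def remove_baseline(ecg, fs=FS):
    """Estimate baseline drift by a morphological opening followed by a
    closing with flat structuring elements wider than the ECG waves, so
    the waves are flattened out of the estimate; then subtract it."""
    w1 = int(0.2 * fs)  # wider than the QRS complex
    w2 = int(0.3 * fs)  # wider than the QRS complex plus the T wave
    baseline = grey_closing(grey_opening(ecg, size=w1), size=w2)
    return ecg - baseline

def suppress_noise(ecg, width=5):
    """Suppress high-frequency noise by averaging an opening-closing and
    a closing-opening with a short flat structuring element."""
    oc = grey_closing(grey_opening(ecg, size=width), size=width)
    co = grey_opening(grey_closing(ecg, size=width), size=width)
    return 0.5 * (oc + co)
```

Because only min/max operations over short windows are involved, no frequency band is attenuated, which is the property the paper relies on to avoid widening the QRS complex.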
After the morphological filtering process, the ECG signal passes through a segmentation process performed around the R peak. The PR interval is normally 0.12~0.2 seconds, and the T wave has a duration of about 0.2 to 0.4 seconds. Thus, morphology data from 0.25 seconds before the R peak to 0.4 seconds after the R peak are obtained. The major ECG fiducial points (the P and T waves and the QRS complex) are included in the segmentation range.
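The fixed window above can be sketched as follows; at the database's 360 Hz sampling rate it yields 234-sample segments (90 samples before the R peak plus 144 after). Skipping beats too close to the record edges is an assumption of this sketch.

```python
import numpy as np

FS = 360  # MIT-BIH sampling frequency (Hz)

def segment_heartbeats(ecg, r_peaks, fs=FS, before=0.25, after=0.4):
    """Cut a fixed window around each R peak: 0.25 s before and 0.4 s
    after, covering the P wave, QRS complex and T wave.  R peaks whose
    window would run past the record edges are skipped."""
    pre = int(before * fs)   # 90 samples at 360 Hz
    post = int(after * fs)   # 144 samples at 360 Hz
    segments, kept = [], []
    for r in r_peaks:
        if r - pre >= 0 and r + post <= len(ecg):
            segments.append(ecg[r - pre:r + post])
            kept.append(r)
    return np.asarray(segments), np.asarray(kept)
```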
Subject adaptation method
Template segments are then derived from each subject to make the mother wavelet resemble the appearance of each subject's normal heartbeat. To reduce noise components, we used the average segment of several consecutive normal heartbeats; in this paper, six consecutive normal heartbeats were used to make the template segment. We extracted the six consecutive normal heartbeats after skipping the first 5 minutes of the ECG record. The mother wavelet is calculated from the template segment. The creation of the optimal mother wavelet is based on Misiti's research , and the mother wavelet is generated via least squares approximation. The normal heartbeats and the mother wavelets of subjects 106 and 207 are presented in Figure 3 as an example. The obtained mother wavelets meet the conditions of a square norm of one and a zero mean.
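A minimal sketch of the template step: average six normal-beat segments and enforce the two stated constraints (zero mean, square norm of one). The least-squares approximation step of Misiti's method is deliberately omitted here.

```python
import numpy as np

def make_dedicated_wavelet(segments):
    """Average several consecutive normal-beat segments into a template,
    then enforce the constraints stated in the paper: zero mean and a
    square norm of one.  (The least-squares approximation used to derive
    the final mother wavelet is omitted in this sketch.)"""
    template = np.mean(segments, axis=0)    # noise-reduced template segment
    template = template - template.mean()   # zero mean
    template = template / np.linalg.norm(template)  # square norm of one
    return template
```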
CWT with a dedicated wavelet
CWT is a linear multi-resolution data processing technique that is used to detect or measure the characteristics of signals. CWT analyzes a signal using scaled and translated versions of a mother wavelet to express the signal in the time-scale (frequency) domain. The CWT results are obtained as wavelet coefficients $CWT_x(a,b)$, where $a$ and $b$ refer to the scale and the translation, respectively. The scale $a$ is proportional to the inverse of the frequency, and $b$ represents the location of the wavelet in the time domain. By varying these values, we can obtain information throughout the entire time-frequency plane. The CWT coefficients of the input signal $x(t)$ are as follows:

$$CWT_x(a,b) = \int_{-\infty}^{\infty} x(t)\,\Psi^{*}_{ab}(t)\,dt \qquad (1)$$
$\Psi^{*}_{ab}(t)$ in equation (1) indicates the complex conjugate of $\Psi_{ab}(t)$, and $\Psi_{ab}(t)$ is obtained by shifting and scaling the mother wavelet $\Psi(t)$, as follows:

$$\Psi_{ab}(t) = \frac{1}{\sqrt{a}}\,\Psi\!\left(\frac{t-b}{a}\right) \qquad (2)$$
As shown in equation (1), each wavelet coefficient denotes the correlation between the signal and the wavelet at the given scale and position. In other words, CWT evaluates how similar the signal is to the scaled and translated mother wavelet. Thus, the optimal mother wavelet for a certain application would have a shape identical to the signal to be analyzed; the necessity of a dedicated wavelet follows from this idea.
This study uses sampled data; thus, a discrete-time CWT (DT-CWT) is performed. Equations (1) and (2) are combined and discretized as equation (3):

$$CWT_x(a, kT_s) = \sum_{n} x(nT_s)\,\frac{1}{\sqrt{a}}\,\Psi^{*}\!\left(\frac{(n-k)T_s}{a}\right) T_s \qquad (3)$$
$T_s$ in equation (3) refers to the sampling period, which is 2.78 ms in this paper. As shown in equation (3), the value of the scale $a$ should be determined to obtain the wavelet coefficients of the input signal $x(nT_s)$, and the length of the wavelet coefficients $CWT_x(a, kT_s)$ is identical to that of the input signal at each scale.
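The discrete correlation above can be sketched directly with the dedicated wavelet given as a sampled array (as produced in the subject adaptation stage). Interpreting the scale as the stretched wavelet support in samples, and requiring it to stay shorter than the signal, are simplifying assumptions of this sketch.

```python
import numpy as np

def dtcwt(x, mother, scales):
    """Discrete-time CWT: at each scale, correlate the input with a
    stretched copy of the sampled mother wavelet, yielding one row of
    coefficients per scale with the same length as the input signal.
    The scale is interpreted here as the stretched support in samples
    (an assumption of this sketch); it must stay below len(x) so that
    np.correlate(..., mode="same") keeps each row at len(x) samples."""
    t0 = np.linspace(0.0, 1.0, len(mother))
    rows = []
    for a in scales:
        t = np.linspace(0.0, 1.0, int(round(a)))
        # resample (stretch) the wavelet and apply the 1/sqrt(a) factor
        psi = np.interp(t, t0, mother) / np.sqrt(a)
        rows.append(np.correlate(x, psi, mode="same"))
    return np.vstack(rows)
```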
The range of scales is chosen to concentrate on the 3~30 Hz frequency band, where the main frequency components of the ECG signal exist. The scales are approximately related to frequency according to equation (4) below:

$$F_a = \frac{F_c}{a \cdot T_s} \qquad (4)$$
Here, $F_c$ represents the centre frequency of the mother wavelet, and $F_a$ denotes the pseudo-frequency corresponding to the scale $a$. The centre frequency of the QRS complex is approximately 5~15 Hz. As shown in Figure 3, the dedicated wavelets have waveforms very similar to each subject's normal heartbeat, so the obtained dedicated wavelets also occupy the same frequency band. Table 1 shows the $F_a$ values for each scale level with centre frequencies from 5 to 15 Hz; the highlighted region of Table 1 covers 3~30 Hz of $F_a$. As shown in Table 1, the selected scales from 90 to 1000 cover the 3~30 Hz frequency band.
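Equation (4) is easy to check numerically; taking a mid-range centre frequency of 7.5 Hz (an illustrative value, not one from Table 1), the selected scale range 90~1000 spans roughly 3~30 Hz:

```python
TS = 1.0 / 360.0  # sampling period, about 2.78 ms

def pseudo_frequency(fc, a, ts=TS):
    """Eq. (4): pseudo-frequency Fa (Hz) for scale a, given the mother
    wavelet's centre frequency Fc (Hz)."""
    return fc / (a * ts)

print(pseudo_frequency(7.5, 90))    # 30.0 Hz at the smallest scale
print(pseudo_frequency(7.5, 1000))  # about 2.7 Hz at the largest scale
```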
CWT transfers a one-dimensional signal to a two-dimensional signal, thus increasing the amount of data considerably. Therefore, a dimension reduction process is adopted to reduce the amount of data in this study.
The output data of CWT are represented as two-dimensional data (images) in the time-scale domain. Each heartbeat is diagnosed based on the obtained morphological features and other features such as the RR interval and the previous heartbeat's classification result. However, because the morphological features have a very high dimension, a dimension reduction process is necessary. Linear discriminant analysis (LDA) has been successfully applied to various applications, such as image recognition. However, it is difficult to apply directly here, where the amount of training data is small relative to the high-dimensional features. As a means of resolving this limitation, LDA after the application of a principal component analysis (PCA) has been widely used . The proposed algorithm also performs PCA and LDA to reduce the dimension of the ECG morphology data transformed by CWT.
The obtained scalogram has 234 samples in the time domain and 20 levels in the scale domain. Thus, 20 × 234 (4680) dimensional morphology features occur for each heartbeat. These are compressed by PCA and LDA to obtain the final four-dimensional feature vectors. The 4680-dimensional dataset was compressed into a 100-dimensional dataset via PCA. At this time, 99.99% of the information based on the Eigenvalues was maintained using the 100-dimensional data. The 100-dimensional dataset was compressed to four dimensions by LDA.
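The 4680 → 100 → 4 reduction can be sketched with a standard PCA-then-LDA pipeline; note that LDA yields at most (number of classes − 1) = 4 components for the five AAMI classes, consistent with the final dimensionality. The random data below merely stands in for the training scalograms and labels.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.pipeline import Pipeline

# 4680-dim scalograms -> 100 dims (PCA) -> 4 dims (LDA, 5 classes - 1)
reducer = Pipeline([
    ("pca", PCA(n_components=100)),
    ("lda", LinearDiscriminantAnalysis(n_components=4)),
])

# toy stand-in for the training scalograms and AAMI class labels
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4680))
y = rng.integers(0, 5, size=500)

features = reducer.fit_transform(X, y)
print(features.shape)  # (500, 4)
```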
Feature vector composition
Each heartbeat is classified based on a feature vector configured as shown in Table 2. There are six dimensions of features associated with the RR interval: RRI[k], RRIavg, and RRIdiv. RRI[k] represents the RR interval of the second-to-last heartbeat, the last heartbeat, the current heartbeat and the following heartbeat when k equals -2, -1, 0 and 1, respectively. These reflect the signature of a premature beat, which normally has a short RR interval followed by a compensatory pause. In addition, the average RR interval of the last 10 heartbeats (RRIavg) is included in the feature vector to establish a baseline, and the standard deviation over the last 10 heartbeats (RRIdiv) provides information about the variability of the RR intervals at that time.
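A sketch of the six RR-interval features; taking RRI[0] as the interval ending at the current beat, and the averaging window as the ten intervals preceding it, are indexing assumptions of this sketch rather than details stated in the paper.

```python
import numpy as np

def rr_features(r_peaks, i, fs=360):
    """The six RR-interval features for heartbeat i (in seconds):
    RRI[k] for k = -2, -1, 0, 1, plus the mean (RRIavg) and standard
    deviation (RRIdiv) of the last 10 RR intervals.  RRI[0] is taken
    as the interval ending at the current beat (an assumption of this
    sketch); valid for 3 <= i <= len(r_peaks) - 2."""
    rri = np.diff(r_peaks) / fs                    # rri[j] ends at beat j + 1
    rk = [rri[i + k - 1] for k in (-2, -1, 0, 1)]  # RRI[-2..1]
    last10 = rri[max(0, i - 10):i]                 # the last 10 intervals
    return rk + [last10.mean(), last10.std()]
```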
Physicians diagnose heartbeats using not only information about the current heartbeat but also information about previous heartbeats. Some researchers also used the previous heartbeat's information to classify heartbeats correctly . Therefore, to obtain information about this, Annotbf1 and Annotbf2 are included in the feature vector (features 4 and 5 in Table 2). Annotbf1 refers to the classification result of the last heartbeat, and Annotbf2 refers to the second-to-last heartbeat. These features are acquired from the annotations of the MIT-BIH arrhythmia database in the training phase. However, when evaluating an algorithm, these features come from the results classified by the algorithm at run time. Annotbf1 and Annotbf2 at the beginning of the ECG data are set to normal. Annotbf1 and Annotbf2 are represented in the form of five-dimensional binary features, as shown in Table 2. In addition, Morph, Morphbf1 and Morphbf2 are included (features 6~8 in Table 2) to describe the morphology of the present and the previous heartbeats. Morph, Morphbf1 and Morphbf2 are four-dimensional feature vectors, which are generated by CWT with a dedicated wavelet and PCA-LDA. Morphbf1 and Morphbf2, respectively, represent the morphologies of the last heartbeat and the second-to-last heartbeat. The total dimension of the feature vector is 28. To ensure that the feature vectors are in an appropriate range, each component of the feature vectors, apart from Annotbf1 and Annotbf2, is normalized using equation (5).
Equation (5) is a standard z-score normalization:

$$x' = \frac{x - \mu_x}{\sigma_x} \qquad (5)$$

$\mu_x$ in equation (5) refers to the average of component $x$, and $\sigma_x$ represents its standard deviation.
Typically, the Back Propagation Neural Network (BPNN) has some potential disadvantages, such as a slow learning speed, the possibility of convergence to a local minimum and the degradation of classification performance due to over-training. The Extreme Learning Machine (ELM) proposed by Huang  is a learning technique for multi-layer neural networks which overcomes the limitations of a BPNN. It obtains the weights of the neural network not through iterative learning but through an analytical method: ELM completes the learning process using the Moore-Penrose generalized inverse. Thus, it has the advantage of a high learning speed. In addition, ELM is more convenient to use than a conventional neural network, because only the type of activation function and the number of hidden neurons are needed to configure the network. Several studies have shown that the performance of ELM is higher than that of a BPNN and similar to or slightly lower than that of a support vector machine [2, 22]. In this study, ELM is more appropriate than an SVM because a large training dataset is used. Therefore, we used ELM due to its fast training speed and high performance. The sigmoid function is used as the activation function, and 50 hidden neurons are used.
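The ELM training procedure described above can be sketched in a few lines: the input weights and biases are random and fixed, and only the output weights are solved for analytically via the pseudoinverse. The toy data and weight initialization below are illustrative assumptions.

```python
import numpy as np

def elm_train(X, T, n_hidden=50, seed=0):
    """Single-hidden-layer ELM: input weights and biases are random and
    fixed; only the output weights are learned, analytically, via the
    Moore-Penrose pseudoinverse (no iterative training).  T holds the
    one-hot target vectors, one row per training sample."""
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(X.shape[1], n_hidden))  # random input weights
    b = rng.normal(size=n_hidden)                # random biases
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))       # sigmoid hidden layer
    beta = np.linalg.pinv(H) @ T                 # analytic output weights
    return W, b, beta

def elm_predict(X, W, b, beta):
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))
    return H @ beta                              # class scores; argmax = label
```

Because the only "training" is one pseudoinverse, the learning speed advantage over back-propagation follows directly.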
In this study, the MIT-BIH arrhythmia database is used to evaluate the performance of the proposed algorithm. There are five classes of heartbeats according to the AAMI : the normal beat class (N class), the supraventricular ectopic beat class (S class), the ventricular ectopic beat class (V class), the fusion beat class (F class), and the unknown beat class (Q class). The matching between the annotations of the MIT-BIH arrhythmia database and the AAMI heartbeat classes is presented in Table 1 of Chazal's paper . The MIT-BIH arrhythmia database is composed of ECG records from 48 subjects with a sampling frequency of 360 Hz, and each record is 30 minutes long. The ECG signals were measured using two leads. Channel A was generally measured by a modified limb lead II (MLII), and channel B by a modified lead V1. Some subjects to whom these lead configurations were difficult to apply were measured by other leads. In this study, only channel A (MLII) is used, and subjects with paced beats are excluded to match the AAMI standard. Thus, 44 records were used to configure and evaluate the algorithm after excluding records 102, 104, 107 and 217.
Typically, the morphology signal of the S class is similar to that of the N class, whereas the morphology signal of the V class shows a sizeable difference. Thus, many feature extraction methods for ECG morphology may discriminate the V class from the N class very well within one subject. However, because the morphologies of the N class and the V class vary depending on the subject, a normalization process is necessary. The proposed method normalizes the differences in ECG morphology among subjects using dedicated wavelets. To evaluate the effectiveness of this normalization, we compared five feature extraction methods: a normal template cross-correlation (TemplateM), a discrete wavelet transform (DWT) with the Haar wavelet (Haar(7)), a DWT with the Daubechies 6 wavelet (db6(4)), a CWT with the dedicated wavelet of subject 106 (106CWT), and a CWT with the dedicated wavelet of each subject (DedicatedW). Each feature extraction method was applied to the ECG signals of the N class and V class of subjects who had over 100 V class heartbeats. Using the methods above, we compressed the output data to one-dimensional data via LDA. For 106CWT and DedicatedW, we used the PCA-LDA composition, because these two methods are based on CWT. The compressed data were normalized to remove amplitude variation due to the different signal processing methods.
In terms of using template segments, TemplateM is based on a premise similar to that of the proposed method and is therefore included in this evaluation. Haar(7) and db6(4) are included in this assessment, as these methods are frequently used in arrhythmia classification algorithms based on DWT; Haar(7) and db6(4) indicate that the signal was decomposed up to 7 and 4 levels, respectively. The decomposition levels were set as high as the signal length allowed. 106CWT is included to evaluate the effect of using the same mother wavelet, similar to an ECG morphology, for all subjects. DedicatedW is the proposed method.
The discrimination ability of the above feature extraction methods was estimated using the Fisher discriminant ratio (FDR), which represents how far apart the data of two classes are spread (equation (6)):

$$FDR = \frac{(\mu_N - \mu_V)^2}{\sigma_N^2 + \sigma_V^2} \qquad (6)$$
In equation (6), μN and μV denote the data averages from the N class and the V class, respectively, and σN and σV are the respective standard deviations. The results are represented in Table 3.
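Equation (6) is straightforward to compute for the one-dimensional compressed features; the two sample arrays below are illustrative.

```python
import numpy as np

def fisher_discriminant_ratio(x_n, x_v):
    """Eq. (6): how far apart the one-dimensional N-class and V-class
    feature distributions are, relative to their spreads."""
    return (x_n.mean() - x_v.mean()) ** 2 / (x_n.var() + x_v.var())

# well-separated toy features give a large FDR
x_n = np.array([0.0, 1.0, 0.0, 1.0])
x_v = np.array([4.0, 5.0, 4.0, 5.0])
print(fisher_discriminant_ratio(x_n, x_v))  # 32.0
```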
Evaluation results and discussion
The scalograms obtained using the dedicated wavelet optimized for each subject (106 or 207) are presented in Figure 3 as an example. The purpose of this study is to reduce the differences arising from each subject's ECG waveform characteristics while leaving the differences arising from the types of heartbeats. Figure 3 indicates that this objective is achieved well. Although subjects 106 (top of Figure 3) and 207 (bottom of Figure 3) have very different normal heartbeat waveforms (Figures 3(a) and 3(f)), their normal heartbeat scalograms show similar patterns, with high coefficients in similar positions, after CWT with a dedicated wavelet (Figure 3(c) for subject 106 and Figure 3(h) for subject 207). In contrast to the normal heartbeats, the high coefficients of premature ventricular contractions (PVC) are concentrated at high scales (low frequencies; Figures 3(e) and 3(j)).
As shown in Table 3, the FDRs are large when the templates or mother wavelets are similar to the ECG morphology (TemplateM, 106CWT and DedicatedW use the ECG signal as a template or a mother wavelet, and some researchers use the db6 wavelet due to its similarity to the ECG morphology [24, 25]). Specifically, TemplateM and DedicatedW, which use the ECG of each subject as a template, show the highest performance. We highlight the cells of Table 3 where the FDR is less than 2. The method with the highest average FDR and the fewest subjects with a small FDR was DedicatedW. Based on these results, we confirm that DedicatedW has better discrimination ability than any other method.
In addition, while TemplateM can only evaluate the similarity between the template heartbeat and the input signal, the proposed method can analyze other characteristics. The ECG morphologies and scalograms of a normal heartbeat and a right bundle branch block (RBBB) beat are shown in Figure 4. Both the normal heartbeat and RBBB are included in the N class. However, the morphology of RBBB has a wide and deep S wave owing to its slow right ventricular depolarization (Figures 4(b) and 4(f)).
These differences should be ignored when classifying these two types of heartbeats into the N class. Nonetheless, this is not always feasible when using a method such as TemplateM. The proposed method is able to deal with these types of problems, as it can recognize some characteristics of RBBB through a scalogram. ('1' in Figure 4(d) and '2' in Figure 4(h) both show some low-frequency components when an S wave occurs.)
Figure 5 shows how the proposed algorithm deals with such a problem; it represents the average of the scalograms and the PCA-LDA mapping matrix. The proposed algorithm maps the 20 × 234 (4680)-dimensional data to four-dimensional feature vectors through equation (7):

$$x_{out} = \left(\left(x_{in} - \mu_{PCA}\right)M_{PCA} - \mu_{LDA}\right)M_{LDA} = x_{in}M_{PCA}M_{LDA} - \mu_{PCA}M_{PCA}M_{LDA} - \mu_{LDA}M_{LDA} \qquad (7)$$
In equation (7), $x_{in}$ and $x_{out}$ are the input and output feature vectors, respectively; $x_{in}$ is 4680-dimensional, and $x_{out}$ is four-dimensional. $M_{PCA}$, $M_{LDA}$, $\mu_{PCA}$, and $\mu_{LDA}$ are the mapping matrix of PCA (4680 × 100), the mapping matrix of LDA (100 × 4), the mean vector of PCA (1 × 4680), and the mean vector of LDA (1 × 100), respectively. $\mu_{PCA}$ is shown in Figure 5(a). As shown in the bottom line of equation (7), only the first term depends on the input feature vector $x_{in}$; the second and third terms are constants. Thus, $M_{PCA} M_{LDA}$ determines the relationship strength between the input and output feature vectors. Figure 5(b) represents the first column of $M_{PCA} M_{LDA}$, which refers to the relationship strength between the first component of the final four-dimensional feature vector and each coefficient of the scalogram.
According to the output features produced through the first PCA-LDA mapping column, the features of the V class are generally greater than those of the other classes. As shown in Figure 5(b), the components of the first PCA-LDA mapping column are small at low scale levels and near zero in the time domain, where normal scalograms have large components ('1' in Figure 5(b)). On the other hand, the components are large at high scale levels and at 0.2~0.3 in the time domain, where the scalograms of the V class have large components ('2' in Figure 5(b)). Interestingly, the components are virtually zero where the scalograms of RBBB have large components ('3' in Figure 5(b)). Through this process, the proposed method can disregard unwanted features in the time-frequency domain.
Performance evaluations were carried out in two ways. First, the training and evaluation datasets were constructed using a leave-one-out rule to evaluate the algorithm's performance for all subjects in the MIT-BIH arrhythmia database. That is, the data from 43 subjects were used as the training dataset, and the data from the remaining subject were used to evaluate the algorithm. In this case, as the full training dataset was too large, at over 100,000 heartbeats, it was reduced to about 15,000 heartbeats (13,300 for the N class, 450 for the S class, 1,200 for the V class, 130 for the F class and 2 for the Q class). The obtained results are presented in Table 4. The proposed algorithm was evaluated in terms of its accuracy, specificity, sensitivity and positive predictive value.
In addition, to facilitate a comparison with the findings of Chazal [7, 8], the training and evaluation datasets of the second assessment were constructed in the same manner used in those studies. The training dataset (subjects 101, 106, 108, 109, 112, 114, 115, 116, 118, 119, 122, 124, 201, 203, 205, 207, 208, 209, 215, 220, 223 and 230) and the test dataset (subjects 100, 103, 105, 111, 113, 117, 121, 123, 200, 202, 210, 212, 213, 214, 219, 221, 222, 228, 231, 232, 233 and 234) each consisted of data from 22 subjects. The details of the share of each class are shown in Table 2 of the referenced study . The size of the training dataset was again too large, with a total of 51,020 beats; thus, it was reduced to about 15,000 beats while keeping the same class proportions. The obtained results are presented in Table 5.
The results of the proposed algorithm show high performance for the N class in all subjects. In addition, for the V class, the proposed algorithm shows high performance in most subjects, although some subjects were characterized by low performance. Compared with findings [7, 8] that report the performance for each subject, the proposed algorithm performs better than one study  and comparably to another , with the exception of subject 213. The proposed algorithm shows very low ST and +P results for subject 213 because it reports a low ST for the F class and subject 213 has many F class heartbeats.
On the other hand, the ST of the S class showed low performance in almost all subjects except subject 232. Interestingly, this performance characteristic is common in intersubject research. In previous findings [7, 8] as well as in the results of the proposed algorithm, the ST and +P values for the S class are low in many subjects, yet the overall performance is high owing to the high performance for subject 232 alone. This may be due to the highly distinctive characteristics and the large number of S class heartbeats of subject 232, who has 1382 of the 2779 S class heartbeats in the entire MIT-BIH arrhythmia database. Moreover, this subject has bradycardia together with many consecutive atrial premature beats, which belong to the S class. Therefore, the RRIs of subject 232's normal beats are about 2 seconds, and the RRIs of the atrial premature beats are about 1 second. These characteristics weaken the RR interval features, because atrial premature beats normally have an RRI of less than 0.6 seconds. This distinctive characteristic of subject 232 makes the previous heartbeats' information important. The proposed algorithm uses information from past heartbeats, such as Annotbf1, Annotbf2, Morphbf1 and Morphbf2, to classify S class heartbeats correctly. Table 6 shows the classification results of the proposed algorithm for subject 232 with and without the previous heartbeats' information. By using these features, the proposed algorithm could separate the N and S classes more clearly. This kind of previous-heartbeat information, such as the number of consecutive S class heartbeats, was also used by other researchers .
As shown in Table 5, it is very common for existing studies to misclassify N class heartbeats into the S class. However, the proposed algorithm significantly reduces the number of such misclassifications, giving it higher AC, SF and +P values for the S class than those of the other studies. For V class heartbeats, the proposed algorithm also shows higher performance than other research, except for one existing study .
On an average performance basis, as obtained by the weighted sum of the performance of each class, the proposed algorithm showed higher AC, ST and +P results, whereas the SF result was slightly lower than that of previous studies. This arose because the proposed algorithm misclassified many non-N class heartbeats into the N class. However, in terms of the rate of misclassified heartbeats over the entire set, the proposed algorithm misclassified only 2.52% of all heartbeats, versus 4.42%, 6.11% and 14.12% in the research of Chazal in 2004 and 2006 and Ince in 2009 [6–8]. In addition, the proposed algorithm is based on a single-channel ECG signal only, whereas those studies [6–8] used 2-channel ECG signals.
The proposed algorithm uses a subject-adaptation technique different from those of the existing subject-adaptation algorithms. The aforementioned previous studies [6, 7] attempted to adapt to the characteristics of a target subject by including a part of that subject's data, preclassified by a physician, in the training dataset. On the other hand, the proposed algorithm does not apply a target subject's data to the training dataset, but uses subject-specific transfer functions to reduce the differences among subjects. In terms of the effort required of physicians, one previous study  preclassifies 500 heartbeats and applies them to the training dataset; another  preclassifies and applies the first 5 minutes of the data (about 300 heartbeats). The proposed algorithm improves usability by requiring only 6 heartbeats.
In this paper, a dedicated-wavelet-based arrhythmia classification algorithm is proposed. The algorithm uses a mother wavelet optimized for each subject to achieve stable performance, even in the intersubject condition. Through this process, we sought to reduce the variation among subjects while preserving only the differences caused by arrhythmia. It was verified that this approach works effectively through an assessment of the features and an evaluation of the algorithm. The proposed algorithm achieves higher performance with less effort than previous studies. However, the low performance for the S class and V class of some subjects remains a problem. In addition, the high computational load due to the use of CWT is a disadvantage. Although the proposed algorithm does not reach a perfect intersubject condition, it significantly reduces the amount of intervention needed by physicians. We will continue to develop a robust arrhythmia classification algorithm to deal with these problems.
Yu S, Chen Y: Noise-tolerant electrocardiogram beat classification based on higher order statistics of subband components. Artif Intell Med 2009, 46:165–178. 10.1016/j.artmed.2008.11.004
Rodríguez-Sotelo JL, Cuesta-Frau D, Castellanos-Dominguez G: Unsupervised classification of atrial heartbeats using a prematurity index and wave morphology features. Med Biol Eng Comput 2009, 47:731–741. 10.1007/s11517-009-0435-2
Patil G, Rao K, Niranjan U, Satyanarayan K: Evaluation of QRS complex based on DWT coefficients analysis using Daubechies wavelets for detection of myocardial ischemia. J Mech Med Biol 2010, 10:273–290. 10.1142/S0219519410003356
The authors declare that they have no competing interests.
JK conceived the study, implemented the algorithm and drafted the manuscript. SD participated in the design and coordination of the study, and helped with the analysis and interpretation of the results. ML reviewed the final manuscript as corresponding author. All authors read and approved the final manuscript.
This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.