Open Access

Reference point detection for camera-based fingerprint image based on wavelet transformation

BioMedical Engineering OnLine 2015, 14:40

Received: 17 November 2014

Accepted: 2 April 2015

Published: 30 April 2015

The Erratum to this article has been published in BioMedical Engineering OnLine 2016 15:30



Fingerprint recognition systems essentially require core-point detection prior to fingerprint matching. The core-point is used as a reference point to align the fingerprint with a template database. When processing a larger fingerprint database, it is necessary to consider the core-point during feature extraction. Numerous core-point detection methods are available and have been reported in the literature. However, these methods are generally applied to scanner-based images. Hence, this paper attempts to explore the feasibility of applying a core-point detection method to a fingerprint image obtained using a camera phone.


The proposed method utilizes a discrete wavelet transform to extract the ridge information from a color image. The performance of the proposed method is evaluated in terms of accuracy and consistency. These two indicators are calculated automatically by comparing the method's output with manually defined core points.


The proposed method is tested on two datasets, collected in controlled and uncontrolled environments from 13 different subjects. In the controlled environment, the proposed method achieved a detection rate of 82.98%. In the uncontrolled environment, it yielded a detection rate of 78.21%.


The proposed method yields promising results on the collected image database and outperforms the existing method.


Mobile phone; Camera; Ridge analysis; Wavelet; Core-point detection; Reference point; Fingerprint


Nowadays, smartphones and tablet PCs are commonly used and are stimulating the utilization of mobile e-commerce, which has significantly increased. This is because a smartphone or tablet PC has powerful computing capability and is portable and easy to use. For example, one of the most popular smartphone applications is mobile banking (M-Banking) [1]. M-Banking has become popular because the mobility of a smartphone or tablet PC makes it possible to conduct any kind of electronic banking transaction at any time and place. Several recent works have reported recognition methods for personal biometrics using a smartphone [2-4]. Although biometric authentication on a smartphone or tablet PC is very challenging, it will provide many benefits in the future.

One of the most popular biometric methods being implemented in the public sector is fingerprint recognition. The uniqueness and immutability of fingerprints with aging have been proven. This encourages the rapid growth of the device technology for fingerprint scanners. In general, such a device has a flat scanner, upon which the tip of a finger is placed. This action allows the scanner lens to easily acquire the ridges and valleys of the fingerprint imprinted on the surface. Further, the acquired fingerprint can be matched with a stored template in a fingerprint collection.

Prior to matching the fingerprint, it is necessary to extract some features from the fingerprint image. Feature extraction for fingerprint recognition can be classified into two types called local-feature and global-feature extractions. Local-feature extraction considers the small details or minutiae of a fingerprint ridge. In contrast, global-feature extraction considers the flow pattern of the whole fingerprint. The literature shows that global features are more robust than local features in a large database [5]. Hence, this paper considers unresolved issues related to global features.

At the global level, a fingerprint can be presented as a flow of ridges with symmetric properties, as depicted in Figure 1. The orientation flow that drastically changes, as denoted by the thick arrow in Figure 1, is called the core point or singular point. Two unknown fingerprint images can be aligned with reference to the core point. In addition, it can be utilized to classify the fingerprint type [6,7]. Thus, it is very important to properly identify the core point [6-14]. Figure 2 shows the known distinctive types of fingerprints, and the corresponding singular points are marked.
Figure 1

Example of the ridge flow found in a fingerprint (re-printed from [32]).

Figure 2

The six known types of fingerprints acquired using a scanner, with the core point denoted by a circle and the delta point by a triangle. The six types are (a) Plain-arch, (b) Tented-arch, (c) Left-loop, (d) Right-loop, (e) Twin-loop, (f) Whorl (re-printed from [20]).

The common problems reported for large-scale databases, including the variety of fingerprint types and poor image quality, can be solved with the help of the reference point [6,15,16]. Currently, core-point detection methods have only been proposed for scanner-based images. Numerous studies, as reported in the literature, have proposed new methods for the analysis and detection of the core points in scanner-based fingerprint images. In general, the existing methods can be classified into two categories. The first method utilizes the Poincare index to locate the core point. This algorithm computes the total orientation variation around a point to determine whether a core point is present. The second method uses template matching or ridge, probability, or shape analysis [8,16-18]. In real applications, Poincare index-based methods have been proven to be more robust than the second method because they are able to handle image rotation. Moreover, even though the computation cost is high, it is still acceptable.

Meanwhile, no study can be found related to the use of core-point detection for camera-based images. The characteristics of a camera-based image are totally different than those of a scanner-based image. As seen in Figure 3, it can be blurred and could have been captured at a different angle or illumination and have other issues. Hence, it is challenging to explore a core-point detection method that can be applied to camera-based images. Prior to discussing the detection method, recent core-point detection methods are discussed to show the state-of-the-art.
Figure 3

Various fingerprint images acquired using a camera, taken at different angles and in different environments (re-printed from [33]).

Related works

Poincare’s index (PI) method is frequently used in the early stage of fingerprint recognition systems [6,10,11,19,20]. This conventional method considers the closed curve in order to locate the core point. The Poincare index, along with its properties, can be defined as follows:
  • Define V as a vector field identified as a continuous two-dimensional vector as follows:
    $$ V(x,y)=p(x,y)+i.q(x,y) $$
  • The Poincare index of V(x,y) along an arbitrary simple closed path γ can be presented as:
    $$ I(\gamma)=\frac{1}{2\pi} \int_{(x,y)\in\gamma} d\phi(x,y) $$
    $$ \phi(x,y)=arg\ V(x,y) $$

    where ϕ(x,y) ∈ [0,2π) represents the angle at coordinate (x,y). The integration is taken counterclockwise along γ.

  • The Poincare index within the predefined region is equivalent to the summation of the singular point Poincare indices inside the predefined region. It can be defined as follows:
    $$ \sum_{k}I(\gamma_{k})=I(\Gamma_{E})-I(\Gamma_{I}) $$
  • If a singular point does not exist on two homotopic closed paths, it has the same Poincare indices.
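As an illustration, the index can be approximated numerically by sampling the orientation field (which is only defined modulo π) along a closed path and accumulating the wrapped angle differences. This is only a sketch of the idea, not the implementation used in the cited works:

```python
import numpy as np

def poincare_index(angles):
    """Approximate Poincare index of a closed path of ridge orientations.

    `angles` are orientation samples (radians, defined modulo pi) taken
    counterclockwise along the path. A core point yields an index of ~ +1/2.
    """
    diffs = np.diff(np.append(angles, angles[0]))
    # Wrap each successive difference into [-pi/2, pi/2), since
    # orientations are only defined modulo pi.
    wrapped = (diffs + np.pi / 2) % np.pi - np.pi / 2
    return wrapped.sum() / (2 * np.pi)

# Synthetic loop: the orientation sweeps through pi around the path,
# as it does around a core point.
core_path = np.linspace(0, np.pi, 9)[:-1]
```

A uniform field (no singular point inside the path) returns an index of zero, consistent with the last property above.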

The main drawbacks of this method are its high computation cost, inability to handle a wide range of fingerprint types, and extreme sensitivity to noise [20]. Poincare index-based algorithms typically produce countless unauthentic detections, particularly when dealing with degraded fingerprint images, even after applying image enhancement or post-processing on spurious detections. This issue causes problems in many applications, resulting in poor performance. There are two reasons for such an outcome. First, the Poincare index feature itself is not strong enough to accurately detect a singular point. Second, spurious detections should be reduced not only on the basis of local characteristics; global discriminative information must also be incorporated. Perona [21] reported an interesting method called orientation diffusion, in which the global constraint of the oriented texture is utilized in a dynamic diffusion process.

The sine-map-based approach [12] processes two defined regions of a fingerprint image and then calculates the sine component. The sine component is analyzed using a multiple-resolution analysis to locate the core point. Jain et al. reported that their method is resistant to rotation and noise. However, the sine component requires nearly perfect circular ridge structures to achieve good results.

Liu et al. [15] considered the orientation uniformity in a fingerprint image to trace the core point. This method only considers the orientation field and curvature direction, which is suitable only for limited fingerprint types. Quite similar to the method of Liu et al. was the curvature-based technique introduced by Van and Le [22]. However, their method was found to be weak when handling plain arch ridges and twofold core-point fingerprints.

Numerous features have been reported in the literature, including a pixel-wise orientation field [23,24], orientation curvature [17,18,25], and template model matching [26]. Such methods are known to have the same drawbacks of being unable to handle the degraded fingerprint images that might occur in scanner-based images and even more frequently in camera-based images.

Recently, Le and Van [5] proposed a core-point detection method that employs vertical variation and rotational symmetry features. An experiment was performed on a standard database called FVC2004 DB1. The result showed that this method was able to handle the fingerprint variety in the dataset. Then, Bahgat et al. [27] utilized an orientation map that was presented in grayscale to locate the core point. They assumed that the core point should appear at the end of the discontinuous line in the orientation map. In addition, a predefined equation was used to verify the core point. Their method was tested on the FVC2002 and FVC2004 datasets, and their reported results were better than those of the fast edge-map based method [28].

A comprehensive survey by Khalil et al. [29] on core-point detection reported that the literature contains some initial studies related to camera-based images. However, there are many gaps still to be resolved regarding the implementation of a robust core-point detection method for camera-based images. Hence, this paper proposes a method for locating the core point in a fingerprint image acquired using a camera.


A general overview of the proposed method is given in Figure 4, and explanations are provided in the next sub-sections.
Figure 4

General overview of the proposed method.

DFT-based fingerprint ridge analysis

Following fingerprint image acquisition using a camera, the fingerprint image in the red, green, and blue format is converted to the grayscale format to reduce the illumination problem due to the use of several lighting sources. Such conversion is important because the lighting issue has been recognized as one of the foremost aspects that degrade the detection performance. In addition, fingerprint enhancement using STFT analysis (see section ‘Fingerprint enhancement’) requires a normalized image to obtain good results. The simple normalization algorithm [30] computes the variance and mean of the image to reduce the illumination variation. Using equation 5 below, a normalized image is produced:
$$ g(x,y)=\frac{f(x,y)-m_{f}(x,y)}{\sigma_{f}(x,y)} $$
where f(x,y) denotes the original fingerprint image, m_f(x,y) is the estimated local mean of f, and σ_f(x,y) is the estimated local standard deviation of the original image. Both estimates are produced by spatial smoothing. Figure 5 shows the original fingerprint image converted into the grayscale format and the final normalized image.
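A minimal sketch of this normalization, assuming the local mean and standard deviation are estimated with a simple box filter (the window size below is our assumption; the paper does not state one):

```python
import numpy as np
from scipy.ndimage import uniform_filter

def normalize_image(f, window=15):
    """Normalize an image per equation 5: subtract the local mean and
    divide by the local standard deviation, both estimated by spatial
    smoothing (a box filter of side `window`, an assumed parameter)."""
    f = f.astype(float)
    mean = uniform_filter(f, window)
    sq_mean = uniform_filter(f ** 2, window)
    # Guard against zero variance in flat regions.
    std = np.sqrt(np.maximum(sq_mean - mean ** 2, 1e-12))
    return (f - mean) / std

g = normalize_image(np.arange(100, dtype=float).reshape(10, 10))
```

A constant region normalizes to zero, since its local mean equals every pixel value.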
Figure 5

RGB (a) to grayscale conversion (b) and the normalized image (c).

The surface of a fingertip contains a unique pattern formed by ridges and valleys, which is known as the fingerprint. Both the ridges and valleys create a circular shape as they repeatedly run in parallel and sometimes diverge. The pattern can be revealed through a ridge analysis method, as presented in this paper. A high-pass filter is applied to the input image through a discrete Fourier transform (DFT) to reflect the ridge information. The ridge information is used later to locate the core point. The proposed DFT-based ridge analysis for a fingerprint image is presented in Figure 6.
Figure 6

Overview of ridge analysis using DFT.

In the first step, a Gaussian high-pass filter, G(u,v), is created by taking a padding size (P × Q) and a filter width for the Fourier transform process, as defined in equations 6, 7, and 8.
$$ D_{0}=0.05 \times P $$
$$ D(u,v)=\sqrt{\left(u-\frac{P}{2} \right)^{2} + \left(v-\frac{Q}{2} \right)^{2}} $$
$$ G(u,v)=1-\left(e^{\frac{-D^{2}(u,v)}{2{D_{0}}^{2}}} \right) $$
Figure 7 illustrates the Gaussian high-pass filter utilized in the proposed method. The padding size is computed with respect to the input size, (P, Q) = 2(M × N). Afterward, a proper filter width is selected during the experiments; the most suitable filter width is 5% of the padding size, P. The input image, f(x,y), is transformed into the Fourier domain using equation 9.
$$ F(u,v)=\sum_{x=0}^{M-1}\sum_{y=0}^{N-1} f(x,y)e^{-i2\pi\left(\frac{ux}{M}+\frac{vy}{N}\right)} $$
Figure 7

High-pass filter generated using a Gaussian function in the DFT domain.

Figures 8 and 9 present the input and result of the conversion process, respectively. The filter, G(u,v), is applied as an element-wise (scalar) product with the input image's spectrum, F(u,v), in the Fourier domain using equation 10, as drawn in Figure 10. Then, the matrix H(u,v) is reverted back into the spatial domain using the inverse function in equation 11. The result is depicted in Figure 11.
$$ H(u,v)=F(u,v).G(u,v) $$
Figure 8

Fingerprint image acquired using mobile phone camera.

Figure 9

Fingerprint image (Figure 8) in Fourier domain.

Figure 10

Output of scalar product between input image and the filter.

Figure 11

Output of scalar product (Figure 10) in spatial domain.

$$ h'(x,y)=\frac{1}{MN} \sum_{u=0}^{M-1}\sum_{v=0}^{N-1} H(u,v)e^{i2\pi\left(\frac{ux}{M}+\frac{vy}{N}\right)} $$
In the last step, the generated image is cropped to eliminate the preceding padding. The ridge information is obtained based on a simple threshold, as defined in Equations 12 and 13. Figure 12 presents the result of the DFT-based fingerprint ridge analysis.
$$ I_{ridge}=\left\{ \left.I_{ridge}(i,j) \in h' \right| i=1,\ldots,I_{h},j=1,\ldots,I_{w} \right\} $$
Figure 12

Output of proposed ridge analysis.

$$ I_{ridge}= \left\{\begin{array}{ll} 1,& I_{ridge} (i,j)>0.005 \times max (I_{ridge})\\ 0,& \text{otherwise} \end{array}\right. $$
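The whole ridge-analysis pipeline (Equations 6 through 13) can be sketched as follows, assuming zero padding to twice the input size and the 0.005 threshold stated above; this is an illustrative reimplementation, not the authors' code:

```python
import numpy as np

def ridge_map(f):
    """DFT-based ridge analysis: pad to (P, Q) = 2*(M, N), apply a Gaussian
    high-pass filter with D0 = 0.05 * P, invert, crop, and threshold."""
    M, N = f.shape
    P, Q = 2 * M, 2 * N
    F = np.fft.fftshift(np.fft.fft2(f, s=(P, Q)))       # equation 9 (padded)
    u = np.arange(P)[:, None]
    v = np.arange(Q)[None, :]
    D = np.sqrt((u - P / 2) ** 2 + (v - Q / 2) ** 2)    # equation 7
    D0 = 0.05 * P                                       # equation 6
    G = 1.0 - np.exp(-(D ** 2) / (2 * D0 ** 2))         # equation 8
    H = F * G                                           # equation 10 (elementwise)
    # Equation 11: back to the spatial domain, then crop away the padding.
    h = np.real(np.fft.ifft2(np.fft.ifftshift(H)))[:M, :N]
    # Equations 12-13: simple threshold at 0.5% of the maximum response.
    return (h > 0.005 * h.max()).astype(np.uint8)

# Synthetic vertical ridges with an 8-pixel period
f = np.tile(np.sin(2 * np.pi * np.arange(64) / 8), (64, 1))
ridges = ridge_map(f)
```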

Fingerprint enhancement

Locating the core point and extracting the fingerprint features are critical steps. Therefore, fingerprint enhancement is preferred to attain a high-quality image for the fingerprint verification step. In this paper, the short-time Fourier transform (STFT) approach is utilized to enhance the fingerprint image [31]. Figure 13 depicts the fingerprint enhancement diagram, which consists of two major steps. Initially, the image of the ridge information is split into overlapping windows. These overlapping windows are used to maintain the continuity of the ridges and eliminate the block effect that naturally occurs as a result of block-by-block operations. A small window preserves the invariance of a small region of the image and is easily modeled as a surface wave. In every window, the STFT analysis is applied to produce a ridge orientation image (ROI), energy image (EI), and ridge frequency image (RFI). Prior to performing the STFT analysis, the Fourier spectrum of the window, F(r,θ), is first presented in a polar system. Then, the marginal density functions {p(θ), p(r)} and the joint probability density function p(r,θ) are used to generate the ROI, as calculated below:
$$ p(r,\theta)=\frac{\left|F(r,\theta)\right|^{2}}{\int_{r}\int_{\theta}\left|F(r,\theta)\right|^{2}\,dr\,d\theta} $$
Figure 13

Overview of fingerprint enhancement.

$$ p(r)=\int_{\theta}{p(r,\theta)d\theta} $$
$$ p(\theta)=\int_{r}{p(r,\theta)dr} $$
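These densities can be approximated numerically by binning the window's power spectrum into polar cells. The bin counts and the use of a plain FFT below are illustrative assumptions:

```python
import numpy as np

def polar_densities(block, n_r=16, n_theta=16):
    """Approximate p(r, theta), p(r), and p(theta) (Equations 14-16) by
    binning the window's power spectrum |F|^2 into polar cells."""
    F = np.fft.fftshift(np.fft.fft2(block))
    power = np.abs(F) ** 2
    h, w = block.shape
    y, x = np.indices(power.shape)
    dy, dx = y - h / 2, x - w / 2
    r = np.sqrt(dx ** 2 + dy ** 2)
    theta = np.mod(np.arctan2(dy, dx), np.pi)   # orientation is modulo pi
    r_bin = np.minimum((r / r.max() * n_r).astype(int), n_r - 1)
    t_bin = np.minimum((theta / np.pi * n_theta).astype(int), n_theta - 1)
    p = np.zeros((n_r, n_theta))
    np.add.at(p, (r_bin, t_bin), power)
    p /= p.sum()                                # normalize to a joint density
    # Marginals: sum over theta gives p(r); sum over r gives p(theta).
    return p, p.sum(axis=1), p.sum(axis=0)

block = np.random.default_rng(0).random((16, 16))
p_joint, p_r, p_theta = polar_densities(block)
```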
The orientation θ is treated as a random variable with probability density function p(θ). Using equation 17, the ROI can be obtained as shown below. Afterward, vector averaging is applied to smooth the orientation image.
$$ E(\theta)=\frac{1}{2} \tan^{-1} \left(\frac{\int_{\theta}{p(\theta)\sin(2\theta)\,d\theta}}{\int_{\theta}{p(\theta)\cos(2\theta)\,d\theta}} \right) $$
Similarly, the ridge frequency r is presumed to be a random variable with probability density function p(r). The RFI is calculated using equation 18 below and is smoothed by applying a 3×3 Gaussian mask.
$$ E(r)=\int_{r} {p(r).rdr} $$

Afterward, a coherence image is generated from the smoothed orientation image. This coherence image is required to prevent spurious artifacts. Such artifacts occur because the ridge flow stops at the block edge, particularly at high curvature regions next to the core points. It is known that around the core point, numerous dominant orientations are present.

Prior to refining the fingerprint image, the energy image is computed using Equation 19, as shown below. The well-known Otsu thresholding method is utilized to obtain the region mask.
$$ E(x,y)=log\left\{ \int_{r}{\int_{\theta}{\left|F(r,\theta)\right|^{2}}} \right\} $$
Two filters called a radial filter (Equation 20) and angular filter (Equation 21) are obtained from the ridge orientation matrix, coherence matrix, and frequency matrix. These filters are utilized to smooth the image. The filter is applied on overlapping 16×16 blocks in the Fourier spectrum [31]. These two filters are defined below:
$$ H_{r}(r)=\sqrt{\frac{\left(rr_{BW}\right)^{2n}}{\left(rr_{BW}\right)^{2n}+\left(r^{2}+r_{BW}^{2}\right)^{2n}}} $$
$$ H_{\phi} (\phi)= \left\{\begin{array}{ll} \cos^{2}\left(\frac{\pi\left(\phi-\phi_{c}\right)}{2\phi_{BW}}\right),& \left|\phi-\phi_{c}\right| \leq \phi_{BW} \\ 0, & \text{otherwise} \end{array}\right. $$
where r_BW is the radial bandwidth, ϕ_BW is the angular bandwidth, and ϕ_c is the mean orientation. Finally, after applying the above-mentioned filters, the Fourier spectrum is reconstructed back into the spatial domain. Figure 14 depicts the fingerprint image following the enhancement step.
Figure 14

Final image after fingerprint enhancement.
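As a small illustration, the angular filter of Equation 21 can be written directly; interpreting the raised-cosine form as cos²(π(ϕ − ϕ_c)/(2ϕ_BW)) is our reading of the formula:

```python
import numpy as np

def angular_filter(phi, phi_c, phi_bw):
    """Raised-cosine angular filter H_phi (Equation 21): passes
    orientations within phi_bw of the mean orientation phi_c,
    falling smoothly to zero at the band edge."""
    # Wrap the orientation difference into (-pi, pi].
    d = np.angle(np.exp(1j * (phi - phi_c)))
    return np.where(np.abs(d) <= phi_bw,
                    np.cos(np.pi * d / (2 * phi_bw)) ** 2,
                    0.0)
```

The filter equals 1 at the mean orientation and reaches 0 exactly at |ϕ − ϕ_c| = ϕ_BW, which avoids ringing when the block spectrum is filtered and inverted.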

Locating the core-point

Looking at the convex ridge information, the core point can be defined as the point that has the highest curvature. One way to locate the dominant orientation is by considering the reliability of the orientation field. The reliability of the orientation field provides flexibility to analyze the dominant orientation for various types of fingerprints. Figure 15 presents a diagram of the proposed method to locate the core point.
Figure 15

Overview of core-point detection method.

The preliminary step in discovering the core point is computing the orientation field. Therefore, the fingerprint image is split into non-overlapping blocks, with the block size fixed at 16×16. Then, each block is assigned a single orientation that matches the dominant orientation. However, this orientation might not be accurate because of various issues, including degraded valley and ridge structures, noise, and low contrast. Therefore, a low-pass filter is applied to the local ridges to smooth the flawed orientation. Finally, the orientation of the ridges is calculated based on the vertical G_yy and horizontal G_xx gradients and the orientation image {ϕ_x, ϕ_y}, denoted as follows:
$$ G_{xx}=\sum_{(x,y)\in w}{{G_{x}^{2}} (x,y)} $$
$$ G_{yy}=\sum_{(x,y)\in w}{{G_{y}^{2}} (x,y)} $$
$$ G_{xy}=\sum_{(x,y)\in w}{G_{x} (x,y). G_{y} (x,y)} $$
$$ \theta(x,y)=\frac{1}{2} \tan^{-1} \frac{2G_{xy}}{G_{xx}-G_{yy}} $$
$$ \phi_{x}=\cos(2\theta(x,y)) $$
$$ \phi_{y}=\sin(2\theta(x,y)) $$
Subsequently, the maximum reliability peak is computed to obtain the greatest curvature based on vectors of the gradients G and filtered orientation ϕ using the equation shown below:
$$ \gamma_{min}=\frac{\left(G_{xx}+G_{yy}\right)-\phi_{x}'\left(G_{xx}-G_{yy}\right)-\phi_{y}' G_{xy}}{2} $$
$$ \gamma_{max}=G_{xx}+G_{yy}-\gamma_{min} $$
$$ \gamma=1-\frac{\gamma_{min}}{\gamma_{max}} $$
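A sketch of the gradient-based orientation field and reliability computation described above. The Gaussian smoothing widths are our assumptions, and the γ_min term follows our reading of the equations; this is not the authors' implementation:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def orientation_and_reliability(img, sigma=3):
    """Per-pixel orientation field and reliability, following the
    gradient-based equations above (smoothing sigma is assumed)."""
    gy, gx = np.gradient(img.astype(float))
    gxx = gaussian_filter(gx * gx, sigma)       # G_xx
    gyy = gaussian_filter(gy * gy, sigma)       # G_yy
    gxy = gaussian_filter(gx * gy, sigma)       # G_xy
    theta = 0.5 * np.arctan2(2 * gxy, gxx - gyy)
    # Smoothed doubled-angle components {phi_x', phi_y'}
    phi_x = gaussian_filter(np.cos(2 * theta), sigma)
    phi_y = gaussian_filter(np.sin(2 * theta), sigma)
    g_min = ((gxx + gyy) - phi_x * (gxx - gyy) - phi_y * gxy) / 2
    g_max = gxx + gyy - g_min
    reliability = 1 - g_min / (g_max + 1e-3)    # small constant avoids 0/0
    return theta, reliability

# Vertical ridges: the orientation field should be roughly uniform.
img = np.sin(2 * np.pi * np.indices((40, 40))[1] / 8)
theta, rel = orientation_and_reliability(img)
```

The reliability peaks where the local orientation is strongly anisotropic, which is what the subsequent core-point search exploits.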
The result of applying Equation 27 for generating the orientation field is presented in Figure 16. The orientation {ϕ_x, ϕ_y} is shown by the short arrows in Figure 16, pointing at different angles. Afterward, equation 30 is computed to obtain the reliability peak prior to locating the core point. There are three main steps to locating the core point. Initially, two distinct regions are decomposed from the orientation-field reliability by considering a predefined range of values. Afterward, a thinning operation is applied to obtain the vital connectivity and contour of the original orientation profile. The core point should appear on a ring-shaped contour. Hence, non-ring-shaped contours are removed using a shrinking operation. The final core point is obtained by applying a shrink-and-fill operation. The core-point location for the input image in Figure 8 is depicted in Figure 17.
Figure 16

Enhanced ridge image (left) and input image (right) overlaid with the orientation field.

Figure 17

Locating the core point.

Experimental results

The input images used in this paper were acquired using the camera on a mobile device, an HTC Magic (3.5 megapixels). Fingerprint images were acquired from 13 subjects. There are two scenarios in our dataset. In the first scenario, the environment was controlled, and the fingerprints were captured on a white background. The dataset for the first scenario is called dataset A and consists of 94 fingerprint images. In the other scenario, the images were acquired in a natural way without any human intervention. These images form dataset B, which consists of 546 fingerprint images. This dataset is known to have unsteady angles due to the natural acquisition, uncontrolled environmental lighting, and blur issues. The unsteady angles occurred because the fingerprint and camera were not placed at fixed positions. The uncontrolled environmental lighting created illumination issues, which are challenging for core-point detection. The blur issues occurred when the camera was unable to focus on the fingerprint or the finger itself was not stable.

The performance of the proposed method is evaluated in a way similar to Le and Van's experiments [5], as presented in Figure 18. An attempt is made to evaluate the consistency and accuracy. In this regard, a fingerprint expert manually assigned the core point on each fingerprint image. These data are used as references for the core points detected by the proposed method. The Euclidean distance between each core point assigned by the expert and the corresponding point detected by the proposed method is calculated. The result is considered to be the distance error from the predefined core point. Similar to Refs. [5,15], the acceptable distance error should be less than 20 pixels. According to the literature, a distance error greater than 20 pixels is significant and is counted as a false detection, because such a detected core point can be classified as a false core point, which drastically affects the subsequent processing steps. A smaller distance error means better accuracy. The detection rate counts the correctly detected core points with a distance error of less than 20 pixels. The false alarm rate counts the wrongly identified core points with a distance error greater than 40 pixels.
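This evaluation protocol can be sketched as follows, using the 20-pixel acceptance and 40-pixel false-alarm thresholds; the point coordinates below are made-up examples:

```python
import numpy as np

def detection_stats(detected, reference, accept=20, false_alarm=40):
    """Detection rate (distance error < `accept` px) and false alarm rate
    (distance error > `false_alarm` px), both as percentages, from
    detected vs. expert-marked core-point coordinates."""
    detected = np.asarray(detected, float)
    reference = np.asarray(reference, float)
    # Euclidean distance error per image
    err = np.linalg.norm(detected - reference, axis=1)
    detection_rate = 100.0 * np.mean(err < accept)
    far = 100.0 * np.mean(err > false_alarm)
    return detection_rate, far

det = [(100, 100), (130, 95), (200, 220)]   # hypothetical detections
ref = [(105, 98), (120, 90), (150, 150)]    # hypothetical expert marks
rate, far = detection_stats(det, ref)
```

Here two of the three detections fall within 20 pixels of the expert mark and one exceeds 40 pixels, giving a detection rate of about 66.7% and a false alarm rate of about 33.3%.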
Figure 18

Le and Van [5] evaluated their method against reference points marked by human experts (denoted by black squares). (a)-(d) show examples of their accepted cases, where the detected core point has a distance error of less than 20 pixels, and (e)-(h) show their false cases, where the detected core point has a distance error greater than 20 pixels (re-printed from [5]).

The experiment shows that the proposed method is able to appropriately identify the core points in the dataset. The method's accuracy is assessed based on the distance error, as mentioned above. The proposed method achieves a detection accuracy of up to 82.98% on dataset A and 78.21% on dataset B. Meanwhile, the conventional method mostly fails to locate the core points in both dataset A and dataset B, achieving an accuracy of less than 55%. Tables 1 and 2 present comparisons between the proposed method and the conventional method. Figure 19 depicts some results for the detected core points, denoted by small circles, obtained using the proposed approach and the conventional method.
Figure 19

Some comparison results for singular-point detection. The first row shows the results detected using the conventional algorithm [6], and the second row shows those of the proposed algorithm.

Table 1

The comparison result of the proposed method and the conventional method on dataset A

Rates (%)                 Proposed method    Conventional method [6]
Detection rate            82.98              -
False Alarm Rate (FAR)    -                  -
Table 2

The comparison result of the proposed method and the conventional method on dataset B

Rates (%)                 Proposed method    Conventional method [6]
Detection rate            78.21              -
False Alarm Rate (FAR)    -                  -

Figures 20, 21, 22, and 23 present the distributions of the distance errors and the accuracy of the proposed method and the conventional method. As seen in Figure 21, the proposed method can reach an accuracy of 90% on dataset A when distance errors of up to 30 pixels are accepted. On the larger dataset B, the proposed method can achieve the same accuracy only when distance errors of up to 40 pixels are accepted, as depicted in Figure 23. On the other hand, the conventional method is unable to reach such an accuracy even when accepting a large distance error of 100 pixels. The experimental results show that the proposed method outperforms the conventional method. This is because the conventional method is actually intended for scanner-based images and was mostly unable to detect the core point properly. It was proven that the proposed ridge analysis using the Fourier transform is able to construct clear ridges from a camera-based image. The ridge analysis is critical because the subsequent steps are correlated with its output.
Figure 20

The distance error result on dataset A.

Figure 21

The accuracy rate result on dataset A.

Figure 22

The distance error result on dataset B.

Figure 23

The accuracy rate result on dataset B.

Based on our observations, the incorrect results were caused by degraded fingerprint images. In this work, blurry images were typically inescapable because of shaky hands. If the ridge information becomes fragile, it is hard to reveal the pattern, and the proposed ridge analysis method was then incapable of properly extracting the ridges of a fingerprint. Once an inaccurate ridge map is produced, the computed orientation field has the wrong direction, which generates an erroneous core point. Nevertheless, this condition does not occur often and can be disregarded. In addition, the latest cameras included in mobile devices have a built-in image stabilizer to reduce the motion-blur issue.

Conclusions and future works

The state-of-the-art core-point detection methods for scanner-based fingerprint images have been discussed, and it was found that most of the existing methods are intended specifically for scanner-based images. Recently, there has been promising research that exploits the camera on a mobile phone to capture a fingerprint image, and attempts have been made to authenticate such an image. This recent research motivated this paper to fill the research gap, particularly on core-point detection methods.

The authentication of biometric data such as fingerprints using a mobile phone will increase and open up a wide range of handy applications, including banking, device protection, and other daily tasks that require individual authentication. However, it is too early to release the system to the public, as more studies are needed, especially concerning the analysis of the image quality, image enhancement, pre-processing, feature extraction, and securing the biometric data.

In this paper, a core-point detection method was proposed and tested on two datasets that considered both controlled and uncontrolled environments. A comparison was also carried out with a widely used robust core-point detection method intended for scanner-based images. In the controlled environment, the proposed method outperformed the conventional method with a detection rate of 82.98%. Afterward, in dataset B, where the fingerprint images were taken without controlling the environment, the proposed method again surpassed the conventional method with a detection rate of 78.21%. The proposed method can locate the core point with the allowable distance error of 20 pixels. Hence, ridge analysis using the DFT-based approach has been proven to produce enough information to find the core point.

In addition, the experiment showed that core-point detection methods suffer under an uncontrolled environment. Both methods obtained lower detection accuracy on dataset B. This occurred because a fingerprint image taken using a camera in a natural environment usually has motion blur, under-lighting, or over-lighting, and the camera itself may fail to focus on the object. Therefore, more study on improving the image quality is necessary to solve this issue.

Bringing fingerprint authentication to a mobile phone has great potential for the future. However, additional studies must be conducted to achieve acceptable accuracy. One of the crucial issues on this topic is degraded input images. Problems such as blurry fingerprints, unstable images due to shaky hands, low-resolution images, and uncontrolled environments that create unsatisfactory lighting conditions affect the quality of fingerprint images. Hence, more study on the pre-processing phase should be considered to improve the outcome of the subsequent phases (i.e., core-point detection and feature extraction). Thus, it should be possible to obtain robust features prior to recognizing the fingerprint image. However, such a feature extraction method must consider the hardware limitations of a mobile phone.




This research was made possible through the help of Mr. Fajri Kurniawan. We thank him for helping to collect the fingerprint database and for formatting the manuscript in LaTeX.

Authors’ Affiliations

Faculty of Commerce and Economics, Sana’a University
Center of Excellence Information Assurance, King Saud University


  1. Al-Jabri IM, Sohail MS. Mobile banking adoption: application of diffusion of innovation theory. J Electron Commerce Res. 2012;13(4):379–91.
  2. Ramos-Lara R, López-García M, Cantó-Navarro E, Puente-Rodríguez L. Real-time speaker verification system implemented on reconfigurable hardware. J Signal Process Syst. 2013;71(2):89–103.
  3. Liu B, Lam S-K, Srikanthan T, Yuan W. Iris recognition of defocused images for mobile phones. Int J Pattern Recognition Artif Intell. 2012;26(8):23.
  4. Vivaracho-Pascual C, Pascual-Gaspar J. On the use of mobile phones and biometrics for accessing restricted web services. IEEE Trans Syst Man Cybern C Appl Rev. 2012;42(2):213–22.
  5. Le TH, Van HT. Fingerprint reference point detection for image retrieval based on symmetry and variation. Pattern Recognition. 2012;45(9):3360–72.
  6. Karu K, Jain AK. Fingerprint classification. Pattern Recognition. 1996;29(3):389–404.
  7. Zhang Q, Yan H. Fingerprint classification based on extraction and analysis of singularities and pseudo ridges. Pattern Recognition. 2004;37(11):2233–43.
  8. Cappelli R, Lumini A, Maio D, Maltoni D. Fingerprint classification by directional image partitioning. IEEE Trans Pattern Anal Machine Intell. 1999;21(5):402–21.
  9. Li J, Yau W-Y, Wang H. Combining singular points and orientation image information for fingerprint classification. Pattern Recognition. 2008;41(1):353–66.
  10. Wang L, Dai M. Application of a new type of singular points in fingerprint classification. Pattern Recognition Lett. 2007;28(13):1640–50.
  11. Jain AK, Prabhakar S, Hong L. A multichannel approach to fingerprint classification. IEEE Trans Pattern Anal Machine Intell. 1999;21(4):348–59.
  12. Jain AK, Prabhakar S, Hong L, Pankanti S. Filterbank-based fingerprint matching. IEEE Trans Image Process. 2000;9(5):846–59.
  13. Liu M, Jiang XD, Kot AC. Efficient fingerprint search based on database clustering. Pattern Recognition. 2007;40(6):1793–803.
  14. Kulkarni JV, Patil BD, Holambe RS. Orientation feature for fingerprint matching. Pattern Recognition. 2006;39(8):1551–4.
  15. Liu M, Jiang XD, Kot AC. Fingerprint reference point detection. EURASIP J Appl Signal Process. 2005;2005:498–509.
  16. Nilsson K, Bigun J. Localization of corresponding points in fingerprints by complex filtering. Pattern Recognition Lett. 2003;24(13):2135–44.
  17. Wang X, Li J, Niu Y. Definition and extraction of stable points from fingerprint images. Pattern Recognition. 2007;40(6):1804–15.
  18. Park CH, Lee JJ, Smith MJT, Park KH. Singular point detection by shape analysis of directional fields in fingerprints. Pattern Recognition. 2006;39(5):839–55.
  19. Bazen AM, Gerez SH. Systematic methods for the computation of the directional fields and singular points of fingerprints. IEEE Trans Pattern Anal Machine Intell. 2002;24(7):905–19.
  20. Zhou J, Chen F, Gu J. A novel algorithm for detecting singular points from fingerprint images. IEEE Trans Pattern Anal Machine Intell. 2009;31(7):1239–50.
  21. Perona P. Orientation diffusions. IEEE Trans Image Process. 1998;7(3):457–67.
  22. Van TH, Le HT. An efficient algorithm for fingerprint reference-point detection. In: Proceedings of the International Conference on Computing and Communication Technologies (RIVF'09); Da Nang, Vietnam; 2009. p. 1–7.
  23. Huang CY, Liu LM, Douglas Hung DC. Fingerprint analysis and singular point detection. Pattern Recognition Lett. 2007;28(15):1937–45.
  24. Jin C, Kim H. Pixel-level singular point detection from multi-scale Gaussian filtered orientation field. Pattern Recognition. 2010;43(11):3879–90.
  25. Chikkerur S, Ratha N. Impact of singular point detection on fingerprint matching performance. In: Fourth IEEE Workshop on Automatic Identification Advanced Technologies; Buffalo, NY, USA; 17–18 October 2005. p. 207–12.
  26. Wang Y, Hu J, Phillips D. A fingerprint orientation model based on 2D Fourier expansion and its application to singular-point detection and fingerprint indexing. IEEE Trans Pattern Anal Machine Intell. 2007;29(4):573–85.
  27. Bahgat GA, Khalil AH, Abdel Kader NS, Mashali S. Fast and accurate algorithm for core point detection in fingerprint images. Egypt Informatics J. 2013;14:15–25.
  28. Cao G, Mao Z, Sun QS. Core-point detection based on edge maps in fingerprint images. J Electron Imaging. 2009;18(1):1–4.
  29. Khalil MS, Kurniawan F, Saleem K. Authentication of fingerprint biometrics acquired using a cellphone camera: a review. Int J Wavelets Multiresolut Inf Process. 2013;11(5):1350033. doi:10.1142/S0219691313500331.
  30. Hiew BY, Teoh ABJ, Ngo DCL. Automatic digital camera based fingerprint image preprocessing. In: International Conference on Computer Graphics, Imaging and Visualisation; Sydney, Australia; 2006. p. 182–9.
  31. Chikkerur S, Cartwright A, Govindaraju V. Fingerprint image enhancement using STFT analysis. In: Pattern Recognition and Image Analysis, LNCS 3687. Berlin Heidelberg: Springer; 2005.
  32. Weng D, Yin Y, Yang D. Singular points detection based on multi-resolution in fingerprint images. Neurocomputing. 2011;74:3376–88.
  33. Yang B, Li X, Busch C. Collecting fingerprints for recognition using mobile phone cameras. In: Proc SPIE 8304, Multimedia on Mobile Devices and Multimedia Content Access: Algorithms and Systems VI; 2012. p. 83040L. doi:10.1117/12.909920.


© Khalil; licensee BioMed Central. 2015

This is an Open Access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. The Creative Commons Public Domain Dedication waiver applies to the data made available in this article, unless otherwise stated.