Erratum to: A method of extending the depth of focus of the high-resolution X-ray imaging system employing optical lens and scintillator: a phantom study

ERRATUM: Li et al. BioMed Eng OnLine (2016) 15:68, DOI 10.1186/s12938-016-0199-5

© 2016 The Author(s). This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated.


Introduction
Currently, high-resolution X-ray imaging systems, such as Nano-CT based on third-generation synchrotron sources and X-ray detectors with transparent luminescent screens, are widely used to achieve spatial resolution in the sub-micrometer or nanometer range [1, 2]. They can therefore be used effectively to image tissues, cells and many other small samples in vitro or in vivo, especially calcifications in the vasculature and the glomerulus. The detector consists of a scintillator (YAG:Ce or LuAG:Ce), microscope optics and a low-noise CCD. The scintillator receives the X-rays and converts them into visible light, which is then magnified by the microscope optics and collected by the CCD camera [2]. To achieve good image quality, the depth of focus of the microscope optics must match the thickness of the scintillator, and the final spatial resolution depends on both the scintillator thickness and the depth of focus. According to the Rayleigh criterion, the microscope lens in detection systems that employ transparent luminescent screens usually has a large numerical aperture (NA) to push the resolution limit at visible wavelengths. However, as NA increases, the depth of focus becomes thinner and thinner. Therefore, to focus the lens into the thin layer of the luminescent screen and obtain an image of good quality, the luminescent screen must be ground very thin, down to several micrometers or less. Thinning the scintillator lowers the efficiency of converting X-rays into visible light and makes manufacture challenging. For example, if the detection system employing transparent luminescent screens is to reach the resolution limit of visible light (~0.3 μm), the corresponding scintillator should be ~0.5 μm thick according to the Rayleigh criterion and the depth-of-focus formula.
In a traditional microscope system, the resolution is proportional to 1/NA and the depth of focus is proportional to 1/NA^2, so the two are incompatible and must usually be traded off against specific needs. Fortunately, the detection system employing luminescent screens has some unique properties that can be exploited to extend the depth of focus. One of them is that the scintillator is uniform everywhere, so the light intensity distribution across any layer of the scintillator is identical, except for minor differences that may arise from the divergence of the X-ray beam and the change in amplitude due to X-ray attenuation. The divergence is negligible because the scintillator is relatively thin and the cone angle is very small in most high-resolution X-ray imaging systems [2]. For example, the scintillator installed before an objective with 5× magnification is about 20 μm thick, and the one matching a 20× objective is about 5 μm; in those cases the effect of divergence usually cannot be seen. To avoid divergence effects, we focus our method on extending the depth of focus of objective lenses with magnification higher than 5× to about 20 μm. A 20 μm-thick scintillator is easy to process, and if it can replace the much thinner scintillator needed in higher-resolution X-ray imaging systems, we can greatly improve the resolution without considering the influence of defocus. Our method is similar to light-field imaging [3, 4] but requires no additional hardware. In the experiment, we first simulated the continuous fluorescence process of the scintillator and obtained an approximate composite point-spread-function model, using the property that the point spread function of a given luminous plane in a microscope system follows a Gaussian distribution.
Subsequently, we simulated the blurred projection image produced by a slightly thicker scintillator that does not match the depth of focus of the optical lens. We then adopted the POCS and total variation algorithms to recover the blurred image. The experimental results show that this algorithm can effectively extend the depth of focus of high-resolution X-ray imaging systems.

Simulated imaging
In the high-resolution X-ray imaging system, the detector consists of a thin scintillator, an optical lens providing magnification, and a low-noise CCD camera (Figure 1). The thickness of the scintillator should match the depth of focus of the objective lens; e.g., for an objective with 20× magnification and 1 μm resolution, the DOF is about 5 μm, so the scintillator should be 5 μm thick or less. When the scintillator is thicker than that, a point on a layer of the scintillator away from the focal plane is focused before or behind the sensor plane. This point becomes a circle on the sensor, called the circle of confusion (CoC), as shown in Figure 2. When this circle is smaller than a pixel it cannot be seen, which is why a depth of focus exists. But when the thickness of the scintillator exceeds the depth of focus, the circles of confusion from the different planes overlap and blur the resulting image. The size of the CoC can be derived from the basic geometric relationship among the image plane, object plane and focal length, which is the thin-lens relation of Eq. (1):

1/D + 1/F = 1/f.    (1)
Here, D is the object distance, F is the conjugate distance of D, and f is the focal length. The direction z is parallel to the optical axis. According to the imaging equation, an object plane at distance D1 = D + z is focused before (or behind) the image plane and becomes a CoC on it. Writing F1 for the conjugate distance of D1, the diameter of the circle follows from similar triangles as Eq. (2):

d_CoC = d |F − F1| / F1.    (2)
Here, d is the diameter of the lens aperture. When d_CoC is larger than the pixel size of the image sensor, the resolution of the captured image decreases.
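As a concrete numerical check of Eqs. (1) and (2), the CoC diameter can be computed directly from the thin-lens relation. The sketch below assumes mm units; the function name and the specific aperture diameter are illustrative choices of ours, not values from the paper.

```python
import math

def coc_diameter(D, z, f, d):
    """Diameter of the circle of confusion for a luminous plane at D + z.

    D: in-focus object distance, z: offset along the optical axis,
    f: focal length, d: aperture diameter (all in the same length unit).
    Implements Eqs. (1) and (2).
    """
    F = 1.0 / (1.0 / f - 1.0 / D)          # sensor plane conjugate to D
    F1 = 1.0 / (1.0 / f - 1.0 / (D + z))   # image plane for the shifted plane
    return d * abs(F - F1) / F1            # similar triangles at the sensor

# Parameters from the text: f = 40 mm, D = 50 mm (so F = 200 mm);
# z = 10 um expressed in mm, d = 10 mm is an assumed aperture.
d_coc = coc_diameter(D=50.0, z=10e-3, f=40.0, d=10.0)
```

For a plane exactly in focus (z = 0) the CoC diameter is zero, and it grows with |z|, which is the geometric content of the depth of focus.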
In a typical optical lens system, image blur can be described by a degradation model with a convolution kernel h(x, y):

g(x, y) = f(x, y) ∗ h(x, y) + n(x, y),    (3)

where h(x, y) is the impulse response or point spread function (PSF), usually related to the CoC of the imaging system, n(x, y) is additive noise, and g(x, y) is the blurred image. The PSF of a defocused lens can be modeled by either geometrical or physical optics [5]. The former ignores diffraction, and the PSF can then be treated as a uniform distribution when the aperture is circular. In the physical-optics case, diffraction and aberration are taken into account and the signal intensity in the CoC is assumed to follow a Gaussian distribution [6-8]:

h(x, y) = (1 / 2πs^2) exp(−(x^2 + y^2) / 2s^2).    (4)
Here, the spread parameter s is proportional to the radius of the CoC [6-8]. Because the radii of the CoCs formed by different scintillator planes are not identical, the response functions of the planes differ; and because divergence can be neglected, the final blurred image generated by the photons emitted from all planes of the scintillator can be described as an integral, Eq. (5).
Here, z0 is the distance from the focal plane inside the scintillator to the scintillator's left surface, z1 is the distance from the focal plane to the right surface, and ξ is the X-ray absorption coefficient. If the image formed by photons that come from the focal plane and pass through the optical lens is g0(x, y), the images formed by photons from the other scintillator planes can be expressed as follows.
Here, k is the scale factor, determined by F and (D + z), and the imaging formula of the final degraded blurred image must be modified accordingly. Its Fourier transform can then be written down, where c is a constant related to the system. We can now discretize G(u, v) to obtain G(m, n) as stated below.
When l_m ≤ zm/D < l_m + 1 and l_n ≤ zn/D < l_n + 1, where l_m and l_n are integers, an approximate expression for F(m + zm/D, n + zn/D) can be obtained by bilinear or nearest-neighbor interpolation, since z is small compared with D. In our discussion we adopt bilinear interpolation for better accuracy and assume z1 = 10 μm, −z0 = −10 μm, D = 50 mm, and a 512 × 512 sampling of the blurred image. The remaining z-dependent factors, including the Gaussian term and the attenuation factor e^(−ξz), are collected into a weight g(z).
If we let ψ denote the inverse transform of the function Ψ, the blurred image can be expressed in the spatial domain as above. We have now established a relationship between the image f(k, l) generated by the luminous plane located on the focal plane and the blurred image g(k, l), and we will use this relationship to simulate the degraded blurred image.
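The model above can be sketched numerically: sum the slice-wise Gaussian PSFs of Eq. (4) across the scintillator thickness with the attenuation weight e^(−ξz) to approximate the integral of Eq. (5), then blur the focal-plane image with the composite kernel as in Eq. (3). This is a minimal illustration under our own assumptions: the mapping from slice depth to spread parameter (`s_of_z`) and the depth origin of the attenuation term are illustrative, not taken from the paper.

```python
import numpy as np

def composite_psf(z0, z1, xi, s_of_z, size=31, n_slices=21):
    """Discretize the integral of Eq. (5): sum the Gaussian PSFs of Eq. (4)
    over scintillator slices from -z0 to z1 (z = 0 at the focal plane), each
    weighted by exp(-xi * depth), depth measured from the left surface."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    psf = np.zeros((size, size))
    for z in np.linspace(-z0, z1, n_slices):
        s = max(s_of_z(z), 1e-3)           # in-focus slice -> near-delta PSF
        layer = np.exp(-(xx**2 + yy**2) / (2.0 * s**2))
        psf += np.exp(-xi * (z + z0)) * layer / layer.sum()
    return psf / psf.sum()                 # normalized composite kernel

def degrade(f_img, psf):
    """g = f * h of Eq. (3) (noise omitted), with circular FFT convolution
    and the composite kernel standing in for h."""
    pad = np.zeros_like(f_img)
    k = psf.shape[0]
    pad[:k, :k] = psf
    pad = np.roll(pad, (-(k // 2), -(k // 2)), axis=(0, 1))  # center at origin
    return np.real(np.fft.ifft2(np.fft.fft2(f_img) * np.fft.fft2(pad)))
```

Calling `degrade(pattern, composite_psf(10.0, 10.0, xi, s_of_z))` then plays the role of the modified imaging formula for a 20 μm-thick scintillator.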

Recovery algorithm
When the pixel number of the detector is small, the system of equations can easily be inverted by conventional matrix methods. In practice, however, a detector usually has tens of thousands of pixels, which precludes inverting the matrix directly. For such large systems of equations there is an attractive iterative method, the projection-onto-convex-sets (POCS) algorithm, which goes back to Kaczmarz [9] and can be briefly described as follows.
The vector f^(k) in Eq. (13) is composed of all the variables f^(k)(1), f^(k)(2), ..., f^(k)(M) generated by the kth iteration, and w_i = (w_i1, w_i2, ..., w_iM) is the weighting vector. In our model we have a similar form, with

f^(k) = (f(0, 0), f(0, 1), ..., f(i, j), ...),  W_{k,l} = (w_{0,0,k,l}, w_{0,1,k,l}, ..., w_{i,j,k,l}, ...),

and each update projects f^(k) onto the hyperplane defined by W_{k,l}. The POCS algorithm can in principle solve the equation sets. However, since various noises and errors arise during imaging, an exact unique solution may not exist; we can only obtain an approximate solution, and the restored image is degraded to some extent. To improve the quality of the restored image, we adopt a compressed-sensing algorithm [10] based on minimization of the image total variation (TV) during the solution procedure. The TV approach was first developed for signal denoising and has recently been introduced into CT reconstruction for insufficient-data problems [11-13]. In this paper we construct a TV expression similar to the one used in CT reconstruction. Combining the iterative algorithm above with TV regularization, the new solution procedure can be viewed as the optimization problem stated below.
In Eq. (15), the vector f represents an image composed of the f(i, j), and the expression ||f||_TV denotes the l1 norm of the gradient of the image. The latter two parts of Eq. (15) are the data-consistency and positivity constraints, and the first part is a regularization term. The TV expression can be written in discrete form, where ε is a small positive number. The minimization of the TV of the image estimate can be achieved with many optimization algorithms, such as gradient descent or the conjugate gradient method. In this paper we choose gradient descent to search for the minimum that agrees with the data-consistency constraint; the derivative of ||f||_TV can be expressed in an approximate discrete form. We have now specified all the parameters needed in the solution procedure, which contains two phases, POCS and TV gradient descent, executed alternately and iteratively. The POCS phase enforces data consistency and the TV gradient-descent phase reduces the TV of the intermediate image produced by the POCS phase.
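The two-phase procedure can be sketched as alternating Kaczmarz (POCS) sweeps with a positivity projection, followed by a few TV gradient-descent steps. This is a small dense-matrix illustration under assumed step sizes and iteration counts; the discrete TV gradient follows the standard approximation −div(∇f / |∇f|).

```python
import numpy as np

def pocs_tv(A, g, shape, n_iter=20, tv_steps=5, alpha=0.2, eps=1e-8):
    """Alternate a Kaczmarz (POCS) sweep with TV gradient descent.
    A: (n_meas, n_pix) system matrix, g: measured (blurred) data,
    shape: image shape; alpha and the iteration counts are assumed values."""
    f = np.zeros(A.shape[1])
    row_norms = (A * A).sum(axis=1)
    for _ in range(n_iter):
        # POCS phase: project f onto each hyperplane w_i . f = g_i in turn
        for i in range(A.shape[0]):
            if row_norms[i] > 0.0:
                f += (g[i] - A[i] @ f) / row_norms[i] * A[i]
        f = np.clip(f, 0.0, None)          # positivity constraint
        # TV phase: descend along the discrete gradient of ||f||_TV
        img = f.reshape(shape)
        for _ in range(tv_steps):
            dx = np.diff(img, axis=0, append=img[-1:, :])
            dy = np.diff(img, axis=1, append=img[:, -1:])
            mag = np.sqrt(dx**2 + dy**2 + eps)
            div = (dx / mag - np.roll(dx / mag, 1, axis=0)
                   + dy / mag - np.roll(dy / mag, 1, axis=1))
            img = img + alpha * div        # d||f||_TV/df ~ -div(grad/|grad|)
        f = img.ravel()
    return f.reshape(shape)
```

In the full-scale problem one would not form A explicitly; each row-action step can instead be evaluated with the shifted, scaled PSF weights W_{k,l}, which is what makes the row-by-row POCS formulation attractive.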

Experiment and result
The simulated pattern (Figure 3) is 512 × 512 pixels, with a pixel size of 1 μm. It is composed of five parts, and its finest feature size is 1 μm. The top-left and bottom-right parts contain series of line pairs with pitches from 1 μm to 128 μm. The bottom-left part is a grid with a pitch of 5 μm; this grid, together with the Siemens star at middle right, can be used to test the algorithm's performance on complicated objects. The top-right part contains line pairs with grayscale varying gradually from 10 to 190, which tests the recovery of low-contrast images. The specific parameters of the optical system are given in Table 1. The focal length f is 40 mm, the object distance D is 50 mm, and the image distance is 200 mm. The detector pixel size is 1 μm, so the limiting resolution of the optical system is 2 μm and the proper scintillator thickness would be about 1 μm. The scintillator chosen in the experiment, however, is 20 μm thick, so the resolution degrades greatly. Given the parameters of the optical lens and the relationship between the degraded blurred image and the original high-resolution image, we can approximately simulate a blurred image caused by an inappropriate scintillator thickness and defocus. In the experiment we assumed that the scintillator material was CsI and the X-ray tube voltage was 60 kV.
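A line-pair group of the kind described above can be generated as follows. This is only an illustrative bar-group generator, not a reproduction of the published phantom layout; the function name and bar convention (one dark plus one bright bar per pair, each `pitch_px` pixels wide) are our assumptions.

```python
import numpy as np

def line_pairs(pitch_px, n_pairs=4, height=64, low=0.0, high=255.0):
    """One group of vertical line pairs: alternating dark and bright bars,
    each pitch_px pixels wide (at 1 um/pixel, pitch_px = 1 corresponds to
    the finest 1 um feature of the test pattern)."""
    row = np.empty(2 * n_pairs * pitch_px)
    for p in range(2 * n_pairs):
        row[p * pitch_px:(p + 1) * pitch_px] = high if p % 2 else low
    return np.tile(row, (height, 1))
```

Groups with pitches of 1, 2, 4, ... pixels can be tiled together, with reduced `high` values for the low-contrast part of the pattern.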
To verify the effectiveness of the recovery algorithm, the complicated pattern shown in Figure 3 was used as the desired high-resolution image to simulate the degraded blurred image (Figure 4). After preparing the blurred image, we first demonstrated the performance of the proposed recovery algorithm on the noise-free degraded image. Poisson noise was then added to the degraded image to study the robustness of the algorithm. To evaluate the resolution of the restored image, the modulation transfer function (MTF) [14] was used, and quantitative measures based on the signal-to-noise ratio (SNR) and peak signal-to-noise ratio (PSNR) were used to evaluate the quality of the restored image.

Ideal projection data without noise
We first tested the performance of the proposed algorithm on an ideal model, i.e., a noise-free degraded projection image. The blurred image, together with some magnified parts of it, is shown in Figure 4(a). From the blurred image we obtained a clear image (Figure 4(b)) using the recovery algorithm proposed above. The MTF curve in Figure 5(a) shows that the spatial resolution of the blurred image is only about 160 lp/mm at 10% MTF, while the MTF in Figure 5(b) indicates that the recovery algorithm performs essentially perfectly on the noise-free model and lets the optical system reach nearly its limiting resolution.

Projection data with noise

From the experiment and analysis above, we conclude that the proposed algorithm can recover the noise-free blurred projection data almost perfectly. In the following experiment we test whether the method remains usable under noisy conditions. The noise affecting recovery quality comes from the CCD camera, whose three primary noise sources are photon noise, dark noise and read noise. The CCD used in a high-resolution X-ray imaging system is usually cooled to greatly reduce dark noise and uses design enhancements to reduce read noise; in practice, the camera integration time is often increased to collect more photons, so that photon noise exceeds both dark noise and read noise and can be assumed to be the dominant noise. Photon noise results from the inherent statistical variation in the arrival rate of photons incident on the CCD, and the magnitude of the signal containing it follows the Poisson statistics of the photons arriving at a given location. Accordingly, in the experiment we added Poisson noise to the noise-free blurred projection image, assuming a maximum transmitted photon number of 10^6.
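The Poisson noise model just described can be sketched by scaling the blurred image to photon counts, sampling, and scaling back. The function name and the normalization of the image maximum to the maximum photon number are our assumptions about the implementation.

```python
import numpy as np

def add_poisson_noise(blur_img, max_photons=1e6, seed=0):
    """Map the (nonnegative) blurred image to photon counts so that its
    maximum corresponds to max_photons, draw Poisson counts per pixel,
    and rescale back to image units."""
    rng = np.random.default_rng(seed)
    scale = max_photons / blur_img.max()
    return rng.poisson(blur_img * scale) / scale
```

With 10^6 photons at the brightest pixel, the relative photon-noise level there is about 1/sqrt(10^6) = 0.1%; darker pixels receive proportionally fewer counts and are noisier, which is why the low-contrast parts are recovered worst.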
From the recovered image in Figure 6(a) and the MTF curve in Figure 6(b), we conclude that the algorithm can also recover the noisy blurred image and reach nearly the limiting resolution. Its only shortcoming is that it cannot perfectly recover the low-contrast parts (shown in part 3 of Figure 6(a)); the quality of the restored image depends on the signal-to-noise ratio of the degraded blurred image.

Quantification based evaluation
In addition to the visual evaluation above, we measured the local SNR and the PSNR to quantitatively evaluate the quality of the recovered image. The local SNR is defined over a region of interest, and the PSNR is computed as

PSNR = 10 log10(255^2 / MSE).
Here, MSE is the mean square error between the true image and the recovered image. In the experiment, several levels of Poisson noise were added to the noise-free degraded blurred image. The recovered image in Figure 7(a) shows that when the incident photon number reaches 2.5 × 10^6, the corresponding Poisson noise has almost no impact on the quality of the recovered image. The local SNR and global PSNR curves for different noise levels in Figure 7(b, c) quantify how the noise level affects the recovered image quality, so in practice the curves can help determine which noise level is acceptable for a given quality requirement.
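The PSNR formula can be implemented directly; the helper below is a minimal sketch assuming an 8-bit peak value of 255, as in the formula above.

```python
import numpy as np

def psnr(true_img, recov_img, peak=255.0):
    """Peak signal-to-noise ratio in dB: PSNR = 10 * log10(peak^2 / MSE)."""
    diff = np.asarray(true_img, float) - np.asarray(recov_img, float)
    mse = np.mean(diff ** 2)               # mean square error
    return 10.0 * np.log10(peak**2 / mse)
```

A higher PSNR means the recovered image is closer to the true image; halving the RMS error raises the PSNR by about 6 dB.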

Discussion
In this paper we simulated a luminous model of a scintillator thicker than the matching thickness, and based on this model we developed a deblurring algorithm to recover the blurred image caused by defocus. The analysis of the MTF curves in the experiment shows that the recovery algorithm can recover the blurred image almost completely and reach nearly the limiting resolution. When the noise of a real application is taken into account, the algorithm still recovers the blurred image quite well, but the quality of the recovered image depends on the SNR of the blurred projection image acquired from the CCD camera. That is to say, a more accurate recovered image requires a higher SNR in the blurred projection image. The simulation in the quantification-based evaluation can help choose an appropriate noise level for a specific system. Of course, techniques such as the ML-EM algorithm and the penalized weighted least-squares algorithm [15], developed for PET and CT reconstruction respectively, could be adapted here to suppress noise and improve the quality of the recovered image; this modification is worthy of further study and discussion.

Figure 6: The recovered image and its MTF curve when Poisson noise was added to the blurred projection image: (a) recovered image from noisy projection data with some magnified parts, (b) MTF for the recovered image in (a).

Conclusion
The experimental results indicate that the proposed method can effectively extend the depth of focus of a high-resolution X-ray imaging system. To obtain a recovered image of higher quality, the noise level of the degraded blurred image should be controlled. To put this algorithm into practice, we also need to know the system constant c and to calibrate the scintillator so that it is perpendicular to the primary optical axis; these are topics for future work.