
Low-dose CT reconstruction via L1 dictionary learning regularization using iteratively reweighted least-squares

Abstract

Background

In order to reduce the radiation dose of CT (computed tomography), compressed sensing theory has been a hot topic, since it offers the possibility of high-quality recovery from sparsely sampled data. Recently, algorithms based on DL (dictionary learning) were developed to deal with the sparse CT reconstruction problem. However, the existing DL algorithm focuses on the minimization problem with an L2-norm regularization term, which leads to deteriorating reconstruction quality as the sampling rate declines further. Therefore, it is essential to improve the DL method to meet the demand for further dose reduction.

Methods

In this paper, we replace the L2-norm regularization term with an L1-norm one. The proposed L1-DL method is expected to alleviate the over-smoothing effect of L2-minimization and preserve more image details. The proposed algorithm handles the L1-minimization problem by a weighting strategy, solving a new weighted L2-minimization problem based on IRLS (iteratively reweighted least squares).

Results

Through numerical simulation, the proposed algorithm is compared with the existing DL method (adaptive dictionary based statistical iterative reconstruction, ADSIR) and two other typical compressed sensing algorithms. The results reveal that the proposed algorithm is more accurate than the other algorithms, especially when the sampling rate is reduced further or the noise level is increased.

Conclusion

The proposed L1-DL algorithm can utilize more prior information about image sparsity than ADSIR. By replacing the L2-norm regularization term of ADSIR with an L1-norm one and solving the L1-minimization problem with the IRLS strategy, L1-DL reconstructs the image more accurately.

Background

Nowadays, X-ray CT (computed tomography) remains one of the most important medical imaging technologies. Compared with other imaging methods, such as ultrasonic imaging or magnetic resonance imaging, CT has its own advantages in depicting a patient's anatomical structure: CT images are of higher quality than ultrasonic images, and compared with magnetic resonance imaging, CT offers faster imaging speed and slightly higher spatial resolution.

However, high-quality CT images currently require a noticeable X-ray radiation dose to the patient, which may result in a non-negligible lifetime risk of genetic or cancerous diseases [1]. This fact has become a major concern for clinical applications of CT scans. Therefore, this article focuses on reducing the radiation dose of CT while still generating clinically qualified images.

One direct way to reduce the radiation dose is to lower the mAs level in the CT data acquisition protocol. However, this approach results in an insufficient number of X-ray photons received by the detectors and hence increases the quantum noise level. This is a great challenge even for advanced methods that take noise models into account; for example, PWLS (penalized weighted least-squares) based methods [2] can only deal with noise-contaminated sinogram data to some extent. As a consequence, the radiation dose cannot be reduced substantially by this approach if the reconstructed images are to remain qualified for clinical diagnosis. Another way to reduce the imaging dose is to decrease the number of X-ray projections by using fewer sampling angles. Yet this leads to serious streaking artifacts in images reconstructed by analytic algorithms such as FBP (filtered backprojection) [3], since analytic algorithms require the number of projections to satisfy the Shannon/Nyquist sampling theorem [4].

To solve the under-sampled reconstruction problem, algebraic algorithms transform the problem into a system of linear equations and acquire the reconstructed image iteratively. SART (simultaneous algebraic reconstruction technique) [5] is one of the typical algebraic iterative methods. However, the images reconstructed by traditional algebraic algorithms do not meet clinical image quality demands. Researchers are therefore trying to improve the performance of iterative reconstruction algorithms by introducing prior information about the reconstructed images into the reconstruction process.

Recently, the compressed sensing theory [6, 7] has been applied to the CT reconstruction problem. The reconstruction problem of a compressed sensing algorithm can be written as the optimization problem

$$ \arg\min_{\boldsymbol{\mu}} \| \mathbf{A}\boldsymbol{\mu} - \hat{\mathbf{g}} \|^{2} + \lambda R(\boldsymbol{\mu}) $$
(1)

where \( \| \mathbf{A}\boldsymbol{\mu} - \hat{\mathbf{g}} \|^{2} \) is the data fidelity term, A is the projection matrix modeling the forward projection, μ is the image vector to be reconstructed, \( \hat{\mathbf{g}} \) is the projection vector, R(μ) is the regularization term encoding the prior information, and λ is the regularization parameter balancing the penalties on the regularization and data fidelity terms. One commonly used regularization term is the TV (total variation) norm, the sum of the absolute coefficients of the DGT (discrete gradient transform) of the reconstructed image. TV-based algorithms are widely used for CT reconstruction because most CT images are piecewise constant, so the TV norm is small enough to reflect the image sparsity. Algorithms like ASD-POCS (adaptive steepest descent projection onto convex sets) [8] and GPBB (gradient projection Barzilai Borwein) [9] are typical TV-based reconstruction algorithms. Besides the TV norm, the recently developed DL (dictionary learning) methods can also generate the regularization term: they divide the CT image into many overlapping patches and compute sparse representations of the patches over an over-complete dictionary. While the TV-based methods take the image as a whole to measure its sparsity, the DL methods extract the sparse prior information from each overlapping image patch and therefore utilize more sparsity information than TV. Reference [10] combines the SIR (statistical iterative reconstruction) model with a DL regularization term to deal with the low-dose reconstruction problem.

Although compressed sensing algorithms perform well on the low-dose CT reconstruction problem, some low-contrast details of the reconstructed image are lost, especially when the sampling rate decreases further. In order to preserve more image information and satisfy the need for further radiation reduction, a weighted TV regularization was developed for the TV-based algorithms to help preserve image edges [11]. For DL-based algorithms, the regularization term is commonly an L2-norm error. Algorithms based on L2-norm error minimization tend to over-smooth the image, causing a loss of detail. One way to alleviate this problem is to develop algorithms that minimize the L1-norm error. In this work, we develop a DL reconstruction algorithm with an L1-norm regularization term, where the L1-minimization problem is approximated by iteratively solving a weighted L2-minimization problem, known as IRLS (iteratively reweighted least squares) [12, 13]. The proposed L1-DL (L1 dictionary learning) algorithm is compared with ADSIR (adaptive dictionary based statistical iterative reconstruction), SART and GPBB to demonstrate the improvement of image quality achieved by the L1-norm regularization term.

The rest of the paper is organized as follows. In the ‘‘Methods’’ section, the backgrounds of ADSIR and IRLS are first reviewed; then the L1-DL algorithm and its corresponding optimization methods are described, followed by the workflow of the algorithm. In the ‘‘Simulation results’’ section, a series of experiments demonstrates the superiority of the proposed algorithm. Finally, the ‘‘Conclusion’’ section, together with the corresponding discussion, provides further analysis.

Methods

Review of ADSIR

A previous work developed a DL-based approach for low-dose X-ray CT with the statistical reconstruction model [10]. The related algorithm is reviewed in detail as follows.

SIR model

Let I and N be integers and \( \mathbb{R} \) be the real space. Assuming a monochromatic source, the measured data follow the Poisson distribution

$$ y_{i} \sim \text{Poisson}\{ b_{i} e^{-g_{i}} + \gamma_{i} \}, \quad i = 1, \ldots, I $$
(2)

where \( \mathbf{b} = (b_{1}, b_{2}, \ldots, b_{I})^{T} \in \mathbb{R}^{I \times 1} \) is the entrance X-ray intensity, \( \mathbf{y} = (y_{1}, y_{2}, \ldots, y_{I})^{T} \in \mathbb{R}^{I \times 1} \) is the exit X-ray intensity, \( \mathbf{g} = (g_{1}, g_{2}, \ldots, g_{I})^{T} \in \mathbb{R}^{I \times 1} \) is the integral of the linear attenuation coefficient with \( g_{i} = [\mathbf{A}\boldsymbol{\mu}]_{i} = \sum_{j=1}^{N^{2}} a_{ij}\mu_{j} \), \( \mathbf{A} = \{a_{ij}\} \in \mathbb{R}^{I \times N^{2}} \) is the system matrix, the reconstructed image \( \boldsymbol{\mu} = (\mu_{1}, \mu_{2}, \ldots, \mu_{N^{2}})^{T} \in \mathbb{R}^{N^{2} \times 1} \) is the linear attenuation coefficient distribution obtained by rearranging the initial image of N × N pixels into a vector, and \( \gamma_{i} \) represents the read-out noise.

The objective function of the SIR model is

$$ \arg\min_{\boldsymbol{\mu}} \sum_{i=1}^{I} \frac{\omega_{i}}{2} \left( [\mathbf{A}\boldsymbol{\mu}]_{i} - \hat{g}_{i} \right)^{2} $$
(3)

where \( \sum_{i=1}^{I} (\omega_{i}/2)([\mathbf{A}\boldsymbol{\mu}]_{i} - \hat{g}_{i})^{2} \) is the data fidelity term, \( \hat{\mathbf{g}} = (\hat{g}_{1}, \hat{g}_{2}, \ldots, \hat{g}_{I})^{T} \in \mathbb{R}^{I \times 1} \) is the measured estimate of g calculated by \( \hat{g}_{i} = \ln(b_{i}/(y_{i} - \gamma_{i})) \), and \( \omega_{i} = (y_{i} - \gamma_{i})^{2}/y_{i} \) is the statistical weight.

In the SIR model, the statistical weight reflects the confidence in the projection measurement along each path; projection data through denser paths have lower SNR (signal to noise ratios). Compared to SART, which minimizes a least-squares function, SIR deals with a statistically weighted least-squares function. However, this improvement alone is insufficient for low-dose CT reconstruction, so it is essential to introduce a regularization constraint.
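
To make the preceding definitions concrete, the following minimal NumPy sketch computes the log-transformed projections and statistical weights from measured counts; the function and variable names are our own illustrative choices, not from the paper.

```python
import numpy as np

def sir_weights(y, b, gamma):
    """Sketch of the SIR preprocessing step: from exit counts y, entrance
    intensity b and read-out noise gamma, compute the log projections
    g_hat_i = ln(b_i / (y_i - gamma_i)) and the statistical weights
    omega_i = (y_i - gamma_i)^2 / y_i."""
    net = y - gamma               # noise-corrected photon counts
    g_hat = np.log(b / net)       # estimated line integrals of attenuation
    omega = net ** 2 / y          # per-ray statistical weight (confidence)
    return g_hat, omega
```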

DL model

Let \( N_{0} \) and K be integers. The DL regularization term is represented as

$$ R(\boldsymbol{\mu}) = \sum_{s=1}^{S} \| \mathbf{E}_{s}\boldsymbol{\mu} - \mathbf{D}\boldsymbol{\alpha}_{s} \|_{2}^{2} + \sum_{s=1}^{S} \nu_{s} \| \boldsymbol{\alpha}_{s} \|_{0} $$
(4)

where \( \mathbf{E}_{s} = \{e_{nj}^{s}\} \in \mathbb{R}^{N_{0}^{2} \times N^{2}} \) is an operator that extracts overlapping patches of \( N_{0} \times N_{0} \) pixels from the image. With a sliding distance of one pixel, the total number of patches is \( S = (N - N_{0} + 1) \times (N - N_{0} + 1) \). \( \mathbf{D} = (\mathbf{d}_{1}, \mathbf{d}_{2}, \ldots, \mathbf{d}_{K}) \in \mathbb{R}^{N_{0}^{2} \times K} \) is the training dictionary, whose column \( \mathbf{d}_{k} \in \mathbb{R}^{N_{0}^{2} \times 1} \) is called an atom and has the same size as a patch. Usually the dictionary is redundant or over-complete (\( N_{0}^{2} \ll K \)). \( \boldsymbol{\alpha}_{s} \in \mathbb{R}^{K \times 1} \) has few nonzero components and is the sparse representation of the patch over the dictionary basis D. \( \nu_{s} \) is a regularization parameter distinct from λ.
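
As a concrete illustration of the patch-extraction operator \( \mathbf{E}_{s} \), the following sketch collects all overlapping \( N_{0} \times N_{0} \) patches of an image as columns of a matrix; the layout and function name are our own choices.

```python
import numpy as np

def extract_patches(image, n0):
    """Return an (n0*n0, S) matrix whose s-th column is E_s mu, i.e. the s-th
    overlapping n0 x n0 patch (sliding distance of one pixel), so that
    S = (N - n0 + 1)**2 for an N x N image."""
    n = image.shape[0]
    steps = n - n0 + 1
    patches = np.empty((n0 * n0, steps * steps))
    s = 0
    for r in range(steps):
        for c in range(steps):
            patches[:, s] = image[r:r + n0, c:c + n0].ravel()
            s += 1
    return patches
```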

ADSIR

By introducing the DL regularization term into the SIR model, the objective function of ADSIR is

$$ \min_{\boldsymbol{\mu},\,\boldsymbol{\alpha},\,\mathbf{D}} \sum_{i=1}^{I} \frac{\omega_{i}}{2} \left( [\mathbf{A}\boldsymbol{\mu}]_{i} - \hat{g}_{i} \right)^{2} + \lambda \left( \sum_{s=1}^{S} \| \mathbf{E}_{s}\boldsymbol{\mu} - \mathbf{D}\boldsymbol{\alpha}_{s} \|_{2}^{2} + \sum_{s=1}^{S} \nu_{s} \| \boldsymbol{\alpha}_{s} \|_{0} \right) $$
(5)

Since μ, \( \boldsymbol{\alpha}_{s} \) and D are all unknown, the algorithm iterates with an alternating minimization scheme, which divides the primary problem into two recursive steps: updating the dictionary model and updating the image. In each iteration, the image μ is first kept unchanged while the dictionary model is updated, and the objective function (5) becomes

$$ \min_{\boldsymbol{\alpha},\,\mathbf{D}} \sum_{s=1}^{S} \| \mathbf{E}_{s}\boldsymbol{\mu} - \mathbf{D}\boldsymbol{\alpha}_{s} \|_{2}^{2} + \sum_{s=1}^{S} \nu_{s} \| \boldsymbol{\alpha}_{s} \|_{0} $$
(6)

which is the dictionary learning and sparse representation problem. The objective function (6) can be transformed to

$$ \begin{aligned} & \min_{\boldsymbol{\alpha},\,\mathbf{D}} \sum_{s=1}^{S} \| \mathbf{E}_{s}\boldsymbol{\mu} - \mathbf{D}\boldsymbol{\alpha}_{s} \|_{2}^{2} \\ & \text{subject to } \| \boldsymbol{\alpha}_{s} \|_{0} \le L_{0}^{S} \end{aligned} $$

where the sparsity level \( L_{0}^{S} \) is set to a fixed number, usually from 5 to 10. The L0-norm problem (6) is thus transformed into an OMP problem, which removes the need to determine the value of \( \nu_{s} \). The dictionary D is updated by the classic K-SVD (K singular value decomposition) algorithm [14]. Then, the sparse representation \( \boldsymbol{\alpha}_{s} \) is updated by the OMP (orthogonal matching pursuit) algorithm [15] based on the current dictionary. Once the dictionary model has been updated in the current iteration, the image μ is updated with \( \boldsymbol{\alpha}_{s} \) and D fixed. In other words, the problem takes the form

$$ \min_{\boldsymbol{\mu}} \sum_{i=1}^{I} \frac{\omega_{i}}{2} \left( [\mathbf{A}\boldsymbol{\mu}]_{i} - \hat{g}_{i} \right)^{2} + \lambda \sum_{s=1}^{S} \| \mathbf{E}_{s}\boldsymbol{\mu} - \mathbf{D}\boldsymbol{\alpha}_{s} \|_{2}^{2} $$
(7)

which consists of the data fidelity term \( \sum_{i=1}^{I} \omega_{i}([\mathbf{A}\boldsymbol{\mu}]_{i} - \hat{g}_{i})^{2}/2 \) and the regularization term \( \sum_{s=1}^{S} \| \mathbf{E}_{s}\boldsymbol{\mu} - \mathbf{D}\boldsymbol{\alpha}_{s} \|_{2}^{2} \). Since the regularization term is already a separable quadratic function, replacing the data fidelity term with a separable paraboloid surrogate [16] allows the optimization to be solved iteratively by

$$ \begin{aligned} \mu_{j}^{t+1} &= \left[ \mu_{j}^{t} - \frac{ \sum_{i=1}^{I} a_{ij}\,\omega_{i} \left( [\mathbf{A}\boldsymbol{\mu}^{t}]_{i} - \hat{g}_{i} \right) + 2\lambda \sum_{s=1}^{S} \sum_{n=1}^{N_{0}^{2}} e_{nj}^{s} \left( [\mathbf{E}_{s}\boldsymbol{\mu}^{t}]_{n} - [\mathbf{D}\boldsymbol{\alpha}_{s}]_{n} \right) }{ \sum_{i=1}^{I} \left( a_{ij}\,\omega_{i} \sum_{k=1}^{N^{2}} a_{ik} \right) + 2\lambda \sum_{s=1}^{S} \sum_{n=1}^{N_{0}^{2}} e_{nj}^{s} \sum_{k=1}^{N^{2}} e_{nk}^{s} } \right]_{+} \\ &\qquad j = 1, 2, \ldots, N^{2} \end{aligned} $$
(8)

IRLS

Consider a sparse signal x of length N (sparse meaning the signal has few nonzero components, i.e. \( \| \mathbf{x} \|_{0} \ll N \)) encoded by an M × N measurement matrix Φ with M < N, so that the encoded signal \( \mathbf{y} = \boldsymbol{\Phi}\mathbf{x} \) has length M. Following [12, 13], the objective function of the Lp-norm minimization used to recover the sparse signal is

$$ \min_{\mathbf{x}} \| \mathbf{x} \|_{p}^{p}, \quad \text{subject to } \boldsymbol{\Phi}\mathbf{x} = \mathbf{y} \quad (0 < p \le 1) $$
(9)

IRLS can be used for solving (9) by replacing the Lp-norm with a weighted L2-norm

$$ \min_{\mathbf{x}} \sum_{i=1}^{N} w_{i} x_{i}^{2}, \quad \text{subject to } \boldsymbol{\Phi}\mathbf{x} = \mathbf{y} $$
(10)

where the weights are computed from the previous iteration result \( \mathbf{x}^{(t-1)} \). To make the weighted L2-norm approximate the Lp-norm, the weights are calculated by

$$ w_{i} = \left| (x_{i}^{(t-1)})^{2} + \varepsilon \right|^{\frac{p}{2} - 1} $$
(11)

where a small ε > 0 is provided to ensure stability. The signal is then updated by

$$ \mathbf{x}^{(t)} = \mathbf{Q}_{t} \boldsymbol{\Phi}^{T} \left( \boldsymbol{\Phi} \mathbf{Q}_{t} \boldsymbol{\Phi}^{T} \right)^{-1} \mathbf{y} $$
(12)

where \( \mathbf{Q}_{t} = \mathrm{diag}(1/w_{1}, 1/w_{2}, \ldots, 1/w_{N}) \) is the diagonal matrix with entries \( 1/w_{i} = |(x_{i}^{(t-1)})^{2} + \varepsilon|^{1 - \frac{p}{2}} \).
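
For illustration, a minimal sketch of the IRLS iteration (11)–(12) for recovering a sparse signal from y = Φx is given below; the initialization and iteration count are our own illustrative choices.

```python
import numpy as np

def irls(Phi, y, p=1.0, eps=1e-8, n_iter=50):
    """Sketch of IRLS for min ||x||_p subject to Phi x = y (0 < p <= 1).
    Each pass solves the weighted L2 problem in closed form,
    x = Q Phi^T (Phi Q Phi^T)^{-1} y, with Q = diag(1 / w_i) and
    1 / w_i = |x_i^2 + eps|^(1 - p/2) taken from the previous iterate."""
    x = np.linalg.pinv(Phi) @ y                     # minimum-norm starting point
    for _ in range(n_iter):
        q = (x ** 2 + eps) ** (1.0 - p / 2.0)       # diagonal of Q_t
        PhiQ = Phi * q                              # Phi @ diag(q), done columnwise
        x = q * (Phi.T @ np.linalg.solve(PhiQ @ Phi.T, y))
    return x
```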

L1-DL

In the ADSIR model, the regularization term \( \sum_{s=1}^{S} \| \mathbf{E}_{s}\boldsymbol{\mu} - \mathbf{D}\boldsymbol{\alpha}_{s} \|_{2}^{2} \) is the sum of the squared L2-norms of the differences between the image patches and their sparse approximations. The L2-norm constraint tends to distribute the energy of \( \sum_{s=1}^{S} \| \mathbf{E}_{s}\boldsymbol{\mu} - \mathbf{D}\boldsymbol{\alpha}_{s} \|_{2}^{2} \) uniformly over the patches \( \mathbf{E}_{s}\boldsymbol{\mu} - \mathbf{D}\boldsymbol{\alpha}_{s} \). However, most CT images are piecewise constant, so most image patches have small values of \( \| \mathbf{E}_{s}\boldsymbol{\mu} - \mathbf{D}\boldsymbol{\alpha}_{s} \| \) (equal or close to zero), while a small fraction of the patches have large values because they contain edge details and image information. In other words, the distribution of \( \| \mathbf{E}_{s}\boldsymbol{\mu} - \mathbf{D}\boldsymbol{\alpha}_{s} \| \) is sparse. We therefore propose the L1-DL method, which drives \( \| \mathbf{E}_{s}\boldsymbol{\mu} - \mathbf{D}\boldsymbol{\alpha}_{s} \| \) toward this sparse distribution through an L1-norm regularization term. L1-DL thus utilizes more prior information about image sparsity (the sparse distribution of \( \| \mathbf{E}_{s}\boldsymbol{\mu} - \mathbf{D}\boldsymbol{\alpha}_{s} \| \)) than ADSIR.

Derived from the ADSIR algorithm, the objective function of the L1-DL method is obtained by replacing the L2-norm in the regularization term with the L1-norm:

$$ \min_{\boldsymbol{\mu},\,\boldsymbol{\alpha},\,\mathbf{D}} \sum_{i=1}^{I} \frac{\omega_{i}}{2} \left( [\mathbf{A}\boldsymbol{\mu}]_{i} - \hat{g}_{i} \right)^{2} + \lambda \left( \sum_{s=1}^{S} \| \mathbf{E}_{s}\boldsymbol{\mu} - \mathbf{D}\boldsymbol{\alpha}_{s} \|_{1} + \sum_{s=1}^{S} \nu_{s} \| \boldsymbol{\alpha}_{s} \|_{0} \right) $$
(13)

To make this optimization problem tractable, a patch-based weighted L2-norm, similar to that of IRLS, is introduced. Since the DL framework uses the sparse representation of each image patch as the sparsity constraint, the weights are calculated from the information of each image patch:

$$ w_{s} = 1 \Big/ \left[ \left( \frac{1}{N_{0}^{2}} \sum_{i=1}^{N_{0}^{2}} \left| [\mathbf{E}_{s}\boldsymbol{\mu}^{(t-1)}]_{i} - [\mathbf{D}^{(t-1)}\boldsymbol{\alpha}_{s}^{(t-1)}]_{i} \right| \right) + \varepsilon \right] $$
(14)

where \( N_{0}^{2} \) is the dimension of \( \mathbf{E}_{s}\boldsymbol{\mu}^{(t-1)} \) and \( \mathbf{D}^{(t-1)}\boldsymbol{\alpha}_{s}^{(t-1)} \), the superscript \( (t-1) \) indicates the previous iteration result, and a small ε > 0 ensures stability. Unlike in IRLS, the weight is not computed from each component of the vector \( \mathbf{E}_{s}\boldsymbol{\mu} - \mathbf{D}\boldsymbol{\alpha}_{s} \): we first calculate the average absolute value of its components and then take the reciprocal of that value as the weight of the patch. This strategy differs from the IRLS algorithm in order to fit the nature of the DL method, in which the image patch is treated as the basic unit.
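
A minimal sketch of the per-patch weight of Eq. (14), computed from the previous iterate, could look as follows; the array layout (patches as columns) matches the extraction sketch above and is our own convention.

```python
import numpy as np

def patch_weights(E_mu, D_alpha, eps=1e-6):
    """Eq. (14): each patch's weight is the reciprocal of the mean absolute
    residual of that patch. E_mu and D_alpha are (N0**2, S) matrices whose
    columns hold E_s mu^(t-1) and D^(t-1) alpha_s^(t-1)."""
    mean_abs = np.mean(np.abs(E_mu - D_alpha), axis=0)   # one value per patch
    return 1.0 / (mean_abs + eps)
```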

By introducing the weights, the optimization problem becomes

$$ \min_{\boldsymbol{\mu},\,\boldsymbol{\alpha},\,\mathbf{D}} \sum_{i=1}^{I} \frac{\omega_{i}}{2} \left( [\mathbf{A}\boldsymbol{\mu}]_{i} - \hat{g}_{i} \right)^{2} + \lambda \left( \sum_{s=1}^{S} w_{s} \| \mathbf{E}_{s}\boldsymbol{\mu} - \mathbf{D}\boldsymbol{\alpha}_{s} \|_{2}^{2} + \sum_{s=1}^{S} \nu_{s} \| \boldsymbol{\alpha}_{s} \|_{0} \right) $$
(15)

The iteration also follows the alternating minimization scheme. When the dictionary model is updated, the objective function (15) becomes

$$ \sum_{s=1}^{S} w_{s} \| \mathbf{E}_{s}\boldsymbol{\mu} - \mathbf{D}\boldsymbol{\alpha}_{s} \|_{2}^{2} + \sum_{s=1}^{S} \nu_{s} \| \boldsymbol{\alpha}_{s} \|_{0}. $$
(16)

By the transformation \( \mathbf{E}_{s}\boldsymbol{\mu}' = \sqrt{w_{s}}\,\mathbf{E}_{s}\boldsymbol{\mu} \), \( \boldsymbol{\alpha}'_{s} = \sqrt{w_{s}}\,\boldsymbol{\alpha}_{s} \) and the equality \( \| \boldsymbol{\alpha}_{s} \|_{0} = \| \sqrt{w_{s}}\,\boldsymbol{\alpha}_{s} \|_{0} \), (16) can be transformed to

$$ \sum_{s=1}^{S} \| \mathbf{E}_{s}\boldsymbol{\mu}' - \mathbf{D}\boldsymbol{\alpha}'_{s} \|_{2}^{2} + \sum_{s=1}^{S} \nu_{s} \| \boldsymbol{\alpha}'_{s} \|_{0} $$
(17)

which has the same form as (6), so the dictionary model can be updated by K-SVD and OMP.
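
To illustrate the sparse-coding step applied to the rescaled patches, here is a minimal generic OMP sketch (not the authors' exact K-SVD/OMP implementation); the dictionary columns are assumed to be L2-normalized, and all names are illustrative.

```python
import numpy as np

def omp(D, x, sparsity):
    """Greedy orthogonal matching pursuit: select up to `sparsity` atoms of D
    (columns) that best explain x and return the sparse coefficient vector."""
    residual = x.astype(float).copy()
    support, alpha = [], np.zeros(D.shape[1])
    for _ in range(sparsity):
        k = int(np.argmax(np.abs(D.T @ residual)))        # most correlated atom
        if k not in support:
            support.append(k)
        coeffs, *_ = np.linalg.lstsq(D[:, support], x, rcond=None)
        residual = x - D[:, support] @ coeffs
    alpha[support] = coeffs
    return alpha

# Usage on one rescaled patch (names taken from the sketches above):
# alpha_s = omp(D, np.sqrt(w_s) * patch_s, L0_S)
```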

When updating the image, the objective function (15) becomes

$$ \min_{\boldsymbol{\mu}} \sum_{i=1}^{I} \frac{\omega_{i}}{2} \left( [\mathbf{A}\boldsymbol{\mu}]_{i} - \hat{g}_{i} \right)^{2} + \lambda \sum_{s=1}^{S} w_{s} \| \mathbf{E}_{s}\boldsymbol{\mu} - \mathbf{D}\boldsymbol{\alpha}_{s} \|_{2}^{2}. $$
(18)

Similar to (8), the formula to update the image is

$$ \begin{aligned} \mu_{j}^{t+1} &= \left[ \mu_{j}^{t} - \frac{ \sum_{i=1}^{I} a_{ij}\,\omega_{i} \left( [\mathbf{A}\boldsymbol{\mu}^{t}]_{i} - \hat{g}_{i} \right) + 2\lambda \sum_{s=1}^{S} w_{s} \sum_{n=1}^{N_{0}^{2}} e_{nj}^{s} \left( [\mathbf{E}_{s}\boldsymbol{\mu}^{t}]_{n} - [\mathbf{D}\boldsymbol{\alpha}_{s}]_{n} \right) }{ \sum_{i=1}^{I} \left( a_{ij}\,\omega_{i} \sum_{k=1}^{N^{2}} a_{ik} \right) + 2\lambda \sum_{s=1}^{S} w_{s} \sum_{n=1}^{N_{0}^{2}} e_{nj}^{s} \sum_{k=1}^{N^{2}} e_{nk}^{s} } \right]_{+} \\ &\qquad j = 1, 2, \ldots, N^{2} \end{aligned} $$
(19)
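
For readability, a dense-matrix sketch of one image update of Eq. (19) is given below (a practical implementation would exploit the sparsity of A and \( \mathbf{E}_{s} \) rather than store them densely); setting all \( w_{s} = 1 \) recovers the ADSIR update (8). The argument names and layout are our own.

```python
import numpy as np

def update_image(mu, A, g_hat, omega, E, D_alpha, w, lam):
    """One separable-surrogate sweep of Eq. (19).
    mu:      current image as a length-N^2 vector
    A:       (I, N^2) system matrix, omega: length-I statistical weights
    E:       (S, N0^2, N^2) stack of patch-extraction matrices E_s
    D_alpha: (S, N0^2) patch approximations D alpha_s, w: length-S patch weights."""
    num = A.T @ (omega * (A @ mu - g_hat))              # data-fidelity numerator terms
    den = A.T @ (omega * (A @ np.ones_like(mu)))        # data-fidelity denominator terms
    for s in range(E.shape[0]):
        res = E[s] @ mu - D_alpha[s]                    # [E_s mu]_n - [D alpha_s]_n
        num += 2.0 * lam * w[s] * (E[s].T @ res)
        den += 2.0 * lam * w[s] * (E[s].T @ (E[s] @ np.ones_like(mu)))
    return np.maximum(mu - num / den, 0.0)              # [.]_+ keeps the image nonnegative
```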

The convergence of L1-DL is much more difficult to prove and is considered beyond the scope of this paper. However, the experimental results reported below suggest that the proposed algorithm converges.

In summary, the workflow of the developed algorithm is exhibited in Algorithm I. In addition, the ordered subsets convex (OSC) algorithm [17] is utilized to accelerate the convergence.

Simulation results

To verify the effectiveness of the proposed L1-DL algorithm for low-dose CT reconstruction, several simulation experiments are designed. All simulations are performed in MATLAB on a dual-core PC with a 3.10 GHz Intel Core i5-2400. The proposed algorithm is compared with SART, GPBB and ADSIR. The scanning geometry is the fan-beam geometry shown in Fig. 1. The radius of the phantom is r = 20 cm, and the radius of the scanning circle (the distance from the radiation source to the central point of the phantom) is R = 40 cm. For each projection view, 512 detector elements are equi-angularly distributed with a field angle of 36.87°. The distance from the radiation source to the detector elements is 75.895 cm. During the simulation, the scanning circle covers 360° around the imaging phantom, and the size of the reconstructed image is 256 × 256 pixels. The phantoms under simulation are the Shepp–Logan phantom and a clinical human head slice, shown in Fig. 2 with full and partial display windows; biomedical images are often viewed with a proper window width to reveal more details. The number of different densities in the Shepp–Logan phantom is 6, and the density of water is set to 0.2.

Fig. 1 The scanning geometry

Fig. 2 The simulation phantoms. The first and second rows refer to the Shepp–Logan phantom (a, b) and the human head slice (c, d); the first column refers to the phantoms with full display windows and the second column refers to the phantoms with proper window widths

Some details about the proposed algorithm (such as the settings of some variables) are explained as follows. The image patches are overlapping with a sliding distance of one pixel, and each patch has \( N_{0} \times N_{0} \) pixels. The dictionary consists of K atoms. When updating the dictionary with the K-SVD algorithm, the sparsity level is \( L_{0}^{D} \). The sparsity level of the sparse representation \( \boldsymbol{\alpha}_{s} \) is set to \( L_{0}^{S} \). The iterative stopping criteria are \( \varepsilon_{1} \) and \( \varepsilon_{2} \). The values of all the corresponding variables are provided in Table 1. In the image processing field, \( K = 4 \times N_{0} \times N_{0} \) is a conventional choice to ensure the redundancy of the dictionary. The patch size also influences the quality of the image. If the patch size is too small, it cannot effectively capture features in an image. On the other hand, a larger patch size may lead to an over-smoothed image and corresponds to a larger number of atoms in the dictionary, which increases the computational cost. For image processing, \( N_{0} = 8 \) is a proper value considering both image quality and computational cost [18]. The sparsity levels \( L_{0}^{D} \) and \( L_{0}^{S} \) are empirically determined according to the complexity of the image to be reconstructed.

Table 1 Summary of the parameter selections

In addition, all the experiments are simulated by a photon-counting method. To perform the OSC algorithm, the transmission data are calculated as \( y_{i} = b_{i} e^{-\hat{g}_{i}} \), with the entrance X-ray intensity \( b_{i} \) set to the number of photons. L1-DL and ADSIR set the initial guess of the image to a random matrix, while GPBB and SART set all elements of the initial matrix to 1. OSC is utilized to accelerate the convergence of L1-DL and ADSIR, with 10 subsets, and the number of OSC iterations is 30. GPBB and SART are not accelerated by OSC; they stop when the number of iterations reaches 1000. ADSIR stops when the following conditions are met simultaneously:

$$ \left\{ \begin{array}{ll} \dfrac{|\delta^{(t)} - \delta^{(t-1)}|}{\delta^{(t)}} < \varepsilon_{1}, & \delta^{(t)} = \sum_{i=1}^{I} \dfrac{\omega_{i}}{2} \left( [\mathbf{A}\boldsymbol{\mu}^{(t)}]_{i} - \hat{g}_{i} \right)^{2} \\[2ex] \dfrac{|\eta^{(t)} - \eta^{(t-1)}|}{\eta^{(t)}} < \varepsilon_{2}, & \eta^{(t)} = \sum_{s=1}^{S} \| \mathbf{E}_{s}\boldsymbol{\mu}^{(t-1)} - \mathbf{D}^{(t)}\boldsymbol{\alpha}_{s}^{(t)} \|_{2}^{2} \end{array} \right. $$
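
A small sketch of checking this two-part stopping rule (the function name and argument layout are ours):

```python
def should_stop(delta_t, delta_prev, eta_t, eta_prev, eps1, eps2):
    """Stop when both the data-fidelity term delta and the patch-fitting term
    eta have stabilized to within the relative tolerances eps1 and eps2."""
    return (abs(delta_t - delta_prev) / delta_t < eps1
            and abs(eta_t - eta_prev) / eta_t < eps2)
```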

Reconstruction of different sparse levels

In this simulation, the forward simulation and inverse reconstruction are both performed in 2D. During the simulation, the scanning circle covers the 360° range around the phantom. The scanning step of the tomographic angles of the Shepp–Logan phantom is set to 3° (120 views) and 6° (60 views) respectively. The scanning step of the tomographic angles of the human head slice is set to 2° (180 views) and 4° (90 views) respectively.

We choose the SART, GPBB and ADSIR algorithms as comparisons to the proposed L1-DL algorithm. The regularization parameter of GPBB is chosen by testing values from 0.1 to 2 with a step of 0.1, and the best one is used. For the DL methods, a proper selection of the regularization parameter λ is a crucial issue. A larger λ weakens the effect of the data fidelity term and causes a loss of some fine details in the image, while a smaller λ weakens the effect of the regularization term as the sparse constraint, resulting in more noise and streak artifacts in the reconstructed image. In this article, the regularization parameter of ADSIR is determined by a model similar to the one created in Ref. [19]. The model first reconstructs the image with λ set to infinity, then calculates the difference between the forward projection and the scanning data; a proper value of λ is then obtained from a fitting function based on this difference. In the proposed method (L1-DL), the weights introduced into the algorithm would influence the regularization parameter. To eliminate this effect, we multiply the weights by a constant equal to the average value of \( (1/N_{0}^{2}) \sum_{i=1}^{N_{0}^{2}} |[\mathbf{E}_{s}\boldsymbol{\mu}^{(t-1)}]_{i} - [\mathbf{D}^{(t-1)}\boldsymbol{\alpha}_{s}^{(t-1)}]_{i}| \). The weights are then calculated by

$$ \begin{aligned} w_{s} &= C \Big/ \left[ \left( \frac{1}{N_{0}^{2}} \sum_{i=1}^{N_{0}^{2}} \left| [\mathbf{E}_{s}\boldsymbol{\mu}^{(t-1)}]_{i} - [\mathbf{D}^{(t-1)}\boldsymbol{\alpha}_{s}^{(t-1)}]_{i} \right| \right) + \varepsilon \right] \\ C &= \frac{1}{S} \sum_{s=1}^{S} \frac{1}{N_{0}^{2}} \sum_{i=1}^{N_{0}^{2}} \left| [\mathbf{E}_{s}\boldsymbol{\mu}^{(t-1)}]_{i} - [\mathbf{D}^{(t-1)}\boldsymbol{\alpha}_{s}^{(t-1)}]_{i} \right| \end{aligned} $$
(20)

With this modification, we set the regularization parameter of L1-DL to the same value as that of ADSIR. The regularization parameters of the different phantom simulations are shown in Table 2. The rationality and influence of the regularization parameter selection will be discussed later.

Table 2 Selections of the regularization parameters with different sampling rates

The reconstruction results of the simulations using these four algorithms are shown in Figs. 3, 4 (Shepp–Logan phantom) and Figs. 5, 6 (human head slice). Negative values in the reconstructed image are set to zero during the iterative process. The results indicate the density of the phantoms. The images of the human head slice are displayed by transforming the density to the CT attenuation value. The CT value is calculated by

$$ \text{CT value} = \frac{\mu - \mu_{\text{water}}}{\mu_{\text{water}}} \times 1000 $$
(21)

where \( \mu_{\text{water}} \) is the linear attenuation coefficient of water.

Fig. 3 The reconstruction results of the Shepp–Logan phantom. a–d The image reconstructed by L1-DL, ADSIR, GPBB and SART respectively with 120 scanning views data; e–h the difference between the reconstructed image (a–d) and the original image (OI); i–l the image reconstructed by L1-DL, ADSIR, GPBB and SART respectively with 60 scanning views data; m–p the difference between the reconstructed image (i–l) and the original image (OI)

Fig. 4 The horizontal intensity profiles through the center of the original and reconstructed images of the Shepp–Logan phantom. a L1-DL; b ADSIR; c GPBB; d SART

Fig. 5 The reconstruction results of the human head slice. a–d The image reconstructed by L1-DL, ADSIR, GPBB and SART respectively with 180 scanning views data; e–h the difference between the reconstructed image (a–d) and the original image (OI); i–l the image reconstructed by L1-DL, ADSIR, GPBB and SART respectively with 90 scanning views data; m–p the difference between the reconstructed image (i–l) and the original image (OI)

Fig. 6 The vertical intensity profiles through the center of the original and reconstructed images of the human head slice. a L1-DL; b ADSIR; c GPBB; d SART

Taking the original phantom as a gold standard to provide a numerical quantification of the results, the RMSE (root mean square error) of the reconstructed images is introduced to measure the difference between the reconstructed image and the original image in the L2-norm sense. This criterion is defined as:

$$ \text{RMSE} = \sqrt{ \frac{ \sum_{i,j} \left( \mu_{ij} - \mu_{ij}^{\text{truth}} \right)^{2} }{ N^{2} } } $$
(22)

where \( \mu_{ij}^{\text{truth}} \) denotes the gray value of the original image. The RMSE can be expressed in HU by converting the linear attenuation coefficients to CT values according to Eq. (21). The smaller the RMSE, the better the image quality. The quantitative results are shown in Table 3.
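
A minimal sketch of computing the RMSE in HU by first converting attenuation values to CT numbers via Eq. (21); the water attenuation value (e.g. 0.2 in the Shepp–Logan simulation) is passed in, and all names are our own.

```python
import numpy as np

def rmse_hu(mu, mu_truth, mu_water):
    """Eq. (22) evaluated on CT values (Eq. (21)) so that the result is in HU."""
    ct = (mu - mu_water) / mu_water * 1000.0
    ct_truth = (mu_truth - mu_water) / mu_water * 1000.0
    return np.sqrt(np.mean((ct - ct_truth) ** 2))
```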

Table 3 RMSEs (HU) of reconstructed images of different sampling rates

In Figs. 3 and 5, all the reconstructed images of the same phantom are shown in the same proper display window. In Fig. 3, the results of SART are the worst among the four algorithms. Even with 120 sampling views, the result is ruined by streak artifacts, which deteriorate seriously when the sampling rate declines to half. The reason for the poor results of SART is that it is a simple iterative algorithm with no regularization term to utilize prior information about the reconstructed image. Compared to SART, GPBB and ADSIR perform better with the help of regularization terms utilizing the sparse constraint. Figure 3f, n contain some edge structures, which reveals the over-smoothing effect of ADSIR. When the projection views reduce to 60, the image reconstructed by ADSIR is affected by some artifacts. L1-DL and GPBB perform well in the 120 views situation: the RMSEs of these two methods are 1.647 HU and 2.658 HU respectively, both of which are small, and Fig. 3e, g reveal that the differences are hard to recognize. When the sampling views are reduced, tiny edge structures emerge in Fig. 3o and the RMSE of GPBB increases to 11.04 HU. Figure 4 displays the horizontal intensity profiles through the center of the reconstructed images compared to the original image. It shows that the algorithms are affected by the reduction of the sampling views in the order SART > ADSIR > GPBB > L1-DL. In Table 3, the RMSE of L1-DL in the 60 views situation is much smaller than the RMSEs of ADSIR and SART in the 120 views situation, which certifies that the proposed algorithm can adapt to further radiation dose reduction (Table 4).

Table 4 Selections of the regularization parameters with different noise levels

For the human head slice in Figs. 5 and 6, the results are similar to those of the Shepp–Logan phantom. However, the original image of the human head slice is more complex than the Shepp–Logan phantom and is closer to the real reconstruction task for biomedical images. Although the RMSE of the proposed algorithm is a little larger than that of GPBB in the 180 views situation, L1-DL is better than GPBB when the number of views is reduced to 90. In the 90 views situation, the image reconstructed by GPBB, displayed in Fig. 5k, contains some artifacts, and inspection of the enlarged region of the images shows that its spatial resolution is worse than that of L1-DL. In addition, the RMSEs of the proposed algorithm are much smaller than those of ADSIR. Considering that the proposed algorithm is built on the reconstruction model of ADSIR, the improvement in image quality is obvious.

Robustness to the noise

In practical applications, the measured radiation projections are usually contaminated by noise, which demands that the algorithm be robust to noise. To evaluate the noise tolerance of the proposed algorithm and the other three, Poisson noise is added to the projection data for testing. The scanning step of the tomographic angles of the Shepp–Logan phantom is set to 6° (60 views), and the simulated numbers of photons emitted from the X-ray source to each detector are 2 million and 1 million; the detected numbers of photons are perturbed by Poisson noise. The scanning step of the human head slice is set to 2° (180 views) and 4° (90 views) respectively, with 2 million simulated photons.
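
As an illustration of this noise model, the following sketch perturbs ideal line integrals with Poisson counting noise for a given photon budget; the clipping of zero counts is our own safeguard, not described in the paper.

```python
import numpy as np

def simulate_noisy_projections(g_ideal, photons, seed=None):
    """Photon-counting noise model: expected counts b*exp(-g) are Poisson
    perturbed and converted back to noisy line integrals ln(b / y)."""
    rng = np.random.default_rng(seed)
    expected = photons * np.exp(-g_ideal)     # mean detected counts per ray
    y = rng.poisson(expected).astype(float)   # Poisson-noisy counts
    y = np.maximum(y, 1.0)                    # avoid taking log of zero
    return np.log(photons / y)
```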

The reconstructed images of the simulations are shown in Figs. 7, 8, 9, 10 and the RMSEs are provided in Table 5. Compared to the noise-free results in Figs. 3, 4, 5 and 6, the image quality of SART degrades rapidly as the noise level increases, while GPBB, L1-DL and ADSIR are more robust to the noise. In the Shepp–Logan experiments, L1-DL is better than GPBB and ADSIR in terms of RMSE. The differences between the reconstructed images and the original image shown in Fig. 7 indicate that the image quality is ordered as: L1-DL > GPBB > ADSIR > SART. In the head slice simulations, the RMSE of SART is slightly smaller than that of ADSIR, but the image reconstructed by SART is ruined by noise and artifacts. Comparing L1-DL and GPBB, the RMSE of GPBB is better than that of L1-DL in the 180 views situation, and the RMSEs of the two methods are the same in the 90 views situation. The results (Fig. 9i–l) of the 90 views simulation with noise indicate that all four algorithms lose the structure in the yellow rectangle region. L1-DL is still better than ADSIR and has a slightly higher spatial resolution than GPBB.

Fig. 7 The reconstruction results of the Shepp–Logan phantom. a–d The image reconstructed by L1-DL, ADSIR, GPBB and SART respectively with 60 scanning views data simulated by 2 million photons; e–h the difference between the reconstructed image (a–d) and the original image (OI); i–l the image reconstructed by L1-DL, ADSIR, GPBB and SART respectively with 60 scanning views data simulated by 1 million photons; m–p the difference between the reconstructed image (i–l) and the original image (OI)

Fig. 8 The horizontal intensity profiles through the center of the original and reconstructed images of the Shepp–Logan phantom with Poisson noise polluted projection data. a L1-DL; b ADSIR; c GPBB; d SART

Fig. 9 The reconstruction results of the human head slice. a–d The image reconstructed by L1-DL, ADSIR, GPBB and SART respectively with 180 scanning views data simulated by 2 million photons; e–h the difference between the reconstructed image (a–d) and the original image (OI); i–l the image reconstructed by L1-DL, ADSIR, GPBB and SART respectively with 90 scanning views data simulated by 2 million photons; m–p the difference between the reconstructed image (i–l) and the original image (OI)

Fig. 10 The vertical intensity profiles through the center of the original and reconstructed images of the human head slice with Poisson noise polluted projection data. a L1-DL; b ADSIR; c GPBB; d SART

Table 5 RMSEs (HU) of reconstructed images of different noise levels

Convergence rate

To examine the convergence rate of L1-DL compared to ADSIR, the RMSEs of the images reconstructed by these two algorithms are shown as functions of the number of iterations in Fig. 11. The projection data are simulated on the human head slice with 180 scanning views and are not polluted by noise. L1-DL stops at the 72nd iteration while ADSIR stops at the 49th iteration. Since the iterative process of L1-DL has only one additional step, updating the weight function, it takes almost the same time as ADSIR to perform one iteration: one iteration of ADSIR and L1-DL takes 89 and 92 s respectively. Checking the first 20 iterations, the RMSE of ADSIR decreases only slightly faster than that of L1-DL at first. Therefore, L1-DL needs more iterations than ADSIR to converge because of its better reconstructed result, but the convergence rates of the two algorithms are almost the same. However, it takes several tens of minutes for both ADSIR and L1-DL to reach their stopping points, so acceleration methods and higher computational efficiency are required for real-time imaging.

Fig. 11 The RMSE as a function of the number of iterations for the images reconstructed by L1-DL and ADSIR

Regularization parameter investigation

In this part, the rationality and influence of the regularization parameter are discussed in detail. To verify that the selections of this parameter in the preceding text are rational, and to explore the influence of other parameter values on the results, the images are reconstructed with different regularization parameters. The phantom under simulation is the human head slice in Fig. 2. The projection data are 180 views with no noise and 180 views with noise simulated by 2 million photons. For each projection model, the regularization parameter is chosen as the one calculated by the fitting function (cited as λ), the one multiplied by 0.2 (cited as 0.2λ), and the one multiplied by 5 (cited as 5λ). The results of ADSIR are shown in Figs. 11 and 12. The RMSEs of the results are shown in Table 6.

Fig. 12 The reconstruction results of the human head slice by ADSIR with different regularization parameters. a–c The image reconstructed by 180 scanning views data with no noise; d–f the difference between the reconstructed image (a–c) and the original image (OI); g–i the image reconstructed by 180 scanning views data simulated by 2 million photons; j–l the difference between the reconstructed image (g–i) and the original image (OI). The first to third columns refer to the results with λ, 0.2λ and 5λ respectively

Table 6 RMSEs (HU) of reconstructed images of different regularization parameters

From the data in Table 6, we can see that images reconstructed with a larger regularization parameter (5λ) have larger RMSEs, which means the reconstructed images are influenced by the smoothing effect and fewer image details are preserved. When this parameter is set smaller (0.2λ), the RMSEs might be a little smaller than those obtained with the proper regularization parameter, as the italicized data in Table 6 show. However, the corresponding images in Figs. 11 and 12 (the second column) are ruined by noise or streak artifacts to different extents. Therefore, it is important to select a proper value of the regularization parameter for the DL-based methods, and the simulations in this section demonstrate that the selections in this article are rational.

Discussion

The simulation results indicate that the proposed L1-DL algorithm is a useful and robust method for sparse CT reconstruction. L1-DL utilizes more prior information about image sparsity than ADSIR, benefiting from the L1-norm DL regularization term. L1-DL is an improved version of ADSIR, and the simulation results demonstrate the image quality improvement of L1-DL over ADSIR.

Comparing the simulation results, the comparison algorithm GPBB achieves slightly smaller RMSEs than L1-DL in some situations. However, GPBB is not good at preserving edge details and structures, especially when the sampling rate is reduced further, so L1-DL achieves a higher spatial resolution than GPBB.

In the simulation examining the convergence rate of L1-DL, L1-DL stops at the 72nd iteration while ADSIR stops at the 49th, and one iteration takes about 90 s, so the reconstructions of L1-DL and ADSIR take about 108 and 74 min respectively. For the reconstruction processes of GPBB and SART with the same projection data, one iteration of these two algorithms takes about 1 s. Since GPBB and SART stop when the iteration number reaches 1000, their reconstruction time is about 17 min. In addition, the iteration processes of GPBB and SART can be accelerated by GPU, so the time can be reduced to less than a minute. Accelerating L1-DL, ADSIR and other DL-based methods is therefore an important factor for their practical application.

Conclusion

In this work, we propose to replace the L2-norm regularization term with an L1-norm one to improve the quality of images reconstructed by the DL-based method. The new objective function is optimized by an adaptive weighted L2-norm strategy similar to the IRLS algorithm. With this modification, the proposed L1-DL algorithm performs better than the existing DL-based method (ADSIR) and the other two comparison algorithms. Experimental results show that the proposed algorithm can satisfy the demand for further radiation reduction in CT scanning since it needs less scanning data for high-quality recovery. In addition, the proposed algorithm retains the robustness to projection noise characteristic of DL-based algorithms. Our future work will focus on two aspects. One is accelerating the DL-based methods to make real-time imaging with low-dose radiation possible. The other is looking for strategies to utilize more prior information and further improve the reconstruction result. For example, with a proper way to distinguish structural information from noise in the image, the DL regularization term can be designed based on the distinguishing results, which is a promising way to preserve more structural information and improve the image quality.

Abbreviations

CT:

computed tomography

DL:

dictionary learning

IRLS:

iteratively reweighted least squares

ADSIR:

adaptive dictionary based statistical iterative reconstruction

PWLS:

penalized weighted least-squares

FBP:

filtered backprojection algorithm

SART:

simultaneous algebraic reconstruction technique

TV:

total variation

DGT:

discrete gradient transform

ASD-POCS:

adaptive steepest descent projection onto convex sets

GPBB:

gradient projection Barzilai Borwein

SIR:

statistical iterative reconstruction

SNR:

signal to noise ratios

K-SVD:

k singular value decomposition

OMP:

orthogonal matching pursuit

RMSE:

root mean square error

References

  1. Berrington de González A, Mahesh M, Kim K, Bhargavan M, Lewis R, Mettler F, Land C. Projected cancer risks from computed tomographic scans performed in the United States in 2007. Arch Intern Med. 2009;169(22):2071–7. doi:10.1001/archinternmed.2009.440.

  2. Wang J, Li T-F, Lu H-B, Liang Z-R. Penalized weighted least-squares approach to sinogram noise reduction and image reconstruction for low-dose X-ray computed tomography. IEEE Trans Med Imaging. 2006;25(10):1272–83.

  3. Liao H-Y, Sapiro G. Sparse representations for limited data tomography. In: 5th IEEE International Symposium on Biomedical Imaging: From Nano to Macro (ISBI 2008); 2008. p. 1375–78. doi:10.1109/ISBI.2008.4541261.

  4. Jerri AJ. The Shannon sampling theorem—its various extensions and applications: a tutorial review. Proc IEEE. 1977;65(11):1565–96.

  5. Song B, Park J, Song W. A novel, fast, variable step size gradient method for solving simultaneous algebraic reconstruction technique (SART)-type reconstructions: an example application to CBCT. Med Phys. 2011. doi:10.1118/1.3611782.

  6. Candes EJ, Romberg J, Tao T. Robust uncertainty principles: exact signal reconstruction from highly incomplete frequency information. IEEE Trans Inf Theory. 2006;52(2):489–509.

  7. Candes EJ, Tao T. Near-optimal signal recovery from random projections: universal encoding strategies? IEEE Trans Inf Theory. 2006;52(12):5406–25.

  8. Sidky EY, Pan X-C. Image reconstruction in circular cone-beam computed tomography by constrained, total-variation minimization. Phys Med Biol. 2008;53(17):4777–807.

  9. Justin CP, Song B-Y, Kim JS, Park SH, Kim HK, Liu Z-W, Suh TS, William YS. Fast compressed sensing-based CBCT reconstruction using Barzilai–Borwein formulation for application to on-line IGRT. Med Phys. 2012;39(3):1207–17.

  10. Xu Q, Yu H-Y, Mou X-Q, Zhang L, Hsieh J, Wang G. Low-dose X-ray CT reconstruction via dictionary learning. IEEE Trans Med Imaging. 2012;31(9):1682–97.

  11. Tian Z, Jia X, Yuan K, Pan T, Jiang S-B. Low-dose CT reconstruction via edge-preserving total variation regularization. Phys Med Biol. 2011;56(18):5949–67.

  12. Endra O, Gunawan D. Comparison of l1-minimization and iteratively reweighted least squares-lp-minimization for image reconstruction from compressive sensing. In: 2010 Second International Conference on Advances in Computing, Control and Telecommunication Technologies (ACT); 2010. doi:10.1109/ACT.2010.31.

  13. Miosso CJ, Borries RV, Argaez M, Velazquez L, Quintero C, Potes CM. Compressive sensing reconstruction with prior information by iteratively reweighted least-squares. IEEE Trans Signal Process. 2009;57(6):2424–31.

  14. Aharon M, Elad M, Bruckstein A. K-SVD: an algorithm for designing overcomplete dictionaries for sparse representation. IEEE Trans Signal Process. 2006;54(11):4311–22.

  15. Tropp JA, Gilbert AC. Signal recovery from random measurements via orthogonal matching pursuit. IEEE Trans Inf Theory. 2007;53(12):4655–66.

  16. Elbakri IA, Fessler JA. Statistical image reconstruction for polyenergetic X-ray computed tomography. IEEE Trans Med Imaging. 2002;21(2):89–99.

  17. Kamphuis C, Beekman FJ. Accelerated iterative transmission CT reconstruction using an ordered subsets convex algorithm. IEEE Trans Med Imaging. 1998;17(6):1101–5.

  18. Julien M, Sapiro G, Elad M. Learning multiscale sparse representations for image and video restoration. Multiscale Model Simul. 2008;7(1):214–41.

  19. Zhang C, Zhang T, Zheng J, Li M, Lu Y-F, You J-L, Guan Y-H. A model of regularization parameter determination in low-dose X-ray CT reconstruction based on dictionary learning. Computat Math Methods Med. 2015. doi:10.1155/2015/831790.

Authors’ contributions

Study concept and design (CZ and ML); drafting of the manuscript (CZ); critical revision of the manuscript for important intellectual content (CZ and JZ); obtained funding (JZ); administrative, technical, and material support (CZ, JZ and TZ); study supervision (CP and ZL). All authors read and approved the final manuscript.

Acknowledgements

This work was supported by the National Natural Science Foundation of China (No.61301042 and No.61201117), the Natural Science Foundation of Jiangsu Province (No. BK20151232), the Science and Technology Program of Suzhou (No. ZXY2013001), and the Youth Innovation Promotion Association of the Chinese Academy of Sciences (No.2014281).

Competing interests

The authors declare that they have no competing interests.

Author information

Corresponding author

Correspondence to Jian Zheng.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated.

About this article

Cite this article

Zhang, C., Zhang, T., Li, M. et al. Low-dose CT reconstruction via L1 dictionary learning regularization using iteratively reweighted least-squares. BioMed Eng OnLine 15, 66 (2016). https://doi.org/10.1186/s12938-016-0193-y
