Compressed sensing MRI via fast linearized preconditioned alternating direction method of multipliers

Background: Reconstructing a sparse medical magnetic resonance image from undersampled k-space data via compressed sensing has been investigated extensively in recent years. Because total variation (TV) performs well in preserving edges, one class of approaches uses TV regularization as the sparsity-promoting term in a convex optimization problem. Nevertheless, this convex problem is both nonlinear and nonsmooth, and thus difficult to handle, especially at large scale. It is therefore essential to develop efficient algorithms for a broad class of TV-regularized problems.

Methods: In this paper, we propose an efficient algorithm, referred to as the fast linearized preconditioned alternating direction method of multipliers (FLPADMM), to solve an augmented TV-regularized model that adds a quadratic term to enforce image smoothness. Owing to the separable structure of this model, FLPADMM decomposes the convex problem into two subproblems, each of which is minimized alternately via the augmented Lagrangian function. Furthermore, a linearization strategy and a multistep weighted scheme are combined for more effective image recovery.

Results: Compared with other methods, the proposed method showed improved accuracy and efficiency. Experiments conducted on in vivo data showed that our algorithm achieves a higher signal-to-noise ratio (SNR), lower relative error (Rel.Err), and better structural similarity (SSIM) index than other state-of-the-art algorithms.

Conclusions: Extensive experiments demonstrate that the proposed algorithm outperforms conventional compressed sensing MRI algorithms in both accuracy and efficiency.

Patient motion (e.g., respiratory movement) during lengthy scans can cause motion and streaking artifacts in the reconstructed image. This degrades image quality and may lead to misdiagnosis. Thus, accelerating sampling and reducing or eliminating artifacts have long been the aims of many studies.
With the rapid development of compressed sensing (CS) theory [1,2], compressed sensing MRI (CS-MRI) has attracted much attention because it can reduce imaging time considerably. Compressed sensing theory states that, by using random projections, a small number of data points can be sampled directly at a rate far below the Nyquist frequency. In CS-MRI, imaging time can be significantly reduced by reconstructing an image of good quality from highly undersampled k-space data. Specifically, if the signal or image is sparse in a certain domain, perfect reconstruction can be obtained from sufficient measurements. MR images are generally sparse in some transform domain, such as the wavelet domain; consequently, the CS technique can be readily combined with MRI. The original signals or images can be recovered by nonlinear reconstruction algorithms under the restricted isometry property (RIP) [3,4]. Nevertheless, owing to the limitations of MR physics, MRI cannot achieve two-dimensional random sampling.
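To make the measurement model concrete, the following numpy sketch (our own illustration, not code from the paper) implements a CS-MRI forward operator as an undersampled 2-D FFT together with its adjoint. The function names `measure`/`measure_adjoint` and the uniform random mask are assumptions for demonstration; practical CS-MRI uses variable-density rather than uniform sampling patterns.

```python
import numpy as np

def measure(x, mask):
    """Forward operator A = M F: 2-D FFT followed by undersampling."""
    k = np.fft.fft2(x, norm="ortho")      # full k-space
    return k[mask]                         # keep only the sampled locations

def measure_adjoint(b, mask):
    """Adjoint A^H: zero-fill the missing k-space and inverse FFT."""
    k = np.zeros(mask.shape, dtype=complex)
    k[mask] = b
    return np.fft.ifft2(k, norm="ortho")

# Toy example: sample 25% of k-space uniformly at random (illustration
# only; real CS-MRI uses variable-density, not uniform, patterns).
rng = np.random.default_rng(0)
x = rng.standard_normal((64, 64))
mask = rng.random((64, 64)) < 0.25
b = measure(x, mask)                       # undersampled k-space data
x_zf = measure_adjoint(b, mask)            # zero-filled reconstruction
```

With the orthonormal FFT convention, `measure_adjoint` is the exact adjoint of `measure`, which is what iterative reconstruction algorithms require.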
In 2007, Lustig et al. [5] proposed the SparseMRI algorithm, which selects the wavelet transform as a sparse basis and uses variable-density random sampling together with conjugate gradient descent for image recovery. This was the first application of CS to MRI. However, owing to its high time complexity, SparseMRI is too slow for practical use. Since then, a variety of nonlinear algorithms have been proposed for CS-MRI reconstruction. The alternating direction method of multipliers (ADMM) has been studied extensively [6-8] and is widely used in optimization problems arising in machine learning, image processing, and related fields. Recently, the fast alternating direction method of multipliers (FADMM) [9] incorporated a predictor-corrector acceleration scheme into the basic ADMM for the case in which a strong convexity condition is satisfied; however, this algorithm cannot guarantee global convergence for weakly convex problems. Another fast method, the accelerated linearized preconditioned alternating direction method of multipliers (ALPADMM) [10], was proposed for a class of affine equality-constrained composite optimization problems. Although ALPADMM can handle saddle-point problems, its convergence rate depends largely on the Lipschitz constant of the smooth component.
In this article, we propose an efficient algorithm to solve an augmented total variation (TV)-regularized model that adds a quadratic term to the classical TV-regularized model to enforce smoothness of the image. The proposed method applies a linearization strategy to two quadratic terms and divides the original convex problem into two subproblems, both of which are easy to solve. For each subproblem, the augmented Lagrangian function, which combines the Lagrangian function with a quadratic penalty, is applied to update the corresponding variable and attains higher reconstruction accuracy than the Lagrangian approach alone. Furthermore, we use a multistep weighted technique to improve reconstruction accuracy. Numerical experiments compare the proposed algorithm with previous algorithms on various MR images; the results show that the proposed approach achieves a higher signal-to-noise ratio (SNR), lower relative error (Rel.Err), and better structural similarity (SSIM) index than state-of-the-art methods.

Related work
In this section, we briefly review conventional CS-MRI reconstruction algorithms. Many nonlinear algorithms, e.g., the iterative shrinkage/thresholding method (IST) [12], two-step IST (TwIST) scheme [13], fast IST algorithm (FISTA) [14], split augmented Lagrangian shrinkage algorithm (SALSA) [6], wavelet tree sparsity MRI (WaTMRI) [15], total variation augmented Lagrangian alternating direction method (TVAL3) [16], total variation based compressed magnetic resonance imaging (TVCMRI) [17], reconstruction from partial Fourier data (RecPF) [18], and fast composite splitting algorithm (FCSA) [19], have been proposed to improve reconstruction speed and accuracy. IST is an operator-splitting algorithm that can be applied to optimization problems with a simple regularization term. The global convergence of IST may be very slow, especially when the stepsize is small or the optimization problem is severely ill-conditioned. TwIST, a variant of IST, uses two or more previous iterates to update the current value rather than depending on the previous iterate alone. TwIST is faster than IST on reconstruction problems; however, its global convergence rate has not been thoroughly studied. FISTA is another accelerated variant of IST that also takes advantage of two previous iterates. Unlike TwIST, FISTA achieves global convergence with a splitting scheme.
SALSA transforms a constrained problem into an unconstrained one by adding a penalty term that is handled with an augmented Lagrangian approach. Many linear inverse problems in imaging, including image deblurring, MRI reconstruction, and image inpainting, can be handled with SALSA. Building on structured sparsity theories, WaTMRI exploits tree sparsity to improve CS-MRI, combining standard wavelet sparsity with total variation. TVAL3 was proposed to solve a set of equality-constrained nonsmooth problems, integrating an alternating minimization technique with a nonmonotone line search to optimize an augmented Lagrangian function. Both TVCMRI and RecPF use a splitting strategy to minimize the objective function: the former uses operator splitting, and the latter adopts variable splitting to obtain the optimal solution. In FCSA, the original optimization problem is divided into two easier subproblems, each of which can be solved by FISTA. However, these algorithms are not always easy to implement, or even sufficient to solve CS problems with TV regularization, especially when the measurement matrix is severely ill-conditioned. Therefore, it is necessary to develop algorithms that are both accurate and efficient for large-scale problems.

Problem formulation
Generally, the classical TV-regularized model for CS-MRI reconstruction can be written as:

    min_x |∇x|   subject to   ‖Ax − b‖₂ ≤ δ,   (1)

where x ∈ R^n is the image to be reconstructed, b ∈ R^m denotes the undersampled k-space data from the MR scanner, and A ∈ R^{m×n} (m < n) is the measurement matrix. The scalar δ > 0 represents the noise level in the measured data, ∇ is the discrete gradient operator, and |∇x| denotes the TV regularization of x. A variant of (1) is the following TV-regularization problem:

    min_x τ|∇x| + (1/2)‖Ax − b‖₂²,   (2)

where τ > 0 is a positive regularization parameter that balances the two objectives. To enforce image smoothness, we add a quadratic term (γ/2)‖∇x‖₂² to the objective function, giving the new TV-regularized model:

    min_x τ|∇x| + (γ/2)‖∇x‖₂² + (1/2)‖Ax − b‖₂²,   (3)

where γ is a smoothing parameter. The augmented term |∇x| + (γ/2)‖∇x‖₂² can yield accurate solutions for a proper value of γ. In addition, the corresponding dual problem is continuously differentiable, which facilitates effective use of gradient information. To split the variable x, an auxiliary variable z is introduced via ∇x − z = 0, and the unconstrained optimization problem (3) is transformed into:

    min_{x,z} τ|z| + (γ/2)‖z‖₂² + (1/2)‖Ax − b‖₂²   subject to   ∇x − z = 0.   (4)

The augmented Lagrangian function for problem (4) is given by:

    L_μ(x, z, λ) = τ|z| + (γ/2)‖z‖₂² + (1/2)‖Ax − b‖₂² − ⟨λ, ∇x − z⟩ + (μ/2)‖∇x − z‖₂²,   (5)

where λ is the Lagrangian multiplier, μ is a positive penalty parameter, and ⟨λ, ∇x − z⟩ denotes the inner product of the vectors λ and ∇x − z. The classical ADMM minimizes the convex problem (5) with respect to x and z using the nonlinear block-Gauss-Seidel technique. After x and z have been minimized alternately, λ is updated by:

    λ^{k+1} = λ^k − μ(∇x^{k+1} − z^{k+1}).   (6)

The nonlinear block-Gauss-Seidel iteration of ADMM can thus be written as:

    x^{k+1} = arg min_x L_μ(x, z^k, λ^k),   z^{k+1} = arg min_z L_μ(x^{k+1}, z, λ^k).   (7)

Given z^k and λ^k, x^{k+1} is obtained by:

    x^{k+1} = arg min_x (1/2)‖Ax − b‖₂² − ⟨λ^k, ∇x⟩ + (μ/2)‖∇x − z^k‖₂².   (8)

With x^{k+1} and λ^k fixed, z^{k+1} is obtained by:

    z^{k+1} = arg min_z τ|z| + (γ/2)‖z‖₂² + ⟨λ^k, z⟩ + (μ/2)‖∇x^{k+1} − z‖₂².   (9)

The ADMM algorithm for solving (4) is summarized in Algorithm 1.
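At small scale, Algorithm 1 can be sketched directly. The following is our own 1-D analogue, with a forward-difference operator standing in for the gradient ∇ and an exact linear solve in the x-step; that exact solve is affordable only for small problems, which is precisely what motivates the linearization introduced in the next section. All sizes and parameter values are illustrative.

```python
import numpy as np

def soft(t, T):
    return np.sign(t) * np.maximum(np.abs(t) - T, 0.0)

def admm_tv(A, b, tau, gamma, mu=1.0, n_iter=200):
    """Plain ADMM (Algorithm 1) on a 1-D analogue of the augmented model:
        min_x tau*|Dx|_1 + (gamma/2)*||Dx||_2^2 + 0.5*||Ax - b||_2^2,
    with forward differences D in place of the 2-D gradient and the
    splitting z = Dx."""
    m, n = A.shape
    D = np.eye(n, k=1)[: n - 1] - np.eye(n)[: n - 1]   # difference operator
    x, z, lam = np.zeros(n), np.zeros(n - 1), np.zeros(n - 1)
    H = A.T @ A + mu * D.T @ D        # x-step system matrix (exact solve)
    for _ in range(n_iter):
        # x-step: exact minimization of the augmented Lagrangian in x
        x = np.linalg.solve(H, A.T @ b + D.T @ (lam + mu * z))
        # z-step: closed form via soft thresholding
        z = soft(mu * (D @ x) - lam, tau) / (gamma + mu)
        # multiplier update
        lam = lam - mu * (D @ x - z)
    return x

# Recover a piecewise-constant signal from random projections.
rng = np.random.default_rng(2)
n = 100
x_true = np.concatenate([np.zeros(40), 2.0 * np.ones(30), np.zeros(30)])
A = rng.standard_normal((50, n)) / np.sqrt(50)
b = A @ x_true
x_hat = admm_tv(A, b, tau=0.01, gamma=0.02)
```

The dense `solve` in the x-step costs O(n³) in general; for a 256 × 256 image, n = 65536, which is why the paper replaces this step with cheaper linearized updates.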

Proposed algorithm
Based on the above analysis, first, the minimization in (8) can be written equivalently as:

    x^{k+1} = arg min_x (1/2)‖Ax − b‖₂² + (μ/2)‖∇x − z^k − λ^k/μ‖₂².   (10)

Hence, (10) is transformed into the linear system:

    (AᵀA + μ∇ᵀ∇) x^{k+1} = Aᵀb + ∇ᵀ(μz^k + λ^k).   (11)

Because the measurement matrix A is neither the identity matrix nor, in general, cheaply diagonalizable, the exact solution with respect to x cannot be derived efficiently [21], and the computational cost of solving (11) directly is heavy. To reduce the computational burden and obtain closed-form solutions, some variants can be considered. Linearizing the quadratic term (1/2)‖Ax − b‖₂² at the current point x^k gives the update:

    x^{k+1} = arg min_x ⟨Grad(x^k), x − x^k⟩ + (1/(2η))‖x − x^k‖₂² + (μ/2)‖∇x − z^k − λ^k/μ‖₂²,   (12)

where Grad(x^k) = Aᵀ(Ax^k − b) is the gradient of (1/2)‖Ax − b‖₂² at the current point x^k, and η is a positive proximal parameter. Then, the x subproblem in (10) can be iterated by:

    ((1/η)I + μ∇ᵀ∇) x^{k+1} = (1/η)x^k − Grad(x^k) + ∇ᵀ(μz^k + λ^k).   (13)

Since the quadratic term (μ/2)‖∇x − z^k‖₂² can also be linearized, we linearize it at x^k as well. This variant is the fast linearized preconditioned ADMM (FLPADMM) algorithm, which generates the iterate x^{k+1} by:

    x^{k+1} = arg min_x ⟨Grad(x^k) + μ∇ᵀ(∇x^k − z^k − λ^k/μ), x − x^k⟩ + (1/(2η))‖x − x^k‖₂².   (14)

Using the negative divergence operator −div = ∇ᵀ, the solution of (14) is the explicit step:

    x^{k+1} = x^k − η[Grad(x^k) − μ div(∇x^k − z^k − λ^k/μ)].   (15)

Second, for given x^{k+1} and λ^k, z^{k+1} is computed by solving:

    z^{k+1} = arg min_z τ|z| + (γ/2)‖z‖₂² + ⟨λ^k, z⟩ + (μ/2)‖∇x^{k+1} − z‖₂².   (16)

Hence, the solution of (16) obeys:

    z^{k+1} = soft(μ∇x^{k+1} − λ^k, τ) / (γ + μ),   (17)

where soft(·, T) is the soft-thresholding function defined as soft(t, T) = sign(t) · max(|t| − T, 0). Finally, the Lagrangian multiplier is updated by λ^{k+1} = λ^k − μ(∇x^{k+1} − z^{k+1}).
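The closed-form soft-thresholding solution of the z-subproblem can be checked numerically: below we compare it against a brute-force grid search on the scalar version of the subproblem. The helper names and all parameter values are our own illustrative choices.

```python
import numpy as np

def soft(t, T):
    """Soft thresholding, the proximal operator of T*|.|."""
    return np.sign(t) * np.maximum(np.abs(t) - T, 0.0)

def z_update(v, lam, tau, gamma, mu):
    """Closed-form solution of the scalar z-subproblem
        min_z tau*|z| + (gamma/2)*z^2 + lam*z + (mu/2)*(v - z)^2."""
    return soft(mu * v - lam, tau) / (gamma + mu)

# Brute-force check of the closed form on a dense grid.
tau, gamma, mu = 0.5, 0.2, 1.5
v, lam = 0.8, -0.3
grid = np.linspace(-3.0, 3.0, 600001)
obj = (tau * np.abs(grid) + 0.5 * gamma * grid**2 + lam * grid
       + 0.5 * mu * (v - grid) ** 2)
z_star = grid[np.argmin(obj)]
```

Setting the subgradient of the objective to zero gives (γ + μ)z + τ∂|z| ∋ μv − λ, whose unique solution is exactly the thresholded quotient computed by `z_update`.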
The FLPADMM algorithm uses one gradient-descent step and one soft-thresholding operation to update the variables at each iteration. In addition, the method is a variant of the classical ADMM framework. The proposed FLPADMM algorithm is presented in Algorithm 2.
The stopping criterion in all of the algorithms above is that the relative change of x between two successive iterations is sufficiently small, i.e.:

    ‖x^{k+1} − x^k‖₂ / ‖x^k‖₂ ≤ tol,

where tol is usually chosen from the range 10^{-5} to 10^{-3}. In the FLPADMM algorithm, α_k is a weighting parameter and Grad(x^m_k) is the gradient of (1/2)‖Ax − b‖₂² at the point x^m_k. Furthermore, x^m_k is the first weighted value used to update x^{k+1}. It should be noted that an appropriate choice of α_k improves the rate of convergence, as proven in a previous study [10]. The superscript m stands for "middle" and w stands for "weight." Before the gradient-descent step in line 5 is applied, x is updated by a weighted sum of all previous iterates; after the gradient step, x is updated again by the same weighting technique. That is, x is weighted twice at each iteration. Specifically, when the weighting parameter α_k is set to 1, the x subproblem simply returns the current point x^{k+1}, and FLPADMM reduces to another variant of the ADMM algorithm [10]. The acceleration strategy of FLPADMM incorporates a multistep scheme with the middle point x^m and the weighted point x^w, first applied in a previous study [10] and derived from the accelerated gradient method [22,23]. Moreover, FLPADMM attains the optimal rate of convergence established in [10].
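Putting the pieces together, the following 1-D sketch (our own, under the same illustrative setup as before) implements the linearized variant with the relative-change stopping criterion. For brevity it uses the fixed weight α_k = 1, which, as noted above, reduces FLPADMM to a plain linearized ADMM rather than the full multistep weighting of [10].

```python
import numpy as np

def soft(t, T):
    return np.sign(t) * np.maximum(np.abs(t) - T, 0.0)

def linearized_admm_tv(A, b, tau, gamma, mu=1.0, tol=1e-5, max_iter=3000):
    """1-D sketch of the linearized variant (alpha_k = 1): the x-step is a
    single explicit gradient step on the augmented Lagrangian instead of
    an exact solve, followed by the soft-thresholding z-step and the
    multiplier update, with a relative-change stopping criterion."""
    m, n = A.shape
    D = np.eye(n, k=1)[: n - 1] - np.eye(n)[: n - 1]
    # stepsize bound: eta * ||A^T A + mu * D^T D||_2 <= 1
    eta = 1.0 / (np.linalg.norm(A, 2) ** 2 + mu * np.linalg.norm(D, 2) ** 2)
    x, z, lam = np.zeros(n), np.zeros(n - 1), np.zeros(n - 1)
    for _ in range(max_iter):
        x_old = x
        grad = A.T @ (A @ x - b) - D.T @ lam + mu * D.T @ (D @ x - z)
        x = x - eta * grad                                # linearized x-step
        z = soft(mu * (D @ x) - lam, tau) / (gamma + mu)  # z-step
        lam = lam - mu * (D @ x - z)                      # multiplier update
        # stop when the relative change between iterates is small enough
        if np.linalg.norm(x - x_old) <= tol * np.linalg.norm(x_old):
            break
    return x

rng = np.random.default_rng(2)
n = 100
x_true = np.concatenate([np.zeros(40), 2.0 * np.ones(30), np.zeros(30)])
A = rng.standard_normal((50, n)) / np.sqrt(50)
b = A @ x_true
x_hat = linearized_admm_tv(A, b, tau=0.01, gamma=0.02)
```

Each iteration costs only matrix-vector products, so, unlike the exact x-step of plain ADMM, this sketch scales to the image sizes considered in the experiments.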

Experimental setup
A series of numerical experiments were conducted to compare the performance of the proposed FLPADMM with two state-of-the-art algorithms, namely FADMM [9] and ALPADMM [10], for MR image reconstruction from undersampled k-space data. Four typical MR datasets (Shepp-Logan phantom, human brain1 MR data, human brain2 MR data, and human spine MR data) were used to evaluate our algorithm. All test images had the same matrix size of 256 × 256, as shown in Fig. 1a-d. The Shepp-Logan phantom was a piecewise-smooth image with pixel intensities ranging from 0 to 1. The complex k-space data of human brain1 was acquired from a 3T GE MR750 scanner using an FRFSE sequence (TR = 6000 ms, TE = 101 ms). The human brain2 data was also obtained from the 3T GE MR750 system (TR/TE = 2500/96.9 ms, field of view = 280 × 280 mm, slice thickness = 5 mm). The human spine MR data was fully sampled k-space data acquired on a 3T GE MR750 system with an FRFSE sequence (TR/TE = 2500/110 ms, field of view = 240 × 240 mm, slice thickness = 5 mm). For fair comparison, the codes of all compared algorithms were downloaded from the authors' websites. All experiments were executed under Windows 10 and MATLAB 2015b (64-bit) on a desktop computer with a 3.2 GHz Intel Core i5-4460 CPU and 8 GB of RAM. Each experiment was repeated 10 times, and the average image metrics over the 10 runs were recorded. For most MR images, the k-space signals with large magnitude are concentrated in the central region. Because non-Cartesian sampling patterns are more incoherent, results with Cartesian masks were far less favorable than with non-Cartesian masks; therefore, two non-Cartesian masks were chosen as the sampling masks. One was a pseudo-Gaussian mask, displayed in Fig.
2a, which was implemented by following the sampling strategy of collecting more low-frequency signals in the central part of k-space and fewer high-frequency signals in the peripheral part. The other, presented in Fig. 2b, was a pseudo-radial mask applied by following the rule of RecPF [18]. The sampling ratio was defined as the number of sampled points divided by the total size of the original image.
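A variable-density mask of the kind described above can be generated, for illustration, by thresholding a Gaussian probability profile centred on k-space. This is an assumed construction for demonstration purposes, not the exact mask generator used in the experiments.

```python
import numpy as np

def gaussian_mask(n, ratio, rng):
    """Sketch of a variable-density 'pseudo-Gaussian' mask: the sampling
    probability decays with distance from the k-space centre, so low
    frequencies are sampled densely and high frequencies sparsely."""
    cy, cx = n // 2, n // 2
    yy, xx = np.mgrid[:n, :n]
    r2 = (yy - cy) ** 2 + (xx - cx) ** 2
    w = np.exp(-r2 / (2.0 * (n / 6.0) ** 2))   # Gaussian density profile
    # scale the density so the expected number of samples matches `ratio`
    w *= ratio * n * n / w.sum()
    return rng.random((n, n)) < np.clip(w, 0.0, 1.0)

rng = np.random.default_rng(3)
mask = gaussian_mask(256, 0.15, rng)
measured_ratio = mask.mean()   # sampled points / total image size
```

The last line computes the sampling ratio exactly as defined in the text: the number of sampled points divided by the total size of the image.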
(Fig. 2 Sampling masks: a pseudo-Gaussian mask at a 15% sampling rate; b pseudo-radial mask at an 18% sampling rate.)
In the present study, we compared our algorithm with the two state-of-the-art algorithms under similar conditions. To explore the influence of the regularization parameter τ, we used the human brain1 data as an example and analyzed the change in image quality as τ varied. In Fig. 3, the SNR attained its maximum when τ was 10^{-3}; thus, we chose this optimum value to achieve favorable reconstruction. Similar searches were adopted for the other datasets. For all tests, we also found that our algorithm maintained the most favorable reconstruction performance when γ = 2τ. Furthermore, the default maximum number of iterations for all three methods was set to 300.
To quantitatively evaluate the proposed algorithm, three objective metrics were adopted to measure the quality of the recovered images. The first was SNR, defined as:

    SNR = 10 log₁₀ (V_s / V_n),   V_n = (1/(MN)) Σ_{i,j} (x(i,j) − x̂(i,j))²,

where x is the original image, x̂ is the reconstructed image, V_s is the variance of the original image, V_n is the mean squared error, and M and N represent the number of rows and columns, respectively, in the input image. The quality of the reconstructed image is directly proportional to the value of SNR. The second metric was the Rel.Err, defined as:

    Rel.Err = ‖x̂ − x‖₂ / ‖x‖₂.

A smaller value means that the reconstructed image has less error and more favorable reconstruction relative to the original image.
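The two metrics can be computed as follows. Here SNR is taken as the ratio of the original image's variance to the mean squared error, a common convention that we assume matches the paper's definition; the toy ramp image and the constant offset are illustrative.

```python
import numpy as np

def snr_db(x, x_hat):
    """SNR in dB: variance of the original image over the mean squared
    reconstruction error (variance-over-MSE convention, assumed here)."""
    mse = np.mean((x - x_hat) ** 2)
    return 10.0 * np.log10(np.var(x) / mse)

def rel_err(x, x_hat):
    """Relative error ||x_hat - x||_2 / ||x||_2."""
    return np.linalg.norm(x_hat - x) / np.linalg.norm(x)

# Toy check: a constant 0.01 offset on a 64 x 64 ramp image.
x = np.linspace(0.0, 1.0, 64 * 64).reshape(64, 64)
x_hat = x + 0.01
```

With this perturbation the MSE is exactly 10⁻⁴, so the SNR is 10 log₁₀(V_s/10⁻⁴) ≈ 29.2 dB for a unit-range ramp.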
The last metric was the SSIM index, which measures the similarity between two images in terms of structure, brightness, and contrast, among other aspects, and is defined as:

    SSIM(p, q) = [(2µ_p µ_q + c₁)(2θ_pq + c₂)] / [(µ_p² + µ_q² + c₁)(θ_p + θ_q + c₂)],

where µ_p and θ_p are the mean and variance, respectively, of the original image; µ_q and θ_q are the mean and variance, respectively, of the reconstructed image; θ_pq is the covariance of the two images; and c₁ and c₂ are fixed constants that prevent instability when the denominator is close to zero. A larger SSIM value indicates greater similarity to the original image.
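A single-window version of the SSIM formula above can be sketched as follows. Note that practical SSIM implementations average this quantity over local sliding windows; this global variant, with the commonly used constants c₁ = (0.01L)² and c₂ = (0.03L)² for dynamic range L = 1, is only a demonstration.

```python
import numpy as np

def ssim_global(p, q, c1=1e-4, c2=9e-4):
    """Single-window (global) SSIM: means, variances, and covariance are
    computed over the whole image rather than over sliding windows."""
    mu_p, mu_q = p.mean(), q.mean()
    var_p, var_q = p.var(), q.var()
    cov_pq = ((p - mu_p) * (q - mu_q)).mean()
    return ((2 * mu_p * mu_q + c1) * (2 * cov_pq + c2)) / \
           ((mu_p**2 + mu_q**2 + c1) * (var_p + var_q + c2))

x = np.linspace(0.0, 1.0, 64 * 64).reshape(64, 64)
```

Identical images give an SSIM of 1 by construction, while a contrast-inverted image drives the covariance term negative.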

Experimental results
In this section, we first compare the proposed FLPADMM algorithm with the FADMM [9] and ALPADMM [10] algorithms on the Shepp-Logan phantom, corrupted with Gaussian white noise with a standard deviation of 0.01. The proposed FLPADMM was applied to the Shepp-Logan phantom under the pseudo-Gaussian mask at a 20% sampling ratio. Figure 4a shows the original Shepp-Logan phantom, and Fig. 4b-d presents the images recovered by the FADMM, ALPADMM, and FLPADMM algorithms, respectively. Compared with the original phantom image, FADMM yielded noticeable artifacts and failed to suppress background noise. The image recovered by ALPADMM contained fewer artifacts and was evidently more favorable than that recovered by FADMM; as the Shepp-Logan phantom is extremely piecewise smooth and sparse, ALPADMM also provided good reconstruction. As shown in Fig. 4c, d, visible artifacts are not easily observed for either ALPADMM or FLPADMM. However, as the experiments on in vivo data below will show, FLPADMM performs considerably more accurately and stably than ALPADMM.
For enhanced visualization, Fig. 4e presents the reconstruction errors, among which the proposed FLPADMM yielded the smallest. FLPADMM exhibited superior performance in suppressing noise without significant artifacts and produced the best reconstruction. All experiments on in vivo data were corrupted with Gaussian white noise with zero mean and a standard deviation of 0.01. Experimental results for the in vivo human brain data at a sampling ratio of 25% are displayed in Figs. 5 and 6. Figure 5a presents the original human brain1 image. The reconstructed images in Fig. 5b-d were obtained by FADMM, ALPADMM, and our proposed FLPADMM under a pseudo-Gaussian sampling scheme, and Fig. 5e-g were produced by FADMM, ALPADMM, and FLPADMM, respectively, under a pseudo-radial sampling pattern. The SNR of the human brain1 image under the pseudo-Gaussian mask using FLPADMM was 25.0685 dB, whereas those recovered by FADMM and ALPADMM were 20.9921 and 22.3231 dB, respectively. The FLPADMM reconstruction clearly suppressed background noise. The recovery results in Fig. 6 are similar to those in Fig. 5. Figure 7 compares the state-of-the-art MR image reconstruction algorithms on the human brain1 data using different sampling masks at sampling ratios of 0.1, 0.2, 0.3, 0.4, and 0.5. As seen in Fig. 7a, b, the proposed FLPADMM achieved high image quality with high SNR and low Rel.Err. When the sampling ratio was 0.1, all three methods performed relatively poorly, because at too low a sampling ratio the sampled data are insufficient for a faithful image. Notably, as the sampling ratio increased, the SNR of all algorithms under consideration increased while the Rel.Err gradually decreased; that is, reconstructions of higher quality could be obtained by taking more measurements. Furthermore, as the sampling ratio increased, the FLPADMM algorithm exhibited superior performance in recovering the sampled image.
Specifically, a sampling ratio of 30% was sufficient to reconstruct the human brain1 image effectively.
(Fig. 7 Comparison results for human brain1 data using a pseudo-Gaussian mask (first row) and pseudo-radial mask (second row): a, c SNR (dB) vs sampling ratio; b, d Rel.Err vs sampling ratio.)
The SNR of FLPADMM was slightly higher than that of ALPADMM when a pseudo-radial mask was applied. The Rel.Err of ALPADMM was very close to that of FLPADMM (see Figs. 7d, 8d), indicating that both ALPADMM and our proposed FLPADMM achieve similar recovery performance under the pseudo-radial mask.
The reconstruction results in Fig. 9 are consistent with those of Fig. 5. Compared with FADMM and ALPADMM, our proposed FLPADMM reconstructed better images without visual artifacts. For example, when the sampling ratio was 25% under pseudo-radial sampling, FADMM produced significant artifacts and ALPADMM slight artifacts, whereas the images reconstructed by FLPADMM were closest to the original human spine image. These results further validate the superiority of FLPADMM over the other algorithms and are consistent with the results of the two human brain experiments. Regardless of the sampling scheme, FLPADMM achieved the highest SNR and lowest Rel.Err. From Fig. 10, we can see that all reconstruction results improved steadily as the sampling ratio increased, and FLPADMM showed superior performance compared with the other algorithms. As the tissue structure of the human spine MR data is extremely complex, the quality of the reconstructed image was not as good as in the human brain tests.
For further comparison, the quantitative image metrics SSIM and CPU time (s) are listed in Tables 1 and 2 to demonstrate the structural similarity and running time of the FADMM, ALPADMM, and FLPADMM algorithms for all MR images at various sampling ratios. As seen in Table 2, FLPADMM ran faster than ALPADMM, but approximately 1 s slower than FADMM. Since the

Discussion
We have proposed a novel algorithm for CS-MRI reconstruction, referred to as FLPADMM. In addition to the TV-regularization term of the classical MR model, we added a quadratic term to make the image smoother. Using the augmented Lagrangian function, FLPADMM effectively divides the original convex problem into two subproblems, both of which are easy to handle. To further enhance image reconstruction, a multistep weighted scheme is incorporated into FLPADMM. Several parameters need to be tuned in the proposed algorithm. In general, the stepsize η must satisfy a bound determined by the penalty parameter µ and the spectral norm of AᵀA. When the regularization parameter τ is 10^{-3} (see Fig. 3) and γ = 2τ, the proposed algorithm yields the best result. The other parameters can be set manually for different test data under a fixed sampling scheme. When different sampling schemes (i.e., the pseudo-Gaussian mask and the pseudo-radial mask) are applied, the proposed FLPADMM also produces very impressive results. Experiments validate that the performance of the proposed method is superior to that of FADMM and ALPADMM; in particular, it shows the best performance in suppressing background noise, even at a low sampling ratio. Some algorithms that combine parallel MRI and CS have been proposed to accelerate MRI reconstruction [24-26]. Our method can also be applied to parallel MRI with minor revisions.

Conclusions
TV regularization for CS-MRI has been studied extensively in recent years, largely because MR images can be recovered from partial Fourier samples and TV performs well in preserving image edges. In this paper, we first briefly reviewed nonlinear algorithms for CS-MRI, and then introduced an augmented TV-regularized model with an additional quadratic term to enforce image smoothness. An efficient, inexact, but distinctive algorithm has been proposed to handle this novel TV-regularized model. The proposed algorithm, referred to as FLPADMM, belongs to the classical ADMM framework, which decomposes the objective function into two subproblems by adding new variables and constraints. FLPADMM minimizes the TV-regularized objective function with an augmented Lagrangian technique, and effectively adopts a multistep weighted scheme to improve reconstruction accuracy. Moreover, FLPADMM can solve both constrained and unconstrained convex optimization problems. Numerous experiments demonstrate the superiority of the proposed FLPADMM method over the previous FADMM and ALPADMM algorithms. In future work, we will combine this algorithm with parallel MRI to further reduce imaging time.