- Open Access
Optimal compressed sensing reconstructions of fMRI using 2D deterministic and stochastic sampling geometries
© Jeromin et al; licensee BioMed Central Ltd. 2012
- Received: 6 June 2011
- Accepted: 2 May 2012
- Published: 20 May 2012
Compressive sensing provides a promising framework for accelerating fMRI image acquisition by allowing reconstructions from a limited number of frequency-domain samples. Unfortunately, the majority of compressive sensing studies are based on stochastic sampling geometries that cannot guarantee the fast acquisitions needed for fMRI. The purpose of this study is to provide a comprehensive optimization framework that can be used to determine the optimal 2D stochastic or deterministic sampling geometry, as well as to provide optimal reconstruction parameter values for guaranteeing image quality in the reconstructed images.
We investigate the use of frequency-space (k-space) sampling based on: (i) 2D deterministic geometries of dyadic phase encoding (DPE) and spiral low pass (SLP) geometries, and (ii) 2D stochastic geometries based on random phase encoding (RPE) and random samples on a PDF (RSP). Overall, we consider over 36 frequency-sampling geometries at different sampling rates. For each geometry, we compute optimal reconstructions of single BOLD fMRI ON & OFF images, as well as BOLD fMRI activity maps based on the difference between the ON and OFF images. We also provide an optimization framework for determining the optimal parameters and sampling geometry prior to scanning.
For each geometry, we show that reconstruction parameter optimization converged after just a few iterations and led to significant image quality improvements. For activity detection, retaining only 20.3% of the samples using SLP gave a mean PSNR value of 57.58 dB. We validated this result with the Structural Similarity Index Matrix (SSIM) image quality metric, which gave an excellent mean value of 0.9747 (max = 1), indicating that excellent reconstruction results can be achieved. Median parameter values, obtained using mean-SSIM optimization, also gave excellent reconstruction results for the ON/OFF images using the SLP sampling geometry (mean SSIM ≥ 0.93). This approach was also validated using leave-one-out.
We have found that compressive sensing parameter optimization can dramatically improve fMRI image reconstruction quality. Furthermore, 2D MRI scanning based on the SLP geometries consistently gave the best image reconstruction results. The implication of this result is that less complex, deterministic sampling geometries suffice in place of random sampling. We have also found stable parameter regions that can be used to achieve specific levels of image reconstruction quality when combined with specific k-space sampling geometries. Furthermore, median parameter values can be used to obtain excellent reconstruction results.
- Compressive sensing
- Numerical optimization
Using compressive sensing (CS), we can recover certain noise-free signals and images exactly from a limited number of k-space samples. In theory, reconstruction from a limited number of samples requires that the signal exhibit sparsity, and incoherence (lack of correlation) between the sensing basis and the transform basis used for reconstruction. For one-dimensional signals with N samples that are composed of T "spikes", perfect reconstruction can be obtained from O(T ⋅ log(N)) frequency-domain samples.
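The incoherence requirement can be made concrete with a small numerical sketch (our own illustration, not from the paper): the mutual coherence between a unitary DFT sensing basis and the identity ("spike") basis attains the minimum possible value of 1, which is the most favorable case for CS.

```python
import numpy as np

def mutual_coherence(phi, psi):
    """sqrt(N) times the largest absolute inner product between rows of
    the sensing basis `phi` and columns of the sparsity basis `psi`."""
    n = phi.shape[1]
    g = phi.conj() @ psi                    # all pairwise inner products
    return np.sqrt(n) * np.abs(g).max()

n = 64
dft = np.fft.fft(np.eye(n)) / np.sqrt(n)    # unitary DFT (k-space sampling)
spikes = np.eye(n)                          # identity basis for T-sparse signals

mu = mutual_coherence(dft, spikes)
print(round(mu, 6))                         # 1.0, i.e., maximal incoherence
```

Every DFT row has the same magnitude overlap with every spike, which is why Fourier sampling of sparse signals is the canonical CS setting.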
In what follows, we demonstrate a reconstruction paradigm that explores several k-space sampling geometries. We consider both stochastic and deterministic Fourier plane sampling geometries. A limitation of our study is that it is based on retrospectively downsampled data from an original fully-sampled EPI sequence. Thus, for practical application of our findings to in-vivo experiments, the EPI sampling pattern will have to be modified to reflect the proposed downsampling patterns. Here, we note that the use of stochastic geometries that result in non-consecutive EPI phase encoding is prone to artifacts. This should not lead to problems with the SLP geometry that is found to perform best in our experiments. On the other hand, this observation can lead to reconstruction artifacts in stochastic geometries if they are implemented by moving around k-space randomly. We also note that actual MRI imaging time depends on how many phase encoding lines are acquired. Random geometries are included to comply with the conventional (sparsity and incoherence) compressive sensing requirements.
To measure the quality of the reconstructions, we use an optimization process that solves for the compressive sensing objective function parameters that maximize the peak signal to noise ratio (PSNR) or the structural similarity index matrix (SSIM) obtained by each geometry. Here, we note that the SSIM provides a more recent (and arguably more reliable) image reconstruction quality metric.
Table 1. k-space sampling geometry class examples at different percentages of retained samples: Dyadic Phase Encoded (DPE), Random Phase Encoded (RPE), Random Samples on a PDF (RSP), and Spiral Low Pass (SLP).
Alternatively, an optimal regularization parameter can be determined by the L-curve method. Here, the “L” refers to the shape of the curve obtained by plotting the solution norm versus the corresponding residual norm. The optimal regularization parameter is then determined by the point that gives the corner of the “L”-shaped plot. At the corner point of the “L”, we have the point that minimizes the solution norm while also minimizing the residual norm. In this way, the L-curve approach avoids producing a regularization parameter that overly depends on the noise in the image.
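As an illustration of the L-curve idea, consider a hypothetical Tikhonov-regularized toy problem (a Gaussian blur operator, not the paper's fMRI reconstruction). The corner can be located numerically as the point of the log-log curve farthest from the chord joining its endpoints, a simple elbow heuristic:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical ill-posed toy problem: Gaussian blur operator A, noisy data b.
n = 50
i, j = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
A = np.exp(-((i - j) / 3.0) ** 2)
x_true = np.sin(np.linspace(0, 3 * np.pi, n))
b = A @ x_true + 0.01 * rng.standard_normal(n)

lams = np.logspace(-6, 2, 200)
res_norm, sol_norm = [], []
for lam in lams:
    # Tikhonov solution: (A^T A + lam^2 I) x = A^T b
    x = np.linalg.solve(A.T @ A + lam ** 2 * np.eye(n), A.T @ b)
    res_norm.append(np.linalg.norm(A @ x - b))
    sol_norm.append(np.linalg.norm(x))

# The "L" is the log-log curve of solution norm vs. residual norm; its
# corner is taken as the point farthest from the chord between endpoints.
u, v = np.log(res_norm), np.log(sol_norm)
p0 = np.array([u[0], v[0]])
d = np.array([u[-1], v[-1]]) - p0
d /= np.linalg.norm(d)
pts = np.stack([u, v], axis=1) - p0
dist = np.abs(pts[:, 0] * d[1] - pts[:, 1] * d[0])
lam_corner = lams[np.argmax(dist)]
print(f"corner at lambda ~ {lam_corner:.2e}")
```

Small λ fits the noise (small residual, large solution norm); large λ over-smooths (large residual, small solution norm); the corner balances the two.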
We note that our parameter optimization approach is direct over the mean structural similarity index (SSIM) and the PSNR. The PSNR is related to the residual error, and it appears that the L-curve method could be applied to find robust estimates of the regularization parameters. However, the emphasis of this paper is on finding the optimal parameters with respect to the mean-SSIM. We refer the reader to the literature for a description of the problems associated with the use of the mean-squared error for digital images. On the other hand, the mean-SSIM provides a robust metric that better correlates with human visual perception of the images being examined.
Our approach finds the optimal regularization parameters that maximize the mean-SSIM over a training set of images. Here, we note that each image will produce a different set of optimal parameters depending on the noise variance and on how the sampling geometry captures the structure of the activity region. In the section titled “A Fourier Model for Brain Activity,” we provide a frequency-domain model for brain activity. In that section, we show how brain activity favors the 2D spiral low pass (SLP) geometry in the sense that it captures most of the image energy within the effective bandwidth, while it also allows for TV-norm optimization to reconstruct the activity region.
Estimates of the optimal regularization parameters are determined by taking median values of the optimal values estimated over the training set. The effectiveness of this approach is then evaluated over an independent testing set. As we shall describe in the Results section, the approach works very well, giving mean-SSIM values above 0.93 for the SLP geometry. Furthermore, note that the median is a robust, non-parametric statistic that avoids outliers. The median provides robust estimates of the regularization parameters that maximize the mean-SSIM of the reconstructed image. Here, we note the conflicting goals: the best reconstructions require the largest number of k-space samples, whereas fast acquisition requires the fewest. Thus, we are interested in determining the minimum number of frequency-domain samples that still yields reconstructions of acceptable quality. To this end, we determine effective PSNR ranges and associate them with different image reconstruction qualities. For the average SSIM, we consider fixed values (SSIM > 0.75 for good results, SSIM > 0.90 for excellent results).
We limit our study to optimizing for the best MRI image reconstructions, and for the best quality activity maps based on the difference image between the “ON” and “OFF” MRI images. For detecting activity regions, we focus on the parameters that lead to highest possible quality in the difference images (activation versus no-activation images). Here, we will avoid issues associated with post-processing the difference images to better delineate between active and inactive regions of the brain. Clearly though, when the difference image is of the highest possible quality, artifact removal will simply improve upon our results. Furthermore, for very large average SSIM values (SSIM > 0.93), reconstruction quality should be considered to be excellent. For the use of post-processing methods, we refer to the body of work described in [8–15], which utilize a wide variety of signal detection, correlation analysis, and statistical tests to infer brain activity in fMRI difference images.
CS methods have been developed that resulted in improved spatial resolution from reduced k-space sampling. The work presented here falls into this category of k-space undersampling reconstruction methods, and is closely related to that of Lustig, Donoho, and Pauly. Lustig et al. also provide a general framework for k-space sampling and reconstruction using CS theory. While the sampling geometries presented here are designed to satisfy the CS criteria, the random and pseudo-random sampling of k-space could result in slower acquisition due to scanner programming constraints.
Most of the subsequent literature on CS applications to MRI imaging explores the merging of CS theory and other fast MRI acquisition techniques, unique k-space sampling methods, and novel reconstruction algorithms. Gamper, Boesiger, and Kozerke meet the sparsity condition of CS theory by applying the Fourier transformation along the temporal dimension, assuming that some regions of the field-of-view change at a high temporal rate while other parts remain stationary or change slowly. Their methods show the effectiveness of CS reconstruction for accelerated dynamic (continuous sampling) MRI reconstruction by comparing them to k-t BLAST reconstructions over the same data sets. Their sampling scheme can be described as randomly skipping phase-encoding lines in each dynamic frame. Jung et al. developed k-t FOCUSS to provide a general framework beyond k-t SENSE and k-t BLAST for model-based dynamic MRI by applying CS theory to randomly sampled reconstruction of the prediction and residual encoding that are significantly sparse.
Recent studies have focused on extending the work in  to non-Fourier bases. Haldar, Hernando, and Liang use tailored spatially selective RF pulses to better satisfy the incoherence requirement of CS theory. Liu, Zou, and Ying apply CS theory to parallel reconstruction using sensitivity encoding. Their extension of SENSE to CS is based upon a reconstruction method using Bregman iteration. Trzasko, Manduca, and Borisch present a CS method that minimizes the L0 semi-norm using re-descending M-estimator functions instead of the L1 norms typically found in the CS literature. The extension of the sparsity measure to multi-scale form allows for rapid reconstructions compared to the non-trivial solutions described in [17–21].
Methods that exploit the spatial and/or temporal redundancy of k-space data provide a way of describing another class of k-space undersampling reconstructions. A recent study by Lindquist et al. describes methods for obtaining a 3D echo-volumar fMRI imaging sequence by sampling the small central portion of k-space at a high temporal rate. The sampling trajectory is sampled successively across the fourth dimension in a 4D acquisition instead of successively over each temporal slice, and is constrained to a spiral pattern. Other MRI processing techniques that utilize information across slices collected at successive time intervals can be found in [22–26]. Alternatively, the authors in  provide a simple iterative algorithm, termed deconvolution-interpolation gridding (DING), for reconstructing MRI images from arbitrarily-sampled k-space. Compared to the compressive sensing methods discussed here, it is important to note that DING does not require regularization, and can also work with k-space trajectories that are not known a priori. Thus, DING is also suitable for cases where patient motion occurs during the scan.
The primary contributions of this paper can be summarized as follows:
• Comprehensive comparisons of 36 stochastic and deterministic frequency sampling geometries:
While the theory of compressive sensing suggests the use of stochastic geometries, we show that superior results can be achieved with deterministic frequency sampling geometries. Furthermore, we investigate the use of different sampling rates in order to determine the minimum sampling density that can still provide acceptable reconstruction results. As a consequence of this research, we provide the optimal geometry that can lead to the fastest reconstruction time among all candidate sampling geometries. This represents a contribution over existing methods, which are based on a limited number of mostly stochastic sampling geometries.
• Optimal reconstruction parameter ranges for guaranteeing a minimum quality in the reconstruction images:
Current compressive sensing methods do not provide sufficient details on how to set reconstruction parameters for different applications. As a result, in current methods, there is no guidance as to which parameter values will provide reconstruction of sufficient quality. Here, we first provide a direct search optimization framework that is used to provide parameter ranges that can guarantee reconstruction of acceptable image quality. In particular, we provide parameter ranges for reconstructing BOLD fMRI ON & OFF images, as well as difference images that can be used for activity detection. As evidence for the need of the proposed approach, we show that a completely different parameter range is needed for activity detection than for reconstructing individual ON and OFF images. Furthermore, we found that the fast, spiral low pass sampling geometry can reach PSNR levels above 40 dB after a few iterations. In addition, the spiral low pass sampling geometry gives superior results, with all reconstructions yielding mean SSIM values above 0.93.
Optimal TV Norm and Wavelet Transform Penalty
In what follows, we provide a summary of the proposed optimization method. As we describe below, the basic approach involves a nested optimization approach.
(α*, β*) = arg max over α, β of A(f(α, β))     (1)

where A denotes the accuracy of the reconstructed image f. Here, we will consider two measures of accuracy: (i) the average SSIM index, and (ii) PSNR. To solve (1), we will need to solve the optimization problem of reconstructing f from a limited number of k-space samples.
‖F_k f − y‖₂ ≤ ε     (2)

where ε denotes a small tolerance value, and ‖·‖₂ denotes the ℓ2 norm. In (2), F_k takes the 2D discrete Fourier transform of the input image and forces the k-space samples in y to match the ones available in the k-th sampling geometry.
f = Ψx     (3)

where x denotes the column vector of the wavelet coefficients and Ψ is the wavelet transform operator that contains the wavelet basis functions along its columns.
min over x of α‖x‖₁ + β TV(Ψx)     (4)
subject to ‖F_k Ψx − y‖₂ ≤ ε     (5)

In (4), TV(Ψx) denotes the total variation of the reconstructed signal. In (4)-(5), we have a constrained optimization problem in three parameters (α, β, ε). This is converted to an unconstrained optimization problem through the use of a Lagrange multiplier as indicated in [16, 28]. In what follows, we will consider different values for α, β and use the software described in  to solve (4)-(5) for the optimal wavelet coefficients. We provide details for parameter optimization in a separate section.
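The pieces above can be assembled into a numpy sketch of the unconstrained (Lagrangian) objective. This is an illustration only: a one-level Haar transform stands in for the paper's wavelet operator Ψ, the TV norm is the anisotropic variant, and the multiplier `lam` is hypothetical; the paper's actual reconstructions use the cited solver.

```python
import numpy as np

def haar1(v):
    """One-level orthonormal Haar analysis along axis 0."""
    a = (v[0::2] + v[1::2]) / np.sqrt(2)
    d = (v[0::2] - v[1::2]) / np.sqrt(2)
    return np.concatenate([a, d])

def ihaar1(c):
    """Inverse of haar1 along axis 0."""
    n = c.shape[0] // 2
    a, d = c[:n], c[n:]
    out = np.empty_like(c)
    out[0::2] = (a + d) / np.sqrt(2)
    out[1::2] = (a - d) / np.sqrt(2)
    return out

def haar2(img):                      # coefficients x from image f
    return haar1(haar1(img).T).T

def ihaar2(c):                       # image f = Psi x from coefficients
    return ihaar1(ihaar1(c.T).T)

def tv(img):
    """Anisotropic total variation of an image."""
    return np.abs(np.diff(img, axis=0)).sum() + np.abs(np.diff(img, axis=1)).sum()

def objective(x, y, mask, alpha, beta, lam):
    """Lagrangian form of (4)-(5): k-space data fidelity on the sampled
    locations plus the wavelet l1 and TV penalties ('lam' hypothetical)."""
    f = ihaar2(x)                               # image f = Psi x
    resid = mask * (np.fft.fft2(f) - y)         # F_k Psi x - y
    return (lam * np.linalg.norm(resid) ** 2
            + alpha * np.abs(x).sum() + beta * tv(f))

# Sanity check: with a fully sampled mask and zero penalties, the true
# coefficients give a (numerically) zero objective.
rng = np.random.default_rng(0)
img = rng.random((8, 8))
y = np.fft.fft2(img)
val = objective(haar2(img), y, np.ones((8, 8)), alpha=0.0, beta=0.0, lam=1.0)
```

Because the Haar transform here is orthonormal, `ihaar2(haar2(img))` recovers `img` exactly up to floating-point error.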
Data set and fMRI activity detection
In blood oxygenation-level dependent (BOLD) fMRI, neural activity is detected by changes in the T2* relaxation time due to changes in blood oxygenation levels in response to local activation. All images were acquired on a 3 T Siemens TIM Trio system with a 12-channel radio frequency (RF) coil. The fMRI experiment used a standard Siemens gradient-echo EPI sequence modified so that it stored real and imaginary data separately. We used a field-of-view (FOV) = 240 mm, slice thickness = 3.5 mm, slice gap = 1 mm, number of slices = 32, matrix size = 64 × 64, TE = 29 ms, and TR = 2 s. The fMRI experiment used a block design with periods of 30 s OFF and 30 s ON. The subjects who participated in this study tapped the fingers of their right hand during the ON period. There were five and a half cycles, starting with OFF and ending with the OFF period.
The BOLD fMRI data were then preprocessed to account for motion artifacts and spatially normalized into the standard Montreal Neurological Institute space. The spatially normalized data were then sub-sampled to 3 × 3 × 4 mm, resulting in 53 × 63 × 46 voxels. An individual slice was then selected to ensure measurable regions of activity based on the task being performed by the test subjects.
The pre-processed sum-of-squares imagery was used to generate the k-space data in this study by applying the 2D Fast Fourier Transform to the multi-coil data. Temporal smoothing tends to better localize detected activity across all temporal slices within a single run. Instead of utilizing the temporal information, however, our reconstructions are of two individual ON and OFF BOLD images. This less dense collection of temporal samples provides a worst-case scenario for detecting neural activity. We detected neural activity in fMRI by calculating the difference image of the reconstructed ON and OFF images, where individual slices were reconstructed with the optimal parameters computed for each sampling geometry. At this juncture, the neural activity detection problem becomes a segmentation problem.
As such, we will not consider optimization over a variety of different activity detection algorithms. We do note, however, that activity detection algorithms that employ low-pass filtering will tend to favor zero-filling over interpolation by compressive sensing or any other method, because low-pass filtering attenuates the high-frequency components that the interpolation/reconstruction method estimates. Ultimately, any segmentation of BOLD fMRI images will be governed by the accuracy of the reconstructed images themselves. For activity detection, we therefore present the reconstructed difference images for qualitative comparison. A more practical application of our approach would be studies involving structural MRI data, in which the benefit of a reduced acquisition time provides relief from the anxiety and discomfort patients may experience within an MRI scanner.
A Fourier model for brain activity
In this section, we introduce an analytical model for the activity (ON) image. As we shall derive, the lower-frequency components provide an accurate model of the Fourier-energy concentration as well as a low-TV-norm image reconstruction.
Thus, our objective in reconstructing the MRI image is to effectively reconstruct the stimulated region that differentiates f ON (.) from the rest of the image. Based on Figure 2, we model each function using a rectangular approximation. The rectangular approximation yields a model that is separable in the x-y coordinate system, which allows us to derive analytical expressions and evaluate the properties of the resulting Fourier expansions for (7). Also, note that we can always express images as sums of separable functions (e.g., via the singular value decomposition or the 2D Fourier transform).
where we have A≫a, to signify that the oscillatory components are much smaller than the average image intensity. Similarly, B≫b for the stimulated region. We can verify that both of these assumptions are valid by visual inspection of the corresponding regions in Figure 2.
where W_smallest refers to the width of the smallest stimulated region that we wish to recover in the Fourier domain. The concept of width can easily be extended to non-rectangular regions.
In (15), it is important to note that we have a function that has a very small TV-norm. In fact, it is an ideal case since we expect that the TV-optimized reconstruction will be well represented by the right-hand-side of (15). The observation here is that we can use the low-frequency components to capture most of the Fourier-transform energy of the ideal model as well as provide for a good initialization for optimization for low TV.
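The energy-concentration claim is easy to check numerically. The sketch below (our own illustration, with an arbitrary 16 × 16 rectangular "stimulated region" on a 64 × 64 grid) measures the fraction of Fourier energy captured by central low-pass windows of increasing half-width:

```python
import numpy as np

n = 64
img = np.zeros((n, n))
img[24:40, 24:40] = 1.0            # separable 16x16 rectangular region

F = np.fft.fftshift(np.fft.fft2(img))
energy = np.abs(F) ** 2

# Fraction of Fourier energy inside a central (low-pass) square window,
# as a function of the window half-width.
c = n // 2
fracs = []
for w in (4, 8, 16):
    frac = energy[c - w:c + w, c - w:c + w].sum() / energy.sum()
    fracs.append(frac)
    print(f"half-width {w:2d}: {frac:.3f}")
```

Even a small central window captures most of the energy of the rectangular model, which is the behavior that favors the spiral low-pass geometry.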
The k-Space downsampling geometries
We consider both deterministic and stochastic sampling geometries. For deterministic geometries, we consider the spiral low-pass and dyadic downsampling along the phase encoding direction. Stochastic methods are based on random sampling.
First, we consider geometries that restrict downsampling to the phase-encoded dimension only. Here, we wish to compare k-space sampling geometries restricted to the phase-encoded dimension against k-space undersampling in both dimensions around the center.
As noted in the literature review, the central region of k-space is essential for obtaining acceptable reconstruction performance; almost all non-CS reconstructions from undersampled k-space included the center of k-space in the sampled data. We therefore developed a dyadic sampling geometry class that considers sparse sampling along the phase-encoded dimension of k-space. All geometries of this class are shown in the first row of Table 1. Each geometry includes samples from a collection of contiguous frequency-encoded vectors centered over the center of k-space. The width of the central region can be varied to generate a number of unique masks within this class. Nine unique geometries are generated based on central regions that sample 1/2, 1/3, 1/4, 1/6, 1/8, 1/10, 1/12, 1/16, and 1/32 of the phase encoded samples. Beyond the bounds of the central region, additional frequency-encoded vectors are sampled at every 2nd, 4th, 8th, etc. phase encoded sample until the entire support of the fully sampled k-space has been covered. The dyadic gap sizes between subsequent high-frequency samples were intentionally designed to sample more densely near the center of k-space, in the hope that such sampling would coincide with the dyadic structure of the wavelet transform used by our reconstruction method. This class will be referred to as the dyadic phase encoded (DPE) geometry class in the remainder of this paper.
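A minimal sketch of how such a DPE mask could be generated (the grid size, central fraction, and mirroring convention are illustrative choices, not the paper's exact construction):

```python
import numpy as np

def dpe_mask(n=64, central_frac=1/8):
    """Dyadic phase-encoded (DPE) mask (sketch): a dense central band of
    phase-encode lines, then every 2nd, 4th, 8th, ... line moving
    dyadically away from the center of k-space."""
    mask = np.zeros((n, n), dtype=bool)
    c = n // 2
    half = max(1, int(n * central_frac) // 2)
    mask[c - half:c + half, :] = True          # dense central band
    edge, step = half, 2
    while edge < c:
        nxt = min(c, 2 * edge)
        for off in range(edge, nxt, step):
            mask[c + off, :] = True            # line above the center
            mask[c - 1 - off, :] = True        # mirrored line below
        edge, step = nxt, 2 * step
    return mask

mask = dpe_mask()
print(f"retained: {mask.mean():.1%}")
```

Because whole phase-encode lines are either kept or skipped, such a mask maps directly onto which EPI phase-encoding lines are acquired.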
We also consider two additional geometry classes which attempt to increase the incoherence between the sensing and transform bases by introducing an element of randomness into how the samples are selected. The first class, referred to as the random phase encoded (RPE) geometry class, samples frequency-encoded vectors along random phase encoded samples. This sampling method was selected as a comparison to the dynamic CS MRI in . In a two-image, single-fMRI study, such sampling would not be expected to provide acceptable reconstruction, due to the possibility of excluding a portion of the central region of k-space. Each of the random phase encoded geometries contains the same number of samples as one of the dyadic phase encoded geometries. Due to this one-to-one geometry correspondence across classes, there are nine total geometries in the RPE class. All of the random phase encoded geometries are shown in the second row of Table 1.
The second geometry class that incorporates randomness into the sampling scheme can be described as random sampling along a 1-D probability distribution function across the phase encoded samples at each frequency-encoded sample. This geometry class will be referred to as the random sampling PDF (RSP) class.
where U and V are the numbers of k-space samples in the frequency- and phase-encoded dimensions. This function mathematically defines the probability density function from which the random samples are selected. Our motivation for this geometry was to attempt a compromise between including the center of k-space and imparting incoherence into the CS problem through a pseudo-random sampling geometry. Representative geometries of the RSP class are depicted in the third row of Table 1.
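An RSP-style mask generator can be sketched as follows (the exponential density, retained fraction, and seed are hypothetical stand-ins for the paper's PDF):

```python
import numpy as np

def rsp_mask(n=64, frac=0.3, decay=4.0, seed=0):
    """Random-samples-on-a-PDF (RSP) mask (sketch): for each
    frequency-encode column, draw phase-encode samples without
    replacement from a 1-D density peaked at the center of k-space."""
    rng = np.random.default_rng(seed)
    k = np.arange(n) - n // 2
    pdf = np.exp(-decay * np.abs(k) / n)       # hypothetical center-peaked PDF
    pdf /= pdf.sum()
    m = int(frac * n)                          # samples per column
    mask = np.zeros((n, n), dtype=bool)
    for col in range(n):
        rows = rng.choice(n, size=m, replace=False, p=pdf)
        mask[rows, col] = True
    return mask

mask = rsp_mask()
print(f"retained: {mask.mean():.1%}")
```

The center-peaked density makes low frequencies likely (but not guaranteed) to be sampled, which is precisely the compromise discussed above.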
Lastly, we include a geometry class that restricts k-space sampling in both the phase encoded and frequency-encoded dimensions. Here, we generate the nine geometries by sampling along a Cartesian spiral emanating from the center of k-space. Such sampling geometries are more typical of the classical k-space undersampling reconstruction techniques found in the literature. This class is referred to as the spiral low pass (SLP) geometry class (see Table 1).
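An SLP-style mask can be sketched by snapping an Archimedean spiral from the center of k-space onto the Cartesian grid (`turns` and `samples` are illustrative values, not the paper's):

```python
import numpy as np

def slp_mask(n=64, turns=8, samples=2000):
    """Spiral low-pass (SLP) mask (sketch): sample k-space along an
    Archimedean spiral emanating from the center, snapped to the grid."""
    t = np.linspace(0, 1, samples)
    r = (n / 2 - 1) * t                         # radius grows linearly
    theta = 2 * np.pi * turns * t
    rows = np.clip(np.round(n / 2 + r * np.sin(theta)).astype(int), 0, n - 1)
    cols = np.clip(np.round(n / 2 + r * np.cos(theta)).astype(int), 0, n - 1)
    mask = np.zeros((n, n), dtype=bool)
    mask[rows, cols] = True
    return mask

mask = slp_mask()
print(f"retained: {mask.mean():.1%}")
```

Fewer turns (or fewer samples) trace a shorter spiral that stays closer to the center, yielding the lower retention rates considered in the paper.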
Optimization of CS penalty parameters
Parameter optimization is performed using two separate approaches. In the first approach, we want to estimate the optimal parameter ranges for α, β that can provide a certain level of image quality. This first approach is based on maximizing the PSNR in the reconstruction images. Our second approach is more direct. It is based on maximizing the average SSIM index. In our second approach, we want to recommend specific parameter values for each sampling geometry. The second approach is validated using leave-one-out.
Both approaches assume that we can find optimal parameters when ground truth is available. When ground truth is not available, our expectation is that our training sets will reflect the new datasets that will be collected. To initialize optimization, we first zero pad the images from the original size of 53 × 63 to be of size 64 × 64. We then use the 2D FFT to generate k-space data for our experiments. A baseline reconstruction is obtained by applying the inverse Fourier transform to the undersampled k-space matrix. This initial reconstruction will exhibit the aliasing artifacts that the work presented in the introduction seeks to suppress. We then use the Nelder-Mead search algorithm to estimate the optimal reconstruction parameters of (1). Here, we note that the Nelder-Mead algorithm does not require that we evaluate derivatives; it belongs to the class of direct search methods. This approach is used to generate a collection of optimal parameter values α, β associated with each image.
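The parameter search can be sketched with SciPy's Nelder-Mead implementation. Since the CS solver is not reproduced here, the quality surface is mocked by a quadratic with a hypothetical peak at (α, β) = (0.01, 0.05); in practice `neg_quality` would run the reconstruction at (α, β) and return the negated PSNR or mean SSIM:

```python
from scipy.optimize import minimize

# Hypothetical stand-in for -PSNR(alpha, beta): a quadratic surface whose
# (made-up) optimum sits at (0.01, 0.05). The real objective would run
# the CS reconstruction and score it against the ground-truth image.
def neg_quality(params):
    alpha, beta = params
    return (alpha - 0.01) ** 2 + (beta - 0.05) ** 2

# Derivative-free direct search (Nelder-Mead), initialized at (0, 0):
res = minimize(neg_quality, x0=[0.0, 0.0], method="Nelder-Mead",
               options={"xatol": 1e-8, "fatol": 1e-10})
alpha_opt, beta_opt = res.x
print(alpha_opt, beta_opt)
```

Because Nelder-Mead only evaluates the objective, it is well suited to this setting, where each evaluation is a full CS reconstruction and no gradients are available.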
Then, in our first approach, we estimate the optimal parameter ranges for α, β that can provide a certain level of image quality. To understand how this works, note that a minimum image quality requirement can be expressed as a minimum PSNR level. We then determine the reconstructed images that exceed the minimum level. The optimal parameter region can then be estimated from the range of values of α, β that correspond to these images. To avoid outliers, the optimal regions need only be met by 75% of the total number of the considered images. We also consider the complexity of each optimal parameter region based on the inner and outer bounding boxes. The inner bounding box is the largest rectangle that can be totally contained inside the optimal parameter region. On the other hand, the outer bounding box is the smallest rectangle that contains the optimal parameter region. A detailed example of our first approach is also provided in the Results section.
Our second approach is more direct. Our goal is to provide specific recommendations for α, β that can be expected to exceed a minimum level of image quality. For this direct approach, we use the average SSIM index value to determine the required level of image quality. We consider an image with an average SSIM index value above 0.9 to represent excellent reconstruction quality. In this second approach, the image reconstruction quality is estimated using a leave-one-out method. In leave-one-out, we construct training sets consisting of all but one of the ON/OFF image pairs. We then report the reconstruction image quality on the remaining ON/OFF image pair that serves as our testing set. The method is then applied over all image pairs. Here, we use the median parameter values estimated over each training set for reconstruction of the testing set. In this approach, we report our results for each image serving as a member of the testing set. Furthermore, for simplicity, we provide the median parameter values over the entire dataset. Also, for completeness, we also provide optimal parameter values for each image pair. This allows the readers to see if the discrepancy between the median and the optimal parameter values will affect the reconstructed image quality.
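The leave-one-out median procedure can be sketched as follows (the per-image optimal parameter values below are made up for illustration; the real values come from the per-image Nelder-Mead search):

```python
import numpy as np

# Hypothetical per-image-pair optimal (alpha, beta) values (illustrative).
optimal = np.array([
    [0.011, 0.052], [0.009, 0.048], [0.013, 0.055], [0.010, 0.047],
    [0.012, 0.051], [0.008, 0.053], [0.014, 0.049], [0.010, 0.050],
])

for i in range(len(optimal)):
    train = np.delete(optimal, i, axis=0)       # leave image pair i out
    alpha_med, beta_med = np.median(train, axis=0)
    # ... reconstruct the held-out pair i with (alpha_med, beta_med)
    # and record its mean SSIM; here we just report the parameters.
    print(i, alpha_med, beta_med)
```

The median over the training set is what makes the recommended parameters robust to the occasional outlying per-image optimum.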
We provide optimization results independently for the ON, the OFF images, as well as for the difference image. Here, we consider optimization of the difference image for activity detection.
Table 2. Optimal parameter regions for achieving different image quality bounds (inner and outer bounding boxes).
Our experiments reveal that the DPE and SLP geometry classes outperform the RPE and RSP classes. The RPE class was expected to exhibit poor reconstruction, since its geometries do not contain sufficient samples from the center of k-space. Only a single geometry in the RSP class, the one retaining 62.5% of the samples, resulted in a suggested operating parameter space for achieving “acceptable” reconstruction.
On the other hand, the DPE class achieved excellent reconstruction from two geometries (62.5% and 48.5%), and the SLP class achieved excellent reconstruction from the three geometries that retained the largest numbers of k-space samples (60.5%, 48.5%, and 40.6%). When lower-quality reconstructions are acceptable, we can use sampling geometries that use fewer k-space samples. Twelve additional DPE geometries were found to satisfy the parameter-space threshold and intersection procedure. All eighteen (nine each from the ON and OFF imagery) SLP geometries that were included in this experiment satisfied our requirement for acceptable reconstruction.
For activity detection, we also compared the difference images of the original ON and OFF samples, the CS reconstructed ON and OFF samples, and also the reconstruction values of zero-filling the missing k-space samples. Based on our investigations, we have found that it was possible to obtain excellent reconstruction results with very fast k-space sampling geometries. In what follows, we present our best results, obtained with the SLP geometry using only 20.3% of k-space.
Dyadic phase geometry: Mean SSIM results (OFF & ON) over each image pair and median over all pairs
Spiral Low Pass geometry: Mean SSIM results (OFF & ON) over each image pair and median over all pairs
Random sampling on PDF geometry: Mean SSIM results (OFF & ON) over each image pair and median over all pairs
Random phase encoding geometry: Mean SSIM results (OFF & ON) over each image pair and median over all pairs
Optimal alpha values for dyadic phase encoding geometry (SSIM optimization)
Optimal alpha values for spiral low-pass geometry (SSIM optimization)
Optimal alpha values for random sampling on PDF geometry (SSIM optimization)
Optimal alpha values for random phase encoding (SSIM optimization)
Optimal beta values for dyadic phase encoding geometry (SSIM optimization)
Optimal beta values for spiral low-pass geometry (SSIM optimization)
Optimal beta values for random sampling on PDF geometry (SSIM optimization)
Optimal beta values for random phase encoding (SSIM optimization)
For each sampling geometry, we provide the median α, β-values in the last column of Tables 7, 8, 9, 10, 11, 12, 13, 14. The recommendation to use the median α, β-values for reconstruction was validated using leave-one-out. Here, note that the mean SSIM values reported in Tables 3, 4, 5, 6 are based on performing the image reconstruction using the median values from the training set of the remaining 11 image pairs. There is thus a mismatch between the median α, β values and the optimal values that can only be estimated when the entire image is available. Yet, despite this mismatch, we find that the second approach worked very well. For example, all of the SLP reconstructions for 20.3% sampling gave average SSIM index values above 0.93.
We next discuss the relationship between PSNR and mean SSIM values. Numerically, an SSIM value of 1 corresponds to an infinite PSNR, while an SSIM value of 0 corresponds to a PSNR of negative infinity. Ideally, an increase in the SSIM index will also correspond to an increase in PSNR. We have verified this for the mean-SSIM values of Table 4 and their corresponding PSNR values. On the other hand, this property does not hold for individual images. For example, image #2 for sampling at 20.30% has a PSNR value above the median, while its SSIM value is below the median. In such cases, the SSIM trend is considered more reliable than the PSNR. Overall though, all of the SLP reconstructions gave mean SSIM values above 0.93; thus, the SLP geometry reconstructions are considered excellent.
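The limiting behavior of PSNR described above can be checked directly. The sketch below is a minimal illustration using hypothetical random images (not the study's fMRI data): it computes PSNR from the mean squared error, and a perfect reconstruction, the case corresponding to SSIM = 1, yields an infinite PSNR.

```python
import numpy as np

def psnr(reference, reconstruction, peak=1.0):
    """Peak signal-to-noise ratio in dB: 10 * log10(peak^2 / MSE)."""
    mse = np.mean((np.asarray(reference, dtype=float) -
                   np.asarray(reconstruction, dtype=float)) ** 2)
    if mse == 0.0:
        return np.inf   # perfect reconstruction: the SSIM = 1 limit
    return 10.0 * np.log10(peak ** 2 / mse)

# Hypothetical test images standing in for the fMRI reconstructions.
rng = np.random.default_rng(0)
img = rng.random((64, 64))
noisy = np.clip(img + 0.01 * rng.standard_normal((64, 64)), 0.0, 1.0)
```

Here `psnr(img, img)` is infinite, while `psnr(img, noisy)` is finite and drops as the reconstruction error grows.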
We have found that parameter optimization achieved significant image quality improvements within a relatively small number of iterations. Typically, five iterations sufficed to improve PSNR by over 10 dB from the initial values of α = 0, β = 0 and to achieve reconstructions within 5% of the optimal quality value.
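As a rough illustration of how quickly such a direct search converges, the sketch below runs a simple compass (coordinate) search, a stand-in for the Nelder-Mead simplex method cited in the references, over a toy quadratic quality surface. The surface, its peak of 45 dB at (α, β) = (0.02, 0.15), and the starting point are all hypothetical placeholders for the real CS reconstruction objective, which would require running the full solver at each candidate parameter pair.

```python
def neg_quality(params):
    """Toy stand-in for -PSNR(alpha, beta); hypothetical peak of 45 dB
    at (alpha, beta) = (0.02, 0.15)."""
    a, b = params
    return -(45.0 - 100.0 * (a - 0.02) ** 2 - 50.0 * (b - 0.15) ** 2)

def compass_search(f, x0, step=0.1, tol=1e-5):
    """Minimize f by probing +/- step along each coordinate axis,
    halving the step whenever no probe improves the current point."""
    x, fx = list(x0), f(x0)
    while step > tol:
        improved = False
        for i in range(len(x)):
            for d in (step, -step):
                y = list(x)
                y[i] += d
                fy = f(y)
                if fy < fx:
                    x, fx, improved = y, fy, True
        if not improved:
            step *= 0.5
    return x, -fx

# Starting from alpha = beta = 0, as in the paper's optimization runs.
(best_a, best_b), best_db = compass_search(neg_quality, [0.0, 0.0])
```

On this smooth surrogate the search recovers the peak to within the step tolerance; the real objective is noisier, which is one reason the paper reports quality within 5% of optimal rather than exact convergence.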
We take a closer look at the α and β ranges for select geometries from the DPE and SLP classes. We selected the DPE and SLP geometries with the fewest samples that still provided reasonable performance regions at the 30 dB, 35 dB, 40 dB, and 45 dB quality levels. Examining the parameter ranges provides a more detailed description of the parameter space and reveals which constraint in (3) is essential for acceptable reconstructions. The minimum and maximum parameter values that define the vertices of the inner and outer bounding boxes are listed in Table 2.
Three instances in Table 2 stand out because the minimum alpha and/or beta parameters are negative. These values are a by-product of the interpolation and intersection method used to calculate the parameter ranges. When negative parameter values were inserted into the optimization algorithm, the reconstruction broke down. In these cases, we replaced the negative value with zero to achieve an acceptable result.
We now turn to the effect of a zero value for either constraint parameter in the reconstruction problem. Typically, the effect of α = 0 is a more prominent presence of high-frequency errors in the spatial domain, as well as stronger artifacts from the sampling geometry. Conversely, the effect of β = 0 is a loss of high-frequency components in the spatial-domain reconstruction. In this case, the TV-norm tends to drive the solution toward one whose finite differences in each dimension are minimized. While removing one of the penalty terms in the CS objective function may still yield a usable reconstruction, our observations indicate that this case is not optimal.
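To make the TV-norm behavior concrete, the sketch below computes an isotropic total variation from first-order finite differences on two hypothetical images: a smooth intensity ramp and a noisy version of it. A solution dominated by the TV term (the β = 0 case) is pushed toward the low-TV, smooth image, which is exactly why high-frequency spatial content is suppressed.

```python
import numpy as np

def tv_norm(img):
    """Isotropic total variation: sum of gradient magnitudes computed
    from first-order finite differences in each dimension."""
    dx = np.diff(img, axis=1)[:-1, :]   # horizontal differences
    dy = np.diff(img, axis=0)[:, :-1]   # vertical differences
    return float(np.sum(np.sqrt(dx ** 2 + dy ** 2)))

# Hypothetical images: a gentle ramp versus the same ramp with noise.
smooth = np.tile(np.linspace(0.0, 1.0, 64), (64, 1))
rng = np.random.default_rng(1)
noisy = smooth + 0.1 * rng.standard_normal((64, 64))
```

The noisy image has a far larger TV value than the ramp, so minimizing the TV penalty alone favors the smooth solution.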
When both parameters are zero, the proposed optimization framework breaks down. Acceptable image quality can be achieved when one of the two parameters is zero, but not both. While completely removing one of the penalty terms in (3) may still produce acceptable reconstructions in terms of PSNR, a higher PSNR is achieved by using the (non-zero) optimal values.
For activity detection, we found that optimization over the difference image led to excellent results. In particular, the SLP geometry gave an excellent average SSIM value of 0.9747 (max = 1). Leave-one-out validation with mean-SSIM optimization also confirmed that the SLP family of geometries gives the best results. Recommended parameter values for the SLP geometries are given by the median values of Tables 8 and 12.
In conclusion, we have found that CS parameter optimization can dramatically improve fMRI image reconstruction quality. Furthermore, deterministic SLP scanning geometries (similar to previously reported 3D geometries) consistently gave the best image reconstruction results. The implication is that simpler deterministic sampling geometries can replace random sampling, provided the TV-norm and transform penalty parameters are selected from the range of values calculated with our proposed methodology. We have also found stable parameter regions that, combined with specific k-space sampling geometries, achieve specific levels of image reconstruction quality. More importantly, median parameter values can be used to obtain excellent reconstructions; this observation was validated using leave-one-out together with optimization based on the mean SSIM index.
- Candès E, Romberg J, Tao T: Robust uncertainty principles: exact signal recovery from incomplete frequency information. IEEE Trans Inf Theory 2006, 52:489–509.
- Chan T, Esedoglu S, Park F, Yip A: Recent developments in total variation image restoration. In Mathematical Models of Computer Vision. Springer Verlag; 2005.
- Wang Z, Bovik AC, Sheikh HR, Simoncelli EP: Image quality assessment: from error visibility to structural similarity. IEEE Trans Image Process 2004, 13:600–612. 10.1109/TIP.2003.819861
- Lindquist MA, Zhang C, Glover G, Shepp L: Rapid three-dimensional functional magnetic resonance imaging of the initial negative BOLD response. J Magn Reson 2008, 191:100–111. 10.1016/j.jmr.2007.12.016
- Hansen PC: The L-curve and its use in the numerical treatment of inverse problems. 2012. Available at http://www.unimed.sintef.no/project/eVITAmeeting/2005/Lcurve.pdf
- Girod B: What's wrong with mean squared error? In Visual Factors of Electronic Image Communications. Edited by Watson AB. MIT Press; 1993:207–220.
- Huber PJ: The 1972 Wald Lecture. Robust statistics: a review. Ann Math Stat 1972, 43(4):1041–1067. 10.1214/aoms/1177692459
- Friman O, Cedefamn J, Lundberg P, Borga M, Knutsson H: Detection of neural activity in functional MRI using canonical correlation analysis. Magn Reson Med 2001, 45:323–330. 10.1002/1522-2594(200102)45:2<323::AID-MRM1041>3.0.CO;2-#
- Nan F, Nowak R: Generalized likelihood ratio detection for fMRI using complex data. IEEE Trans Med Imaging 1999, 18(4):320–329. Available at http://dsp.rice.edu/sites/dsp.rice.edu/files/publications/journal-article/1999/generalize-tmi-1999.pdf 10.1109/42.768841
- Bernstein MA, Thomasson DM, Perman WH: Improved detectability in low signal-to-noise ratio magnetic resonance images by means of a phase-corrected real reconstruction. Med Phys 1989, 16:813–817. 10.1118/1.596304
- Frank LR, Buxton RB, Wong EC: Probabilistic analysis of functional magnetic resonance imaging data. Magn Reson Med 1998, 39:132–148.
- Lai S, Glover GH: Detection of BOLD fMRI signals using complex data. Proc Int Society for Magnetic Resonance in Medicine Meeting 1997, 1671.
- Sekihara K, Koizumi H: Detecting cortical activities from fMRI time-course data using the MUSIC algorithm with forward and backward covariance averaging. Magn Reson Med 1996, 35(6):807–813. 10.1002/mrm.1910350604
- Siewert B, Bly BM, Schlaug G, Darby DG, Thangaraj V, Warach S, Edelman R: Comparison of the BOLD and EPISTAR techniques for functional brain imaging using signal detection theory. Magn Reson Med 1996, 36:249–255. 10.1002/mrm.1910360212
- Strother SC, Kanno I, Rottenberg DA: Principal component analysis, variance partitioning, and functional connectivity. J Cereb Blood Flow Metab 1995, 15:353–360. 10.1038/jcbfm.1995.44
- Lustig M, Donoho D, Pauly JM: Sparse MRI: the application of compressed sensing for rapid MR imaging. Magn Reson Med 2007, 58(6):1182–1195. Available at http://groups.bme.gatech.edu/groups/cfmg/group/JC_Fall2010/Chris.pdf. Associated code available at http://www.stanford.edu/~mlustig/SparseMRI.html (retrieved April 9, 2012). 10.1002/mrm.21391
- Gamper U, Boesiger P, Kozerke S: Compressed sensing in dynamic MRI. Magn Reson Med 2008, 59:365–373. 10.1002/mrm.21477
- Jung H, Ye JC, Kim EY: Improved k-t BLAST and k-t SENSE using FOCUSS. Phys Med Biol 2007, 52:3201–3226. 10.1088/0031-9155/52/11/018
- Haldar JP, Hernando D, Liang Z: Compressed-sensing MRI with random encoding. IEEE Trans Med Imaging 2011, 30:893–903. 10.1109/TMI.2010.2085084
- Liu B, Zou YM, Ying L: SparseSENSE: an application of compressed sensing in parallel MRI. Proc of the 2008 Int Conf on Technology and Information in Biomedicine 2008, 127–130.
- Trzasko J, Manduca A, Borisch E: Sparse MRI reconstruction via multiscale L0-continuation. Proc of the IEEE/SP 14th Workshop on Statistical Signal Processing 2007, 176–180.
- Greiser A, von Kienlin M: Efficient k-space sampling by density-weighted phase-encoding. Magn Reson Med 2003, 50:1266–1275. 10.1002/mrm.10647
- Peters DC, Korosec FR, Grist TM, Block WM, Holden JE, Vigen KK, Mistretta CA: Undersampled projection reconstruction applied to MR angiography. Magn Reson Med 2000, 43:91–101. 10.1002/(SICI)1522-2594(200001)43:1<91::AID-MRM11>3.0.CO;2-4
- McGibney G, Smith MR, Nichols ST, Crawley A: A quantitative evaluation of several partial Fourier reconstruction algorithms used in MRI. Magn Reson Med 1993, 30:51–59. 10.1002/mrm.1910300109
- Pruessmann KP, Weiger M, Scheidegger MB, Boesiger P: SENSE: sensitivity encoding for fast MRI. Magn Reson Med 1999, 42:952–962. 10.1002/(SICI)1522-2594(199911)42:5<952::AID-MRM16>3.0.CO;2-S
- Sodickson DK, Manning WJ: Simultaneous acquisition of spatial harmonics (SMASH): fast imaging with radiofrequency coil arrays. Magn Reson Med 1997, 38:591–603. 10.1002/mrm.1910380414
- Gabr RE, Aksit P, Bottomley PA, Youssef AM, Kadah YM: Deconvolution-interpolation gridding (DING): accurate reconstruction for arbitrary k-space trajectories. Magn Reson Med 2006, 56(6):1182–1191. 10.1002/mrm.21095
- Donoho D: Compressed sensing. IEEE Trans Inf Theory 2006, 52(4):1289–1306.
- Nelder JA, Mead R: A simplex method for function minimization. Comput J 1965, 7:308–313. Available at http://comjnl.oxfordjournals.org/content/7/4/308.full.pdf+html 10.1093/comjnl/7.4.308
- Powell MJD: Direct search algorithms for optimization calculations. In Acta Numerica. Edited by Iserles A. Cambridge, UK: Cambridge University Press; 1998:287–336.
This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.