Forward Model and Finite Element Formulation
FMT acquisitions are obtained through a two-step image-formation model [11]. In the first step, sources at several locations illuminate the tissue. In the frequency domain, this step is governed by the diffusion equation [12]

−∇·(D_x(r)∇Φ_x(r)) + k_x(r)Φ_x(r) = S_x(r),  r ∈ Ω,  (1)

where the subscript x denotes the excitation wavelength; ∇ is the gradient operator; S_x (W/cm³) is the excitation light source; Φ_x (W/cm²), D_x (cm), and k_x (cm⁻¹) represent the photon fluence, the diffusion coefficient, and the decay coefficient, respectively; and Ω denotes the bounded domain of reconstruction.
In the second step, the fluorophores are excited and emit fluorescence. This step can be modelled by a second diffusion equation

−∇·(D_m(r)∇Φ_m(r)) + k_m(r)Φ_m(r) = S_m(r),  r ∈ Ω,  (2)

where the subscript m indicates the emission wavelength and ω (rad/s) denotes the modulation frequency of the source; S_m is the emission light source generated by the excited fluorophores. The diffusion coefficients D_{x,m} (cm) and the decay coefficients k_{x,m} (cm⁻¹) are defined, respectively, as [6]
D_{x,m} = 1 / [3(μ_{ax,mi} + μ_{ax,mf} + μ′_{sx,m})],   k_{x,m} = μ_{ax,mi} + μ_{ax,mf} + iω/c,

with the emission source in equation (2) given by S_m = q μ_{axf} Φ_x / (1 − iωτ), where μ_{ax,mi} (cm⁻¹) denotes the absorption coefficient due to endogenous chromophores; μ_{ax,mf} (cm⁻¹) represents the absorption coefficient due to exogenous fluorophores; μ′_{sx,m} (cm⁻¹) is the reduced scattering coefficient; q is the quantum efficiency of the fluorophore; τ (s) is the fluorescence lifetime; and, finally, c (cm/s) is the speed of light in the medium.
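As a numerical illustration, the two coefficient definitions can be evaluated directly. The sketch below assumes the common frequency-domain forms D = 1/[3(μ_axi + μ_axf + μ′_s)] and k = μ_axi + μ_axf + iω/c; the function names and the tissue values are ours, not from the paper.

```python
# Minimal sketch of the diffusion/decay coefficient definitions. Assumes the
# standard forms D = 1/(3*(mu_axi + mu_axf + mu_s')) and k = mu_a + i*omega/c;
# names and example values are illustrative only.

def diffusion_coefficient(mu_a_endo, mu_a_fluo, mu_s_prime):
    """D (cm): reciprocal of 3x the total attenuation; inputs in cm^-1."""
    return 1.0 / (3.0 * (mu_a_endo + mu_a_fluo + mu_s_prime))

def decay_coefficient(mu_a_endo, mu_a_fluo, omega, c):
    """Complex k (cm^-1): absorption plus the frequency-domain term i*omega/c."""
    return mu_a_endo + mu_a_fluo + 1j * omega / c

# Typical soft-tissue magnitudes (illustrative): mu_a ~ 0.01-0.3 cm^-1, mu_s' ~ 10 cm^-1.
D = diffusion_coefficient(0.02, 0.01, 10.0)
k = decay_coefficient(0.02, 0.01, 2 * 3.141592653589793 * 100e6, 2.2e10)
```

Note that D depends on both absorption and reduced scattering, so a fluorophore perturbs the excitation field as well as generating emission.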
Here, Robin-type boundary conditions are imposed on the boundary ∂Ω of the domain Ω to solve the above diffusion equations:

Φ_{x,m}(r) + 2 b_{x,m} D_{x,m} [n · ∇Φ_{x,m}(r)] = 0,  r ∈ ∂Ω,

where n is the outward normal vector on the boundary ∂Ω and b_{x,m} is the Robin boundary coefficient.
To solve the forward problem within the finite element method (FEM) framework, the domain Ω is divided into P elements joined at N vertex nodes. The solution Φ_{x,m} is approximated by the piecewise-linear function Φ_{x,m}(r) ≈ Σ_{i=1}^{N} Φ_{x,m}(i) φ_i(r), with the φ_i (i = 1, …, N) being nodal basis functions [13]. Hence, equations (1) and (2) can be rewritten as the linear systems

A_x Φ_x = S_x,   A_m Φ_m = S_m,

where Φ_{x,m} and S_{x,m} now denote the vectors of nodal fluence and source values. The elements of the finite element matrix A_{x,m} can be obtained from the formula

A_{x,m}(i, j) = ∫_{Ω_h} [D_{x,m} ∇φ_i · ∇φ_j + k_{x,m} φ_i φ_j] dr + (1 / (2 b_{x,m})) ∫_{Γ_h} φ_i φ_j dr,

with Ω_h and Γ_h being the bounded domain and its boundary, respectively.
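The structure of A_{x,m} (a stiffness term weighted by D plus a mass term weighted by k) can be illustrated with a toy one-dimensional assembly over linear elements. This is a sketch under simplifying assumptions: constant D and k, a uniform mesh, and the boundary integral omitted.

```python
import numpy as np

def assemble_fem_matrix(n_nodes, length, D, k):
    """Assemble the 1-D analogue of A: the integral of D*phi_i'*phi_j' + k*phi_i*phi_j
    over a uniform mesh of linear ("hat") elements. Boundary term omitted for brevity."""
    h = length / (n_nodes - 1)                       # element size
    # Exact element integrals for linear basis functions on one element:
    stiff = (D / h) * np.array([[1.0, -1.0], [-1.0, 1.0]])
    mass = (k * h / 6.0) * np.array([[2.0, 1.0], [1.0, 2.0]])
    A = np.zeros((n_nodes, n_nodes))
    for e in range(n_nodes - 1):                     # loop over the P = n_nodes-1 elements
        idx = [e, e + 1]                             # the two vertex nodes of element e
        A[np.ix_(idx, idx)] += stiff + mass
    return A

A = assemble_fem_matrix(n_nodes=5, length=1.0, D=0.03, k=0.1)
```

The resulting matrix is sparse and tridiagonal in 1-D; in 2-D/3-D the same element-by-element assembly yields the sparse system solved at each source position.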
Inverse Process of FMT
The inverse process of FMT is to estimate the spatial distribution of the optical or fluorescent properties of the tissue from measurements [14]. In the discrete case, the reconstruction problem can be defined as the minimization of the objective function

E(x) = ‖y − G(x)‖²,  (12)

where G is the forward operator, ‖·‖ is the L2-norm, and x and y are the calculated optical or fluorescent properties of the tissue and the detector readings, respectively.
Suppose that the objective function E attains its extremum at x + Δx. Expanding the gradient E′ of the objective function about x in a Taylor series and keeping terms up to first order leads to

E′(x + Δx) ≈ E′(x) + E″(x)Δx = 0.  (13)

Equation (13) can be further written as [15]

Δx = (J^T J + H)^{-1} J^T Δy,  (14)

where T denotes the transpose and Δy = y − G(x) is the residual between the measurements and the predicted data. The Jacobian matrix J measures the rate of change of the measurements with respect to the optical parameters; it describes the influence of a voxel on a detector reading. H is the Hessian matrix, whose entries are the second-order partial derivatives of the objective function with respect to all unknown parameters, describing its local curvature with respect to many variables [16].
Introducing a Tikhonov regularization term to tackle the ill-posedness of the inverse problem and ignoring the Hessian matrix, the solution to the linearized reconstruction problem can be written as

Δx = (J^T J + λI)^{-1} J^T Δy,  (15)

where λ is a regularization parameter, which can be determined by the Morozov discrepancy principle [17], and I is the identity matrix.
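A minimal numerical sketch of this regularized update, assuming equation (15) takes the usual form Δx = (J^T J + λI)^{-1} J^T Δy; solving the normal equations is preferred over forming the inverse explicitly. The toy Jacobian and residual are ours.

```python
import numpy as np

def tikhonov_update(J, dy, lam):
    """One linearized step: solve (J^T J + lam*I) dx = J^T dy.
    Solving the system directly is cheaper and better conditioned than inverting."""
    n = J.shape[1]
    return np.linalg.solve(J.T @ J + lam * np.eye(n), J.T @ dy)

# Toy example: 3 measurements, 2 unknowns.
J = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
dy = np.array([1.0, 1.0, 2.0])
dx = tikhonov_update(J, dy, lam=0.0)   # lam = 0 reduces to ordinary least squares
```

Increasing λ shrinks the norm of the update, which is exactly the contrast/resolution trade-off discussed in the next subsection.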
Adaptive Regularization Scheme
The problem of image reconstruction for FMT is ill-posed [18]. The Tikhonov regularization technique, as mentioned above, is one of the major methods for reducing the ill-posedness of the problem [19]. However, this technique has a prominent difficulty: the determination of the regularization parameter. An undesirable characteristic of NIR imaging is that the resolution and contrast of the reconstructed images degrade with increasing distance from the sources and the detectors [20]. Considering that the value of the regularization parameter strongly affects the contrast and resolution of the resulting images, one strategy to address this problem is to use a spatially variant regularization parameter. Meanwhile, the objective function is related to the regularization parameters [15]. During the minimization of the objective function, decreasing λ speeds up convergence while the value of the objective function is decreasing; otherwise, increasing λ enlarges the search area (trust region). On the basis of these considerations, we propose a modified regularization method adaptive both to spatial variations and to the objective function.
Suppose that the number of measurements is M and the number of vertex nodes is N. Then, for the matrices in equation (15), Δx ∈ R^{N×1}, Δy ∈ R^{M×1}, J ∈ R^{M×N}, and I ∈ R^{N×N}. To construct a spatially variant regularization framework, the inverse term (J^T J + λI)^{-1} in equation (15) is replaced with (J^T J + λ)^{-1}, which results in the following equation:

Δx = (J^T J + λ)^{-1} J^T Δy,  (16)

where λ = diag(λ_1, λ_2, …, λ_N) is a diagonal matrix. Equation (16) can be rewritten componentwise, with Δx_i (i = 1, 2, …, N) being the components of the vector Δx. It can easily be seen that each node p_i (i = 1, 2, …, N) in the reconstructed domain is regularized by a corresponding regularization parameter λ_i. Obviously, the Tikhonov regularization mentioned above can be regarded as the special case of equation (17) in which λ_1 = λ_2 = ⋯ = λ_N = λ.
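The spatially variant step of equation (16) differs from the standard Tikhonov step only in that the scalar λ becomes a per-node diagonal; a small check (with an arbitrary toy Jacobian of our choosing) confirms that a uniform λ-vector recovers ordinary Tikhonov regularization.

```python
import numpy as np

def spatially_variant_update(J, dy, lam_vec):
    """Solve (J^T J + diag(lam_vec)) dx = J^T dy; lam_vec[i] regularizes node i."""
    return np.linalg.solve(J.T @ J + np.diag(lam_vec), J.T @ dy)

# Toy Jacobian and residual for illustration.
J = np.array([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])
dy = np.array([1.0, 0.0, -1.0])

# A uniform lam_vec is the special case lam_1 = ... = lam_N = lam (plain Tikhonov):
dx_uniform = spatially_variant_update(J, dy, np.full(2, 0.5))
dx_tikhonov = np.linalg.solve(J.T @ J + 0.5 * np.eye(2), J.T @ dy)
```

In practice lam_vec would be set node by node, e.g. smaller values near the sources and detectors and larger values deep in the domain.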
It was pointed out in [21] that the resolution and contrast of the images decrease as the regularization parameters increase, and vice versa. Therefore, the adaptive regularization parameter λ_i can be defined as follows to compensate for the loss of resolution and contrast with increasing distance from the sources and detectors:

where r_i is the position of node p_i; r_s and r_m denote the positions of the source and the detector closest to node p_i, respectively; and c_1 and c_2 are two positive parameters determined empirically in this paper.
To make the regularization parameter adaptive to the objective function defined in equation (12), we propose to incorporate the objective function into the regularization as follows

In equation (19), the arctan function is used to guarantee a relatively small fluctuation range of the regularization parameters and to avoid excessively large values. Obviously, the regularization parameters determined from equation (19) relate to the objective function in the manner described above. In this way, the regularization parameters adapt not only to the spatial variations but also to the variations of the objective function, which accelerates convergence.
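Equation (19) itself is not reproduced above, so the rule below is only a hypothetical stand-in illustrating the two stated properties: the parameter grows with the value of the objective function, and the arctan keeps the modulation bounded.

```python
import math

def adaptive_lambda(lam_spatial, objective_value, scale=1.0):
    """Hypothetical stand-in for equation (19): modulate the spatially variant
    parameter by arctan(E), which maps E in [0, inf) to a bounded factor in [0, pi/2).
    The exact expression in the paper may differ."""
    return lam_spatial * math.atan(scale * objective_value)
```

The modulation is monotone in the objective value E and saturates at π/2, so the regularization parameter cannot grow without bound even for a poor initial guess.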
Reconstruction Based on the Schur Complement System
As has been pointed out previously, the iterative methods based on the Schur complement system can be more efficient to solve large-scale problems. Hence, we propose to reconstruct the tomographic image of FMT with level-by-level decomposition in the Schur complement system.
For convenience of discussion, equation (16) can be rewritten in the more compact form

k Δx = b,  (20)

where k = J^T J + λ and b = J^T Δy.
To solve the inverse problem of FMT in the Schur complement system, the solution space R^n is first decomposed into two subspaces U and V with dimensions m and n − m, respectively. Let [Γ Ψ] be an orthonormal basis of the solution space R^n. The basis of the m-dimensional coarse subspace U is formed by the columns of Γ ∈ R^{n×m}, and the columns of Ψ ∈ R^{n×(n−m)} form the basis of the (n − m)-dimensional subspace V.
Therefore, the solution to equation (20) can be expressed in the bases of the two subspaces as

Δx = Γu + Ψv,  (21)

where u and v are the projections of Δx onto the subspaces U and V, respectively.
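The decomposition in equation (21) can be verified numerically; here the orthonormal basis [Γ Ψ] is taken from a QR factorization of a random matrix, an arbitrary choice made purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 6, 2
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))   # orthonormal basis of R^n
Gamma, Psi = Q[:, :m], Q[:, m:]                    # coarse subspace U and complement V

dx = rng.standard_normal(n)
u = Gamma.T @ dx                                   # projection onto U
v = Psi.T @ dx                                     # projection onto V
dx_reassembled = Gamma @ u + Psi @ v               # equation (21): dx = Gamma*u + Psi*v
```

Because [Γ Ψ] is a full orthonormal basis, the reassembled vector equals the original exactly; the choice of the coarse subspace only controls how the work is split between the two Schur branches.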
Because both the condition number and the scale of the system can be reduced after Schur complement decomposition, we propose to further decompose both the projections u and v level by level with the Schur complement decomposition along two paths in a tree structure, and then solve the subsystems in the Schur complement systems. Our approach is different from that proposed in [10], where only the projection v is solved in the Schur complement system. The level-by-level Schur complement decomposition can be schematically illustrated as in Figure 1. We derive the iterative system in the following discussions.
Suppose that the subsystem at the i-th level is

S_(i,j) Δx_(i,j) = b_(i,j),  (22)

where S_(i,j) is the Schur complement matrix, with the subscript (i, j) denoting the j-th (j = 0, 1, …, 2^i) term at the i-th (i = 0, 1, …, L) level of the tree structure illustrated in Figure 1. In particular, S_(0,0) is the global matrix k defined in equation (20). To solve this system in the Schur complement system, equation (22) is further decomposed at the (i+1)-th level. Thus, the solution Δx_(i,j) is first expressed in the bases of the two subspaces as

Δx_(i,j) = Γ_(i,j) Δx_(i+1,2j−1) + Ψ_(i,j) Δx_(i+1,2j),  (23)

where Δx_(i+1,2j−1) and Δx_(i+1,2j) are the projections of Δx_(i,j) onto the subspaces formed by the columns of Γ_(i,j) and Ψ_(i,j), respectively.
Substituting equation (23) into equation (22) yields

S_(i,j) [Γ_(i,j) Δx_(i+1,2j−1) + Ψ_(i,j) Δx_(i+1,2j)] = b_(i,j).  (24)

Multiplying both sides of equation (24) from the left by [Γ_(i,j) Ψ_(i,j)]^T, we obtain

[Γ_(i,j) Ψ_(i,j)]^T S_(i,j) [Γ_(i,j) Ψ_(i,j)] [Δx_(i+1,2j−1); Δx_(i+1,2j)] = [Γ_(i,j) Ψ_(i,j)]^T b_(i,j).  (25)
Thus, equation (25) can be further rewritten as the two-by-two block system

[ S_(i,j)11  S_(i,j)12 ] [ Δx_(i+1,2j−1) ]   [ b_(i,j)1 ]
[ S_(i,j)21  S_(i,j)22 ] [ Δx_(i+1,2j)   ] = [ b_(i,j)2 ],  (26)

where S_(i,j)11 = Γ_(i,j)^T S_(i,j) Γ_(i,j), S_(i,j)12 = Γ_(i,j)^T S_(i,j) Ψ_(i,j), S_(i,j)21 = Ψ_(i,j)^T S_(i,j) Γ_(i,j), and S_(i,j)22 = Ψ_(i,j)^T S_(i,j) Ψ_(i,j), while the two components on the right-hand side (RHS) of equation (26) are b_(i,j)1 = Γ_(i,j)^T b_(i,j) and b_(i,j)2 = Ψ_(i,j)^T b_(i,j). From equation (26), it can be seen that S_(i,j)11 and S_(i,j)22 correspond to the equations for the unknowns Δx_(i+1,2j−1) and Δx_(i+1,2j), respectively, while S_(i,j)12 and S_(i,j)21 define the coupling between these two sets, which is eliminated in the following discussion.
Applying block Gaussian elimination to equation (26) leads to [22]

[ S_(i,j)11  S_(i,j)12   ] [ Δx_(i+1,2j−1) ]   [ b_(i,j)1   ]
[ 0          S_(i+1,2j)  ] [ Δx_(i+1,2j)   ] = [ b_(i+1,2j) ],  (27)

where

S_(i+1,2j) = S_(i,j)22 − S_(i,j)21 S_(i,j)11^{-1} S_(i,j)12,   b_(i+1,2j) = b_(i,j)2 − S_(i,j)21 S_(i,j)11^{-1} b_(i,j)1,  (28)

the former being called the Schur complement with respect to S_(i,j)11 [7]. From equation (27), we have

S_(i+1,2j) Δx_(i+1,2j) = b_(i+1,2j).  (29)
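The elimination can be checked on a small system: solving the reduced Schur system and back-substituting reproduces the direct solve. The matrix below is a random symmetric positive definite stand-in of our own, and the variable names mirror the block notation.

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 8, 3
A = rng.standard_normal((n, n))
S = A @ A.T + n * np.eye(n)                 # SPD stand-in for S_(i,j)
b = rng.standard_normal(n)
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
G, P = Q[:, :m], Q[:, m:]                   # stand-ins for Gamma_(i,j) and Psi_(i,j)

# Blocks of the two-by-two system (equation (26)):
S11, S12 = G.T @ S @ G, G.T @ S @ P
S21, S22 = P.T @ S @ G, P.T @ S @ P
b1, b2 = G.T @ b, P.T @ b

# Schur complement with respect to S11 and the reduced solve (equations (28)-(29)):
S_red = S22 - S21 @ np.linalg.solve(S11, S12)
b_red = b2 - S21 @ np.linalg.solve(S11, b1)
v = np.linalg.solve(S_red, b_red)           # Delta x_(i+1,2j)
u = np.linalg.solve(S11, b1 - S12 @ v)      # back-substitution for Delta x_(i+1,2j-1)
x = G @ u + P @ v                           # synthesis, equation (23)
```

The reduced system has dimension n − m instead of n, which is the source of the efficiency gain when the decomposition is applied recursively.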
It can be shown that the condition number of the matrix S_(i+1,2j) is smaller than that of the matrix S_(i,j) [9]. Hence, solving the inverse problem in the Schur complement system at the (i+1)-th level is more efficient than solving it at the i-th level. We solve equation (29) using the biconjugate gradient method [23], whose advantage is that it does not square the condition number of the original equations [24]. In many cases, the biconjugate gradient method solves large-scale systems with the fastest speed among the generalized conjugate gradient methods [25]. The algorithm for solving equation (29) can be summarized as follows
Algorithm 1
1. Input an initial guess Δx_(i+1,2j)^(0);
2. Initialize d_0 = f_0 = r_0 = p_0 ← b_(i+1,2j) − S_(i+1,2j) Δx_(i+1,2j)^(0);
3. For k = 0, 1, 2, … until convergence do
End for
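The iteration body of Algorithm 1 is not reproduced above, so the following is a standard unpreconditioned biconjugate gradient sketch rather than the paper's exact variant:

```python
import numpy as np

def bicg(A, b, x0, tol=1e-10, max_iter=200):
    """Standard biconjugate gradient iteration (illustrative only; the exact
    update rules of Algorithm 1 are not reproduced in the text)."""
    x = x0.astype(float).copy()
    r = b - A @ x                      # residual
    rt = r.copy()                      # shadow residual
    p, pt = r.copy(), rt.copy()        # search directions
    for _ in range(max_iter):
        if np.linalg.norm(r) < tol:
            break
        Ap = A @ p
        alpha = (rt @ r) / (pt @ Ap)
        x = x + alpha * p
        r_new = r - alpha * Ap
        rt_new = rt - alpha * (A.T @ pt)
        beta = (rt_new @ r_new) / (rt @ r)
        p = r_new + beta * p
        pt = rt_new + beta * pt
        r, rt = r_new, rt_new
    return x

A = np.array([[4.0, 1.0], [2.0, 3.0]])   # small nonsymmetric test system
b = np.array([1.0, 2.0])
x = bicg(A, b, np.zeros(2))
```

Unlike CGNR-type methods, BiCG works with A and A^T directly instead of A^T A, which is why it does not square the condition number.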
After deriving Δx_(i+1,2j) from equation (29) with Algorithm 1, the next task is to obtain the other component, Δx_(i+1,2j−1), for the synthesis of the solution Δx_(i,j). Here, Δx_(i+1,2j−1) is also solved in the Schur complement system because of its low condition number.
Eliminating the block S_(i,j)12 in equation (26) by block Gaussian elimination with S_(i,j)22 as the pivot block, we have

[ S_(i+1,2j−1)  0         ] [ Δx_(i+1,2j−1) ]   [ b_(i+1,2j−1) ]
[ S_(i,j)21     S_(i,j)22 ] [ Δx_(i+1,2j)   ] = [ b_(i,j)2     ],  (30)

where S_(i+1,2j−1) = S_(i,j)11 − S_(i,j)12 S_(i,j)22^{-1} S_(i,j)21 and b_(i+1,2j−1) = b_(i,j)1 − S_(i,j)12 S_(i,j)22^{-1} b_(i,j)2. Thus S_(i+1,2j−1) is the Schur complement with respect to S_(i,j)22.
From equation (30), we obtain

S_(i+1,2j−1) Δx_(i+1,2j−1) = b_(i+1,2j−1).  (31)

Thus, the solution Δx_(i+1,2j−1) can be obtained in the same manner as in Algorithm 1; the only difference is that Δx_(i+1,2j), S_(i+1,2j), and b_(i+1,2j) are replaced with Δx_(i+1,2j−1), S_(i+1,2j−1), and b_(i+1,2j−1), respectively. Solving equation (31) is computationally efficient because of the reduced condition number in the Schur complement system [7]. Moreover, this strategy of deriving both Δx_(i+1,2j−1) and Δx_(i+1,2j) in the Schur complement system can be implemented in parallel, since equations (29) and (31) are decoupled. Therefore, the subsystem at the i-th level in equation (22) can be decomposed into two linear subsystems at the (i+1)-th level, i.e., the Schur complement systems of equations (29) and (31). After Δx_(i+1,2j−1) and Δx_(i+1,2j) are obtained, they are substituted into equation (23) to yield the solution Δx_(i,j) at the i-th level. The whole reconstruction algorithm is summarized as follows
Algorithm 2
1. Set x_0 to an initial guess;
2. x ← x_0; calculate b and k at x in equation (20) with the adaptive regularization scheme of equation (19);
3. Decompose the global system of equation (20) level by level in the Schur complement system, in the same manner as equation (22) is decomposed into equations (29) and (31), to obtain the subsystems S_(i,j) Δx_(i,j) = b_(i,j) at the i-th level for i = 1, …, L and j = 1, …, 2^i; the subspaces at the i-th level are formed by the columns of Γ_(i,j) and Ψ_(i,j), respectively;
4. Set i = L;
   For j = 1, …, 2^i do
       Combining equations (26), (27), and (30), solve S_(i+1,2j) Δx_(i+1,2j) = b_(i+1,2j) and S_(i+1,2j−1) Δx_(i+1,2j−1) = b_(i+1,2j−1) with Algorithm 1, where Δx_(i+1,2j−1), S_(i+1,2j−1), and b_(i+1,2j−1) are used instead of Δx_(i+1,2j), S_(i+1,2j), and b_(i+1,2j) when Algorithm 1 is applied to the latter system;
   End for
5. While i ≥ 0
   {
       For j = 1, …, 2^i do
           Substitute the solutions Δx_(i+1,2j) and Δx_(i+1,2j−1) into equation (23) to obtain the solution Δx_(i,j) at the i-th level;
       End for
       i = i − 1;
   }
6. x_0 ← x_0 + Δx_(0,1);
7. If ‖Δx_(0,1)‖ > ε, go to step 2; else x ← x_0 and output x.
As mentioned before, the Schur complement system has a smaller condition number than the system from which it is constructed [7]. As a result, iterative methods based on Schur complement systems can be more efficient than methods based on the global matrix of equation (20), owing to the reduced scale and the smaller condition number. Therefore, the proposed algorithm can be expected to be more efficient than the conventional ones, as the results in the next section demonstrate.
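The condition-number claim can be checked numerically for a symmetric positive definite test matrix, for which eigenvalue interlacing guarantees that the Schur complement's condition number does not exceed the original's. The matrix and the subspace split below are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(2)
n, m = 10, 4
A = rng.standard_normal((n, n))
S = A @ A.T + np.eye(n)                     # SPD test matrix
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
G, P = Q[:, :m], Q[:, m:]                   # orthonormal subspace bases

# Form the Schur complement with respect to the coarse block:
S11, S12 = G.T @ S @ G, G.T @ S @ P
S21, S22 = P.T @ S @ G, P.T @ S @ P
S_schur = S22 - S21 @ np.linalg.solve(S11, S12)

cond_full = np.linalg.cond(S)
cond_schur = np.linalg.cond(S_schur)        # no larger than cond_full for SPD S
```

For SPD matrices the Schur complement's eigenvalues lie inside the spectrum of the full matrix, so iterative solvers such as (bi)conjugate gradients converge at least as fast on the reduced system, and on a smaller problem besides.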