 Review
 Open Access
Clutter suppression in ultrasound: performance evaluation and review of low-rank and sparse matrix decomposition methods
BioMedical Engineering OnLine volume 19, Article number: 37 (2020)
Abstract
Vessel diseases are often accompanied by abnormalities related to vascular shape and size. Therefore, a clear visualization of vasculature is of high clinical significance. Ultrasound color flow imaging (CFI) is one of the prominent techniques for flow visualization. However, clutter signals originating from slow-moving tissue are one of the main obstacles to obtaining a clear view of the vascular network. Enhancement of the vasculature by suppressing clutter is a significant and irreplaceable step for many applications of ultrasound CFI. Currently, this task is often performed by singular value decomposition (SVD) of the data matrix. This approach exhibits two well-known limitations. First, the performance of SVD is sensitive to the proper manual selection of the ranks corresponding to the clutter and blood subspaces. Second, SVD is prone to failure in the presence of large random noise in the dataset. A potential solution to these issues is the framework of decomposition into low-rank and sparse matrices (DLSM). SVD is one of the algorithms for solving the minimization problem under the DLSM framework. Many other algorithms under DLSM avoid full SVD and use approximated SVD or SVD-free ideas, which may offer better performance with higher robustness and less computing time. In practice, these models separate blood from clutter based on the assumption that steady clutter represents a low-rank structure and that the moving blood component is sparse. In this paper, we present a comprehensive review of ultrasound clutter suppression techniques and explore the feasibility of low-rank and sparse decomposition schemes for ultrasound clutter suppression. We conduct this review by adapting 106 DLSM algorithms and validating them against simulation, phantom, and in vivo rat datasets. Two conventional quality metrics, signal-to-noise ratio (SNR) and contrast-to-noise ratio (CNR), are used for performance evaluation.
In addition, computation times required by different algorithms for generating clutter suppressed images are reported. Our extensive analysis shows that the DLSM framework can be successfully applied to ultrasound clutter suppression.
Background
Angiology, which concerns vessel-related diseases, is one of the most important branches of medical science, since vascular diseases are very common and cause a large number of deaths every year [1]. Vascular diseases can primarily be divided into several categories based on the type of vessel. Arterial diseases include aneurysms, thrombosis, vasculitides, and vasospastic disorders. Venous diseases include venous thrombosis, chronic venous insufficiency, and varicose veins. There are also diseases associated with capillaries; one such example is the capillary hemangioma. Currently, the most accepted classification of vascular anomalies, adopted in 1996 by the International Society for the Study of Vascular Anomalies, divides them into tumors and malformations [2]. Moreover, many major clinical diseases have been shown to cause vascular growth abnormalities. For example, many cardiovascular diseases are related to aneurysms or other vascular variations [3, 4], and the growth of many tumors in cancer is highly dependent on angiogenesis [5, 6]. Similarly, angiogenesis is an important feature of diabetes-related diseases [7,8,9] and endometriosis [10]. Therefore, blood vessel imaging is indispensable in clinical practice and medical research [11], including but not limited to diagnosis, treatment planning, surgery, and follow-up of treatment results.
Several medical imaging modalities, such as duplex ultrasound (DUS), computed tomography (CT), magnetic resonance imaging (MRI), and digital subtraction angiography (DSA), have been employed thus far to ensure a proper visualization of blood vessels. Among the different vascular imaging modalities, ultrasound has become the primary choice, for it is safe, economical, easy to use, and, most importantly, real-time [11]. Duplex ultrasound is the combination of color flow imaging (CFI) and grayscale/brightness-mode (B-mode) imaging, where CFI is used to observe blood flow direction and velocities, and B-mode ultrasound is used to visualize the two-dimensional anatomy simultaneously. By simultaneously processing the frequency, phase, and amplitude of the backscattered ultrasound signal, CFI can rapidly identify the flow direction and velocities in the region of interest. Moreover, CFI can be used to mark flow abnormalities, including stenoses and occlusions [12]. A comparison between ultrasound and other vascular imaging methods is shown in Table 1.
Due to this excellent performance, ultrasound CFI has been increasingly used for the diagnosis of vessel-related diseases [14]. However, even as one of the most promising and widely applicable methods with low cost and no risk, CFI still has some obvious disadvantages. First, due to tissue scattering of the ultrasound beam, the intensity of the blood backscatter is several orders of magnitude less than that of the tissue backscatter, which makes it hard to image blood flow clearly [12, 15]. Second, more than three pulses are needed to estimate the velocity because of the stochastic behavior of blood signals and the impact of tissue clutter [12]. The requirement for multiple pulses limits frame rates and the number of scan lines. Third, CFI is limited by the insonation angle, which is the angle between the ultrasound beam and the flow direction [16, 17]. Generally, an accurate measurement requires Doppler angles ranging from 30° to 60°: smaller angles will result in lower measured speeds, and greater angles will produce a significant overestimation of the velocity [16, 17]. Last but not least, blood signals and clutter signals overlap significantly when the blood flow is very slow (such as in small blood vessels) or when the tissue movement is pronounced. This overlap is harmful to blood vessel visualization [18, 19]. Most of these disadvantages are caused by clutter; as a consequence, clutter suppression is particularly important in ultrasound blood flow imaging. Figure 1 shows the clutter in two CFI images and illustrates the importance of clutter filtering.
The main purpose of clutter filtering is to suppress clutter from grossly moving tissue and beam sidelobe leakage [19]. Efficient clutter suppression is a prerequisite for CFI to present accurate and clear blood flow maps. The most significant effect of clutter reduction is an increase in the signal-to-noise ratio (SNR) of the blood signal, which enables clearer blood flow maps and reduces erroneous moving-tissue signals. Meanwhile, purer blood flow signals also help reduce the number of pulses needed to estimate the velocity, thereby increasing the frame rate. In addition, the overlapping frequency spectra of slow blood flow and fast-moving tissue will no longer hinder microvascular flow detection or bias high-velocity flow estimates [19, 20].
However, the perfect removal of clutter signals remains impossible for now, since clutter signals are 40 to 100 dB stronger than blood signals and the two exhibit similar properties [15].
In the early development of CFI clutter filtering, tissue signals and blood signals were assumed to have completely different frequency characteristics. This assumption holds that tissue and blood signals exhibit non-overlapping frequency spectra, since the tissue is considered to be nearly stationary whereas red blood cells are rapidly moving [18]. Based on this assumption, finite impulse response (FIR) and infinite impulse response (IIR) high-pass filters were used to filter clutter signals and enhance the sensitivity to blood flow [15, 18]. Nowadays, it is recognized that FIR and IIR filters have distinct drawbacks: FIR filters require a high order to separate blood from clutter, whereas IIR filters take a long time to settle [18, 21]. Furthermore, both types of high-pass filters suffer from the insufficient number of slow-time samples, which leads to inefficient suppression of clutter [19, 22]. Another clutter removal approach introduces linear regression filters [23,24,25]. The regression filter eliminates clutter signals by least-squares fitting of the signals to a signal model [18]. Studies suggest that polynomial regression filters and IIR filters perform better than FIR filters. In the case of contrast-enhanced ultrasound vascular imaging, the pulse inversion technique has been introduced for clutter rejection [26,27,28]. In this approach, the linearity of the tissue echo is exploited to distinguish tissue from blood [26, 29, 30]. Although these methods significantly improve the SNR of blood signals, they are not considered in this paper because of their invasiveness.
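To make the high-pass idea concrete, the sketch below applies the simplest possible FIR clutter filter along the slow-time axis, a first-order difference (a didactic NumPy illustration of the filter class discussed above, not a clinically tuned design; the signal amplitudes are invented):

```python
import numpy as np

def fir_clutter_filter(ensemble, taps=(1.0, -1.0)):
    """Apply an FIR high-pass filter along the slow-time axis.

    The default taps implement the simplest clutter canceller, a
    first-order difference: stationary (DC) clutter is removed exactly,
    while moving scatterers yield a non-zero output.
    ensemble: (n_gates, n_pulses) slow-time ensemble, one row per range gate.
    """
    taps = np.asarray(taps)
    # Valid-mode convolution along slow time; the output is shorter by
    # len(taps) - 1 pulses, reflecting the filter's settling loss.
    return np.apply_along_axis(lambda x: np.convolve(x, taps, mode="valid"),
                               axis=1, arr=ensemble)

# Static "tissue" clutter (constant along slow time) plus a weak
# oscillating "blood" signal with a non-zero Doppler frequency.
n_gates, n_pulses = 8, 16
clutter = np.full((n_gates, n_pulses), 50.0)
blood = 0.5 * np.sin(2 * np.pi * 0.25 * np.arange(n_pulses))
filtered = fir_clutter_filter(clutter + blood)
```

Note that the strong constant clutter is cancelled exactly, while the weak oscillating component survives; this is precisely the behavior the frequency-separation assumption relies on, and it breaks down once the tissue itself moves.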
The aforementioned traditional clutter suppression algorithms, such as FIR and IIR filters, have at least one of the following issues: (1) long settling time, (2) inability to adaptively suppress the clutter based on the properties of the data, (3) inadequate temporal samples or resolution. In addition, two main factors make innovation in ultrasound clutter filtering imperative. First, new ultrasound technologies like plane-wave ultrasound have brought higher frame rates and imaging speeds. Traditional filters cannot meet the resulting higher clutter filtering performance requirements, even though the high frame rate spares them from settling-time problems. Second, the underlying assumption of traditional filters does not hold in the presence of significant tissue motion stemming from the sonographer's sinusoidal hand movement or the patient's breathing and heartbeat [31, 32]. In such a scenario, the frequency bands corresponding to tissue and blood overlap with each other without a definite boundary between them. Hence, high-pass filters fail to separate blood from tissue when the clutter signal dominates with a non-zero Doppler frequency caused by substantial tissue movement.
To resolve these issues, eigen-based filters [33,34,35] have been proposed, which take both spatial and temporal samples into consideration to develop an adaptive clutter suppression scheme. The techniques underlying these eigen-based filters have been widely applied in computer science, mainly for processing high-dimensional data, and they do not rely on the incomplete traditional assumptions. Matrix decomposition is the principal idea behind these algorithms, and it is assumed that clutter and blood signals lie in different subspaces. Therefore, eigen-based filters are considered adaptive to gross motions induced by the sonographer or the subject being examined. Based on these different assumptions, research shows that eigen-based filters perform better than traditional methods [20, 22].
Most of the eigen-based filters for ultrasound clutter suppression are based on singular value decomposition (SVD) or eigenvalue decomposition and improve upon them [36,37,38,39]. To perform the subspace separation task, slow-time temporal ultrasound frames are stacked as columns of a data matrix, known as the Casorati matrix [40]. The SVD of this Casorati matrix provides the opportunity to distinguish blood from clutter. It has been reported in the literature that the most dominant singular values and vectors correspond to clutter, the next few represent blood, and the least significant ones correspond to noise [41]. In these eigen-based approaches, the eigenvalues or singular values representing clutter and noise are set to zero to recover the blood component of the echo signal [41, 42].
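As a concrete sketch of this pipeline (a minimal NumPy illustration on synthetic data, not the exact processing used in the studies cited above; the rank cut-offs `clutter_rank` and `noise_rank` are free parameters that would normally require careful tuning):

```python
import numpy as np

def svd_clutter_filter(frames, clutter_rank, noise_rank):
    """Suppress clutter by zeroing dominant singular components.

    frames: (nz, nx, nt) stack of beamformed frames along slow time.
    clutter_rank: number of leading singular values attributed to clutter.
    noise_rank: index from which singular values are attributed to noise.
    """
    nz, nx, nt = frames.shape
    M = frames.reshape(nz * nx, nt)   # Casorati matrix: one frame per column
    U, s, Vh = np.linalg.svd(M, full_matrices=False)
    s_blood = s.copy()
    s_blood[:clutter_rank] = 0.0      # discard the clutter subspace
    s_blood[noise_rank:] = 0.0        # discard the noise subspace
    return ((U * s_blood) @ Vh).reshape(nz, nx, nt)

# Synthetic data: a strong static "tissue" pattern plus weak incoherent "blood".
rng = np.random.default_rng(0)
tissue = 100.0 * rng.standard_normal((32, 32, 1))   # constant along slow time
blood = 0.1 * rng.standard_normal((32, 32, 40))
filtered = svd_clutter_filter(tissue + blood, clutter_rank=1, noise_rank=40)
```

Because the static tissue contributes a single rank-one component to the Casorati matrix, zeroing the first singular value removes it almost entirely while leaving the weak incoherent component largely intact.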
Many SVD-based techniques have been proposed that work with conventional line-by-line scanning [20, 43,44,45]. These methods suffer from the lack of an adequate number of temporal samples due to the low frame rate associated with focused ultrasound imaging [21]. Recent clutter suppression algorithms [18, 42, 46,47,48,49] have resolved this issue by incorporating ultrafast plane-wave imaging. However, the blood signal in plane-wave ultrasound is even weaker than in conventional ultrasound due to the unfocused wave [50, 51]. For the same reason, the sidelobes in plane-wave imaging are also much higher than in conventional imaging. Therefore, plane-wave ultrasound has a higher and more urgent filtering requirement than traditional CFI. Recent methods have extended SVD-based clutter suppression to higher orders by analyzing a data tensor instead of a two-dimensional matrix [42, 48, 52]. Since the first few singular values do not necessarily correspond to the clutter signal in the presence of large temporal misalignment among the frames, a motion correction step has been introduced into SVD-based clutter rejection [53]. Since SVD was first combined with plane-wave imaging in 2015, almost all clutter suppression research has been based on plane-wave ultrasound, as SVD can reach its full potential on large datasets [18].
Although SVD-based techniques are promising for suppressing clutter optimally, they have two major drawbacks. First, there is still no uniform and efficient standard for selecting the ranks that define the boundaries between the tissue and blood flow subspaces [42]. Proper subspace rank selection, which is done by extensive manual intervention, is crucial for the optimality of clutter rejection. Recent methods suggest different criteria for selecting the optimal ranks [54]. In addition, [21] proposes K-means clustering of the decomposed components as a criterion for selecting the singular values and vectors corresponding to clutter and blood. Though different ideas have been proposed for automatic rank selection [55], there is still no efficient and standard method. An example that briefly explains the problem of SVD threshold selection is shown in Fig. 2. The selected rank affects the blood signal: a large threshold range cannot effectively filter clutter and noise, and a small range loses part of the blood signal. The second drawback is that SVD is sensitive to noise: it fails to obtain the optimal result while processing data with large random noise [56].
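To illustrate what an automatic rank-selection rule can look like, the sketch below uses a simple cumulative-energy criterion (only one hypothetical stand-in for the threshold-selection strategies surveyed above; the 0.95 figure is illustrative, not a recommended clinical setting):

```python
import numpy as np

def clutter_rank_by_energy(singular_values, energy_fraction=0.95):
    """Return the smallest rank whose leading singular values capture a
    given fraction of the total energy of the singular-value spectrum."""
    s = np.asarray(singular_values, dtype=float)
    energy = np.cumsum(s**2) / np.sum(s**2)
    # First index at which the cumulative energy reaches the target.
    return int(np.searchsorted(energy, energy_fraction) + 1)

# Two dominant "clutter" singular values followed by a weak tail.
s = np.array([100.0, 80.0, 2.0, 1.5, 1.0, 0.5])
rank = clutter_rank_by_energy(s)
```

For this spectrum the rule returns 2, i.e., it attributes the two dominant components to clutter; in practice the energy threshold itself still has to be chosen, which is exactly the manual-intervention problem described above.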
The aforementioned issues can potentially be resolved by considering the framework called decomposition into low-rank and sparse matrices (DLSM) [21]. SVD is one of the calculation methods within the DLSM framework, and there are also approximate-SVD and SVD-free algorithms. DLSM is a well-established framework in the field of computer vision due to its robustness to large noise and information corruption [56]. The underlying assumption of this approach is that steady tissue is a low-rank component and moving blood exhibits sparsity [50]. It has been noted that both temporal and spatial information can be used to separate tissue and blood signals, since tissue signals have a higher temporal–spatial coherence than blood signals (e.g., the blood scatterers are unique and constantly changing). A convex optimization problem is usually solved to decompose the data matrix into low-rank clutter and sparse blood components. A recent technique has used this model for the concurrent removal of clutter and noise [57]. Furthermore, recent work has incorporated deep learning with low-rank and sparse decomposition for improved clutter suppression performance [21].
The main purpose of this work is to demonstrate the feasibility of 106 established low-rank and sparse decomposition algorithms for ultrasound clutter suppression and to provide suggestions for the most suitable DLSM models, optimization methods, and algorithms for this task. The paper is organized as follows. The “Decomposition into low-rank and sparse matrices (DLSM) framework” section describes the DLSM framework, including decomposition types, loss functions, and relationships with tensor decomposition. The “Experiments” section includes detailed experimental settings and results on simulation, phantom, and in vivo rat datasets. Finally, the paper presents our findings in the “Discussion” and “Conclusion” sections, including the DLSM algorithms appropriate for clutter suppression and the shortcomings of the remaining algorithms. Comments and possible solutions are also proposed in response to the different shortcomings.
Decomposition into low-rank and sparse matrices (DLSM) framework
Low-rank and sparse structures are attractive since they usually represent the part of large, high-dimensional data in which we are most interested. Noise and data corruption can be mitigated by decomposing matrices into low-rank and sparse components. Methods like sparse representation and low-rank modeling have achieved great success in computer vision, natural language processing, system identification, bioinformatics, etc. [58,59,60]. So far, many different models, optimization methods, and algorithms have been proposed to solve low-rank and sparse matrix recovery problems. Meanwhile, many classifications have been proposed [58, 61,62,63] according to linearity, convexity, the number of subspaces, or the number of additional matrices.
Decomposition into low-rank and sparse matrices (DLSM) is one of the relatively detailed and comprehensive frameworks [61], classifying the various matrix decomposition models according to the number of constrained component matrices. The DLSM framework is well suited to signal processing, system identification, computer vision, machine learning, etc. This decomposition idea has become more popular and widely used in recent years, especially after robust principal component pursuit (RPCP) was proposed in the papers of Candès et al. [56] and Chandrasekaran et al. in 2009 [64]. Initially, these algorithms were designed to deal with high-dimensional data, often regarded as an extremely high-dimensional data matrix. Since many dimensions are usually independent, it is possible to recover the matrix from corruption or noise. These ideas are based on the assumption that the uncorrupted information matrix is highly correlated within the observation time window and therefore lies in a low-rank subspace, while moving foreground objects, noise, or other special signals constitute sparse outliers.
Based on similar assumptions, several algorithms under the DLSM framework have been validated as successfully applicable to ultrasound clutter suppression [18, 22, 36, 38, 55, 65]. In medical ultrasound, tissue and blood flow also lie in different subspaces. In terms of temporal information, tissue signals and blood signals have different spectral features due to their different movement patterns. As for spatial features, the blood signal has much lower spatial coherence than the tissue signal, because the irregular movement and arrangement of red blood cells produce constantly changing scatterers, whereas the tissue moves in an overall patterned way. Therefore, tissue and blood exhibit low-rank and sparse characteristics, respectively, and lie in different subspaces. Due to the robust and efficient performance of the DLSM framework in separating low-rank and sparse components, it shows great potential in the field of ultrasound clutter suppression.
Overall, the DLSM framework is divided into decomposition problems, minimization problems, loss functions, and solvers (algorithms used to solve the optimization problems) [63], as Fig. 3 shows. The permutations and combinations of models, optimization methods, and solvers lead to various algorithms, which is the origin of the variety within the DLSM framework. The DLSM framework and its application to ultrasound clutter suppression are briefly illustrated in the following subsections.
Preprocessing and notations
Preprocessing of ultrasound data is necessary to integrate it into an input matrix or tensor of a specific shape when applying DLSM algorithms. In general, the input of a DLSM algorithm consists of a sequence of n consecutive ultrasound frames (\(F_1\dots F_n\)) with an original size of \(F\in \mathfrak {R}^{i_1\times i_2}\). For a two-dimensional DLSM algorithm, the input M (\(M\in \mathfrak {R}^{m\times n}, m=i_1\times i_2\)) is in most cases a matrix consisting of n vectorized ultrasound frames (\(F\in \mathfrak {R}^{m\times 1}\)) arranged in order. In terms of high-order DLSM algorithms, the input is typically an N-order tensor T (\(T\in \mathfrak {R}^{t_1\times t_2\dots t_n}\)). T is generally third order and concatenated from original-size ultrasound frames, where \(T=[F_1^{i_1\times i_2},\dots F_n^{i_1\times i_2}], T\in \mathfrak {R}^{i_1\times i_2\times n}\). Next, the input M (or T) is decomposed into several components through the DLSM algorithm as follows:
\(M \; (\text{or } T) \approx \sum _{x=1}^{X} K_x,\)
where \(1\le X \le 3\) and \(K_1,K_2,K_3\) typically represent the low-rank component L, the sparse component S, and the noise component E, respectively. The specific components \(K_x\) and the value of X depend on the purpose (whether the sparse or the low-rank component is of interest) and on the decomposition formulation.
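The preprocessing above can be sketched directly (a minimal NumPy illustration; the frame sizes are chosen arbitrarily):

```python
import numpy as np

# n consecutive frames F_1..F_n, each of size i1 x i2.
i1, i2, n = 64, 48, 20
rng = np.random.default_rng(6)
frames = [rng.standard_normal((i1, i2)) for _ in range(n)]

# 2-D DLSM input: matrix M of shape (m, n) with m = i1 * i2,
# whose j-th column is the vectorized frame F_j (the Casorati matrix).
M = np.stack([f.reshape(-1) for f in frames], axis=1)

# Higher-order DLSM input: third-order tensor T of shape (i1, i2, n),
# obtained by concatenating the frames at their original size.
T = np.stack(frames, axis=2)
```

Each column of M and each frontal slice of T contain exactly the same samples; the two layouts only differ in whether the spatial structure of each frame is flattened or preserved.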
Decomposition formulations
Implicit decomposition
Implicit decomposition \((X=1)\): Under the condition that X is equal to 1, the matrix M is approximately equal to a target low-rank matrix L under a constraint condition, because in most cases the information of interest lies mainly in the low-rank component. The sparse matrix S can then be obtained as the difference between M and L (i.e., \(S=M-L\)). However, this is reversed in the application of ultrasound clutter suppression, because the blood signal is the relatively sparse component. The formulation of this problem is as follows:
\(\min _{L}\ f(L) \quad \text {s.t.} \quad M\approx L,\)
where \(M\approx L\) and f(.) is the loss function used in the minimization term, which depends on the specific solver or algorithm. Models like principal component analysis (PCA), non-negative matrix factorization (NMF), and matrix completion (MC) fall into this category.
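For the Frobenius loss with a rank constraint, the low-rank target of implicit decomposition has a closed-form solution via truncated SVD (the Eckart–Young theorem); the sketch below also recovers the residual \(S=M-L\) as described above:

```python
import numpy as np

def implicit_decomposition(M, r):
    """Best rank-r approximation L of M under the Frobenius loss
    (Eckart-Young), with the residual taken as S = M - L."""
    U, s, Vh = np.linalg.svd(M, full_matrices=False)
    L = (U[:, :r] * s[:r]) @ Vh[:r, :]   # keep the r dominant components
    S = M - L
    return L, S

# M is exactly rank 8 here, so the rank-8 approximation leaves no residual.
rng = np.random.default_rng(1)
M = rng.standard_normal((50, 8)) @ rng.standard_normal((8, 40))
L, S = implicit_decomposition(M, r=8)
```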
For applications targeting sparse components, implicit decomposition sets the target matrix \(K_1\) to a sparse matrix S. The low-rank matrix L is then the difference between M and S, calculated as \(L=M-S\). Sparse dictionary learning [66], sparse linear approximation, and compressive sensing [66,67,68], among others, are built on the same idea:
\(\min _{S}\ f(S) \quad \text {s.t.} \quad M\approx S,\)
where \(M\approx S\), and the difference contains noise and other information. In this case, implicit decomposition can be used for compressed sensing and signal recovery, similar to unsupervised clustering [69] and image recognition [70].
Before more robust explicit decomposition methods were proposed, the main developments in ultrasound clutter suppression were based on PCA, SVD, or eigenvalue decomposition, which belong to implicit decomposition [19, 22, 35, 38, 39, 45]. Although many experiments have proved that these eigen-based filters greatly improve performance over traditional IIR and regression filters, many authors have realized that filtering methods based on implicit decomposition are not robust to accelerated tissue movement and different kinds of noise [22, 35, 39]. Moreover, their expensive computational complexity is not suitable for real-time processing.
Explicit decomposition
Explicit decomposition \((X=2)\): Under this condition, M is usually decomposed into a low-rank matrix \(K_1=L\) and a sparse matrix \(K_2=S\) (\(M\approx L+S\)). This is called explicit decomposition because there are two constraints: a sparsity constraint on S and a low-rank constraint on L. Therefore, explicit decomposition is more robust than implicit decomposition. The formulation of explicit decomposition is as follows:
\(\min _{L,S}\ f(L,S) \quad \text {s.t.} \quad M\approx L+S,\)
where \(M\approx L+S\) and f(.) represents the loss function. Explicit decomposition includes robust principal component analysis (RPCA), robust non-negative matrix factorization (RNMF), robust matrix completion (RMC), and robust subspace tracking (RST) [59, 71].
These methods generally work better and are more robust than implicit decomposition because of the additional constraints [71]. Accordingly, RPCA has been used as a powerful tool in MRI, CT, and ultrasound imaging [72,73,74], and many optimization algorithms based on RPCA and RMC have been proposed for clutter suppression in ultrasound imaging [21, 55, 75].
Stable decomposition
Stable decomposition \((X=3)\): Because noise and corruption caused by special circumstances are always present in the real world, an additional matrix \(K_3\) is added to represent unexpected components. \(K_3\) can represent distortion, shadows, or noise, depending on the situation (\(M\approx L+S+N\)). This formulation is more stable than explicit decomposition, since more detailed information is separated and taken into account. Stable decomposition can handle more complex real-life situations, such as dynamic videos and maritime surveillance videos corrupted by complicated noise.
Stable decomposition methods include stable principal component analysis (stable PCA), stable non-negative matrix factorization (stable NMF), and three-term decomposition (TTD), which can deal with more complex situations. In ultrasound imaging, it is usually assumed that the signal M contains clutter signals L (low-rank), blood signals S (sparse), and noise N. Since ultrasound signals contain complex noise and dynamic clutter, the assumption \(M=L+S+N\) is more appropriate when there are demanding requirements such as microvascular imaging. Although some literature mentions a stable decomposition of blood (with L, S, and E representing blood flow signals, clutter signals, and noise, respectively), it does not state whether constraints are imposed on the noise component. Therefore, the stable decomposition formulation remains a promising research direction for ultrasound clutter suppression.
Models under DLSM framework
To date, many models, also called problem formulations, have been proposed. According to their mathematical formulations and features, methods are usually classified into families such as robust principal component analysis (RPCA), non-negative matrix factorization (NMF), matrix completion (MC), and subspace tracking (ST). Different models have different functions and aims. However, it has been proved that the solutions of many robust models can be mutually expressed in closed form [76]. For instance, RPCA via principal component pursuit [56] can be considered an MC model using an \(l_1\)-norm loss function [63]. In addition, these models can be flexibly formulated in any of the decomposition forms. For example, adding constraints on the noise component of RPCA changes it from an explicit to a stable decomposition.
Robust principal component analysis
Principal component analysis (PCA) generates a set of linearly uncorrelated variables, called principal components, from a set of observations by an orthogonal transformation. Similar mathematical tools include SVD and eigenvalue decomposition. RPCA is an extension of PCA (an expansion from implicit to explicit decomposition) that aims to recover the low-rank component and reduce the impact of grossly corrupted data. RPCA can be approached by principal component pursuit (PCP) [56, 64], Bayesian RPCA [77,78,79], and so on. The RPCA problem is generally expressed as follows:
\(M = L + S,\)
where L is the low-rank matrix and S is the sparse matrix. Given the nature of L and S, the most intuitive way to solve the RPCA problem is to minimize the rank of L and the \(l_0\)-norm of S:
\(\min _{L,S}\ \mathrm{rank}(L) + \lambda \Vert S\Vert _{l_0} \quad \text {s.t.} \quad M = L + S,\)
where \(\lambda\) is a balancing parameter. However, this formulation is NP-hard; therefore, convex relaxations such as PCP are needed.
The convex optimization method principal component pursuit (PCP) was first proposed by Candès et al. [56, 63, 80] to address the RPCA problem and has become one of the most prominent methods for face recognition and background modeling in recent years. PCP convexly relaxes the RPCA problem using the following formula:
\(\min _{L,S}\ \Vert L\Vert _{*} + \lambda \Vert S\Vert _{l_1} \quad \text {s.t.} \quad M = L + S,\)
where \(\Vert .\Vert _{*}\) and \(\Vert .\Vert _{l_1}\) are the nuclear norm and the \(l_1\)-norm, respectively. Although this method excels in computer vision, there are still some limitations for sparse component recovery. First, it requires computationally expensive algorithms. Second, it is a batch method, which is not suitable for real-time applications, especially for plane-wave ultrasound with high frame rates. Third, it has very strict requirements on the low-rank and sparse properties; complex blood flow or noise may make it difficult for ultrasound data to meet these requirements. To accelerate the algorithms and achieve higher precision, different solvers have been developed [81,82,83]. Solvers for real-time implementations have also been proposed [84, 85].
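One standard way to solve the PCP problem is an augmented-Lagrangian (ADMM) iteration that alternates singular-value shrinkage on L with elementwise shrinkage on S. The sketch below is a textbook implementation on synthetic data, not the solver evaluated in this paper; the step-size heuristic is a common default from the RPCA literature:

```python
import numpy as np

def soft_threshold(X, tau):
    """Elementwise shrinkage: the proximal operator of the l1-norm."""
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def svd_threshold(X, tau):
    """Singular-value shrinkage: the proximal operator of the nuclear norm."""
    U, s, Vh = np.linalg.svd(X, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vh

def rpca_pcp(M, lam=None, n_iter=500, tol=1e-7):
    """Solve min ||L||_* + lam*||S||_1  s.t.  M = L + S by ADMM."""
    m, n = M.shape
    if lam is None:
        lam = 1.0 / np.sqrt(max(m, n))        # standard PCP weight
    mu = m * n / (4.0 * np.abs(M).sum())      # common step-size heuristic
    L, S, Y = (np.zeros_like(M) for _ in range(3))  # Y: dual variable
    norm_M = np.linalg.norm(M)
    for _ in range(n_iter):
        L = svd_threshold(M - S + Y / mu, 1.0 / mu)
        S = soft_threshold(M - L + Y / mu, lam / mu)
        residual = M - L - S
        Y += mu * residual                    # dual ascent step
        if np.linalg.norm(residual) < tol * norm_M:
            break
    return L, S

# Synthetic test: rank-2 "clutter" plus 5% sparse "blood" outliers.
rng = np.random.default_rng(2)
L0 = rng.standard_normal((60, 2)) @ rng.standard_normal((2, 60))
S0 = np.where(rng.random((60, 60)) < 0.05, 5.0, 0.0)
L_hat, S_hat = rpca_pcp(L0 + S0)
```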
Stable principal component pursuit (SPCP) is a stable extension of PCP that mainly aims at reducing the impact of noise. SPCP adds a noise term E to PCP and constrains it by the Frobenius norm.
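In symbols, the usual SPCP formulation reads as follows (the standard form from the RPCA literature, with \(\delta\) a noise-level parameter not defined in the text above):

```latex
\min_{L,\,S}\ \Vert L\Vert_{*} + \lambda \Vert S\Vert_{l_1}
\quad \text{subject to} \quad \Vert M - L - S\Vert_{F} \le \delta
```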
Matrix completion
The main purpose of matrix completion (MC) is to recover a low-rank observation matrix from its incomplete (missing) entries; the Netflix movie-rating matrix recovery problem is one of the most classic examples. The classic low-rank matrix completion problem can be seen as finding the lowest-rank matrix L that matches the matrix M on all measured entries in the set \(\Omega\). The mathematical formulation of the MC problem is as follows:
\(\min _{L}\ \mathrm{rank}(L) \quad \text {s.t.} \quad L_{ij} = M_{ij}, \; (i,j)\in \Omega ,\)
Because the implicit decomposition of MC is not robust to noise that affects even a small portion of the data [86, 87], MC is generally extended to an explicit decomposition by adding restrictions, which is called robust matrix completion (RMC). The common RMC gains stronger robustness than MC by adding a sparsity constraint, and its formulation after convex relaxation is as follows:
\(\min _{L,S}\ \Vert L\Vert _{*} + \lambda \Vert S\Vert _{l_1} \quad \text {s.t.} \quad P_\Omega (M) = P_\Omega (L+S),\)
where \(P_\Omega (M)\) is the projection of the complete dataset onto the measured entries \(\Omega\). Although the form of the decomposition is the same as in PCP, the unique constraints of RMC make it supervised whereas PCP is unsupervised [63], which is consistent with the purpose of RMC.
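As an illustration of the completion idea (plain MC rather than its robust variant), the "hard-impute" style iteration below fills the missing entries of a low-rank matrix by alternating a rank-r truncated SVD with re-imposing the measured entries. It is a didactic sketch under idealized assumptions (exact rank known, no noise), not a competitive solver:

```python
import numpy as np

def complete_lowrank(M_obs, mask, r, n_iter=1000):
    """Fill missing entries of a rank-r matrix by alternating between a
    rank-r truncated SVD and re-imposing the measured entries."""
    X = np.where(mask, M_obs, 0.0)            # unknown entries start at zero
    for _ in range(n_iter):
        U, s, Vh = np.linalg.svd(X, full_matrices=False)
        L = (U[:, :r] * s[:r]) @ Vh[:r, :]    # projection onto rank-r matrices
        X = np.where(mask, M_obs, L)          # P_Omega(M) is kept fixed
    return L

rng = np.random.default_rng(3)
M_true = rng.standard_normal((40, 5)) @ rng.standard_normal((5, 30))
mask = rng.random((40, 30)) < 0.7             # about 70% of entries measured
L_hat = complete_lowrank(M_true, mask, r=5)
```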
Non-negative matrix factorization
Non-negative matrix factorization (NMF) is another widely used matrix factorization and dimension reduction model under the DLSM framework. The unique feature of NMF is that the low-rank factor matrices are subject to non-negativity constraints, consistent with physically natural features in many fields [88, 89]. The NMF problem is generally expressed as follows:
\(M \approx W H^{T},\)
where \(W\in \mathfrak {R}^{m\times k}\) and \(H\in \mathfrak {R}^{n\times k}\) are two non-negative matrices, and \(k<\min \{m,n\}\) because the goal is dimension reduction. The most common formulation of the NMF optimization problem is as follows:
\(\min _{W\ge 0,\, H\ge 0}\ \Vert M - W H^{T}\Vert _F^2,\)
where \(\Vert .\Vert _F^2\) is the squared Frobenius norm. This is a non-convex problem, and finding its global minimum is NP-hard [88, 90]. Consequently, optimization algorithms and solvers have been developed to find local minima.
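A classic local solver for this objective is the Lee–Seung multiplicative-update scheme, sketched below on an exactly factorable synthetic matrix (a minimal illustration; practical solvers add regularization and convergence checks):

```python
import numpy as np

def nmf(M, k, n_iter=500, eps=1e-9):
    """Lee-Seung multiplicative updates for min ||M - W H^T||_F^2
    with W, H >= 0."""
    rng = np.random.default_rng(0)
    m, n = M.shape
    W = rng.random((m, k)) + eps              # non-negative initialization
    H = rng.random((n, k)) + eps
    for _ in range(n_iter):
        # Multiplicative updates keep W and H non-negative by construction.
        H *= (M.T @ W) / (H @ (W.T @ W) + eps)
        W *= (M @ H) / (W @ (H.T @ H) + eps)
    return W, H

# Synthetic test: an exactly factorable non-negative rank-4 matrix.
rng = np.random.default_rng(4)
W0, H0 = rng.random((30, 4)), rng.random((20, 4))
M = W0 @ H0.T
W, H = nmf(M, k=4)
```

Because the updates only rescale entries multiplicatively, non-negativity is preserved automatically, which is why this scheme needs no explicit projection step.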
Subspace tracking
Subspace tracking (ST) can be regarded as dynamic RPCA designed to handle incoming new data or dynamic subspaces. The data at each time instant t are processed incrementally and then discarded. This idea addresses the situation in which new observations arrive asynchronously in online streaming environments, making subspace tracking more efficient and less computationally expensive on extremely long data sequences [91]. Since ST can recover subspaces from incomplete frame vectors, it has the potential to further improve efficiency by downsampling the input frames [63]. The general formulation of the ST problem is as follows:
\(m_t \approx l_t + s_t + e_t,\)
where \(m_t\) is the input frame data at time t, and \(l_t\), \(s_t\), and \(e_t\) are the low-rank, sparse, and noise components of \(m_t\), respectively. The number of components is determined by the decomposition form, and the constraints and approximations on each component are determined by the chosen optimization method.
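The per-frame step of such a scheme can be sketched as follows (a heavily simplified illustration that only performs the projection step; real trackers such as GROUSE or GRASTA would also update the subspace basis from each incoming frame):

```python
import numpy as np

def track_frame(m_t, U):
    """One step of a simplified subspace-tracking scheme: project the
    incoming frame onto the current orthonormal basis U of the low-rank
    subspace to get l_t, and take the residual as s_t (sparse + noise)."""
    l_t = U @ (U.T @ m_t)                     # projection onto span(U)
    s_t = m_t - l_t                           # residual component
    return l_t, s_t

rng = np.random.default_rng(5)
U, _ = np.linalg.qr(rng.standard_normal((100, 3)))   # 3-D "clutter" subspace
m_t = U @ rng.standard_normal(3) + 0.01 * rng.standard_normal(100)
l_t, s_t = track_frame(m_t, U)
```

Because each frame is processed and then discarded, memory use is constant in the sequence length, which is the efficiency advantage on long data streams noted above.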
Low-rank representation
Low-rank representation (LRR) is also called low-rank optimization or low-rank minimization, and other unclassified models can be regarded as LRR. Mathematically, LRR is a minimization problem in which the cost function measures the fit between the input matrix M and the approximation matrix \(\hat{M}\) [63]. The mathematical formulation of the LRR problem is as follows:
\(\min _{\hat{M}}\ \Vert M - \hat{M}\Vert _F \quad \text {s.t.} \quad \mathrm{rank}(\hat{M}) \le r,\)
where M is the input matrix, \(\hat{M}\) the approximation matrix, \(\Vert .\Vert _F\) the Frobenius norm, and r the rank. The basic form of LRR is similar to the other models; therefore, most of the other unclassified models can be regarded as a category of LRR. For instance, RPCA and NMF can be obtained from similar architectures. Constraints other than rank constraints can be added for specific applications, and LRR can be extended into an explicit or stable form by adding constraints on the sparse and noise components.
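For the rank-constrained Frobenius fit above, the Eckart–Young theorem gives a closed-form solution: truncate the SVD of M to its r largest singular values. A minimal NumPy sketch:

```python
import numpy as np

def low_rank_approx(M, r):
    """Best rank-r approximation of M in the Frobenius norm
    (Eckart-Young): keep only the r largest singular values."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return (U[:, :r] * s[:r]) @ Vt[:r, :]
```

The residual error is exactly the energy of the discarded singular values, \(\sqrt{\sum _{i>r}\sigma _i^2}\).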
The extension to tensor
In the DLSM framework, only single-dimensional information is used when images are vectorized into the data matrix M. This means that some multidimensional structure is not taken into account during decomposition. To improve the results, tensor decomposition has been proposed.
Tensor DLSM
When it comes to tensors, the most intuitive idea is to replace all matrices with tensors directly, since a tensor can be seen as a combination of several matrices. The resulting model is very similar to the DLSM framework, subject to \(T=L+S+E\). The tensor DLSM extends all components to tensor form as in Fig. 4,
where T, L, S and E represent the data tensor, low-rank tensor, sparse tensor and noise tensor, respectively. As in the matrix DLSM framework, the model can be optimized and solved as a minimization problem, and several classic matrix decomposition optimization methods have been extended to tensors. The tensor robust principal component method [92] has been proposed based on the tensor singular value decomposition (t-SVD) [93], and its effectiveness in image denoising has been demonstrated. Another robust low-rank tensor recovery model based on RPCA has also been published for complex multilinear data analysis [94]. Rank sparsity tensor decomposition (RSTD) [95] and other ideas based on stable principal component pursuit (PCP) have likewise been utilized in image processing.
Tensor decomposition
There are two classical tensor decomposition forms: CANDECOMP/PARAFAC (CP) decomposition and Tucker decomposition [96]. Given a tensor \(T\in \mathfrak {R}^{t_1\times t_2\times \cdots \times t_n}\), the CP decomposition and Tucker decomposition can be modeled as follows:

Tucker decomposition
$$\begin{aligned} T=g\times _1 U_1\times _2 U_2\cdots \times _N U_N+\varepsilon \end{aligned}$$(16)
where \(g\in \mathfrak {R}^{r_1\times r_2\times \cdots \times r_n}\) is the core tensor, \(r_i\) is the rank of factor matrix \(U_i\in \mathfrak {R}^{t_i\times r_i}\), and \(\varepsilon\) represents the residuals. Figure 5 is a schematic representation of the Tucker decomposition, which is usually regarded as a non-convex optimization problem [63]. The two most famous and widely used solvers for Tucker decomposition are Tucker-ALS, based on alternating least squares [96], and Tucker-ADAL, based on the alternating direction augmented Lagrangian [94]. SVD based on Tucker decomposition is generally called high-order singular value decomposition (HOSVD) [97, 98], which calculates the singular values of the three unfoldings \(U_1,U_2,U_3\) of a three-dimensional tensor under the Tucker decomposition. HOSVD-based ultrasound clutter filtering has been proposed [52, 99] and proved to be more robust to low sampling rates than SVD.
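A minimal truncated HOSVD can be sketched in a few lines of NumPy; this illustrates the idea (factor matrices from the left singular vectors of each mode unfolding, core tensor by mode products), not the implementations of [97, 98]:

```python
import numpy as np

def unfold(T, mode):
    """Mode-n unfolding: mode-n fibers become the columns."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def hosvd(T, ranks):
    """Truncated HOSVD: U_i = leading left singular vectors of the
    mode-i unfolding; core g = T x_1 U_1^T x_2 U_2^T ... """
    Us = []
    for mode, r in enumerate(ranks):
        U, _, _ = np.linalg.svd(unfold(T, mode), full_matrices=False)
        Us.append(U[:, :r])
    g = T
    for mode, U in enumerate(Us):
        # mode product with U^T: contract U's rows against axis `mode`
        g = np.moveaxis(np.tensordot(U.T, g, axes=(1, mode)), 0, mode)
    return g, Us

def tucker_reconstruct(g, Us):
    """Reassemble the tensor from the core and the factor matrices."""
    T = g
    for mode, U in enumerate(Us):
        T = np.moveaxis(np.tensordot(U, T, axes=(1, mode)), 0, mode)
    return T
```

With full ranks the reconstruction is exact; truncating the ranks yields the low-rank tensor approximation used for clutter suppression.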

CP decomposition
$$\begin{aligned} T=U_1\circ U_2\circ \cdots \circ U_R+\varepsilon \end{aligned}$$(17)
where R is the number of components, \(U_i\in \mathfrak {R}^{t_i\times r_i}\), \(\varepsilon\) represents the residuals, and \(U_1\circ U_2\circ \cdots \circ U_R\) is the CP model [71]. Figure 6 is a schematic representation of the CP decomposition. CP decomposition is similar in form to Tucker decomposition, since the number of components in the factor matrices is the same [96]. The original CP problem is NP-hard; therefore, the Frobenius norm is generally used to relax the low-rank constraint. Similar to Tucker decomposition, the CP decomposition problem can also be solved by alternating least squares, called CP-ALS. To the best of our knowledge, there is currently no well-known article applying CP decomposition to ultrasound clutter filtering.
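A minimal CP-ALS sketch for a 3-way tensor might look as follows; the unfolding and Khatri–Rao conventions here are one standard choice, and the iteration count is an illustrative assumption rather than a tuned solver:

```python
import numpy as np

def unfold(T, mode):
    """Mode-n unfolding consistent with C-ordered reshapes."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def khatri_rao(A, B):
    """Column-wise Khatri-Rao product: column r is kron(A[:, r], B[:, r])."""
    I, R = A.shape
    J, _ = B.shape
    return (A[:, None, :] * B[None, :, :]).reshape(I * J, R)

def cp_als(T, R, n_iter=200, seed=0):
    """Rank-R CP decomposition of a 3-way tensor by alternating least
    squares: each factor is the least-squares fit with the others fixed."""
    rng = np.random.default_rng(seed)
    A, B, C = (rng.standard_normal((s, R)) for s in T.shape)
    for _ in range(n_iter):
        A = unfold(T, 0) @ khatri_rao(B, C) @ np.linalg.pinv((B.T @ B) * (C.T @ C))
        B = unfold(T, 1) @ khatri_rao(A, C) @ np.linalg.pinv((A.T @ A) * (C.T @ C))
        C = unfold(T, 2) @ khatri_rao(A, B) @ np.linalg.pinv((A.T @ A) * (B.T @ B))
    return A, B, C

def cp_reconstruct(A, B, C):
    """Sum of R rank-one terms: T[i,j,k] = sum_r A[i,r] B[j,r] C[k,r]."""
    return np.einsum('ir,jr,kr->ijk', A, B, C)
```

Note the normal equations use the Hadamard product of Gram matrices, which keeps each ALS step at the size of the (small) factor matrices.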
Minimization problems
The decomposition problems generally turn into minimization (optimization) problems, in either their original form or their Lagrangian form [63]:
$$\begin{aligned} \min _{K_1,\ldots ,K_x}\, \sum _{i=1}^{x}\lambda _i f_i(K_i) \quad \text {s.t.} \; C_x \end{aligned}$$
where \(\lambda _i\) are the regularization parameters, \(f_i(.)\) are the loss functions for the low-rank, sparse, and noise components \(K_i\), and \(C_x\) denotes the constraints on the components. Consistent with the decomposition problems, the minimization problems can be divided into three categories according to the number of loss functions and constraints imposed.

\(x=1\) is the case of implicit decomposition: \(\min _L \, \lambda _1f_1(L) \quad \text {s.t.} \; C_1\)
where \(C_1\) is \(\Vert M-L\Vert _2=0\) or another form. For sparse decomposition, the low-rank component is replaced by the sparse component. This problem can be NP-hard, non-convex, or subject to specific constraints; therefore, other forms of the loss functions are applied as relaxations when the problem is NP-hard or non-convex. For example, the loss function f is the rank loss function in the original MC model: \(\min _{L}\, rank(L) \quad \text {s.t.} \; \Vert M-L\Vert _2=0\).

\(x=2\) is the case of explicit decomposition: \(\min _{L,S} \; \lambda _1f_1(L)+\lambda _2f_2(S) \quad \text {s.t.} \; C_2\)
where \(C_2\) is \(\Vert M-L-S\Vert _2=0\) or another form. For example, \(f_1\) and \(f_2\) are the rank and \(l_0\)-norm loss functions in the original RPCA model: \(\min _{L,S}\, rank(L)+\lambda \Vert S\Vert _{l_0} \quad \text {s.t.} \;\Vert M-L-S\Vert _2=0\).

\(x=3\) is the case of stable decomposition: \(\min _{L,S} \, \lambda _1f_1(L)+\lambda _2f_2(S)+\lambda _3f_3(N) \quad \text {s.t.} \; C_3\)
where \(C_3\) is \(\Vert M-L-S-N\Vert _2=0\) or another form. The stable decomposition generally adds a constraint or penalty on the noise component on top of the robust decomposition. For example, starting from the original RPCA model \(\min _{L,S}\, rank(L)+\lambda \Vert S\Vert _{l_0} \quad \text {s.t.} \; \Vert M-L-S\Vert _2=0\), a Frobenius norm loss function on the residual (\(\lambda _3\Vert M-L-S\Vert _F^2\)) is used in most cases.
Although some algorithms can solve non-convex problems through mathematical approximation [100], non-convex problems are in general difficult to solve and converge weakly. Relaxing the decompositions into tractable minimization problems is therefore an important step.
Loss functions
The loss function can be seen as a constraint on the decomposed matrices. In the DLSM framework, loss functions are imposed on the minimized matrices in the form of norms. For example, in implicit, explicit, and stable decomposition, the functions f(S), f(L), and f(E) represent the loss functions (or norms) on the sparse, low-rank, and noise components, respectively. However, in most cases, the original loss function is replaced by another form to make the problem solvable. The common loss function (norm) forms can be listed as follows:

\(l_0\)-norm loss function \((\Vert M\Vert _{l_0})\) is the number of nonzero entries.

\(l_1\)-norm loss function \((\Vert M\Vert _{l_1} = \sum _{i,j}|M_{i,j}|)\) is the Manhattan distance.

\(l_2\)-norm loss function \((\Vert M\Vert _{l_2} = \sqrt{\sum _{i,j}M^2_{i,j}})\) is also called the Frobenius norm (\(l_F\)-norm loss function, \(\Vert M\Vert _{F} = \sqrt{\sum _{i,j}M^2_{i,j}}\)).

\(l_{\infty }\)-norm loss function \((\Vert M\Vert _{l_{\infty }} = \max _{i,j}|M_{i,j}|)\) is also called the max norm (\(\Vert M\Vert _{max} = \max _{i,j}|M_{i,j}|\)).

\(l_{*}\)-norm (nuclear norm) loss function \((\Vert M\Vert _{l_{*}})\) is the sum of the singular values.
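These loss functions are straightforward to compute; the NumPy sketch below evaluates each one (using the absolute-value forms given above):

```python
import numpy as np

def matrix_norms(M):
    """Compute the common DLSM loss functions for a matrix M."""
    s = np.linalg.svd(M, compute_uv=False)
    return {
        "l0": np.count_nonzero(M),        # number of nonzero entries
        "l1": np.abs(M).sum(),            # sum of absolute values
        "fro": np.sqrt((M ** 2).sum()),   # l2 / Frobenius norm
        "max": np.abs(M).max(),           # l_inf / max norm
        "nuclear": s.sum(),               # l_* : sum of singular values
    }
```

The \(l_1\) and nuclear norms are the usual convex surrogates for the \(l_0\) norm and the rank, respectively, which is why they recur in the relaxed models above.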
Solvers
The models are solved by specific algorithms, called solvers in the DLSM framework [63, 71]. Solvers are generally applied to the models after the minimization problem has been formulated and the loss functions have been relaxed. Solvers can be broadly divided into regularization-based and statistics-based approaches [101]. In regularization approaches, the data matrices are regularized by convex surrogates with different features [63]; typical examples include singular value thresholding (SVT) [102], the accelerated proximal gradient (APG) [103], and the augmented Lagrange multiplier (ALM) [83]. In statistics-based approaches, prior distributions are used to capture low-rank or sparse properties and to model the joint distribution of the measured entries and the unknown variables, whose posterior distributions can then be approximated by Bayesian methods [63].
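As a concrete illustration of how these building blocks combine, the sketch below solves the explicit RPCA problem with a simplified inexact ALM: SVT handles the nuclear norm and soft-thresholding handles the \(l_1\)-norm. The parameter choices follow common heuristic defaults and are assumptions, not the tuned solvers of [83, 102, 103]:

```python
import numpy as np

def soft(X, tau):
    """Soft-thresholding: proximal operator of the l1-norm."""
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def svt(X, tau):
    """Singular value thresholding: proximal operator of the nuclear norm."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vt

def rpca_ialm(M, lam=None, rho=1.2, n_iter=200, tol=1e-7):
    """Simplified inexact ALM for principal component pursuit:
    min ||L||_* + lam * ||S||_1  s.t.  M = L + S."""
    m, n = M.shape
    lam = lam if lam is not None else 1.0 / np.sqrt(max(m, n))
    mu = (m * n) / (4.0 * np.abs(M).sum())   # heuristic initial penalty
    mu_max = mu * 1e7
    S = np.zeros_like(M)
    Y = np.zeros_like(M)
    for _ in range(n_iter):
        L = svt(M - S + Y / mu, 1.0 / mu)    # nuclear-norm step
        S = soft(M - L + Y / mu, lam / mu)   # l1 step
        R = M - L - S                        # residual
        Y += mu * R                          # dual update
        mu = min(mu * rho, mu_max)           # increase penalty each pass
        if np.linalg.norm(R) < tol * np.linalg.norm(M):
            break
    return L, S
```

In the clutter-suppression setting, L plays the role of the tissue clutter and S the blood signal.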
Although many solvers have been proposed for optimization problems under the DLSM framework, most mainstream ultrasound clutter suppression algorithms are based on SVD. SVD-based clutter suppression algorithms were proposed and reviewed [19, 20, 22, 38, 43] for traditional CFI before 2011, and SVD is also used as a step or an iteration within many of the algorithms we evaluated. After 2015, with the rapid development of ultrasound technologies, SVD was combined with ultrafast plane-wave imaging, which provides a huge amount of data at a high frame rate, to improve the effectiveness of SVD and overcome the limitation of low frame rate [18, 51, 104, 105]. Due to the excellent performance of SVD on large datasets, SVD-based clutter suppression built on plane-wave ultrasound has become a popular and mainstream research area. SVD-based algorithms have been used in functional ultrasound [106, 107], super-resolution ultrasound localization microscopy [104, 108], and high-sensitivity microvessel perfusion imaging [18, 51], owing to their excellent performance in ultrasound clutter suppression and microvascular imaging [21].
Because of these disadvantages of SVD, the DLSM framework contains many approximate-SVD and SVD-free algorithms with higher efficiency and lower computational cost, which have the potential for real-time ultrasound clutter suppression.
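For reference, the baseline SVD clutter filter discussed above reduces to a few lines: stack the frames as columns of a Casorati matrix, zero the largest singular values (the clutter subspace), and reconstruct. The clutter rank here is a manually chosen assumption, which is exactly the rank-selection sensitivity noted earlier:

```python
import numpy as np

def svd_clutter_filter(frames, clutter_rank):
    """Baseline SVD clutter filter sketch. frames: (ny, nx, nt) array.
    Build the Casorati matrix (pixels x time), suppress the clutter_rank
    largest singular values (slow-moving tissue), keep the remainder."""
    nt = frames.shape[-1]
    casorati = frames.reshape(-1, nt)
    U, s, Vt = np.linalg.svd(casorati, full_matrices=False)
    s_blood = s.copy()
    s_blood[:clutter_rank] = 0.0      # zero the clutter subspace
    return ((U * s_blood) @ Vt).reshape(frames.shape)
```

A high-singular-value cutoff for noise could be added symmetrically, but the rank thresholds would still need manual or heuristic selection.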
Experiments
The DLSM framework has been successfully applied to video surveillance, face recognition, texture modeling, video inpainting, audio separation, and latent semantic indexing [109]. However, only a few algorithms under the DLSM framework have been applied to ultrasound clutter suppression. Herein, we apply DLSM algorithms as the clutter filter for CFI. To that end, we test whether DLSM algorithms can be used for clutter suppression through simulation, phantom, and in vivo experiments. Finally, we compile a list of algorithms that are suitable for ultrasound clutter suppression.
Experiment data
Three datasets are used in this experiment: simulation data, phantom data, and in vivo rat data. For each dataset, raw RF data, complex envelope data, and B-mode data formats are used for analysis. The specific parameters and acquisition process of the three datasets, and a brief introduction of the three data formats, are given in the following subsections.
Simulation data
The simulation data include a set of ultrasound simulation frames, as Fig. 7 shows. They are generated by the Field II simulation program implemented in MATLAB [110, 111]. A cube \(A\in \mathfrak {R}^{60 \times 60 \times 60 }\) is built to represent the tissue filled with scatterers, with each voxel measuring \(1\, \text{mm}^3\). A vessel with a radius of \(20\, \text{mm}\) runs through the middle of the cube and is generated by scatterers flowing to the right; the maximum velocity in the center of the vessel is \(15\, \text{mm}/\text{s}\). Sound waves are assumed to travel from top to bottom and focus on the center. The probe frequency and sampling frequency are set to \(7.27\, \text{MHz}\) and \(40\, \text{MHz}\), respectively. The frame rate is set to \(1000\, \text{fps}\), and 64 active elements are used for beamforming.
Phantom data
The phantom was created to simulate a cube of tissue including one blood vessel traveling across the middle of the cube. Knox unflavored gelatin, water, and sugar-free Metamucil psyllium fiber supplement were gently heated and mixed to prepare the phantom gel, which represents soft tissue. An intravenous tube simulating a venous structure runs through the gel cube. The probe frequency and sampling frequency are set to 10 MHz and 40 MHz, respectively. The Alpinion E-Cube R12 ultrasound system is used for data collection with an L3-12H linear array probe. Figure 8 briefly illustrates the phantom experiment.
Rat data
The acquisition of the rat data was under the supervision of the Animal Care Facility of Concordia University. A 27-week-old Sprague–Dawley male rat was anesthetized for ultrasound scanning. The experiment followed the guidelines of the Canadian Council on Animal Care and was approved by the Animal Ethics Committee of Concordia University \((\#30000259)\). The probe frequency and sampling frequency were set to 10 MHz and 40 MHz, respectively. As with the phantom data, the Alpinion E-Cube R12 research ultrasound system with an L3-12H linear array probe was used. The schematic diagram of the in vivo rat experiment is shown in Fig. 9.
Data formats
Both real and simulated ultrasound data are available in three formats: raw RF data, complex envelope data, and B-mode data. Common ultrasound probes generally consist of a piezoelectric transducer array that emits and receives signals. The backscattered signal, after preamplification and time-gain compensation, is referred to as the radio-frequency (RF) signal. The RF signal processed by an envelope detector becomes complex envelope data. Finally, the complex envelope data are log-compressed into a grayscale format and further passed through intensity mapping and post-processing filtering; the resulting readable image is commonly called a brightness-mode (B-mode) image. RF frames are generally very large because the sampling rate of RF data is extremely high. This high sampling rate is not necessary for envelope data, which contain no high-frequency content; therefore, envelope and B-mode images can be downsampled by a large factor. RF data may also be downsampled by a small factor, but the Nyquist criterion should be respected to avoid aliasing.
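The RF-to-B-mode chain described above can be sketched as follows. The FFT-based analytic signal stands in for the envelope detector (the same construction as `scipy.signal.hilbert`), and the 60 dB dynamic range is an illustrative assumption:

```python
import numpy as np

def analytic_signal(rf, axis=0):
    """Analytic signal via the FFT (Hilbert-transform envelope detection)."""
    n = rf.shape[axis]
    X = np.fft.fft(rf, axis=axis)
    h = np.zeros(n)
    h[0] = 1.0
    if n % 2 == 0:
        h[n // 2] = 1.0
        h[1:n // 2] = 2.0
    else:
        h[1:(n + 1) // 2] = 2.0
    shape = [1] * rf.ndim
    shape[axis] = n
    return np.fft.ifft(X * h.reshape(shape), axis=axis)

def rf_to_bmode(rf, dynamic_range_db=60.0):
    """RF -> envelope (analytic-signal magnitude) -> log compression."""
    envelope = np.abs(analytic_signal(rf, axis=0))
    env_db = 20.0 * np.log10(envelope / envelope.max() + 1e-12)
    return np.clip(env_db, -dynamic_range_db, 0.0) + dynamic_range_db
```

The envelope step removes the carrier oscillation along the axial (fast-time) axis, which is why envelope and B-mode data tolerate heavy downsampling.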
Experiment methods
In the “Decomposition into low-rank and sparse matrices (DLSM) framework” section, the DLSM framework is introduced and built as Fig. 3 shows. The DLSM algorithms are classified into four groups: implicit decomposition, explicit decomposition, stable decomposition, and tensor decomposition. In this experiment, all algorithms are selected from the LRSLibrary [61, 63, 71], which provides a collection of low-rank and sparse matrix decomposition algorithms for moving object detection. In the LRSLibrary, these algorithms are further subdivided by model into robust PCA (RPCA), subspace tracking (ST), matrix completion (MC), three-term decomposition (TTD), low-rank representation (LRR), nonnegative matrix factorization (NMF), nonnegative tensor factorization (NTF), and standard tensor decomposition (TD). Due to the flexible conversion between models and their similar mathematical formulations, in this paper TTD is treated as a subcategory of stable PCA under stable decomposition; similarly, NTF belongs to the tensor decomposition (TD) category.
In the first step, the DLSM algorithms are applied to the three formats of simulation data to verify the performance of all algorithms against the ground-truth sparse component and to compare computing times. Then, all algorithms are run on phantom data to find out whether DLSM suits data with real ultrasound features. In the third step, rat data are used for verification and comparison. The acquired data have three formats: RF data, complex envelope data, and B-mode data. The results for the different data formats and datasets are grouped for comparison to find the optimal conditions for ultrasound clutter suppression.
All experiments are processed on a normal desktop computer with an i7-4770 CPU @ 3.40 GHz and 16.0 GB RAM.
Evaluation metrics
Two main indicators are used to evaluate the performance of the algorithms: the signal-to-noise ratio (SNR) [112] and the contrast-to-noise ratio (CNR), calculated as follows:
$$\begin{aligned} \text {SNR}=\frac{\mu _2}{\sigma _2}, \quad \text {CNR}=\frac{|\mu _2-\mu _1|}{\sqrt{\sigma _1^2+\sigma _2^2}} \end{aligned}$$
where \(\mu _1\) and \(\sigma _1\) are the mean intensity value and standard deviation of the background window, and \(\mu _2\) and \(\sigma _2\) are the mean intensity value and standard deviation of the target window.
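A small NumPy sketch of these metrics; the formulas follow the standard definitions consistent with the symbols above, though [112] may state them in a slightly different form:

```python
import numpy as np

def snr_cnr(image, target_mask, background_mask):
    """SNR and CNR from a target window and a background window.
    Assumed standard forms: SNR = mu2 / sigma2;
    CNR = |mu2 - mu1| / sqrt(sigma1^2 + sigma2^2)."""
    t = image[target_mask]
    b = image[background_mask]
    mu1, s1 = b.mean(), b.std()   # background window statistics
    mu2, s2 = t.mean(), t.std()   # target window statistics
    snr = mu2 / s2
    cnr = abs(mu2 - mu1) / np.sqrt(s1 ** 2 + s2 ** 2)
    return snr, cnr
```

In the experiments below, the target window is placed inside the vessel and the background window in the surrounding tissue region of the filtered sparse component.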
Experiment results
The results of 106 DLSM algorithms on three datasets and their three formats are reported in this section. The results of all algorithms include the SNR, CNR, calculation times, and images for visual observation. Since all the output images are sparse components of the same data and are very similar, we classify the results according to their performance and report the number of algorithms in each category instead of SNR and CNR of all algorithms.
The results of all algorithms are divided into several categories. Results in the first category are considered good: they give the correct sparse matrix with a pure blank background, indicating high robustness to noise and dynamic background and strong decomposition ability. Cases where the output sparse component is more than 100 times higher than the background pixel values are also regarded as good results. Results in the second category are considered defective: they either contain background noise that should belong to the low-rank components, or are noisy because the algorithm failed to decompose. Results in the third category are not considered, because those algorithms failed to run due to limitations such as nonnegativity or real-valued input requirements; algorithms with this type of result are called restricted algorithms in this section.
All algorithms, including their model classifications, are from the LRSLibrary [61, 63, 71], and all have been successfully applied to moving object detection in traffic video.
Simulation experiments
The simulation data were applied first to verify the availability and approximate performance of all algorithms, as well as their computational cost and running time for ultrasound clutter suppression. The first experiment applied all 106 DLSM algorithms to the RF simulation data. Among the 106 algorithms, 11 ran out of memory and failed: they cannot handle the large size of the simulation data because they use full singular value decomposition or QR decomposition and require a huge amount of memory to initialize (7.9 GB). Meanwhile, 6 algorithms require nonnegative input and cannot take RF data. Consequently, these 17 algorithms are classified as restricted algorithms.
Of the remaining 89 algorithms, only 19 are able to output relatively pure sparse components matching the ground truth on unprocessed RF simulation data. To be precise, only three algorithms (LRR-ROSL, RPCA-IALM, and RPCA-IALM-BLWS) give a truly pure background as zero matrices (all entries of the sparse matrices except those presenting the simulated vessel are 0). The other 16 results highlight the simulated vessel over a nonzero background; since the background pixel values are 1000–10,000 times smaller than the sparse component, we consider these to be pure results without low-rank contamination. The likely reason is the particularly small values and low dynamic range of the RF simulation data. In general, these results, with CNR values above 1.6, are classified as good results in Table 2. The results of another 44 algorithms are very noisy, with CNR values below 1.1: for these algorithms, the sparse parts are not clearly determined and the clutter is not well suppressed. The remaining 24 algorithms give blank output due to the low dynamic range and other reasons. Almost all the DLSM algorithms give an SNR of about 0.759, so SNR is not reported in detail here.
Because the extremely small data values and narrow dynamic range invalidate a large number of algorithms, in the second step the order of magnitude and dynamic range of the RF simulation data are expanded to re-examine the performance of all algorithms.
After preprocessing the RF data, 56 algorithms show good results. Among them, 16 give correct sparse components with a zero-valued background; the others give sparse components 1000–10,000 times greater than the background pixel values. The results of the remaining 33 algorithms are noisy: these algorithms either do not correctly isolate the sparse components or contain inseparable background noise of similar magnitude. Most good results have a CNR greater than 1.3, while noisy results generally have a CNR less than 1. Again, almost all the DLSM algorithms with good results give an SNR of about 0.759. A few good results have a CNR less than 1: the corresponding algorithms highlight only the sparsest parts, which reduces the mean intensity of the target window. These results are still considered good because the unhighlighted sparse components have higher intensities than the background. The results after increasing the dynamic range are listed in Table 3, and examples of the different kinds of results in the simulation experiments are shown in Fig. 10.
The complex envelope simulation data are obtained from the RF data by Hilbert transform; for this reason, they do not suffer from minute pixel values and low dynamic range. However, their SNR and CNR are lower than those of the RF simulation data. Except for the 11 algorithms limited by frame size, 24 of the remaining algorithms show good results. In addition, 13 algorithms fail to run because of the complex numbers generated by the Hilbert transform. The results of the other algorithms are noisy. After enlarging the dynamic range of the complex envelope simulation data, 8 algorithms that failed on the original data give good results on the preprocessed data; these algorithms are sensitive to changes in dynamic range. The results on complex envelope simulation data are shown in Table 4; clearly, the CNR values are far lower than those obtained on RF data.
The third step of the simulation experiment uses B-mode data. Here, 40 DLSM algorithms successfully detect the simulated vessel in the original B-mode simulation data. Meanwhile, 12 algorithms are affected by high peak values in the background and keep static peaks in the sparse components; these algorithms give pure sparse matrices after the peak values are suppressed. After enlarging the dynamic range of the original B-mode data, another 10 algorithms successfully detect the correct sparse components. In total, therefore, 62 algorithms can separate the correct sparse components. The other algorithms, which give very noisy results, may need parameter adjustment and thresholding. The results of the simulation experiment on B-mode data are reported in Table 5.
Phantom experiments
The next set of experimental data is the phantom data, used to test whether the DLSM framework is suitable for ultrasound clutter suppression in the presence of real ultrasound noise and other ultrasound features. The phantom data also come in three formats: RF, complex envelope, and B-mode. Except for the 11 algorithms that are inapplicable due to the size limitation, the other 95 algorithms differ hugely in computing time, ranging from less than 0.1 s to more than 500 s.
For the RF phantom data, the order of magnitude of all pixels is first adjusted into the range of \(10^{\pm 3}\). However, the structured peak pixels caused by bright structures generated at the rebound reflection interface still affect many algorithms. 36 algorithms clearly show the simulated vessel with a pure background and an average CNR of 3.5, while 43 algorithms only highlight bright edges as the sparse components, with an average CNR of 0.4. The structured peak pixels of the RF phantom data can compromise some algorithms when these bright edges have pixel values \(10^3\) times larger than the remaining pixels. Therefore, the peak values are processed logarithmically to achieve gray balance and reduce the dynamic range. After logarithmic processing, 22 additional algorithms are able to display the sparse components correctly, excluding the bright static edges: 19 of them were previously affected by the peaks, and 3 were defective on the original data. The results of the remaining algorithms are still noisy, and parameter adjustment would be needed for better performance. The results of the RF phantom experiments are shown in Table 6.
The complex envelope phantom data are tested next. The number of good results is lower than for the RF phantom data: 26 algorithms successfully detect the simulated vessel, and 33 algorithms only show the bright edges, i.e., the intravenous (IV) tube representing the vessel wall. The CNR of all results is below 0.4. After suppressing the edge brightness logarithmically, 11 of the edge-affected algorithms can separate pure sparse components, which shows that extremely bright structures reduce the sensitivity of many algorithms to sparse components. However, some algorithms still give noisy results, and 19 algorithms cannot take complex numbers as input. The results on complex envelope phantom data are shown in Table 7.
The last data format applied is B-mode. For the B-mode phantom data, 49 algorithms give good results with a CNR higher than 2, although the results of four of them contain bright edges that should belong to the low-rank components. The results of 35 algorithms show only the edges; among them, a few sensitive algorithms also partly detect sparse regions with obvious motion, but only a few bright moving pixels are captured. After reducing the dynamic range, no algorithm is affected by edges, and 75 algorithms give sparse components with a pure background. Examples of good and noisy results in the phantom experiments are shown in Fig. 11, and the results of the B-mode phantom data are shown in Table 8.
In vivo experiments
In the third step, rat data are used to test the performance of the algorithms on real ultrasound data with small vessel-like tissues. The RF, complex envelope, and B-mode rat data are compared.
For the RF rat data, 82 algorithms give very good and similar results, and 7 algorithms show noisy, meaningless results; the other 17 algorithms are restricted by the size or nonnegativity limitations. The complex envelope rat data share similar results with the RF rat data. For the B-mode rat data, 92 algorithms successfully detect the vessel-like tissues, and only 3 algorithms fail to show any part of the sparse components. Examples of good and noisy results in the in vivo rat experiments are shown in Fig. 12. Since most algorithms give results with similar SNR and CNR, the evaluation combines subjective observation with numerical analysis. Because the in vivo structure is unknown, we lack ground truth to assess accuracy. Owing to the similar results and limited space, only algorithms with pure backgrounds are listed in Table 9.
Discussion
A total of 106 algorithms were tested in this paper. Analyzing the results obtained from the simulation, phantom, and in vivo experiments, we found that 11 algorithms, e.g., ADM, require huge memory (7.9 GB for frame size \(250\times 125\), 20 frames) due to their singular value decomposition or QR decomposition step. Since typical ultrasound frames are large, the left unitary matrix in a full singular value decomposition demands an excessive amount of memory. There are two possible solutions to this problem. First, an approximate SVD can be calculated and stored in every iteration instead of the full SVD [55, 113]. Second, small overlapping patches of the ultrasound frames can be used to formulate the data matrix, which substantially reduces the size of the Casorati matrix and eventually the memory footprint; an additional advantage of this windowing technique is that it automatically equalizes uneven noise distribution by normalizing the power locally [51]. The 11 algorithms with the size limitation are listed in Table 10.
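The first remedy, approximate SVD, can be illustrated with a randomized truncated SVD in the style of Halko et al.: only a thin \(m\times (k+p)\) sketch is ever formed, never the full left unitary matrix. The oversampling and power-iteration counts below are illustrative assumptions:

```python
import numpy as np

def randomized_svd(M, k, n_oversample=8, n_power_iter=2, seed=0):
    """Randomized truncated SVD sketch: approximate the top-k factors
    while materializing only m x (k + p) intermediates."""
    rng = np.random.default_rng(seed)
    m, n = M.shape
    Omega = rng.standard_normal((n, k + n_oversample))
    Y = M @ Omega                         # sketch of the range of M
    for _ in range(n_power_iter):         # power iterations sharpen the range estimate
        Y = M @ (M.T @ Y)
    Q, _ = np.linalg.qr(Y)                # orthonormal range basis, m x (k+p)
    B = Q.T @ M                           # small (k+p) x n projected matrix
    Ub, s, Vt = np.linalg.svd(B, full_matrices=False)
    U = Q @ Ub
    return U[:, :k], s[:k], Vt[:k, :]
```

For a Casorati matrix with many more pixels than frames, the dominant cost becomes a few tall-matrix products instead of a full decomposition, which is what makes the approximate-SVD route attractive for the memory-limited algorithms above.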
Moreover, there are 6 algorithms that require nonnegative input. Since ultrasound RF data usually contain both positive and negative values, these algorithms are not suitable for clutter suppression on RF data. To confirm the impact of the nonnegativity requirement, 20 algorithms giving good results on RF simulation data were re-tested with the absolute value of the RF data. Although all of these algorithms are still capable of showing high-contrast vessel structures, the SNR obtained with absolute-value RF data (0.81) is slightly greater than the original SNR (0.76), indicating an increase of background noise in the sparse components. At the same time, the CNR obtained with absolute-value RF data (1.69) is slightly lower than the original CNR (1.73), which suggests that the nonnegativity requirement has only a limited effect on the accuracy of the DLSM decomposition. The 6 algorithms with the nonnegative input limitation are listed in Table 11.
Another type of restricted algorithm is affected by complex inputs. From the experimental results, it is obvious that complex envelope data are not suitable for ultrasound clutter suppression, since they take longer to compute and give poorer performance. 13 algorithms cannot take complex numbers as input and thus cannot separate the low-rank and sparse components, while some others get stuck in long loops requiring more than 300 s. The algorithms that failed due to complex numbers are listed in Table 12. In addition, the extremely small CNR obtained from the envelope data, only one-hundredth of that obtained from the other formats, further indicates that envelope data are not a suitable input form for ultrasound clutter suppression.
Among the remaining algorithms, 17 are easily affected by outliers. These algorithms cannot reject peak values when the dynamic range, measured as the binary logarithm of the difference between the maximum and minimum pixel values, exceeds roughly \(11.5\, \text{bits}\). These 17 algorithms performed well on preprocessed data and showed good results in the simulation and phantom experiments, but they are not robust to outliers: in the simulation experiments they assigned the background peak pixels to the sparse components, producing a noisy background, and in the phantom experiments they likewise assigned the large structured outliers (bright static edges) to the sparse components. These 17 algorithms are listed in Table 13. Since complex envelope data are not suitable for ultrasound clutter suppression, performance on complex envelope data was not considered here.
In addition, a pure background (0 dB) is of great significance for vascular image segmentation and for the processing and analysis of other medical images [46, 114]. However, this is a difficult goal because of probe jitter, dynamic backgrounds, noise, shadows, and many other factors. Accordingly, only a few results have a pure background on the simulation and phantom data, and no result has a pure background on the in vivo rat data because of the complex tissue motion and harsh conditions. The algorithms that cope well with these challenges and give pure backgrounds on the simulation and phantom data are listed in Table 14.
Overall, in terms of computation time, DLSM algorithms take longest on complex envelope data compared with RF and B-mode data; owing to the larger amount of computation, complex envelope data take roughly twice as long as RF data. This again confirms that complex envelope data are not suitable for ultrasound clutter suppression. Meanwhile, RF data require slightly less computation time than B-mode data, despite the extra information they contain. We also find that DLSM algorithms take slightly longer on preprocessed data than on the original data, but they separate the sparse components more accurately. Table 15 lists the average time taken by the 20 fastest algorithms on the different datasets and data formats.
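A rough way to probe the real-versus-complex timing gap is to benchmark a single SVD, a dominant cost in many DLSM algorithms, on matrices of the two types. The matrix sizes are arbitrary assumptions; absolute times are machine-dependent, but complex input is consistently slower:

```python
import time
import numpy as np

rng = np.random.default_rng(2)
rf = rng.standard_normal((2000, 128))               # RF-like real matrix
iq = rf + 1j * rng.standard_normal((2000, 128))     # complex-envelope-like

def best_time(fn, repeats=5):
    """Best-of-N wall-clock time in seconds, to reduce timing noise."""
    times = []
    for _ in range(repeats):
        t0 = time.perf_counter()
        fn()
        times.append(time.perf_counter() - t0)
    return min(times)

t_real = best_time(lambda: np.linalg.svd(rf, full_matrices=False))
t_complex = best_time(lambda: np.linalg.svd(iq, full_matrices=False))
```

Complex arithmetic multiplies the flop count of the underlying LAPACK routines, which is consistent with the roughly twofold slowdown reported above.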
The experimental results show that ultrasound data differ considerably from ordinary video-surveillance frames. All DLSM algorithms can be applied successfully to surveillance images, yet some are unsuitable for ultrasound data. Quite a few algorithms fail on ultrasound RF data and complex envelope data, which may be due to the complexity of RF data and the complex-valued nature of envelope data. The simulation experiments show that some algorithms are still not robust to ultrasound clutter and are insensitive to data with overall small pixel values (\(<10^{3}\)); for these algorithms, the low-rank components often contain inseparable background flicker, noise, and small motions. These algorithms are listed in Table 13. Meanwhile, the phantom experiments show that some DLSM algorithms are neither robust nor stable at dynamic ranges greater than \(10\, \text{bits}\). In ultrasound data, regions with small values often occur within uniform tissue, while edges much brighter than the surrounding tissue are common because of strong reflections at interfaces. The CNR after preprocessing the ultrasound data is generally higher than that of the raw data, and results on peak-removed data are also clearly better than those on raw data. It is therefore necessary to preprocess ultrasound images before applying DLSM algorithms; moreover, parameter tuning or other mathematical modifications are needed for some DLSM algorithms to achieve the best filtering performance on ultrasound data.
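The effect of peak removal on CNR can be illustrated with a small synthetic image. The CNR definition below (difference of region means over the background standard deviation) is one common convention and is an assumption here; the vessel geometry, outlier values, and clipping level are likewise illustrative:

```python
import numpy as np

def cnr(img, blood_mask, bg_mask):
    """Contrast-to-noise ratio: |mean(blood) - mean(background)| divided by
    the background standard deviation (a common convention; the paper's
    exact definition may differ in detail)."""
    return abs(img[blood_mask].mean() - img[bg_mask].mean()) / img[bg_mask].std()

rng = np.random.default_rng(3)
img = rng.normal(5.0, 1.0, size=(64, 64))
blood = np.zeros((64, 64), dtype=bool)
blood[20:30, 20:30] = True                 # hypothetical vessel region
img[blood] += 10.0
bg = ~blood
img[0, :50] = 60.0                         # bright static edge (structured outliers)

cnr_raw = cnr(img, blood, bg)

# Illustrative peak removal: clip at a manually chosen level well above
# the tissue background but below the outliers.
img_clipped = np.minimum(img, 30.0)
cnr_clipped = cnr(img_clipped, blood, bg)
```

Clipping the bright edge deflates the background standard deviation, so the CNR rises, which mirrors the improvement reported for preprocessed data.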
In terms of ultrasound data formats, on the other hand, the experiments show that B-mode data allow more algorithms to succeed at vascular detection. Although B-mode data may lose information, the outliers that affect DLSM algorithms are also weakened by the Hilbert transform and the absolute-value operation, which may explain why more DLSM algorithms work on B-mode data. Although B-mode data yield more good results than RF data, RF data require slightly less average computation time and are thus better suited to real-time requirements. The algorithms in Table 16 are relatively stable across all three datasets: each requires less than 1 s of computation while producing the correct sparse components. The experiments suggest that they may be particularly well suited to ultrasound clutter suppression.
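The Hilbert-transform-plus-absolute-value step that converts RF data to B-mode envelope data can be sketched with an FFT-based analytic signal; this is a standard construction, shown here purely for illustration:

```python
import numpy as np

def envelope(rf, axis=0):
    """B-mode envelope: analytic signal via an FFT-based Hilbert transform,
    followed by the absolute value."""
    n = rf.shape[axis]
    X = np.fft.fft(rf, axis=axis)
    # Frequency-domain multiplier that zeroes negative frequencies and
    # doubles positive ones (keeping DC and, for even n, Nyquist).
    h = np.zeros(n)
    if n % 2 == 0:
        h[0] = h[n // 2] = 1.0
        h[1:n // 2] = 2.0
    else:
        h[0] = 1.0
        h[1:(n + 1) // 2] = 2.0
    shape = [1] * rf.ndim
    shape[axis] = n
    analytic = np.fft.ifft(X * h.reshape(shape), axis=axis)
    return np.abs(analytic)

# Sanity check: a pure tone on an exact FFT bin has a flat envelope
# equal to its amplitude.
t = np.arange(1024)
rf = 3.0 * np.cos(2 * np.pi * 0.125 * t)
env = envelope(rf)
```

Because the envelope discards phase, isolated RF peaks are smoothed, which is consistent with the observation that outliers are weakened in B-mode data.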
Finally, the 22 algorithms that are most robust to noise and performed best across all experiments are listed in Table 17. These algorithms show strong potential for ultrasound clutter suppression.
In this paper, we adapted techniques originally proposed for natural images in the field of computer vision to ultrasound color flow imaging. Ultrasound images have unique characteristics arising from the physics of sound propagation, notably the so-called speckle noise. We believe that the results of this paper can be generalized to other imaging modalities affected by diffraction, such as optical coherence tomography (OCT).
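The DLSM framework at the core of this review can be sketched with a minimal robust PCA (principal component pursuit) solver. The inexact augmented Lagrange multiplier (ALM) scheme below follows the standard formulation of Lin et al. [83], with the usual defaults \(\lambda = 1/\sqrt{\max(m,n)}\) and \(\rho = 1.5\); it is a real-valued sketch under those assumptions, not the paper's implementation (for complex IQ data the soft threshold would use the complex phase instead of the sign):

```python
import numpy as np

def rpca_ialm(M, lam=None, tol=1e-7, max_iter=500):
    """Decompose M = L (low-rank clutter) + S (sparse flow) by solving
    principal component pursuit with the inexact ALM method."""
    m, n = M.shape
    lam = lam if lam is not None else 1.0 / np.sqrt(max(m, n))
    norm_two = np.linalg.svd(M, compute_uv=False)[0]
    Y = M / max(norm_two, np.abs(M).max() / lam)   # dual-variable init
    mu, rho = 1.25 / norm_two, 1.5
    mu_bar = mu * 1e7
    norm_M = np.linalg.norm(M)
    L = np.zeros_like(M)
    S = np.zeros_like(M)
    for _ in range(max_iter):
        # Low-rank update: singular value thresholding at 1/mu.
        U, sig, Vt = np.linalg.svd(M - S + Y / mu, full_matrices=False)
        L = (U * np.maximum(sig - 1.0 / mu, 0.0)) @ Vt
        # Sparse update: elementwise soft thresholding at lam/mu.
        T = M - L + Y / mu
        S = np.sign(T) * np.maximum(np.abs(T) - lam / mu, 0.0)
        Z = M - L - S
        Y = Y + mu * Z
        mu = min(mu * rho, mu_bar)
        if np.linalg.norm(Z) / norm_M < tol:
            break
    return L, S

# Synthetic check: rank-2 "clutter" plus 5% sparse "blood" spikes.
rng = np.random.default_rng(4)
L0 = rng.standard_normal((80, 2)) @ rng.standard_normal((2, 80))
S0 = np.zeros((80, 80))
mask = rng.random((80, 80)) < 0.05
S0[mask] = 5.0 * rng.choice([-1.0, 1.0], size=int(mask.sum()))
L_hat, S_hat = rpca_ialm(L0 + S0)
```

Under these incoherence and sparsity conditions the low-rank component is recovered almost exactly, which is the property the DLSM framework exploits for clutter suppression.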
Conclusion
The performance of 106 established low-rank and sparse decomposition algorithms for clutter filtering has been tested in this work. Our results show that a few robust matrix decomposition techniques are suitable for overcoming the limitations of SVD-based ultrasound clutter suppression methods, such as sensitivity to large noise. In addition, several matrix decomposition techniques show potential for real-time implementation on commercial ultrasound machines owing to their low computational complexity. Furthermore, some preprocessing is necessary when applying this framework to ultrasound data. Finally, some of the algorithms studied in this work can automatically estimate optimal power Doppler images without extensive manual tuning, which may pave the way for easier commercial and clinical translation of ultrasound clutter suppression (Additional file 1).
Availability of data and materials
The datasets generated and analyzed during the current study are available from http://data.sonography.ai/.
Abbreviations
B-mode: Brightness-mode
CP: CANDECOMP/PARAFAC
CFI: Color flow imaging
CNR: Contrast-to-noise ratio
CTA: Computed tomography angiography
CT: Computed tomography
DLSM: Decomposition into low-rank and sparse matrices
DSA: Digital subtraction angiography
DUS: Duplex ultrasonography
FIR: Finite impulse response
IIR: Infinite impulse response
LRR: Low-rank representation
MRA: Magnetic resonance angiography
MRI: Magnetic resonance imaging
MC: Matrix completion
NMF: Nonnegative matrix factorization
OCT: Optical coherence tomography
PCA: Principal component analysis
PCP: Principal component pursuit
RSTD: Rank sparsity tensor decomposition
RMC: Robust matrix completion
RNMF: Robust nonnegative matrix factorization
RPCA: Robust principal component analysis
RPCP: Robust principal component pursuit
RST: Robust subspace tracking
SNR: Signal-to-noise ratio
SVD: Singular value decomposition
Stable NMF: Stable nonnegative matrix factorization
Stable RPCA: Stable robust principal component analysis
ST: Subspace tracking
TD: Tensor decomposition
t-SVD: Tensor singular value decomposition
TTD: Three-term decomposition
References
 1.
Wang H, Naghavi M, Allen C, Barber RM, Bhutta ZA, Carter A, Casey DC, Charlson FJ, Chen AZ, Coates MM, et al. Global, regional, and national life expectancy, allcause mortality, and causespecific mortality for 249 causes of death, 1980–2015: a systematic analysis for the global burden of disease study 2015. Lancet. 2016;388(10053):1459–544.
 2.
Dasgupta R, Fishman SJ. Issva classification. In: Seminars in pediatric surgery, vol. 23. Elsevier; 2014. p. 158–61.
 3.
Camici PG, Crea F. Coronary microvascular dysfunction. N Engl J Med. 2007;356(8):830–40.
 4.
Marinescu MA, Löffler AI, Ouellette M, Smith L, Kramer CM, Bourque JM. Coronary microvascular dysfunction, microvascular angina, and treatment strategies. JACC Cardiovasc Imaging. 2015;8(2):210–20.
 5.
McDonald DM, Baluk P. Significance of blood vessel leakiness in cancer. AACR. 2002.
 6.
Padera TP, Stoll BR, Tooredman JB, Capen D, di Tomaso E, Jain RK. Pathology: cancer cells compress intratumour vessels. Nature. 2004;427(6976):695.
 7.
Solomon SD, Chew E, Duh EJ, Sobrin L, Sun JK, VanderBeek BL, Wykoff CC, Gardner TW. Diabetic retinopathy: a position statement by the american diabetes association. Diabetes Care. 2017;40(3):412–8.
 8.
Association AD, et al. Peripheral arterial disease in people with diabetes. Diabetes Care. 2003;26(12):3333–41.
 9.
Dolan NC, Liu K, Criqui MH, Greenland P, Guralnik JM, Chan C, Schneider JR, Mandapat AL, Martin G, McDermott MM. Peripheral artery disease, diabetes, and reduced lower extremity functioning. Diabetes Care. 2002;25(1):113–20.
 10.
Healy D, Rogers P, Hii L, Wingfield M. Angiogenesis: a new theory for endometriosis. Hum Reprod Update. 1998;4(5):736–40.
 11.
Horsch AD, Weale AR. Imaging in vascular disease. Surgery. 2018.
 12.
Evans DH, Jensen JA, Nielsen MB. Ultrasonic colour Doppler imaging. Interface Focus. 2011;1(4):490–502.
 13.
Demchuk AM, Menon BK, Goyal M. Comparing vessel imaging: noncontrast computed tomography/computed tomographic angiography should be the new minimum standard in acute disabling stroke. Stroke. 2016;47(1):273–81.
 14.
Rübenthaler J, Reiser M, Clevert DA. Diagnostic vascular ultrasonography with the help of color Doppler and contrastenhanced ultrasonography. Ultrasonography. 2016;35(4):289.
 15.
Bjaerum S, Torp H, Kristoffersen K. Clutter filter design for ultrasound color flow imaging. IEEE Trans Ultrason Ferroelectr Freq Control. 2002;49(2):204–16.
 16.
Oglat AA, Matjafri M, Suardi N, Oqlat MA, Abdelrahman MA, Oqlat AA. A review of medical Doppler ultrasonography of blood flow in general and especially in common carotid artery. J Med Ultrasound. 2018;26(1):3.
 17.
GerhardHerman M, Gardin JM, Jaff M, Mohler E, Roman M, Naqvi TZ. Guidelines for noninvasive vascular laboratory testing: a report from the american society of echocardiography and the society for vascular medicine and biology. Vasc Med. 2006;11(3):183–200.
 18.
Demené C, Deffieux T, Pernot M, Osmanski BF, Biran V, Gennisson JL, Sieu LA, Bergel A, Franqui S, Correas JM, et al. Spatiotemporal clutter filtering of ultrafast ultrasound data highly increases Doppler and fUltrasound sensitivity. IEEE Trans Med Imaging. 2015;34(11):2271–85.
 19.
Alfred C, Lovstakken L. Eigenbased clutter filter design for ultrasound color flow imaging: a review. IEEE Trans Ultrason Ferroelectr Freq Control. 2010;57(5):1096–111.
 20.
Mauldin FW, Lin D, Hossack JA. The singular value filter: a general filter design strategy for pcabased signal separation in medical ultrasound imaging. IEEE Trans Med Imaging. 2011;30(11):1951–64.
 21.
Solomon O, Cohen R, Zhang Y, Yang Y, He Q, Luo J, van Sloun RJ, Eldar YC. Deep unfolded robust pca with application to clutter suppression in ultrasound. IEEE Trans Med Imaging. 2019.
 22.
Bjaerum S, Torp H, Kristoffersen K. Clutter filters adapted to tissue motion in ultrasound color flow imaging. IEEE Trans Ultrason Ferroelectr Freq Control. 2002;49(6):693–704.
 23.
Kadi AP, Loupas T. On the performance of regression and stepinitialized iir clutter filters for color Doppler systems in diagnostic medical ultrasound. IEEE Trans Ultrason Ferroelectr Freq Control. 1995;42(5):927–37.
 24.
Hoeks A, Van de Vorst J, Dabekaussen A, Brands P, Reneman R. An efficient algorithm to remove low frequency Doppler signals in digital Doppler systems. Ultrason imaging. 1991;13(2):135–44.
 25.
Torp H. Clutter rejection filters in color flow imaging: a theoretical approach. IEEE Trans Ultrason Ferroelectr Freq Control. 1997;44(2):417–24.
 26.
Eckersley RJ, Chin CT, Burns PN. Optimising phase and amplitude modulation schemes for imaging microbubble contrast agents at low acoustic power. Ultrasound Med Biol. 2005;31(2):213–9.
 27.
Bruce M, Averkiou M, Tiemann K, Lohmaier S, Powers J, Beach K. Vascular flow and perfusion imaging with ultrasound contrast agents. Ultrasound Med Biol. 2004;30(6):735–43.
 28.
Simpson DH, Burns PN, Averkiou MA. Techniques for perfusion imaging with microbubble contrast agents. IEEE Trans Ultrason Ferroelectr Freq Control. 2001;48(6):1483–94.
 29.
Hwang JJ, Simpson DH. Two pulse technique for ultrasonic harmonic imaging. Google Patents. US Patent 5,951,478. 1999.
 30.
Simpson DH, Chin CT, Burns PN. Pulse inversion Doppler: a new method for detecting nonlinear echoes from microbubble contrast agents. IEEE Trans Ultrason Ferroelectr Freq Control. 1999;46(2):372–82.
 31.
Thomas L, Hall A. An improved wall filter for flow imaging of low velocity flow. In: 1994 Proceedings of IEEE ultrasonics symposium, vol. 3. New York: IEEE; 1994. p. 1701–4.
 32.
Yoo YM, Managuli R, Kim Y. Adaptive clutter filtering for ultrasound color flow imaging. Ultrasound Med Biol. 2003;29(9):1311–20.
 33.
Allam ME, Kinnick RR, Greenleaf JF. Isomorphism between pulsedwave Doppler ultrasound and directionofarrival estimation. II. Experimental results. IEEE Trans Ultrason Ferroelectr Freq Control. 1996;43(5):923–35.
 34.
Vaitkus PJ, Cobbold RS, Johnston KW. A new timedomain narrowband velocity estimation technique for Doppler ultrasound flow imaging. II. Comparative performance assessment. IEEE Trans Ultrason Ferroelectr Freq Control. 1998;45(4):955–71.
 35.
Ledoux LA, Brands PJ, Hoeks AP. Reduction of the clutter component in Doppler ultrasound signals based on singular value decomposition: a simulation study. Ultrasonic Imaging. 1997;19(1):1–18.
 36.
Kargel C, Hobenreich G, Trummer B, Insana MF. Adaptive clutter rejection filtering in ultrasonic strainflow imaging. IEEE Trans Ultrason Ferroelectr Freq Control. 2003;50(7):824–35.
 37.
Song F, Zhang D, Gong X. Performance evaluation of eigendecompositionbased adaptive clutter filter for color flow imaging. Ultrasonics. 2006;44:67–71.
 38.
Kruse DE, Ferrara KW. A new high resolution color flow system using an eigendecompositionbased adaptive filter for clutter rejection. IEEE Trans Ultrason Ferroelectr Freq Control. 2002;49(10):1384–99.
 39.
Alfred C, Cobbold RS. Singleensemblebased eigenprocessing methods for color flow imagingpart i. the hankelsvd filter. IEEE Trans Ultrason Ferroelectr Freq Control. 2008;55(3):559–72.
 40.
Candes EJ, SingLong CA, Trzasko JD. Unbiased risk estimates for singular value thresholding and spectral estimators. IEEE Trans Signal Process. 2013;61(19):4643–57.
 41.
BarZion A, TremblayDarveau C, Solomon O, Adam D, Eldar YC. Fast vascular ultrasound imaging with enhanced spatial resolution and background rejection. IEEE Trans Med Imaging. 2016;36(1):169–80.
 42.
Kim M, Abbey CK, Hedhli J, Dobrucki LW, Insana MF. Expanding acquisition and clutter filter dimensions for improved perfusion sensitivity. IEEE Trans Ultrason Ferroelectr Freq Control. 2017;64(10):1429–38.
 43.
Lovstakken L, Bjaerum S, Kristoffersen K, Haaverstad R, Torp H. Realtime adaptive clutter rejection filtering in color flow imaging using power method iterations. IEEE Trans Ultrason Ferroelectr Freq Control. 2006;53(9):1597–608.
 44.
Mauldin FW, Viola F, Walker WF. Complex principal components for robust motion estimation. IEEE Trans Ultrason Ferroelectr Freq Control. 2010;57(11):2437–49.
 45.
Gallippi CM, Nightingale KR, Trahey GE. Bssbased filtering of physiological and arfiinduced tissue and blood motion. Ultrasound Med Biol. 2003;29(11):1583–92.
 46.
Bayat M, Fatemi M, Alizad A. Background removal and vessel filtering of noncontrast ultrasound images of microvasculature. IEEE Trans Biomed Eng. 2019;66(3):831–42.
 47.
Adabi S, Ghavami S, Fatemi M, Alizad A. Nonlocal based denoising framework for in vivo contrastfree ultrasound microvessel imaging. Sensors. 2019;19(2):245.
 48.
Olleros GG, Stuart MB, Jensen JA, Hoyos CAV, Hansen KL. Spatiotemporal filtering for synthetic aperture slow flow imaging. In: 2018 IEEE international ultrasonics symposium (IUS). New York: IEEE; 2018. p. 1–4.
 49.
Bergqvist G, Larsson EG. The higherorder singular value decomposition: theory and an application [lecture notes]. IEEE Signal Process Mag. 2010;27(3):151–4.
 50.
Du Y, Zhang M, Alfred C, Yu W. Lowrank adaptive clutter filtering for robust ultrasound vector flow imaging. In: 2018 IEEE international ultrasonics symposium (IUS). New York: IEEE; 2018. p. 1–9.
 51.
Song P, Manduca A, Trzasko JD, Chen S. Ultrasound small vessel imaging with blockwise adaptive local clutter filtering. IEEE Trans Med Imaging. 2016;36(1):251–62.
 52.
Kim M, Zhu Y, Hedhli J, Dobrucki LW, Insana MF. Multidimensional clutter filter optimization for ultrasonic perfusion imaging. IEEE Trans Ultrason Ferroelectr Freq Control. 2018;65(11):2020–9.
 53.
Nayak R, Kumar V, Webb J, Gregory A, Fatemi M, Alizad A. Noncontrast agent based small vessel imaging of human thyroid using motion corrected power Doppler imaging. Sci Rep. 2018;8(1):15318.
 54.
Baranger J, Arnal B, Perren F, Baud O, Tanter M, Demené C. Adaptive spatiotemporal svd clutter filtering for ultrafast Doppler imaging using similarity of spatial singular vectors. IEEE Trans Med Imaging. 2018;37(7):1574–86.
 55.
Ashikuzzaman M, Belasso C, Kibria MG, Bergdahl A, Gauthier CJ, Rivaz H. Low rank and sparse decomposition of ultrasound color flow images for suppressing clutter in realtime. IEEE Trans Med Imaging. 2019.
 56.
Candès EJ, Li X, Ma Y, Wright J. Robust principal component analysis? J ACM. 2011;58(3):11.
 57.
Bayat M, Fatemi M. Concurrent clutter and noise suppression via low rank plus sparse optimization for noncontrast ultrasound flow Doppler processing in microvasculature. In: 2018 IEEE international conference on acoustics, speech and signal processing (ICASSP). New York: IEEE; 2018. p. 1080–4.
 58.
Lin Z. A review on lowrank models in data analysis. Big Data Inf Anal. 2016;1(2&3):139–61.
 59.
Davenport MA, Romberg J. An overview of lowrank matrix recovery from incomplete observations. IEEE J Sel Top Signal Process. 2016;10(4):608–22.
 60.
Lauritzen SL. Graphical models, vol. 17. Oxford: Clarendon Press; 1996.
 61.
Sobral A, Bouwmans T, Zahzah Eh. Lrslibrary: Lowrank and sparse tools for background modeling and subtraction in videos. Robust lowrank and sparse matrix decomposition: applications in image and video processing. 2016.
 62.
Zhou X, Yang C, Zhao H, Yu W. Lowrank modeling and its applications in image analysis. ACM Comput Surv. 2015;47(2):36.
 63.
Bouwmans T, Sobral A, Javed S, Jung SK, Zahzah EH. Decomposition into lowrank plus additive matrices for background/foreground separation: a review for a comparative evaluation with a largescale dataset. Comput Sci Rev. 2017;23:1–71.
 64.
Chandrasekaran V, Sanghavi S, Parrilo PA, Willsky AS. Ranksparsity incoherence for matrix decomposition. SIAM J Optim. 2011;21(2):572–96.
 65.
Li P, Yang X, Zhang D, Bian Z. Adaptive clutter filtering based on sparse component analysis in ultrasound color flow imaging. IEEE Trans Ultrason Ferroelectr Freq Control. 2008;55(7):1582–96.
 66.
Chen G, Needell D. Compressed sensing and dictionary learning. Finite Frame Theory Complet Introd Overcompleteness. 2016;73:201.
 67.
Cevher V, Duarte MF, Hegde C, Baraniuk R. Sparse signal recovery using Markov random fields. In: Adv Neural Inf Process Syst. 2009. p. 257–64.
 68.
Cevher V, Sankaranarayanan A, Duarte MF, Reddy D, Baraniuk RG, Chellappa R. Compressive sensing for background subtraction. In: European conference on computer vision. Berlin: Springer; 2008. p. 155–68.
 69.
Ramirez I, Sprechmann P, Sapiro G. Classification and clustering via dictionary learning with structured incoherence and shared features. In: 2010 IEEE computer society conference on computer vision and pattern recognition. New York: IEEE; 2010. p. 3501–3508.
 70.
Tong T, Wolz R, Coupé P, Hajnal JV, Rueckert D, Initiative ADN, et al. Segmentation of mr images via discriminative dictionary learning and sparse coding: application to hippocampus labeling. NeuroImage. 2013;76:11–23.
 71.
Sobral AC. Robust lowrank and sparse decomposition for moving object detection: from matrices to tensors. PhD thesis, Université de La Rochelle. 2017.
 72.
Bouwmans T, Javed S, Zhang H, Lin Z, Otazo R. On the applications of robust pca in image and video processing. Proc IEEE. 2018;106(8):1427–57.
 73.
Otazo R, Candes E, Sodickson DK. Lowrank plus sparse matrix decomposition for accelerated dynamic mri with separation of background and dynamic components. Magn Reson Med. 2015;73(3):1125–36.
 74.
Gao H, Yu H, Osher S, Wang G. Multienergy ct based on a prior rank, intensity and sparsity model (prism). Inverse Probl. 2011;27(11):115012.
 75.
Cohen R, Zhang Y, Solomon O, Toberman D, Taieb L, van Sloun RJ, Eldar YC. Deep convolutional robust pca with application to ultrasound imaging. In: ICASSP 20192019 IEEE international conference on acoustics, speech and signal processing (ICASSP). new York: IEEE; 2019. p. 3212–6.
 76.
Zhang H, Lin Z, Zhang C, Gao J. Relations among some lowrank subspace recovery models. Neural Comput. 2015;27(9):1915–50.
 77.
Ding X, He L, Carin L. Bayesian robust principal component analysis. IEEE Trans Image Process. 2011;20(12):3419–30.
 78.
Babacan SD, Luessi M, Molina R, Katsaggelos AK. Sparse Bayesian methods for lowrank matrix estimation. IEEE Trans Signal Process. 2012;60(8):3964–77.
 79.
Zhao Q, Meng D, Xu Z, Zuo W, Zhang L. Robust principal component analysis with complex noise. In: International conference on machine learning. 2014. p. 55–63.
 80.
Wright J, Ganesh A, Rao S, Peng Y, Ma Y. Robust principal component analysis: Exact recovery of corrupted lowrank matrices via convex optimization. In: Advances in neural information processing systems. 2009. p. 2080–8.
 81.
Liu R, Lin Z, Wei S, Su Z. Solving principal component pursuit in linear time via \(\ell_1\) filtering. 2011. arXiv preprint arXiv:1108.5359.
 82.
Mu Y, Dong J, Yuan X, Yan S. Accelerated lowrank visual recovery by random projection. In: CVPR 2011. New York: IEEE; 2011. p. 2609–16.
 83.
Lin Z, Chen M, Ma Y. The augmented lagrange multiplier method for exact recovery of corrupted lowrank matrices. 2010. arXiv preprint arXiv:1009.5055.
 84.
Anderson M, Ballard G, Demmel J, Keutzer K. Communicationavoiding qr decomposition for gpus. In: 2011 IEEE international parallel & distributed processing symposium. New York: IEEE; 2011. p. 48–58.
 85.
Tang G, Nehorai A. Robust principal component analysis based on lowrank and blocksparse matrix decomposition. In: 2011 45th annual conference on information sciences and systems. New York: IEEE; 2011. p. 1–5.
 86.
Klopp O, Lounici K, Tsybakov AB. Robust matrix completion. Probab Theory Related Fields. 2017;169(1–2):523–64.
 87.
Xu H, Caramanis C, Sanghavi S. Robust pca via outlier pursuit. In: Advances in neural information processing systems. 2010. p. 2496–504.
 88.
Kim J, He Y, Park H. Algorithms for nonnegative matrix and tensor factorizations: a unified view based on block coordinate descent framework. J Glob Optim. 2014;58(2):285–319.
 89.
Lee DD, Seung HS. Learning the parts of objects by nonnegative matrix factorization. Nature. 1999;401(6755):788–91.
 90.
Van Benthem MH, Keenan MR. Fast algorithm for the solution of largescale nonnegativityconstrained least squares problems. J Chemom J Chemom Soc. 2004;18(10):441–50.
 91.
Javed S, Narayanamurthy P, Bouwmans T, Vaswani N. Robust pca and robust subspace tracking: a comparative evaluation. In: 2018 IEEE statistical signal processing workshop (SSP). New York: IEEE; 2018. p. 836–40.
 92.
Lu C, Feng J, Chen Y, Liu W, Lin Z, Yan S. Tensor robust principal component analysis: exact recovery of corrupted lowrank tensors via convex optimization. In: Proceedings of the IEEE conference on computer vision and pattern recognition. 2016. p. 5249–57.
 93.
Zhang Z, Ely G, Aeron S, Hao N, Kilmer M. Novel methods for multilinear data completion and denoising based on tensorsvd. In: Proceedings of the IEEE conference on computer vision and pattern recognition. 2014. p. 3842–9.
 94.
Goldfarb D, Qin Z. Robust lowrank tensor recovery: models and algorithms. SIAM J Matrix Anal Appl. 2014;35(1):225–53.
 95.
Li Y, Yan J, Zhou Y, Yang J. Optimum subspace learning and error correction for tensors. In: European conference on computer vision. Berlin: Springer; 2010. p. 790–803.
 96.
Kolda TG, Bader BW. Tensor decompositions and applications. SIAM Rev. 2009;51(3):455–500.
 97.
De Lathauwer L, De Moor B, Vandewalle J. A multilinear singular value decomposition. SIAM J Matrix Anal Appl. 2000;21(4):1253–78.
 98.
De Lathauwer L, De Moor B, Vandewalle J. On the best rank-1 and rank-\((R_1, R_2, \ldots, R_N)\) approximation of higher-order tensors. SIAM J Matrix Anal Appl. 2000;21(4):1324–42.
 99.
Bayat M, Alizad A, Fatemi M. Multirate higher order singular value decomposition for enhanced noncontrast ultrasound Doppler imaging of slow flow. In: 2018 IEEE 15th international symposium on biomedical imaging (ISBI 2018). New York: IEEE; 2018. p. 1178–81.
 100.
Zhang H, Cai JF, Cheng L, Zhu J. Strongly convex programming for exact matrix completion and robust principal component analysis. 2011. arXiv preprint arXiv:1112.3946.
 101.
Chen Z. Multidimensional signal processing for sparse and lowrank problems. PhD thesis, Citeseer. 2014.
 102.
Cai JF, Candès EJ, Shen Z. A singular value thresholding algorithm for matrix completion. SIAM J Optim. 2010;20(4):1956–82.
 103.
Lin Z, Ganesh A, Wright J, Wu L, Chen M, Ma Y. Fast convex optimization algorithms for exact recovery of a corrupted lowrank matrix. Coordinated Science Laboratory Report no. UILUENG092214, DC246. 2009.
 104.
Errico C, Pierre J, Pezet S, Desailly Y, Lenkei Z, Couture O, Tanter M. Ultrafast ultrasound localization microscopy for deep superresolution vascular imaging. Nature. 2015;527(7579):499.
 105.
Chee AJ, Alfred C. Receiveroperating characteristic analysis of eigenbased clutter filters for ultrasound color flow imaging. IEEE Trans Ultrason Ferroelectr Freq Control. 2017;65(3):390–9.
 106.
Urban A, Dussaux C, Martel G, Brunner C, Mace E, Montaldo G. Realtime imaging of brain activity in freely moving rats using functional ultrasound. Nat Methods. 2015;12(9):873.
 107.
Errico C, Osmanski BF, Pezet S, Couture O, Lenkei Z, Tanter M. Transcranial functional ultrasound imaging of the brain using microbubbleenhanced ultrasensitive Doppler. NeuroImage. 2016;124:752–61.
 108.
ChristensenJeffries K, Browning RJ, Tang MX, Dunsby C, Eckersley RJ. In vivo acoustic superresolution and superresolved velocity mapping using microbubbles. IEEE Trans Med imaging. 2014;34(2):433–40.
 109.
Hintermüller M, Wu T. Robust principal component pursuit via inexact alternating minimization on matrix manifolds. J Math Imaging Vis. 2015;51(3):361–77.
 110.
Jensen JA. Field: A program for simulating ultrasound systems. In: 10th nordicbaltic conference on biomedical imaging, vol. 4, supplement 1, part 1. Citeseer; p. 351–3. 1996.
 111.
Jensen JA, Svendsen NB. Calculation of pressure fields from arbitrarily shaped, apodized, and excited ultrasound transducers. IEEE Trans Ultrason Ferroelectr Freq Control. 1992;39(2):262–7.
 112.
Ashikuzzaman M, Gauthier CJ, Rivaz H. Global ultrasound elastography in spatial and temporal domains. IEEE Trans Ultrason Ferroelectr Freq Control. 2019;66(5):876–87.
 113.
Freund RM, Grigas P, Mazumder R. An extended FrankWolfe method with “inface” directions, and its application to lowrank matrix completion. SIAM J Optim. 2017;27(1):319–46.
 114.
Nayak R, Fatemi M, Alizad A. Adaptive background noise bias suppression in contrastfree ultrasound microvascular imaging. Phys Med Biol. 2019;64(24):245015.
Acknowledgements
This work was supported in part by NSERC Discovery grant RGPIN 04136. The authors also wish to acknowledge the LRSLibrary, developed by Andrews Sobral, for collecting and sharing the open-source DLSM algorithms.
Funding
This work was supported in part by the Natural Sciences and Engineering Research Council of Canada (NSERC) Discovery Grant RGPIN 04136.
Author information
Affiliations
Contributions
NZ designed the experiments, analyzed the results, and generated the simulation data. MA provided the phantom and in vivo rat data and gave important comments for the revision of the manuscript. HR contributed to the completion of the manuscript and provided critical guidance. All authors read and approved the final manuscript.
Corresponding author
Ethics declarations
Ethics approval and consent to participate
The in vivo experiment on the rat was carried out at the Animal Care Facility (ACF) of Concordia University. All procedures were approved by the Animal Ethics Committee of Concordia University \((\#30000259)\) and were conducted in accordance with guidelines of the Canadian Council on Animal Care.
Consent for publication
Not applicable.
Competing interests
The authors declare that they have no competing interests.
Additional information
Publisher’s Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Supplementary information
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated in a credit line to the data.
About this article
Cite this article
Zhang, N., Ashikuzzaman, M. & Rivaz, H. Clutter suppression in ultrasound: performance evaluation and review of lowrank and sparse matrix decomposition methods. BioMed Eng OnLine 19, 37 (2020). https://doi.org/10.1186/s1293802000778z
Received:
Accepted:
Published:
Keywords
 Ultrasound color flow imaging
 Clutter suppression
 Vessel visualization
 Lowrank and sparse matrix decomposition