Evaluation of lung involvement in COVID-19 pneumonia based on ultrasound images



Lung ultrasound (LUS) can be an important imaging tool for the diagnosis and assessment of lung involvement. Sonograms have been shown to reflect damage to a patient's lungs, so correctly classifying and scoring a patient's sonograms can be used to assess lung involvement.


The purpose of this study was to establish a lung involvement assessment model based on deep learning. A novel multimodal channel and receptive field attention network combined with ResNeXt (MCRFNet) was proposed to classify sonograms; the network automatically fuses shallow features and determines the importance of different channels and receptive fields. Finally, sonogram classes were transformed into scores to evaluate lung involvement from the initial diagnosis to rehabilitation.

Results and conclusion

Using multicenter and multimodal ultrasound data from 104 patients, the diagnostic model achieved 94.39% accuracy, 82.28% precision, 76.27% sensitivity, and 96.44% specificity. The lung involvement severity and the trend of COVID-19 pneumonia were evaluated quantitatively.


Background

As of February 26, 2021, 113,509,086 confirmed cases of novel coronavirus 2019 (COVID-19) disease and 2,514,776 COVID-19-related deaths had been reported worldwide, across 212 countries [1]. The World Health Organization has declared COVID-19 a global pandemic [2]. Given the increasing number of cases, it is necessary to rapidly diagnose patients and assess the severity of the disease.

The current gold standard for the diagnosis of COVID-19 is reverse transcription-polymerase chain reaction (RT-PCR) analysis of respiratory specimens. However, because sample collection via nasopharyngeal swabs is error-prone, the false-negative rate is high [3]. Delays in diagnosing COVID-19 allow the disease to spread and a patient's condition to worsen. Computed tomography (CT) is the main method for diagnosing and evaluating the severity of COVID-19 pneumonia [4, 5]. However, CT has the following problems in lung-related diagnosis. First, CT-based diagnosis is costly and involves radiation [6]. CT is difficult to perform in unstable and critically ill patients, and patients who are sensitive to radiation, such as pregnant women, must avoid the radiation exposure of CT. Second, it is of great clinical significance to determine whether a patient with COVID-19 pneumonia has pulmonary airway obstruction, but CT can only obtain static images and cannot evaluate the movement of gas in the bronchi and bronchioles in real time.

As a radiation-free medical imaging method, ultrasound is highly sensitive for the diagnosis of various lung diseases [7]. Studies have shown that lung ultrasonography (LUS) can be an important imaging tool for diagnosing common pneumonia and assessing the degree of lung involvement [8]. For example, Liu et al. demonstrated the effectiveness of bedside LUS in diagnosing community-acquired pneumonia: with CT as the gold standard, LUS reached 96.1% accuracy, and its diagnostic efficiency far exceeds that of chest X-ray [9]. LUS is used as an effective imaging method for diagnosing common pneumonia in many institutions. Its advantages are that it is inexpensive, radiation-free, readily available, and usable at the bedside, which is especially valuable for patients with severe pneumonia [10, 11].

Current studies [12,13,14,15,16,17] focus mainly on image diagnosis and segmentation tasks. Few researchers have examined the impact of COVID-19 on internal organ damage, especially based on ultrasound images, although internal organ damage is equally important for understanding COVID-19. The current challenge of COVID-19 is not diagnosis, but assessing severity and guiding medication. In general, the limitations and challenges of current research are as follows: (1) validation of methods on multicenter data; (2) sufficient utilization of ultrasound data; and (3) evaluation of the impact of COVID-19 on internal organs and bodily functions.

To address these issues, we propose a novel multimodal channel and receptive field attention network combined with ResNeXt (MCRFNet) for assessing lung damage in COVID-19 patients. The network automatically fuses shallow features and determines the importance of different channels and receptive fields to classify sonograms correctly. Sonogram classes are then transformed into scores to evaluate lung involvement from the initial diagnosis to rehabilitation. The proposed method can help doctors combine these scores with other indicators to evaluate a patient's lung involvement, and the scoring system can help improve the standard of care for patients with COVID-19.


Results

MCRFNet classification accuracy

We employed four widely used metrics to quantitatively evaluate COVID-19 lung sonogram classification performance: accuracy (Acc), precision (PP), sensitivity (Se), and specificity (Sp) [18, 19]. In general, better classification performance yields higher Acc, PP, Se, and Sp. Acc describes the proportion of correctly classified images and is expressed as follows:

$${\text{Acc}} = \frac{{\text{TP + TN}}}{{\text{TP + TN + FN + FP}}} \times 100\%$$

where TP, TN, FP, and FN represent the number of true-positive, true-negative, false-positive, and false-negative predictions, respectively. PP measures the proportion of true positives among all positive predictions and is defined as:

$${\text{PP}} = \frac{{{\text{TP}}}}{{{\text{TP + FP}}}} \times 100\%$$

Se measures the proportion of actual positives that are correctly predicted, based on the true-positive and false-negative counts, and is defined as:

$${\text{Se}} = \frac{{{\text{TP}}}}{{\text{TP + FN}}} \times 100\%$$

Finally, Sp measures the proportion of actual negatives that are correctly predicted, based on the true-negative and false-positive counts, and is defined as:

$${\text{Sp}} = \frac{{{\text{TN}}}}{{\text{TN + FP}}} \times 100\%$$
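The four metrics can be computed directly from the confusion counts. A minimal sketch (the function name and the example counts below are illustrative, not from the paper):

```python
def classification_metrics(tp, tn, fp, fn):
    """Acc, PP, Se, Sp (in percent) from confusion-matrix counts."""
    acc = (tp + tn) / (tp + tn + fp + fn) * 100  # overall accuracy
    pp = tp / (tp + fp) * 100                    # precision
    se = tp / (tp + fn) * 100                    # sensitivity (recall)
    sp = tn / (tn + fp) * 100                    # specificity
    return acc, pp, se, sp
```

For example, `classification_metrics(45, 45, 5, 5)` returns 90% for all four metrics.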

Table 1 summarizes the method comparison and ablation experiment. We evaluated classic deep neural network classifiers and several attention-oriented architectures. Our proposed MCRFNet performs noticeably better than the other models, achieving Acc, PP, Se, and Sp values of 94.39%, 82.28%, 76.27%, and 96.44%, respectively, on the three datasets. Unlike the classic models, our model is specially designed for COVID-19 lung sonograms, and attention-guided architectures are advantageous for ultrasound images. The attention-oriented models outperform the classic models; however, no previous model fused multimodal information or combined attention mechanisms, which limited their classification performance.

Table 1 Method comparison and ablation experiment on different datasets

Table 1 also compares models trained with and without the CRFA module and fusion module. Specifically, the model trained with the CRFA and fusion modules yields improvements of 0.028 in Acc, 0.0636 in PP, 0.0537 in Se, and 0.024 in Sp, clearly surpassing the baseline model.

To intuitively understand the fusion module and the channel and receptive field attention capability of MCRFNet, we used the Grad-CAM++ method to visualize the class activation mapping of the backbone and our proposed network [25]. Grad-CAM++ is commonly used to locate discriminative feature regions, which makes the model interpretable. As shown in Fig. 1, brightly colored areas contribute the most to the classification. The results show that the backbone network attends to the B-line over a wide area, whereas our proposed network focuses more precisely on the line-shaped or lung consolidation regions. The samples are recognized well by ResNeXt, but there is still room for improvement: MCRFNet can adaptively select the appropriate modal channel and suitable convolution kernel size, and therefore performs better than ResNeXt owing to the MCRF module.

Fig. 1

The Grad-CAM++ visualization results. The blue, red, and green backgrounds represent the Mindray, Stork, and Philips datasets, respectively. a Original input (the actual data are not marked with a red circle); b ResNeXt; c MCRFNet. The red circle marks the lung consolidation area

The normalized confusion matrices for classification on the three datasets by our proposed method are provided in Fig. 2. We observe that the B2-line, B1&B2-line, and B1-line are partly misclassified as lower or higher severity levels, and misclassification is most pronounced when all three datasets are combined. The reasons for the performance degradation on multiple datasets are discussed in the "Discussion" section. In short, the classification accuracy for the A-line and consolidation on the three datasets reaches nearly 100%. Misclassification generally occurs for the two fusion categories: A&B-line and B1&B2-line.

Fig. 2

The normalized confusion matrices of classification on three datasets by MCRFNet. The abscissa is the predicted label, and the ordinate is the true label

Distribution of datasets

Figure 3 depicts the distribution of expert manual classification labels for 5704 ultrasound images of 108 cases in three datasets. Specifically, the Stork, Mindray, and Philips datasets contain 2541 images of 43 cases, 2985 images of 51 cases, and 178 images of 14 cases, respectively. We merged the first two and all three datasets into two new datasets (Stork & Mindray; Stork & Mindray & Philips) to avoid imbalanced samples. The three resulting datasets (Stork; Stork & Mindray; Stork & Mindray & Philips) were divided into training and test sets at a ratio of 2:1 for each category. The training sets of the three datasets contain 29, 63, and 72 cases, and the test sets contain 14, 31, and 36 cases, respectively. We mitigated overfitting by random image flipping, an early stopping strategy, dropout, and L2 regularization.
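The per-category 2:1 split described above can be sketched as follows; the function name and the fixed seed are illustrative, not the authors' code:

```python
import random

def stratified_split(labels, train_frac=2/3, seed=0):
    """Split sample indices train:test = 2:1 within each category."""
    rng = random.Random(seed)
    by_class = {}
    for idx, lab in enumerate(labels):
        by_class.setdefault(lab, []).append(idx)
    train, test = [], []
    for idxs in by_class.values():
        rng.shuffle(idxs)                    # randomize within the class
        cut = round(len(idxs) * train_frac)  # 2/3 of each class to training
        train += idxs[:cut]
        test += idxs[cut:]
    return sorted(train), sorted(test)
```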

Fig. 3

Distribution of expert manual labels of 5704 ultrasound images in three datasets

Evaluation of the trend in the degree of lung involvement

After obtaining the trained MCRFNet, we independently tested videos of 8 patients (additionally collected data) who were examined multiple times (4 or more) from the initial diagnosis to rehabilitation, and we performed classification according to the method in the Establishment of Scoring Standards section. CO2 partial pressure (PCO2) is a useful indicator of respiratory function and is closely related to acid–base homeostasis, reflecting the amount of acid in the blood. The correlation between the obtained scores and PCO2 was assessed by Pearson correlation analysis (Fig. 4); the Pearson correlation coefficient was 0.73 (P < 0.001). In the graph, darker colors indicate a higher frequency of occurrence, and MCRFNet scores in the range of 2.7–3.4 show the strongest correlation with PCO2.
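The Pearson analysis can be reproduced in outline with NumPy; the paired score and PCO2 values below are fabricated stand-ins for illustration, not the study data:

```python
import numpy as np

# Hypothetical paired observations: per-examination MCRFNet scores and PCO2 (mmHg).
scores = np.array([0.5, 1.2, 2.0, 2.8, 3.1, 3.4])
pco2 = np.array([38.0, 41.0, 46.0, 55.0, 60.0, 63.0])

# Pearson correlation coefficient between the two variables.
r = np.corrcoef(scores, pco2)[0, 1]
```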

Fig. 4

Scatter plot between the classification score of MCRFNet and CO2 partial pressure (PCO2)

In addition, the MCRFNet scores and PCO2 of two patients with multiple examinations are shown in Fig. 5. Following reference [26], we used the parasternal line (PSL), anterior axillary line (AAL), and posterior axillary line (PAL) to divide each side of the lungs into four areas (L1–L4 and R1–R4). Only one picture is shown in the figure, but in the actual scoring, we averaged the scores of multiple pictures after framing to obtain the specific score shown.

Fig. 5

Two patients had multiple examinations of the MCRFNet score and PCO2 from the initial diagnosis to rehabilitation

In short, our classification and scoring system not only reflects a patient's degree of lung involvement but also helps doctors combine this score with other indicators to evaluate the patient's lung disease and overall condition. The scoring system can improve the standard of care for patients with COVID-19 pneumonia and support clinical decision-making throughout the management cascade.


Discussion

The current challenge of COVID-19 is not its diagnosis, but its impact on internal organs. Recently, an increasing number of COVID-19-related diagnosis and segmentation studies have been published, but the damaging effects of COVID-19 on internal organs are more important. In this paper, we proposed a novel multimodal channel and receptive field attention network (MCRFNet) for assessing lung damage in COVID-19 patients.

Rouby et al. assessed lung involvement by scoring the sonograms of eight lung areas [27], but those sonograms must be identified manually by doctors, which is time-consuming and labor-intensive. There are also studies on the automatic classification of sonograms, but most only detect the B-line [28, 29]. Our MCRFNet achieves fully automatic assessment of lung involvement in COVID-19 patients. The lung ultrasound images of these patients were classified into six sonogram types, and the classification results were quantitatively scored to obtain total scores for the 8 regions. A correlation analysis was then performed between the scores and PCO2, the indicator most relevant to lung involvement, yielding a Pearson correlation coefficient of 0.73 and indicating that our classification scores reflect the lung involvement of COVID-19 patients. This helps clinicians choose the correct treatment method based on severity.

The reason for the performance decline after adding the Philips dataset is illustrated in Fig. 6. The Philips dataset comes from three different machines in two centers, and because its contrast and resolution differ from those of the Stork and Mindray datasets, our model misclassified some images. This indicates that the robustness and cross-domain adaptability of our model are imperfect, which is a direction for future improvement. To make the classification model more robust to this problem, we used traditional methods to extract shallow features that are insensitive to imaging parameters and observed good performance.

Fig. 6

Comparison of B1-line of Philips dataset from three different machines

Some attempts were made to verify the performance of the model. We tried training with data from a single center and using data from another center for independent testing. However, the Stork dataset has only 58 consolidation images; if we use the trained Stork model to predict Mindray's consolidation data (1136 images), the accuracy of independent testing drops greatly for this category, and vice versa.

In future work, we will consider applying the lung consolidation attention area of our model to segment the lung consolidation region. The data in this experiment were obtained by oblique scanning; our model may also be applied to longitudinal scans to make it robust to the scanning method. We will also try using the patient's ultrasound video as input to directly assess lung involvement through a three-dimensional convolutional neural network (3D-CNN) [30] or long short-term memory (LSTM) network [31].


Conclusion

In this paper, we proposed a novel classification network named MCRFNet that utilizes multimodal fusion and channel and receptive field attention to classify lung sonograms. In addition, we scored the predicted categories to reflect the degree of lung involvement, helping doctors combine this score with other indicators to assess disease trends in COVID-19 patients.


Methods

Ultrasound data acquisition

In ultrasound imaging, the degree of lung involvement is related to several typical sonograms. The A-line is a horizontal reverberation artifact of the pleura caused by multiple reflections, representing the normal lung surface [32]. The B-line represents the interlobular septum, which is denoted by a discrete laser-like vertical hyperechoic artifact that spreads to the end of the screen, and it can be represented as the B1-line [33]. The fusion B-line is a sign of pulmonary interstitial syndrome, which shows a large area filled with the B-line in the intercostal space, and it can be represented as the B2-line [26]. Pulmonary consolidation is characterized by a liver-like echo structure of the lung parenchyma, with a thickness of at least 15 mm [27], as shown in Fig. 7.

Fig. 7

Different ultrasound sonograms in lung examination

We used three datasets from four medical centers to build and evaluate the model: ultrasound images collected with the Stork ultrasound system (Stork Healthcare Co., Ltd., Chengdu, China) at Ruijin Hospital, the Mindray ultrasound system (Mindray Medical International Limited, Shenzhen, China) at Shanghai Public Health Center, and the Philips ultrasound system (Philips Medical Systems, Best, the Netherlands) at Wuhan Sixth People's Hospital and Hangzhou Infectious Disease Hospital. The Stork dataset was collected with an H35C (2–5 MHz) convex array transducer, the Mindray dataset with an SC5-1 (1–5 MHz) convex array transducer, and the Philips dataset with Epiq 5 and Epiq 7 systems using a C5-1 (1–5 MHz) convex array transducer.

Multimodal generation and fusion

According to doctors’ experience in recognizing sonograms, parallel echo rays of the A-line, beam-like echo rays of the B-line, and the accumulation of exudate of lung consolidation are used as markers for classification. The gradient field is highly sensitive to the parallel echo rays of the A-line, and K-means clustering can better highlight the beam-like echo rays of the B-line [28]. As shown in Fig. 8a, we produced the gradient field and K-means clustering images as two new modalities for extracting shallow features.
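The two generated modalities can be sketched with plain NumPy; the gradient-magnitude form and the k-means settings (number of clusters, iteration count) are assumptions, since the paper does not specify them:

```python
import numpy as np

def gradient_field(img):
    """Gradient-magnitude modality: sensitive to the parallel A-line echoes."""
    gy, gx = np.gradient(img.astype(float))
    return np.hypot(gx, gy)

def kmeans_modality(img, k=3, iters=10, seed=0):
    """Intensity k-means label map: highlights the beam-like B-line echoes."""
    rng = np.random.default_rng(seed)
    pixels = img.reshape(-1, 1).astype(float)
    centers = rng.choice(pixels.ravel(), size=k, replace=False).reshape(k, 1)
    for _ in range(iters):
        labels = np.argmin(np.abs(pixels - centers.T), axis=1)  # nearest center
        for c in range(k):
            if np.any(labels == c):
                centers[c] = pixels[labels == c].mean()         # update center
    return labels.reshape(img.shape)
```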

Fig. 8

a Generated gradient field and K-means clustering modalities. The largest picture represents the most sensitive modal. b The proposed fusion module. c Details of CA and RFA module. d The MCRF block is integrated with a ResBlock in ResNeXt. The fusion module is only used in the first ResNeXt layer

There are many methods to fuse multimodal inputs. Concatenation-based fusion is an intuitive method [34], but it is better suited to situations where each modality is equally important for classification. Extracting features first and then concatenating them is also popular [35], but its parameter count and GPU memory demands limit its application. In this paper, we propose a new fusion module, shown in Fig. 8b, which uses minimal network parameters to achieve automatic multimodal weight distribution, thereby highlighting the information of the other two modalities on the original image. Two 1 × 1 convolutions update the weights of the K-means modality and the gradient field modality in the simplest way. After elementwise summation, the result is highlighted by elementwise multiplication with the original image and finally added to the original image to obtain the fusion input.
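The fusion arithmetic just described can be written out functionally; the scalar weights below stand in for the two learned 1 × 1 convolutions (a single-channel sketch, not the trained module):

```python
import numpy as np

def fuse(original, kmeans_mod, grad_mod, w_k=0.5, w_g=0.5):
    """Fusion module sketch: weight the two generated modalities,
    highlight them on the original image, and add back the original."""
    weighted = w_k * kmeans_mod + w_g * grad_mod  # 1x1 convs + elementwise sum
    highlighted = original * weighted             # elementwise multiplication
    return original + highlighted                 # residual add -> fusion input
```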

ResNeXt with CRF attention block for classification

Shallow features were extracted by the traditional methods in the "Multimodal generation and fusion" section. To extract deep features more effectively, we chose the deep and wide ResNeXt as the backbone network for classification. ResNeXt [22] combines ResNet [21] and Inception [36] and improves accuracy through wider or deeper networks. Each of its blocks introduces cardinality as a measurable dimension in addition to width and depth. It inherits ResNet's strategy of repeating layers but increases the number of paths, using a split-transform-merge strategy in a simple and scalable manner. ResNeXt with the CRF attention building block is shown in Fig. 8d; our network replaces the building block in ResNeXt with the CRF attention building block. In detail, the network has one first layer and three residual layers; the first layer contains one basic CRFA building block, and each residual layer contains three.

The CRF attention module comprises channelwise and receptive field attention modules, denoted CA and RFA, respectively (Fig. 8c). The CA module assists the learning of layer-specific features and explores channelwise dependencies to select useful features. Specifically, given an intermediate input feature channel set \(U \in {R^{H \times W \times C}}\), a squeeze operation \(F_{sq} (U)\), that is, global average pooling (GAP), encodes the entire spatial feature on each channel as a global feature:

$$F_{sq} (U) = \frac{1}{W \times H}\sum\limits_{i = 1}^{W} {\sum\limits_{j = 1}^{H} {U(i,j), \, U \in R^{W \times H \times C} } }$$

The squeeze operation obtains the global description feature, and another operation is required to capture the relationship between the channels, namely, the excitation operation \(F_{ex} (F_{sq} (U))\):

$$\begin{gathered} F_{ex} (F_{sq} (U)) = \sigma \left( {W_{2} \delta \left( {W_{1} F_{sq} \left( U \right)} \right)} \right), \hfill \\ \delta \left( x \right) = \max \left( {0,x} \right),\sigma (x) = \frac{1}{{1 + e^{ - x} }} \hfill \\ \end{gathered}$$

where \(\delta ( \cdot )\) and \(\sigma ( \cdot )\) are the ReLU activation and sigmoid function, respectively; ReLU is a ramp function with gradient one for positive inputs and zero for negative inputs, and the sigmoid function maps its input into (0, 1). \(W_{1} \in R^{{\frac{c}{16} \times c}}\) and \(W_{2} \in R^{{c \times \frac{c}{16}}}\) are the learned weights of the two fully connected layers. The excitation operation can learn the nonlinear relationship between channels. Finally, the learned activation value of each channel (sigmoid activation) is multiplied by the original feature on \(U\):

$$\hat{U}_{C} = F(U,F_{ex} (F_{sq} (U))) = U \cdot F_{ex} (F_{sq} (U)),U \in R^{W \times H \times C}$$
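The CA path (squeeze, excitation, rescale) can be sketched in NumPy; the reduction ratio of 16 follows the text, and the weight matrices here are explicit arguments rather than trained parameters:

```python
import numpy as np

def ca_module(U, W1, W2):
    """Channel attention on U (H x W x C).
    W1: (C//16, C) and W2: (C, C//16) are the two FC-layer weights."""
    z = U.mean(axis=(0, 1))               # squeeze: global average pooling -> (C,)
    s = np.maximum(0.0, W1 @ z)           # excitation: FC + ReLU
    s = 1.0 / (1.0 + np.exp(-(W2 @ s)))   # FC + sigmoid -> channel weights in (0, 1)
    return U * s                          # rescale each channel of U
```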

For the RFA module, given the same input feature channel set \(U \in R^{H \times W \times C}\), we first conduct two transformations \(\tilde{F}:U \to \tilde{U} \in R^{H \times W \times C}\) and \(\hat{F}:U \to \hat{U} \in R^{H \times W \times C}\) with kernel sizes of 3 and 5, respectively. Then, the results of the two branches are combined by elementwise summation:

$$U = \tilde{U} + \hat{U}$$

For the output features \(\tilde{U}\) and \(\hat{U}\), squeeze and excitation are performed, respectively, as in Eq. 2. Additionally, we used soft attention across channels to select different spatial scales of information, which is guided by the compact feature descriptor z:

$$\hat{U}_{RF} = \frac{{e^{{A_{c} z}} }}{{e^{{A_{c} z}} + e^{{B_{c} z}} }}F_{ex} (F_{sq} (\tilde{U})) + (1 - \frac{{e^{{A_{c} z}} }}{{e^{{A_{c} z}} + e^{{B_{c} z}} }})F_{ex} (F_{sq} (\hat{U})),U \in R^{W \times H \times C}$$

where \(A,B \in R^{{C \times \frac{C}{16}}}\). The final feature map of RFA is obtained through the attention weights on the various kernels as in the above equation.
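The soft attention across the two branch excitations can be sketched per channel; shapes follow the selective-kernel convention with \(A, B \in R^{C \times C/16}\), and the inputs below are illustrative:

```python
import numpy as np

def rfa_combine(s_tilde, s_hat, A, B, z):
    """Two-way softmax over the 3x3 and 5x5 branch excitations.
    s_tilde, s_hat: branch excitation vectors, shape (C,);
    A, B: attention weights, shape (C, C//16); z: compact descriptor."""
    a = np.exp(A @ z)
    b = np.exp(B @ z)
    w = a / (a + b)                        # per-channel softmax weight
    return w * s_tilde + (1.0 - w) * s_hat
```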

With the CA and RFA modules in place, their results are integrated with an add operation, as shown in Fig. 8d:

$$\overline{U} = \hat{U}_{C} + \hat{U}_{RF}$$

The detailed procedure is as follows: (1) randomly extract the six most common sonogram types (Fig. 3) from the training set in equal proportions to avoid imbalanced samples and ensure that each category is learned; (2) augment the data by rotation and normalize the image intensities; (3) select the classifier with the best performance and test it on the test set to obtain the corresponding prediction results.

Establishment of scoring standards

Using the trained MCRFNet, we predicted each patient's per-region ultrasound videos from multiple examinations and classified and scored the sonograms according to [37]. The A-line indicates normal ventilation, with a score of 0; the A&B-line indicates mild loss of lung ventilation, with a score of 1; the B1-line indicates moderate loss of lung ventilation, with a score of 2; the B1&B2-line indicates severe loss of lung ventilation, with a score of 2.5; the B2-line indicates very severe loss of lung ventilation, with a score of 3; and consolidation indicates a solid lung change characterized by dynamic air bronchogram signs, with a score of 4. After quantifying the classification results, the scores are summed and divided by the number of frames to obtain the final lung severity score, which ranges from 0 to 4.
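The class-to-score mapping above, and the per-video averaging, can be written directly (the function name is illustrative):

```python
# Scores assigned to each sonogram class, as in the scoring standard [37].
SCORES = {"A-line": 0, "A&B-line": 1, "B1-line": 2,
          "B1&B2-line": 2.5, "B2-line": 3, "consolidation": 4}

def severity_score(predicted_classes):
    """Average the per-frame class scores of a video -> severity in [0, 4]."""
    return sum(SCORES[c] for c in predicted_classes) / len(predicted_classes)
```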

Training strategy

For the Stork, Mindray, Stork & Mindray, and Stork & Mindray & Philips datasets, we used independent testing to verify the performance of the classifier. All images were resized to 128 × 128, and each training batch consisted of 8 randomly selected images. We regularized the model with dropout during training, and the network parameters were trained with a momentum optimizer at an initial learning rate of 0.1, which was divided by 10 every 30 epochs, stochastically minimizing the cross-entropy between annotated labels and predictions. Our experiments were implemented in TensorFlow on a PC with an Intel Xeon E5 CPU, 64 GB of RAM, and an Nvidia TITAN Xp 12 GB GPU.
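The step learning-rate schedule described above amounts to the following (a sketch; the paper does not give its optimizer code):

```python
def learning_rate(epoch, base_lr=0.1, drop_every=30, factor=10):
    """Step schedule: start at 0.1 and divide by 10 every 30 epochs."""
    return base_lr / factor ** (epoch // drop_every)
```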

Availability of data and materials

The datasets used and analyzed during the current study are available from the corresponding author on reasonable request.



Abbreviations

LUS: Lung ultrasound
COVID-19: Novel coronavirus 2019
RT-PCR: Reverse transcription-polymerase chain reaction
CT: Computed tomography
VAP: Ventilator-associated pneumonia
PCO2: CO2 partial pressure
PSL: Parasternal line
POI: Points of interest
AAL: Anterior axillary line
PAL: Posterior axillary line


  1. Johns Hopkins University of Medicine Coronavirus Resource Center. COVID-19 case tracker: follow global cases and trends. Updated daily. Accessed 2020.

  2. World Health Organization. Coronavirus disease (COVID-19) pandemic. Accessed 2020.

  3. Hao W, Li M. Clinical diagnostic value of CT imaging in COVID-19 with multiple negative RT-PCR testing. Travel Med Infect Dis. 2020.


  4. Bernheim A, Mei X, Huang M, Yang Y, Fayad ZA, Zhang N, Diao K, Lin B, Zhu X, Li K, Li S, Shan H, Jacobi A, Chung M. Chest CT findings in coronavirus disease-19 (COVID-19): relationship to duration of infection. Radiology. 2020;200:463.


  5. Pan F, Ye T, Sun P, Gui S, Liang B, Li L, Zheng D, Wang J, Hesketh RL, Yang L, Zheng C. Time course of lung changes at chest CT during recovery from coronavirus disease 2019 (COVID-19). Radiology. 2020;295(3):715–21.


  6. Self WH, Courtney DM, McNaughton CD, Wunderink RG, Kline JA. High discordance of chest x-ray and computed tomography for detection of pulmonary opacities in ED patients: implications for diagnosing pneumonia. Am J Emerg Med. 2013;31(2):401–5.

  7. Parra A, Perez P, Serra J, Roca O, Masclans JR, Rello J. Pneumonia and lung ultrasound in the intensive care unit. Chest. 2014;145:3.


  8. Aghdashi M, Aghdashi M, Broofeh B, Mohammadi A. Diagnostic performances of high resolution trans-thoracic lung ultrasonography in pulmonary alveoli-interstitial involvement of rheumatoid lung disease. Int J Clin Exp Med. 2013;6(7):562–6.


  9. Liu XL, Lian R, Tao YK, Gu CD, Zhang GQ. Lung ultrasonography: an effective way to diagnose community-acquired pneumonia. Emerg Med J. 2015;32(6):433–8.


  10. Berlet T, Etter R, Fehr T, Berger D, Sendi P, Merz TM. Sonographic patterns of lung consolidation in mechanically ventilated patients with and without ventilator-associated pneumonia: a prospective cohort study. J Crit Care. 2015;30(2):327–33.


  11. Xia Y, Ying Y, Wang S, Li W, Shen H. Effectiveness of lung ultrasonography for diagnosis of pneumonia in adults: a systematic review and meta-analysis. J Thorac Dis. 2016;8(10):2822–31.


  12. Apostolopoulos ID, Mpesiana TA. Covid-19: automatic detection from X-ray images utilizing transfer learning with convolutional neural networks. Phys Eng Sci Med. 2020;43(2):635–40.


  13. El Asnaoui K, Chawki Y, Idri A. Automated methods for detection and classification of pneumonia based on X-ray images using deep learning. arXiv e-prints 2020: arXiv:2003.14363.

  14. Zheng C, Deng X, Fu Q, Zhou Q, Feng J, Ma H, Liu W, Wang X. Deep learning-based detection for COVID-19 from chest CT using Weak Label. IEEE Trans Med Imag. 2020.


  15. Chen J, Wu L, Zhang J, Zhang L, Gong D, Zhao Y, Hu S, Wang Y, Hu X, Zheng B, Zhang K, Wu H, Dong Z, Xu Y, Zhu Y, Chen X, Yu L, Yu H. Deep learning-based model for detecting 2019 novel coronavirus pneumonia on high-resolution computed tomography: a prospective study in 27 patients. Sci Reports. 2020;10:19196.


  16. Shan F, Gao Y, Wang J, Shi W, Shi N, Han M, Xue Z, Shen D, Shi Y. Lung infection quantification of COVID-19 in CT images with deep learning. Multimed Tools Appl. 2020;3:1–16.


  17. Gaál G, Maga B, Lukács A. Attention U-Net Based Adversarial Architectures for Chest X-ray Lung Segmentation. arXiv e-prints 2020: arXiv:2003.10304.

  18. Gupta V, Mittal M. R-Peak Detection in ECG Signal Using Yule-Walker and Principal Component Analysis. IETE J Res. 2019.


  19. Gupta V, Mittal M. QRS complex detection using STFT, Chaos analysis, and PCA in standard and real-time ECG databases. J Instit Eng. 2019;100:87.


  20. Simonyan K, Zisserman A. Very Deep Convolutional Networks for Large-Scale Image Recognition. arXiv e-prints 2014: arXiv:1409.1556.

  21. He KM, Zhang XY, Ren SQ, Sun J. Deep Residual Learning for Image Recognition. Ieee Conference on Computer Vision and Pattern Recognition (Cvpr). 2016;2016:770–8.


  22. Xie SN, Girshick R, Dollar P, Tu ZW, He KM. Aggregated residual transformations for deep neural networks. Proc Cvpr Ieee. 2017;20:5987–95.


  23. Hu J, Shen L, Albanie S, Sun G, Wu E. Squeeze-and-excitation networks. IEEE Trans Pattern Anal Mach Intell. 2020;42(8):2011–23.


  24. Li X, Wang W, Hu X, Yang J. Selective Kernel Networks. arXiv e-prints 2019: arXiv:1903.06586.

  25. Chattopadhay A, Sarkar A, Howlader P, Balasubramanian VN. Grad-CAM++: generalized gradient-based visual explanations for deep convolutional networks. In: 2018 IEEE winter conference on applications of computer vision (WACV)2018; p. 839–847.

  26. Volpicelli G, Elbarbary M, Blaivas M, Lichtenstein DA, Mathis G, Kirkpatrick AW, Melniker L, Gargani L, Noble VE, Via G, Dean A, Tsung JW, Soldati G, Copetti R, Bouhemad B, Reissig A, Agricola E, Rouby JJ, Arbelot C, Liteplo A, Sargsyan A, Silva F, Hoppmann R, Breitkreutz R, Seibel A, Neri L, Storti E, Petrovic T, Icc-Lus I-L. International evidence-based recommendations for point-of-care lung ultrasound. Intens Care Med. 2012;38(4):577–91.


  27. Rouby JJ, Arbelot C, Gao Y, Zhang M, Lv J, An Y, Wang C, Bin D, Barbas CSV, Dexheimer Neto FL, Prior Caltabeloti F, Lima E, Cebey A, Perbet S, Constantin JM. Training for lung ultrasound score measurement in critically ill patients. Am J Respir Crit Care Med. 2018;5:87.


  28. Brusasco C, Santori G, Bruzzo E, Tro R, Robba C, Tavazzi G, Guarracino F, Forfori F, Boccacci P, Corradi F. Quantitative lung ultrasonography: a putative new algorithm for automatic detection and quantification of B-lines. Crit Care. 2019;23(1):288.


  29. van Sloun RJG, Demi L. Localizing B-lines in lung ultrasonography by weakly supervised deep learning, in-vivo results. IEEE J Biomed Health Inform. 2020;24(4):957–64.


  30. Zhu Y, Lan Z, Newsam S, Hauptmann A. Hidden two-stream convolutional networks for action recognition. Berlin: Springer; 2019. p. 363–78.


  31. Ng J, Hausknecht M, Vijayanarasimhan S, Vinyals O, Monga R, Toderici G. Beyond short snippets: deep networks for video classification. Ithaca: Cornell Univ Lab; 2015.


  32. Liu J, Copetti R, Sorantin E, Lovrenski J, Rodriguez-Fanjul J, Kurepa D, Feng X, Cattarossi L, Zhang H, Hwang M, Yeh TF, Lipener Y, Lodha A, Wang JQ, Cao HY, Hu CB, Lyu GR, Qiu XR, Jia LQ, Wang XM, Ren XL, Guo JY, Gao YQ, Li JJ, Liu Y, Fu W, Wang Y, Lu ZL, Wang HW, Shang LL. Protocol and guidelines for point-of-care lung ultrasound in diagnosing neonatal pulmonary diseases based on international expert consensus. J Vis Exp. 2019;14:5.


  33. Francisco MJN, Rahal AJ, Vieira FA, Silva PS, Funari MB. Advances in lung ultrasound. Einstein (Sao Paulo). 2016;14(3):443–8.


  34. Hong D, Gao L, Yokoya N, Yao J, Chanussot J, Du Q, Zhang B. More diverse means better: multimodal deep learning meets remote sensing imagery classification. arXiv e-prints 2020: arXiv:2008.05457.

  35. Zhou T, Fu H, Zhang Y, Zhang C, Lu X, Shen J, Shao L. M2Net: Multi-modal multi-channel network for overall survival time prediction of brain tumor patients. arXiv e-prints 2020: arXiv:2006.10135.

  36. Szegedy C, Liu W, Jia YQ, Sermanet P, Reed S, Anguelov D, Erhan D, Vanhoucke V, Rabinovich A. Going deeper with convolutions. In: IEEE Conference on Computer Vision and Pattern Recognition (CVPR). 2015. p. 1–9.


  37. Ranzani OT, Taniguchi LU, Torres A. Severity scoring systems for pneumonia: current understanding and next steps. Curr Opin Pulm Med. 2018;24(3):227–36.




Acknowledgements

The authors are grateful to all study participants.


Funding

This research was funded by the National Natural Science Foundation of China (grant number 91959127) and by the Shanghai Science and Technology Action Innovation Plan (grant number 19441903100).

Author information

Authors and Affiliations



Contributions

ZH and JhY proposed the algorithm for image analysis; ZH implemented it and analyzed the experimental results; ZL, HZ, JL, BH, AL, XS and YX collected the experimental data and provided clinical guidance; AL, JH, XP and YD reviewed and interpreted the results. All authors read and approved the final manuscript.

Corresponding authors

Correspondence to Yang Xiao, Hui Zhang or Jianqiao Zhou.

Ethics declarations

Ethics approval and consent to participate

The Research Ethics Committee of Ruijin Hospital approved this study, including the collection and use of the data analyzed herein.

Consent for publication

All authors agreed to the publication of this article.

Competing interests

The authors declare that they have no competing interests.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated in a credit line to the data.


About this article


Cite this article

Hu, Z., Liu, Z., Dong, Y. et al. Evaluation of lung involvement in COVID-19 pneumonia based on ultrasound images. BioMed Eng OnLine 20, 27 (2021).
