Classification of lung pathologies in neonates using dual-tree complex wavelet transform

Introduction Undiagnosed and untreated lung pathologies are among the leading causes of neonatal deaths in developing countries. Lung ultrasound (LUS) has been widely accepted as a diagnostic tool for neonatal lung pathologies due to its affordability, portability, and safety. However, healthcare institutions in developing countries lack well-trained clinicians to interpret LUS images, which limits the use of LUS, especially in remote areas. An automated point-of-care tool that could screen and capture LUS morphologies associated with neonatal lung pathologies could aid in rapid and accurate diagnosis. Methods We propose a framework for classifying the six most common neonatal lung pathologies using spatially localized line and texture patterns extracted via the 2-D dual-tree complex wavelet transform (DTCWT). We acquired 1550 LUS images from 42 neonates presenting a range of lung pathologies. Furthermore, we balanced our dataset to avoid bias towards any one pathology class. Results Using DTCWT and clinical features as inputs to a linear discriminant analysis (LDA) classifier, our approach achieved a per-image cross-validated classification accuracy of 74.39% on the imbalanced dataset. The classification accuracy improved to 92.78% after balancing the dataset. Moreover, the proposed framework achieved a maximum per-subject cross-validated classification accuracy of 64.97% on the imbalanced dataset, improving to 81.53% on the balanced dataset. Conclusion Our work could aid in automating the diagnosis of lung pathologies among neonates using LUS. Rapid and accurate diagnosis of lung pathologies could help decrease neonatal deaths in healthcare institutions that lack well-trained clinicians, especially in developing countries.


INTRODUCTION
The use of lung ultrasound (LUS) to diagnose and monitor lung pathologies in neonates has increased in many urban hospitals. This can be attributed to the advantages it has over conventional imaging modalities such as X-rays and CT scans. Compared to X-rays and CT scans, LUS is cheaper, more accessible, and free of ionizing radiation. Furthermore, LUS is comparable to bedside chest X-ray and chest CT in diagnosing lung diseases [1], [2]. Specialist medical professionals and clinicians are required to carry out LUS scans on neonates and to read the scans to determine the neonate's lung condition. There is a lack of these highly trained medical professionals and clinicians in hospitals in developing countries and in remote hospitals. In these environments, an automated classification tool could be used to assist medical professionals and physicians who use LUS as a diagnostic tool.
Attempts to automate the diagnosis of specific clinical disorders from LUS have already been made. In [3] and [4], the researchers detect Covid-19 pneumonia using deep learning approaches. In Tsai et al. [5], the authors performed binary classification of pleural effusion using a convolutional neural network. In the following works, features related to the common clinical LUS morphologies were extracted and used. In [6], pleural lines, A-lines and B-lines were extracted and used for classification. In [7], researchers detected the pleural line in the image and selected a region of interest (ROI) below the pleural line to grade the severity of Covid pneumonia. In [8], researchers detected the pleural line and extracted features related to it to evaluate the healthiness of the patient. In Correa et al. [9], the researchers extracted thin rectangular regions of the image and used these as inputs to a 3-layer feed-forward neural network to classify pediatric pneumonia. The majority of the existing works, in general, take into account only the primary morphologies of the individual pathological condition they are intended to detect. To distinguish between various lung diseases, it is necessary to consider all of the major LUS morphologies connected to these disorders. In Bassiouny et al. [10], 7 key LUS morphologies that are markers for specific lung conditions were detected using a Faster Region-Based Convolutional Neural Network (FRCNN) object detection model. In our previous initial work [11], we proposed a feature extraction method designed to quantify the strong recurrence characteristics of the image morphologies used by doctors in diagnosing LUS images. We extracted scanlines from the images, which were shown to capture the morphologies commonly used for diagnosis, and used recurrence analysis to capture nonlinear features based on the recurrence patterns in the scanlines. We had also included 3 clinical features, viz. gestational age (GA), corrected gestational age at the time of the scan (CGATS), and day(s) of life (DOL). These features are required to separate CLD and RDS, as advised by our clinical collaborators.
There exist some LUS works that use approaches similar to the method proposed in this paper. In [12], a similar wavelet transform was used to decompose areas in the LUS images. Contreras-Ojeda et al. selected 2 regions in the LUS image corresponding to chest tissue and lung tissue and decomposed these regions into detail coefficients using the wavelet transform. Statistical features were extracted from these regions, such as energy, mean, median, standard deviation, covariance, kurtosis, root mean square (RMS), and peak-magnitude-to-RMS ratio. Finally, they used K-nearest neighbors (KNN) and tested their accuracy using 10-fold cross-validation to classify the regions as muscular tissue or lung tissue. In [13], Cao et al. manually selected a region of interest (ROI) containing B-lines or white lung. Then, they extracted radiomic features such as first-order statistical features using wavelet filtering, Grey Level Co-Occurrence Matrix (GLCM) features, Gray Level Run Length Matrix (GLRLM) features, Gray Level Size Zone Matrix (GLSZM) features, Neighboring Gray Tone Difference Matrix (NGTDM) features, Gray Level Dependence Matrix (GLDM) features, and shape features. Finally, they used a support vector machine (SVM) to classify the manually selected regions as B-line or white lung. Fundamentally, in [12] they select an ROI in the image and classify it as chest or lung, while in [13] they classify the ROI as one of 2 morphologies, B-lines or white lung (which is a large area of coalescent B-lines).
In our work, we perform a 6-class pathological condition classification by dividing each image into top and bottom halves and computing features similar to those in the existing works on the decomposed wavelet images. We also used 3 clinical features, as explained earlier, to separate pathological conditions with similar morphologies. The 6 conditions are separated by quantifying characteristics of the image morphologies associated with the conditions using a time-frequency transformation approach. This approach is used to isolate the spatially localized line patterns and texture patterns found in LUS. In the paper by G. Chen [14], the 1-D dual-tree complex wavelet transform (DTCWT) was used to extract features for automatic seizure detection from EEG signals. In the paper by D.B. Aydogan et al. [15], the 2-D DTCWT is used for the detection of bone fractures and the classification and segmentation of brain tumors from MRI images. In [16], the DTCWT is used to extract features for breast tumor classification from MRI images. In [17], researchers used the DTCWT to classify ultrasound images of thyroids into 6 categories. These works indicate that the DTCWT's approximately high directional selectivity, scale invariance, good space-frequency localization, and near shift invariance make it an excellent time-frequency transformation algorithm for medical image analysis tasks such as the proposed LUS image classification. In LUS, the main lung morphologies are spatially isolated high-intensity patterns or texture patterns. In our work, the DTCWT is used to decompose the image into sub-images from which we extract a combination of textural and morphological features using global statistical pixel intensity distribution, grey-level co-occurrence matrix, grey-level run length matrix, and linear binary pattern features. The rest of the paper is organized as follows: Section II describes the LUS morphologies in neonates, Section III defines the datasets, details of the methods are presented in Section IV, results and discussion are presented in Section V, and our conclusions with future works are presented in Section VI. A block diagram outlining the proposed method is illustrated in Fig. 1.

LUS MORPHOLOGIES IN NEONATES
This section contains descriptions of the most important clinical markers used by clinicians in diagnosing different lung conditions, as presented in our previous work [11]. These clinical markers are pleural lines, A-lines, separate B-lines, coalescent B-lines, consolidations, and the Double Lung Point. Capturing the characteristics of these clinical markers is essential to successfully automate the diagnosis of LUS.
Pleural Line [Fig. 2(a-c)]:
Identifying and characterizing the pleural line artifact is the most important LUS marker for clinicians in diagnosing different lung pathologies. The pleura is made up of the visceral pleural lining, a thin membrane affixed to the surface of the lungs, and the parietal pleural lining, the membrane affixed to the chest wall. Between the two pleural layers is a thin layer of fluid that allows the layers to slide freely during respiration. In LUS, the ultrasound waves are reflected back at the interface separating the pleura and the lung tissue, producing a bright (echogenic) horizontal line. This is called the pleural line artifact.

A-Lines [Fig. 2(d)]:
A-lines are horizontal artifacts generated by the repeated reflection of the ultrasound beam between the pleural line and the probe surface. They occur only when the lung is well aerated or in the case of PTX. They are typically equally spaced, similar in appearance to the pleural line, and are generally a sign of lung aeration.

B-Lines [Fig. 2(e-f )]:
B-lines are a sign that there is fluid or a lack of air in the interstitial and/or alveolar spaces of the lungs. The concentration of B-lines is correlated with the amount of fluid in the lungs. B-lines appear as vertical reverberation line artifacts. A line is only a true B-line if it starts at the pleural line and extends to the bottom of the screen. A couple of B-lines can occur under normal circumstances, especially in neonates, as small amounts of fluid may still be present in the lungs after birth. However, ≥ 3 B-lines in a LUS frame is an indication of interstitial lung disease. As they increase in number, the B-lines appear to merge together and are called coalescent B-lines, which are an indication of alveolar-interstitial disease.

Consolidations [Fig. 2(g)]:
Due to a severe lack of aeration in the lungs, consolidations appear as hypoechoic areas with hyperechoic short lines (air bronchograms) and irregular or absent pleural lines.

Double Lung Point [Fig. 2(h)]:
The Double Lung Point is specific to TTN and has a sensitivity of 100% [18]. The difference in aeration between the upper lung regions (A-lines) and the lower lung regions (B-lines) causes the Double Lung Point. The absence of a Double Lung Point does not rule out TTN.
In Fig. 3 we have illustrated the six anatomical regions (3 each on the left and right side of the lungs) of a clinically standard LUS scan and in Table 1, we have provided a brief summary of expected morphological characteristics in LUS images to assign one of the 6 lung conditions.

DATASET
Our research collaborators from a Canadian tertiary neonatal intensive care unit at Mount Sinai Hospital (MSH) acquired all of the LUS scan videos. Research Ethics Board approvals and a data sharing agreement were obtained from both Toronto Metropolitan University and MSH. A breakdown of the number of patients, videos, and images for each condition is shown in Table 2. Two datasets were used for this work: a whole dataset (class unbalanced) and a balanced dataset. The whole dataset was created using 5 frames from each of the videos acquired by our clinical collaborators. The whole dataset contained 6 or more videos for most of the patients. For most of the patients, 1 or more videos of each of the commonly imaged areas of the lungs (R1, R2, R3, L1, L2 and L3, as shown in Figure 3) were included in the dataset. The R4 and L4 regions are imaged for specific conditions, but not for any of the conditions in this dataset. For patients with PTX, as it occurs in a specific area of the lung pleura and not in both lungs, only lung areas with pneumothorax were included, so these patients usually had fewer than 6 videos. For the whole dataset, 5 frames were taken at equal intervals in each video unless a particular frame was not clear. This was done to avoid the large similarity between neighboring frames that would bias the model. Looking at the distribution of the whole dataset in Table 2, it is clear that the dataset is biased towards TTN, which comprises more than a third of the dataset. The second dataset, the balanced dataset, consisted of six videos per patient (one for each lung region), with 5 frames taken at equal intervals from each video.

METHODS
In this section, the methods and process for obtaining the DTCWT features and performing pattern classifications are presented.

Preprocessing
The preprocessing of the LUS images consisted of two main operations, artifact removal and normalization.
The LUS videos have region information and other artifacts overlaid on them, close to the maximum intensity in the image. These artifacts remain at the same x-y coordinates throughout a video, so they were removed semi-automatically by selecting a region of interest (ROI) over the artifact; this ROI was used for all the images in a video. The artifacts were removed by selecting all the pixels in the ROI with an intensity greater than 50% of the maximum intensity of the ROI, together with their 8-pixel connected neighbors, and replacing all the selected pixels with the median intensity of the ROI. For normalization, the images were all resized to [520 420]; then 10 pixels were removed from each side of the image, resulting in all the images being resized to [500 400]. These steps were taken to remove the high-intensity artifacts that can easily be picked up by some of the subbands and to normalize the images for DTCWT decomposition. The cropping removed parts of the image that were not important as well as some artifacts that appear at the edges of the video.
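The preprocessing steps above can be sketched as follows. This is a minimal NumPy sketch under our own assumptions: the function and parameter names are ours, and the nearest-neighbour resize is a simplified stand-in for whatever resampling was actually used.

```python
import numpy as np

def remove_overlay_artifact(frame, roi):
    """Suppress a bright overlay inside a fixed ROI given as (r0, r1, c0, c1).

    Pixels brighter than 50% of the ROI maximum, plus their 8-connected
    neighbours, are replaced with the ROI median (np.roll wraps at the ROI
    borders, which is acceptable for an interior overlay).
    """
    r0, r1, c0, c1 = roi
    out = frame.astype(float).copy()
    patch = out[r0:r1, c0:c1]
    mask = patch > 0.5 * patch.max()
    grown = mask.copy()
    for dr in (-1, 0, 1):                      # dilate the mask by one pixel
        for dc in (-1, 0, 1):                  # to take in 8-connected neighbours
            grown |= np.roll(np.roll(mask, dr, axis=0), dc, axis=1)
    patch[grown] = np.median(patch)
    return out

def resize_nearest(img, shape):
    """Nearest-neighbour resize, a simple stand-in for the resampling step."""
    rows = np.arange(shape[0]) * img.shape[0] // shape[0]
    cols = np.arange(shape[1]) * img.shape[1] // shape[1]
    return img[np.ix_(rows, cols)]

def preprocess(frame, roi):
    clean = remove_overlay_artifact(frame, roi)
    resized = resize_nearest(clean, (520, 420))
    return resized[10:-10, 10:-10]             # crop 10 px per side -> 500 x 400
```

Keeping one ROI per video exploits the fact that the overlay is static across frames, so the semi-automatic selection has to be done only once.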

Time-Frequency/ Time-Scale Transformation
The LUS morphologies can be described as a combination of spatially localized information, texture patterns, and small oscillations. These patterns can be characterized as belonging to different spatial frequencies and relative locations, so a 2-D time-frequency/time-scale transformation may be able to capture them, making it easier to extract features that quantify these patterns.

DWT
The 2-D discrete wavelet transform (DWT) can be used to analyze an image and isolate these different frequencies at different scales. The 2-D DWT is implemented using 1-D wavelet and scaling functions [19]. The 1-D DWT of a signal x(n) can be defined as:

x(n) = \sum_{k} a_{J,k} \, \phi_{J,k}(n) + \sum_{j=1}^{J} \sum_{k} d_{j,k} \, \psi_{j,k}(n)

where a_{J,k} are the approximation coefficients and d_{j,k} are the detail coefficients at octave decomposition level j, ψ(n) is the orthonormal wavelet function, φ(n) is the scaling function, and k is the translation parameter.
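For illustration, a single level of the separable 2-D DWT with the Haar wavelet fits in a few lines of NumPy (Haar is chosen only to keep the sketch short; it is not implied to be the wavelet used elsewhere in this work):

```python
import numpy as np

def _haar_1d(x, axis):
    """Single-level 1-D orthonormal Haar analysis along the given axis."""
    even = np.take(x, np.arange(0, x.shape[axis] - 1, 2), axis=axis)
    odd = np.take(x, np.arange(1, x.shape[axis], 2), axis=axis)
    return (even + odd) / np.sqrt(2), (even - odd) / np.sqrt(2)

def haar_dwt2(img):
    """One level of the separable 2-D Haar DWT.

    Returns the approximation sub-image A and the horizontal, vertical and
    diagonal detail sub-images Dh, Dv, Dd, each half the input size.
    """
    L, H = _haar_1d(img, axis=1)   # filter along the rows first,
    A, Dv = _haar_1d(L, axis=0)    # then along the columns
    Dh, Dd = _haar_1d(H, axis=0)
    return A, Dh, Dv, Dd
```

Because the transform is orthonormal, the total energy of the four sub-images equals that of the input. The shift-variance noted above is also easy to exhibit: moving an impulse by one pixel flips the sign of its horizontal detail coefficient, so translated morphologies yield different coefficient sets.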
Using the 2-D DWT, an image I(k_1, k_2) can be decomposed as follows [21]:

I(k_1, k_2) = \frac{1}{\sqrt{NM}} \sum_{k} A_{J,k} \, \phi_{J,k}(k_1, k_2) + \frac{1}{\sqrt{NM}} \sum_{f \in \{h,v,d\}} \sum_{j=1}^{J} \sum_{k} D^{f}_{j,k} \, \psi^{f}_{j,k}(k_1, k_2)

where N and M are the row and column numbers, A_{J,k} are the approximation coefficients forming the approximation sub-image, and D^h_j, D^v_j and D^d_j are the detail coefficients forming the detail sub-images for every level of decomposition j. However, an issue with the DWT is that it is shift-variant. This means that if the morphologies in the LUS images were translated, the DWT would generate a different set of DWT coefficients.

DTCWT
In order to address the lack of shift invariance, DTCWT-type approaches can be used, as the DTCWT is nearly shift invariant. The DTCWT also has good directional selectivity and perfect reconstruction. The 2-D DTCWT uses two separate decomposition trees to calculate the complex transform of an image: one tree is used to calculate the real parts of the complex coefficients and the other tree the imaginary parts [22]. The DTCWT is implemented using two separate two-channel filter banks. Approximate shift invariance is achieved by doubling the sampling rate at each level of the tree through the use of the two trees [22]. DTCWT decomposition of an image is done using a complex scaling function and six wavelet functions [23, 22]. The DTCWT results in a low-passed version of the image at each decomposition level and six high-frequency sub-images at each decomposition level, corresponding to the six wavelet functions oriented at the angles α = ±15°, ±45°, ±75°. The decomposition of an image I(k_1, k_2) can be performed using a complex scaling function and six complex wavelet functions as follows [24]:

I(k_1, k_2) = \sum_{l} A_{j_0,l} \, \phi_{j_0,l}(k_1, k_2) + \sum_{g} \sum_{j=1}^{j_0} \sum_{l} D^{g}_{j,l} \, \psi^{g}_{j,l}(k_1, k_2)

where j_0 is the number of decomposition levels, A_{j_0,l} and D^g_{j,l} are the scaling coefficients and wavelet coefficients respectively, φ_{j_0,l}(k_1, k_2) represents the scaling function, and ψ^g_{j,l}(k_1, k_2) represents the six wavelet functions indexed by orientation g [24].

Feature Extraction
We extracted features from the top and bottom halves of the DTCWT sub-images. The rationale is that the top half of the image will have features related to the pleural line, while the bottom half will have features related to B-lines and A-lines. To extract features that describe the intensity distribution of the pixels, we used statistical features computed directly on the image, grey-level co-occurrence features, and rotation-invariant uniform LBP features. To extract features that measure the morphology in the image, we used grey-level run length matrix features. A combination of textural and morphological features was selected to obtain a complementary feature set that should produce a more robust classification model [25]. Statistical features were extracted from the pixel intensities of the images [26].

Statistical Features
The statistical features below were chosen as they are commonly used as global features in medical image analysis. In the feature equations, n is the number of rows in the image, m is the number of columns, i is the row index, j is the column index, and I is the image.
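A plausible implementation of such global statistics is sketched below. The exact feature set used in this work may differ; this sketch computes mean, variance, skewness, kurtosis, energy, and histogram entropy, which are common choices.

```python
import numpy as np

def global_statistical_features(img, bins=256):
    """Global intensity statistics of an image: mean, variance, skewness,
    kurtosis, energy and histogram entropy (an assumed, representative set)."""
    x = img.astype(float).ravel()
    mu, sigma = x.mean(), x.std()
    skew = ((x - mu) ** 3).mean() / (sigma ** 3 + 1e-12)   # third standardised moment
    kurt = ((x - mu) ** 4).mean() / (sigma ** 4 + 1e-12)   # fourth standardised moment
    energy = (x ** 2).mean()
    hist, _ = np.histogram(x, bins=bins)
    p = hist[hist > 0] / hist.sum()
    entropy = -(p * np.log2(p)).sum()                      # Shannon entropy in bits
    return np.array([mu, sigma ** 2, skew, kurt, energy, entropy])
```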

GLCM Features
The grey-level co-occurrence matrix is calculated by determining how often pairs of pixels with specific values and in a specified spatial relationship occur in an image; statistical measures are then extracted from the GLCM [27]. To calculate the GLCM, we first quantized the image to 8 levels, generating an 8x8 GLCM, and chose 6 offsets, [0 1; 1 0; 0 2; 2 0; 1 1; 2 2], to generate 6 different GLCMs. Element (i, j) of the grey-level co-occurrence matrix represents the number of occasions a pixel with intensity i is adjacent to a pixel with intensity j in the LUS image. The GLCM stores the co-occurring values representing the distance and angular spatial relationship of pixels in a structured matrix. We then extracted 5 features from the GLCM [28]. In the feature equations, i is the row number of the GLCM, j is the column number of the GLCM, G is the GLCM, u is the mean pixel intensity in the quantized image, and σ² is the variance of the pixel intensity in the quantized image.
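The GLCM construction and a common 5-feature set can be sketched as follows. The exact 5 features extracted in this work follow [28] and may differ from this sketch, which uses contrast, correlation, energy, homogeneity, and entropy.

```python
import numpy as np

def glcm(img, offset, levels=8):
    """GLCM for one non-negative (dr, dc) offset after quantising to `levels`."""
    q = np.floor(img.astype(float) / (img.max() + 1e-12) * levels)
    q = q.clip(0, levels - 1).astype(int)
    dr, dc = offset
    H, W = q.shape
    src, dst = q[:H - dr, :W - dc], q[dr:, dc:]   # all pixel pairs at the offset
    G = np.zeros((levels, levels))
    np.add.at(G, (src.ravel(), dst.ravel()), 1)   # accumulate co-occurrence counts
    return G

def glcm_features(G):
    """Contrast, correlation, energy, homogeneity and entropy of a GLCM."""
    P = G / (G.sum() + 1e-12)                     # normalise to joint probabilities
    i, j = np.indices(P.shape)
    mu_i, mu_j = (i * P).sum(), (j * P).sum()
    s_i = np.sqrt((((i - mu_i) ** 2) * P).sum())
    s_j = np.sqrt((((j - mu_j) ** 2) * P).sum())
    contrast = (((i - j) ** 2) * P).sum()
    correlation = ((i - mu_i) * (j - mu_j) * P).sum() / (s_i * s_j + 1e-12)
    energy = (P ** 2).sum()
    homogeneity = (P / (1.0 + np.abs(i - j))).sum()
    nz = P[P > 0]
    entropy = -(nz * np.log2(nz)).sum()
    return np.array([contrast, correlation, energy, homogeneity, entropy])
```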

GLRLM Features
The grey-level run length matrix is used to store run lengths based on grey-level value and length of the run. A grey-level run is a set of consecutive, collinear pixels having the same grey-level value in a given direction [29]. We calculated the 4 GLRLMs for each of the 2 ROIs in the image using the 4 directions 0°, 45°, 90°, and 135°, then extracted 11 features from each GLRLM. The equations are shown in Table 3 [30, 31].
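The run-length counting for the 0° direction can be sketched as below; the other directions follow by scanning the quantised image along the corresponding lines. Only two of the 11 features (short-run and long-run emphasis) are shown for brevity.

```python
import numpy as np

def glrlm_0deg(q, levels=8):
    """Grey-level run length matrix of a quantised image, 0-degree direction.

    R[i, j-1] counts the runs of grey level i that have length j."""
    H, W = q.shape
    R = np.zeros((levels, W))
    for row in q:
        run_val, run_len = row[0], 1
        for v in row[1:]:
            if v == run_val:
                run_len += 1                    # run continues
            else:
                R[run_val, run_len - 1] += 1    # close the finished run
                run_val, run_len = v, 1
        R[run_val, run_len - 1] += 1            # close the last run in the row
    return R

def run_emphasis(R):
    """Short-run and long-run emphasis, two of the 11 GLRLM features."""
    j = np.arange(1, R.shape[1] + 1)            # run lengths 1..W
    n_runs = R.sum()
    sre = (R / j ** 2).sum() / n_runs
    lre = (R * j ** 2).sum() / n_runs
    return sre, lre
```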

LBP Features
The LBP histogram is used to store the LBP pixel labels of the image. The pixel labels are calculated by thresholding the 3x3 neighborhood of each pixel with the center value and considering the result as a binary number [32]. We used the rotation-invariant uniform LBP, which results in 10 bins. First, only uniform LBPs, i.e., patterns with a maximum of two circular 0-1 or 1-0 transitions, are considered unique patterns, and all non-uniform patterns are stored in one bin. Then, in the rotation-invariant version, LBP patterns that result in the same value when rotated are stored in the same bin. The values of the bins were used as the feature set.
*In the above GLRLM feature equations, i is the grey level, j is the run length and R is the GLRLM.
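The 10-bin rotation-invariant uniform LBP described above can be sketched as follows: uniform patterns map to bins 0..8 by their number of set bits (which is what makes them rotation invariant), and all non-uniform patterns share bin 9.

```python
import numpy as np

def lbp_riu2_histogram(img):
    """Rotation-invariant uniform LBP (P=8, R=1) as a normalised 10-bin histogram."""
    c = img[1:-1, 1:-1]                        # centre pixels
    neigh = [img[0:-2, 1:-1], img[0:-2, 2:],   # the 8 neighbours of each centre
             img[1:-1, 2:],  img[2:, 2:],      # pixel, in circular order
             img[2:, 1:-1],  img[2:, 0:-2],
             img[1:-1, 0:-2], img[0:-2, 0:-2]]
    bits = np.stack([(n >= c).astype(int) for n in neigh], axis=-1)
    # Circular 0-1/1-0 transition count; uniform patterns have at most two.
    transitions = (bits != np.roll(bits, 1, axis=-1)).sum(axis=-1)
    labels = np.where(transitions <= 2, bits.sum(axis=-1), 9)
    hist = np.bincount(labels.ravel(), minlength=10).astype(float)
    return hist / hist.sum()
```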

Clinical Features
As in our previous initial work, and as explained earlier, we included 3 clinical features (GA, CGATS, and DOL) in our models, as they are essential in separating CLD and RDS. For CLD and RDS, the LUS image morphologies alone are not enough to separate the conditions, as affirmed by our clinical collaborators; without these features, a meaningful clinical diagnosis of these conditions using only LUS images is extremely difficult or impossible. While these are important features in a classification sense, it is critical to note that the clinical features by themselves cannot separate the conditions very well. For example, Normal and PTX are unrelated to the clinical features and can occur in neonates regardless of gestational age, days of life, and age at the time of the scan. The clinical features separate the conditions in a meaningful way only in a supportive role, once LUS information is made available.

Pattern Classification
In this work, we used a simple Linear Discriminant Analysis (LDA) based classifier to perform the classification of the 6 LUS conditions using the features extracted from the LUS images. While performing classification on the balanced dataset, the models were trained with equal prior probabilities between the groups, and for the whole dataset we used prior probabilities based on group size. A simple linear classifier was chosen over complex non-linear classifiers to place the emphasis on extracting meaningful features relevant to clinical markers and to keep the outcomes conservative and realistic. Likewise, we used a univariate feature selection procedure based on chi-square tests [33] due to its speed, as a low-computational-complexity feature selection method was required for performing feature selection inside the cross-validation loop. In the chi-square feature selection process, an individual chi-square test is performed between each feature and the class labels. A small p-value indicates that the corresponding feature is dependent on the class, and it is ranked accordingly as an important feature.
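This classification stage can be sketched with scikit-learn stand-ins (the library choice and the min-max scaling are our assumptions; chi-square scoring requires non-negative inputs, hence the scaler). Placing the selector inside the pipeline ensures it is refitted within every cross-validation fold, as described above.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import MinMaxScaler

def build_classifier(n_features=15, priors=None):
    """Chi-square top-k feature selection followed by an LDA classifier.

    priors=None lets LDA estimate priors from group sizes (the whole-dataset
    setting); pass an array of equal priors for the balanced-dataset setting."""
    return make_pipeline(
        MinMaxScaler(),                        # chi2 needs non-negative features
        SelectKBest(chi2, k=n_features),       # univariate chi-square ranking
        LinearDiscriminantAnalysis(priors=priors),
    )
```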
In terms of cross-validation, we tested the performance using two forms of cross-validation. We used leave-one-out cross-validation (LOO CV), in which a single image is used as the testing set and the model is trained on the rest of the images. This is repeated so that each image is used as a testing set once. The average accuracy over all these runs gives the per-image classification performance. We also used leave-one-subject-out cross-validation (LOSO CV). In this form of cross-validation, all of the images belonging to one subject are used as the testing set and the rest of the images are used as the training set. This is repeated until each subject has served in the testing set. The average accuracy over all these runs gives the per-subject classification performance. The motivation behind LOSO cross-validation is to avoid biasing the model due to the similarity between images of the same patient. We also selected only 5 images from each video, to avoid the large similarity between frames of the same video, for our LOO results. The images were selected at equal intervals throughout the video. If the lung was not clearly imaged in a frame, another frame was selected, as was the case at the beginning or end of a few videos.
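The LOSO loop can be sketched as below; the nearest-centroid classifier is only a minimal stand-in so the sketch is self-contained, not the LDA pipeline actually used.

```python
import numpy as np

def loso_accuracy(X, y, subjects, fit, predict):
    """Leave-one-subject-out CV: all images of one subject are held out per
    fold, so the per-subject score is free of within-patient leakage."""
    correct = total = 0
    for s in np.unique(subjects):
        test = subjects == s                   # every image from subject s
        model = fit(X[~test], y[~test])        # train on everyone else
        correct += int((predict(model, X[test]) == y[test]).sum())
        total += int(test.sum())
    return correct / total

# Minimal stand-in classifier (nearest class centroid) for illustration only.
def fit_centroids(X, y):
    return {c: X[y == c].mean(axis=0) for c in np.unique(y)}

def predict_centroids(model, X):
    classes = sorted(model)
    dists = np.stack([np.linalg.norm(X - model[c], axis=1) for c in classes])
    return np.array(classes)[dists.argmin(axis=0)]
```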

RESULTS AND DISCUSSION
We performed the following four 6-group classification experiments: (i) LDA with LOO CV on the balanced dataset, (ii) LDA with LOO CV on the whole dataset, (iii) LDA with LOSO CV on the balanced dataset, and (iv) LDA with LOSO CV on the whole dataset. In performing the above experiments, we restricted the feature dimension to the top 15 DTCWT features selected by the feature selection method plus the 3 clinical features. Considering the approximate training group sample sizes, 18 features form 10-15% of the training group sizes, which is expected to produce conservative, realistic performance without loss of generalization or over-fitting. We also examined the distribution of the top features chosen by the feature selection method and found that, in general, GLCM and GLRLM features were selected in larger proportions than statistical and LBP features.
The obtained results are presented in the four confusion matrices in Tables 4-7. Table 4 presents the results of the 6-group classification with LOO CV on the balanced dataset. With an overall per-image classification accuracy of 92.78%, most of the groups are classified well except PTX, which has a slight overlap with Normal and TTN. Table 5 presents the results of the 6-group classification with LOO CV on the whole dataset. With an overall per-image classification accuracy of 74.39%, the performance drops considerably in comparison to the balanced dataset. As evident from the TTN column of Table 5, most of the issue seems to be the overlap with TTN. This is expected, as the whole dataset is skewed with almost 1/3rd TTN cases, and also, as confirmed by our clinical collaborator, TTN can have varying and overlapping presentation with Normal, PTX, and RDS. Specifically, for the whole dataset experiment, Normal was misclassified as TTN 75.14% of the time, PTX was misclassified as TTN 36.19% of the time, and RDS was misclassified as TTN 25.31% of the time. These 3 conditions look very similar; they all commonly have A-lines. However, there are differences between these diseases, such as the amount of fluid present in the lungs, which can be picked up in the balanced dataset. These factors might have contributed to the reduced performance of the per-image classification using the whole dataset.
Moving on to the LOSO results in Tables 6 and 7, we see in general a reduction, as expected, in the per-subject classification accuracies in comparison with the per-image classification accuracies in Tables 4 and 5. This could be explained by two reasons. First, we have removed any subject bias that may have influenced the results when using LOO CV with images. Second, LOSO CV reduces the amount of condition-specific data the model is trained on for the patient in the testing set, which could lead to a decrease in accuracy. In addition, for the whole dataset, the unbalanced, roughly 1/3rd presence of TTN might have amplified the difficulty of classifying the groups. Lastly, the reduced feature dimension might not have enough discriminative ability to separate the groups well.
To test whether increasing the feature dimension beyond the top 15 DTCWT features and 3 clinical features could potentially improve the classification, we repeated the LOSO classification experiments with both the balanced and the whole dataset. In Fig. 5 we have illustrated the change in the LOSO overall accuracy as the number of DTCWT features increases. The datatips in the figure show the number of DTCWT features at the points of maximum accuracy, which are summarized in Table 8. This suggests that there may be scope for improvement through optimization of the feature dimension or through advanced machine learning methods, which is outside the scope of this paper.
As reported in the introduction, most of the existing works on LUS only try to detect a single pathological disease. This makes it difficult to compare our DTCWT feature extraction algorithm with existing works, as our work is more comprehensive, performing a multi-group classification with a larger dataset. In another previous work from our group [10], a smaller but comparable dataset containing images from 5 conditions was used. That work used an object detection model to detect the 7 LUS morphologies, but did not perform direct pathological condition classification. The object detection model does produce meaningful outputs, which are comparable to the meaningful image decomposition features captured in this work. In addition, the proposed work performed pathological condition classification and included 3 clinical features, which enabled us to separate certain lung conditions that are inseparable from images alone, as verified by our clinical collaborators. To compare our previous initial work using recurrence features [11] with the DTCWT features, we recomputed the results using recurrence features on the updated datasets used in this work. We achieved a per-image classification accuracy of 85.42% with LOO CV on the balanced dataset and 72.00% with LOO CV on the whole dataset using the recurrence features. These results are still lower than those of the proposed DTCWT approach.
The results and discussion provided above demonstrate the potential of using a DTCWT-based approach to build an automated machine learning system that could eventually lead to decision support systems assisting the clinical community. While there is scope for improvement, especially in the TTN-related misclassifications despite the high LOO CV accuracies, the potential of designing a system that could serve as a first-level screening tool to identify lung conditions from LUS images is encouraging. Especially in remote communities and developing countries that lack specialist clinicians, such a system could help with timely diagnosis and treatment of neonates suffering from respiratory diseases, thereby saving lives.

CONCLUSIONS
To facilitate the widespread use of LUS, an automated classification tool could be used to assist medical professionals and physicians in hospitals where there is a lack of highly trained medical professionals and clinicians. In this work, we have demonstrated that simple DTCWT features extracted from LUS images can be used in a system to classify the six common LUS pathologies in neonates. With an LDA-based classifier, the proposed classification model achieved a LOO CV per-image classification accuracy of 92.78% on the balanced dataset and 74.39% on the whole dataset. Likewise, the proposed approach achieved a maximum LOSO CV per-subject classification accuracy of 81.53% on the balanced dataset and 64.97% on the whole dataset. The proposed DTCWT features, along with the 3 clinical features, performed fairly well at separating the 6 lung pathologies.
Future work involves extracting dynamic features to detect lung sliding from post-processed M-mode images, to help separate Normal, PTX, and TTN. Our team is also working on advanced machine learning and deep learning approaches and on feature fusion techniques to augment these results and achieve better robustness and reliability.

Figure 1 :
Figure 1: Block diagram of the proposed method

Figure 2 :
Figure 2: a: A sample normal Pleura from a subject with Normal Lung, b: A sample thick Pleura (> 2mm) from a subject with CON, c: A sample thick and irregular Pleura from a subject with CLD, d: Sample A-Lines from a subject with PTX, e: Sample separate B-Lines from a subject with TTN, f: Sample coalescent B-lines from a subject with RDS, g: Sample CON illustration from a subject with Consolidation, and h: Sample Double Lung Point from a subject with TTN

Figure 3 :
Figure 3: The 6 standard lung regions scanned during lung ultrasound (L1, L2, L3, R1, R2, R3). This is a standard clinical method.

Table 1: Lung Conditions and the Associated Morphologies
Normal: Presence of A-lines in all lung regions, normal pleural line and lung sliding.
PTX: No lung sliding in areas of PTX, but otherwise looks like a normal lung. However, no US waves pass the air between the pleural layers.
TTN: Normal pleural line, with indications of interstitial (separate B-lines) or alveolar-interstitial syndrome (coalescent B-lines) in the lower lung regions. Will contain A-lines in the upper lung regions.
RDS: Irregular and thickened pleural line, with consolidations in some areas of the lung. Will have coalescent B-lines to ≥3 separate B-lines in all regions of the lungs.
CON: Will have areas of consolidation. In severe cases, the consolidated area will look like liver tissue, which is known as "hepatization".
CLD: Irregular and thickened pleural line. In addition, there may be B-lines of different severities, from separate to coalescent B-lines, or even some spared areas.

Figure 5 :
Figure 5: LOSO Accuracies for Balanced and Whole Dataset with increased Number of DTCWT Features

Table 2 :
Whole Dataset Overview. Six videos, one for each of the lung regions, were taken for each patient, and 5 frames taken at equal intervals were used from each video. For 2 of the PTX patients, only 3 and 4 videos were available, so 10 and 7 or 8 frames respectively were taken from those videos to reach 30 images per patient. The patients and videos for the balanced dataset were determined by our clinical collaborators. The balanced dataset was created to test the performance of the DTCWT features without any bias stemming from a class-unbalanced dataset.

Table 4 :
Classification confusion matrix for results with LOO CV on the BALANCED dataset using the top 15 DTCWT features and 3 clinical features. The overall per-image classification accuracy achieved is 92.78%

Table 5 :
Classification confusion matrix for results with LOO CV on the WHOLE dataset using the top 15 DTCWT features and 3 clinical features. The overall per-image classification accuracy achieved is 74.39%

Table 6 :
Classification confusion matrix for results with LOSO CV on the BALANCED dataset using the top 15 DTCWT features and 3 clinical features. The overall per-subject classification accuracy achieved is 75%

Table 7 :
Classification confusion matrix for results with LOSO CV on the WHOLE dataset using the top 15 DTCWT features and 3 clinical features. The overall per-subject classification accuracy achieved is 63.48%

Table 8 :
Maximum accuracy obtained using DTCWT features and 3 clinical features with LOSO CV on the balanced and the whole dataset (Number of features given in brackets)