Comparison of deep learning-based image segmentation methods for intravascular ultrasound on retrospective and large image cohort study

Abstract

Objectives

The aim of this study was to investigate the generalization performance of deep learning segmentation models for the lumen and external elastic membrane (EEM) on a large-cohort intravascular ultrasound (IVUS) image dataset, and to assess the consistency and accuracy of automated IVUS quantitative measurement parameters.

Methods

A total of 11,070 IVUS images from 113 patients (one pullback per patient) were collected and annotated by cardiologists to train and test deep learning segmentation models. Five state-of-the-art medical image segmentation models were compared on segmentation of the lumen and EEM. The Dice similarity coefficient (DSC), intersection over union (IoU) and Hausdorff distance (HD) were calculated overall and for subsets of different IVUS image categories. Further, the agreement between IVUS quantitative measurement parameters calculated from automatic segmentation and those calculated from manual segmentation was evaluated. Finally, the segmentation performance of our model was also compared with previous studies.

Results

CENet achieved the best performance in DSC (0.958 for lumen, 0.921 for EEM) and IoU (0.975 for lumen, 0.951 for EEM) among all models, while Res-UNet performed best in HD (0.219 for lumen, 0.178 for EEM). The mean intraclass correlation coefficient (ICC) of 0.855 (95% CI 0.822–0.887) and the Bland–Altman plots demonstrated strong agreement between the models' automatic predictions and manual measurements.

Conclusions

Deep learning models trained on large-cohort image datasets were capable of achieving state-of-the-art (SOTA) results in lumen and EEM segmentation. They can be used for IVUS clinical evaluation and achieve excellent agreement with clinicians on quantitative parameter measurements.

Introduction

Intravascular ultrasound (IVUS) is a cutting-edge medical imaging technique used in cardiology to visualize the interior of the coronary arteries with exceptional clarity [1]. IVUS offers a unique perspective by providing real-time cross-sectional images of the blood vessel walls, allowing physicians to obtain detailed information about the structure, composition and extent of atherosclerotic plaques. IVUS plays a very important role in improving the understanding of coronary lesions and guiding interventional treatment by accurately measuring lumen and vessel diameters and determining the nature and severity of plaque [2]. The different echogenic properties of the coronary vessel wall allow for a clear interpretation of the vessel structure on IVUS images. In the clinical analysis of IVUS images, accurate segmentation of the lumen and external elastic membrane (EEM) is of great clinical importance and helps to quantitatively assess atherosclerotic plaques through measurements such as lumen diameter, lumen area, plaque burden and plaque eccentricity. IVUS images exhibit a variety of features, including bifurcations, side vessels, branch confluences, ultrasound artifacts, thrombosis, stents, coarctation, and various plaques, especially calcified plaques. Given this complexity, it can take several years to train an experienced IVUS reader. In clinical practice, having a physician manually contour the lumen and EEM is very time-consuming, and accuracy is difficult to guarantee. Therefore, an automated method for lumen and EEM segmentation that balances speed and accuracy is urgently needed in clinical research.

Deep learning-based medical image segmentation algorithms have made remarkable progress in recent years, and new concepts and methods are constantly being introduced into the field [3,4,5,6,7]. Yang et al. proposed a fully convolutional network (FCN)-based IVUS-Net to segment the lumen and EEM on a dataset of 435 IVUS images from 10 cases, and achieved better results than traditional image segmentation methods [8]. This work provides a benchmark for deep learning-based IVUS segmentation. Tong et al. presented an automated method for detecting lumen borders based on dictionary learning; it required manual extraction of image texture features and segmented only the lumen [9]. Dong et al. proposed an 8-layer U-Net for segmentation of the lumen and EEM, but the sample size was small and generalization performance was not further verified across diverse IVUS images [10]. In contrast, the study by Du et al. remedied these shortcomings: they constructed a multicenter IVUS dataset containing 6516 images, systematically evaluated multiple convolutional segmentation networks, and validated generalization performance on a variety of IVUS images [11]. In light of these related studies, we believe it is necessary to measure the segmentation performance for the IVUS lumen and EEM on a larger dataset using the latest models in the field of image segmentation.

In this study, we first constructed an IVUS image dataset containing over 11,000 images with diverse features and categories. Second, we comprehensively compared the segmentation performance of the latest image segmentation models on the lumen and EEM, and performed generalization tests across diverse image categories. Finally, based on the best segmentation results, we assessed the agreement of relevant quantitative clinical parameters with those computed from manual segmentation.

Results

Demographics of the study cohort

We performed a demographic analysis of the 11,070 IVUS images of all 113 cases in the full study cohort and report the statistical differences between cohorts in Table 1. There were no significant differences between the cohorts in the proportion of male patients, age, or the proportion of patients with coronary artery disease (CAD), suggesting that our randomly divided data cohorts have a reasonable internal distribution. In terms of the coronary vessels involved, although the vast majority of lesions were concentrated in the left anterior descending (LAD) artery (65.49%), the distribution of lesion cases across vessels was relatively even in the three study cohorts. In terms of IVUS image categories, we divided all images into seven categories: calcified plaque, bifurcation, adjacent vessels, stents, guidewire artifacts, lipid fibrous plaque and normal vessels (None); calcified plaque, guidewire artifacts and lipid fibrous plaque accounted for relatively large proportions, but the overall distributions differed little across cohorts.

Table 1 Overview of the baseline characteristics of all study cohorts

Performance of different models

Table 2 shows the segmentation performance of the five segmentation models for the lumen and EEM on the test set images. The segmentation metrics (mean ± standard deviation) include the Dice similarity coefficient (DSC), intersection over union (IoU) and Hausdorff distance (HD). Among them, CENet achieved the highest DSC and IoU for both lumen and EEM segmentation. Res-UNet, on the other hand, performed best in HD, with HDs of 0.219 and 0.178 for the lumen and EEM, respectively. Figure 1 further shows statistical tests of the performance differences between the models.

Table 2 Performance of the five segmentation models on the testing cohort for lumen and EEM segmentation, with metrics including the Dice similarity coefficient (DSC), intersection over union (IoU) and Hausdorff distance (HD)
Fig. 1

Comparison of the performance of the 5 segmentation models for lumen (left subplot) and EEM (right subplot). ANOVA was used to statistically analyze the variability in the performance of the different models. *0.01 ≤ P < 0.05; **0.001 ≤ P < 0.01; ***P < 0.001

In Table 3, we further compared the segmentation performance of the five models on seven subsets of the testing cohort corresponding to the IVUS categories: calcified plaque, bifurcation, adjacent vessels, stent, guidewire artifacts, lipid fibrous plaques and normal images (None). The subsets overlap, meaning that a single IVUS image may belong to several categories. The results in Table 3 are a further breakdown of Table 2, and the performance comparison between the models in Table 3 is accordingly consistent with Table 2. Both Res-UNet and CENet performed very robustly on each subset. Overall, Res-UNet performed relatively better on lumen segmentation for each subset, while CENet was better on the EEM. Figure 2 visualizes the segmentation contours of the five models on the different IVUS image categories.

Table 3 Means and standard deviations of the Dice similarity coefficient (DSC), intersection over union (IoU) and Hausdorff distance (HD) of the five segmentation models evaluated on seven IVUS category subsets of the testing cohort
Fig. 2

Visualization of the lumen and EEM contours of the 5 segmentation models on the 7 IVUS image categories. The yellow, blue, green, and red contours correspond to the manually delineated lumen border, the manually delineated EEM border, the model segmented lumen border, and the model segmented EEM border, respectively

Agreement of IVUS quantitative measurement parameters

The ICC of the IVUS quantitative measurement parameters between model segmentations and manual measurements is shown in Table 4. Eight of the 12 parameters showed extremely strong agreement (ICC > 0.9), two showed strong agreement (0.6 < ICC < 0.8), and the remaining two showed moderate agreement (0.4 < ICC < 0.6). The Bland–Altman plots in Fig. 3 also demonstrated the strong agreement between the model's predictions and the manual parameters.

Table 4 The ICC and 95% confidence interval (CI) for the comparison of all 12 IVUS quantitative measurement parameters between model segmentations and manual measurements
Fig. 3

The Bland–Altman plots for the comparison of all 12 IVUS quantitative measurement parameters between model segmentations and manual measurements. MinLD minimum lumen diameter, MaxLD maximum lumen diameter, LEI lumen eccentricity index, Lumen-CSA lumen cross-sectional area, MinEEMD minimum EEM diameter, MaxEEMD maximum EEM diameter, EEM-CSA EEM cross-sectional area, MinPT minimum plaque thickness, MaxPT maximum plaque thickness, PEI plaque eccentricity index, PCSA plaque cross-sectional area, PB plaque burden

Comparison with previous studies

We also compared the metrics of our best model with those of previous related studies in Table 5. Our model achieved the best segmentation performance among these studies, obtaining the best values on all metrics (DSC, IoU and HD).

Table 5 Comparison of IVUS segmentation metrics between our study and previous related studies

Discussion

In this study, we aimed to explore the segmentation performance of deep learning 2D image segmentation models on the lumen and EEM using a very large IVUS image cohort dataset, as well as quantitative IVUS parameter evaluation based on the segmentations. The results on the testing cohort showed that the CNN-based Res-UNet and CENet network structures have outstanding performance on IVUS image segmentation, and the quantitative IVUS parameters obtained from the models' automatic segmentations were in excellent agreement with those calculated from manual segmentation.

The four main contributions of this work are as follows: (1) we constructed a large IVUS image segmentation dataset of 11,070 images spanning seven diverse IVUS image categories, and set up a professional annotation team to ensure the quality and reliability of the lumen and EEM masks. (2) We tested the performance of the latest state-of-the-art (SOTA) medical image segmentation models on this large IVUS image dataset to explore their generalization capabilities across different IVUS image categories. (3) On IVUS images, CNN-based 2D medical image segmentation models outperformed the currently popular Transformer- and MLP-based image segmentation models. (4) The IVUS quantitative measurement parameters calculated from the deep learning models' segmentations showed excellent agreement with the manually measured parameters.

Deep learning image segmentation models have made notable progress in recent years [14,15,16]. In terms of model structure, they can be broadly divided into CNN-based, Transformer- and MLP-based, and the recent prompt-based designs [17, 18]. Although Transformer-based segmentation models have outperformed CNN-based models on some segmentation tasks, Swin-UNet did not stand out on IVUS segmentation. UNeXt is a lightweight MLP-based segmentation model with only 1.47 million parameters, far fewer than the other models; in this respect, UNeXt sacrifices some accuracy for speed. From Tables 2 and 3, the CNN-based segmentation models represented by CENet and Res-UNet outperformed the Transformer-based segmentation models on IVUS data at the scale of tens of thousands of images. We speculate that this may be related to the relatively fixed structural hierarchy of IVUS images themselves, which fits the learning characteristics of CNNs.

Most of the IVUS measurement parameters predicted by our model were in excellent agreement with those obtained from manual measurements, but performance was mediocre for some of the eccentricity and plaque-thickness parameters (LEI, PEI, MinPT and MaxPT). This may be related to the relatively irregular morphological features of the plaques themselves on IVUS images, compared with the fixed hierarchy of the lumen and EEM. We also compared metrics with previous studies, although this comparison is not entirely fair, because the studies do not share a common baseline and used different datasets and models.

Our study also had some limitations. First, due to time constraints we did not collect multicenter data in this study, which may affect the generalization ability of the model to some extent; extending this study to multiple centers is an urgent task for future work. Second, the IVUS probe frequency we used was 40 MHz, the current mainstream probe frequency, but supplementing the dataset with 60-MHz images might be better. Third, we directly used existing medical image segmentation models rather than designing our own, which somewhat detracts from the novelty of this work.

Conclusion

In conclusion, we explored the upper limits of automatic segmentation of the IVUS lumen and EEM on a large image cohort. Deep learning-based segmentation of IVUS images can achieve excellent segmentation accuracy, and the IVUS measurement parameters computed from the segmentations can be further used for clinical evaluation.

Methods

Data and annotations

In this retrospective study, we collected 134 IVUS pullbacks from 157 patient cases at the Second Affiliated Hospital of Zhejiang University School of Medicine between December 2020 and February 2022, 97% of which had coronary artery lesions. All IVUS images were in Digital Imaging and Communications in Medicine (DICOM) format and were generated by Boston Scientific's iLab system with a 40-MHz OptiCross catheter. After excluding cases with severe artifacts, severely calcified plaques, poor imaging quality and duplicates, 113 pullbacks from 113 cases were ultimately retained. To reduce redundancy and avoid similarity between training images, we sampled about 90 to 110 images from each pullback, yielding a final dataset of 11,070 images. All images were anonymized and no personal patient information was involved. Patient inclusion and exclusion criteria are shown in Fig. 4.

Fig. 4

Flowchart shows patient inclusion and exclusion in deep learning segmentation. c cases, p pullbacks, i images
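Beyond the 90-to-110-images-per-pullback figure, the exact frame-selection procedure is not specified; a minimal sketch of one plausible scheme, uniform sampling along the pullback, is shown below (the function name and defaults are illustrative assumptions, not our exact pipeline):

```python
import numpy as np

def sample_frames(n_frames: int, n_keep: int = 100) -> np.ndarray:
    """Return roughly n_keep evenly spaced frame indices from a pullback,
    reducing redundancy between neighboring cross-sections."""
    n_keep = min(n_keep, n_frames)
    return np.linspace(0, n_frames - 1, num=n_keep).astype(int)

# Example: keep 100 of 540 frames, spread along the pullback.
indices = sample_frames(540)
```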

Annotation required the cardiologists to manually delineate the contours of the lumen and EEM and save them as masks. To ensure the reliability and consistency of the annotations, we assembled an IVUS image annotation team of three cardiology and ultrasound specialists. Two of them, each with at least 5 years of experience, were dedicated to IVUS annotation, and the remaining one, with at least 10 years of experience, was responsible for quality review of the annotations. The annotation workflow was as follows: the two annotators contoured each IVUS image independently; if the intersection over union (IoU) between their two masks was greater than 0.98, the annotations were considered equivalent; otherwise, the third reviewer made the final determination. Annotation was performed with Labelme version 5.1.0, an open-source polygonal image annotation tool written in Python. Examples of annotated IVUS images are shown in Fig. 5. We then divided the full dataset, with 70% used for training the segmentation models, 10% for validation and 20% for testing.

Fig. 5

A set of 5 images containing examples of IVUS lumen and EEM annotations
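The IoU-based consensus rule described above is straightforward to implement; a minimal sketch follows (the function names are ours, for illustration only):

```python
import numpy as np

def mask_iou(a: np.ndarray, b: np.ndarray) -> float:
    """Intersection over union between two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    union = np.logical_or(a, b).sum()
    if union == 0:
        return 1.0  # both masks empty: treat as full agreement
    return np.logical_and(a, b).sum() / union

def needs_senior_review(mask_a: np.ndarray, mask_b: np.ndarray,
                        threshold: float = 0.98) -> bool:
    """True if the two annotators' masks disagree (IoU <= threshold),
    in which case the third reviewer makes the final determination."""
    return mask_iou(mask_a, mask_b) <= threshold
```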

Segmentation models

We experimented on our large-cohort IVUS dataset with five classical, representative and recent network structures in medical image segmentation, namely Res-UNet, Deeplab v3 plus, Swin-UNet, UNeXt and CENet.

Res-UNet

Res-UNet uses a UNet encoder-decoder backbone in combination with residual connections, atrous convolutions, pyramid scene parsing pooling, and multi-task inference [19]. To keep training stable as the depth of the network increases, the building blocks of the UNet architecture were replaced with modified residual blocks of convolutional layers [20]. To better aggregate information across scales, multiple parallel atrous convolutions with different dilation rates are employed within each residual building block. The pyramid scene parsing pooling layer enhances the performance of the network by incorporating background context information.
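As a rough illustration of such a building block, the PyTorch sketch below combines a residual connection with parallel atrous convolutions; the channel counts, dilation rates and fusion strategy are assumptions, not the exact Res-UNet configuration of [19]:

```python
import torch
import torch.nn as nn

class AtrousResBlock(nn.Module):
    """Residual block with parallel atrous convolutions at several
    dilation rates, fused by a 1x1 convolution and added back to the
    input through a residual (skip) connection."""

    def __init__(self, channels: int, dilations=(1, 2, 4)):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Sequential(
                nn.Conv2d(channels, channels, 3, padding=d, dilation=d, bias=False),
                nn.BatchNorm2d(channels),
                nn.ReLU(inplace=True),
            )
            for d in dilations
        ])
        self.fuse = nn.Conv2d(channels * len(dilations), channels, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        multi_scale = torch.cat([branch(x) for branch in self.branches], dim=1)
        return x + self.fuse(multi_scale)  # residual connection
```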

Deeplab v3 plus

Deeplab v3 plus is a novel encoder-decoder structure which employs Deeplab v3 as a powerful encoder module and a simple yet effective decoder module [21]. Deeplab v3 plus adapts the Xception model for the segmentation task and applies depthwise separable convolution to both atrous spatial pyramid pooling (ASPP) module and decoder module, resulting in a faster and stronger encoder-decoder with atrous convolution network [22, 23]. The atrous convolution is a powerful tool to control the resolution of features computed by deep convolutional neural networks and adjust the filter's field-of-view to capture multi-scale information, generalizing the standard convolution operation. The depthwise separable convolution factorizes a standard convolution into a depthwise convolution followed by a point-wise convolution.
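The factorization can be expressed compactly in PyTorch; the following is a generic sketch of a depthwise separable convolution, not code from the Deeplab v3 plus implementation:

```python
import torch.nn as nn

class DepthwiseSeparableConv(nn.Module):
    """A standard KxK convolution factorized into a per-channel
    (depthwise) convolution followed by a 1x1 (pointwise) convolution."""

    def __init__(self, in_ch: int, out_ch: int, kernel_size: int = 3,
                 dilation: int = 1):
        super().__init__()
        padding = dilation * (kernel_size - 1) // 2  # preserve spatial size
        self.depthwise = nn.Conv2d(in_ch, in_ch, kernel_size,
                                   padding=padding, dilation=dilation,
                                   groups=in_ch, bias=False)
        self.pointwise = nn.Conv2d(in_ch, out_ch, kernel_size=1, bias=False)

    def forward(self, x):
        return self.pointwise(self.depthwise(x))
```

Compared with a standard convolution, this reduces both parameters and computation roughly by a factor of the kernel area, which is why it appears throughout the ASPP and decoder modules.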

Swin-UNet

Swin-UNet is a UNet-like pure Transformer for medical image segmentation [24]. The tokenized image patches are fed into the Transformer-based U-shaped Encoder–Decoder architecture with skip-connections for local–global semantic feature learning. Swin-UNet consists of an encoder, bottleneck, decoder, and skip connections. The encoder uses hierarchical Swin Transformer with shifted windows to extract context features [25]. The symmetric Swin Transformer-based decoder with patch expanding layer is designed to perform the up-sampling operation to restore the spatial resolution of the feature maps. Similar to the U-Net, the skip connections are used to fuse the multi-scale features from the encoder with the up-sampled features.
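Window attention operates on non-overlapping local windows; the helper below, a standard Swin-style partition (assuming H and W are divisible by the window size), illustrates the reshaping involved:

```python
import torch

def window_partition(x: torch.Tensor, window_size: int) -> torch.Tensor:
    """Split a (B, H, W, C) feature map into non-overlapping windows of
    shape (num_windows * B, window_size, window_size, C); self-attention
    is then computed independently within each window."""
    B, H, W, C = x.shape
    x = x.view(B, H // window_size, window_size,
               W // window_size, window_size, C)
    return x.permute(0, 1, 3, 2, 4, 5).reshape(-1, window_size, window_size, C)
```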

UNeXt

UNeXt is the first convolutional multilayer perceptron (MLP)-based network for image segmentation [26]. It is designed with an early convolutional stage followed by an MLP stage operating on the latent representation. A tokenized MLP block is used to tokenize and project the convolutional features, and the MLPs model the representation while focusing on learning local dependencies by shifting the channels of the inputs. The network also includes skip connections between corresponding levels of the encoder and decoder to fuse multi-scale features.
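A simplified sketch of the shifted-channel idea follows; the grouping, shift offsets and MLP widths are illustrative assumptions and omit details of the actual UNeXt tokenized MLP block [26]:

```python
import torch
import torch.nn as nn

class ShiftedMLP(nn.Module):
    """Channel groups are spatially shifted by different offsets before
    an MLP mixes them, so the MLP sees slightly displaced neighborhoods
    and learns local dependencies."""

    def __init__(self, dim: int, shift: int = 1, groups: int = 5):
        super().__init__()
        self.shift, self.groups = shift, groups
        self.mlp = nn.Sequential(nn.Linear(dim, dim * 2), nn.GELU(),
                                 nn.Linear(dim * 2, dim))

    def forward(self, x: torch.Tensor) -> torch.Tensor:  # x: (B, H, W, C)
        chunks = torch.chunk(x, self.groups, dim=-1)
        offsets = range(-(self.groups // 2), self.groups // 2 + 1)
        shifted = [torch.roll(c, o * self.shift, dims=2)
                   for c, o in zip(chunks, offsets)]
        return self.mlp(torch.cat(shifted, dim=-1))
```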

CENet

CENet is a context encoder network designed to capture more high-level information and preserve spatial information for 2D medical image segmentation [27]. It consists of three major components: a feature encoder module, a context extractor, and a feature decoder module. The feature encoder module uses a pretrained ResNet block as a fixed feature extractor. The context extractor module is formed by a newly proposed dense atrous convolution (DAC) block and a residual multi-kernel pooling (RMP) block. The feature decoder module restores the high-level semantic features extracted by the feature encoder and context extractor modules.
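To convey the flavor of the DAC block, the sketch below sums four parallel branches with growing receptive fields; the branch depths and dilation rates are assumptions rather than the exact CENet configuration [27]:

```python
import torch.nn as nn

class DACBlock(nn.Module):
    """Four parallel branches with increasing effective receptive fields
    (via stacked dilated convolutions); their outputs are summed with the
    input feature map."""

    def __init__(self, ch: int):
        super().__init__()
        self.b1 = nn.Conv2d(ch, ch, 3, padding=1, dilation=1)
        self.b2 = nn.Sequential(nn.Conv2d(ch, ch, 3, padding=3, dilation=3),
                                nn.Conv2d(ch, ch, 1))
        self.b3 = nn.Sequential(nn.Conv2d(ch, ch, 3, padding=1, dilation=1),
                                nn.Conv2d(ch, ch, 3, padding=3, dilation=3),
                                nn.Conv2d(ch, ch, 1))
        self.b4 = nn.Sequential(nn.Conv2d(ch, ch, 3, padding=1, dilation=1),
                                nn.Conv2d(ch, ch, 3, padding=3, dilation=3),
                                nn.Conv2d(ch, ch, 3, padding=5, dilation=5),
                                nn.Conv2d(ch, ch, 1))
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.relu(x + self.b1(x) + self.b2(x) + self.b3(x) + self.b4(x))
```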

IVUS quantitative parameter measurement

The automated segmentation of the lumen and EEM from IVUS images allows us to measure a number of quantitative parameters that reflect the extent of coronary artery disease. Given a set of IVUS pullbacks, the lumen and EEM are segmented in all images, and parameters are computed, including lumen measurement parameters, EEM measurement parameters and plaque measurement parameters [28]. Specifically, the lumen measurement parameters include minimum lumen diameter, maximum lumen diameter, lumen eccentricity index and lumen cross-sectional area (CSA). The EEM measurement parameters include minimum EEM diameter, maximum EEM diameter and EEM-CSA. The plaque measurement parameters include maximum plaque thickness, minimum plaque thickness, plaque eccentricity index, plaque CSA and plaque burden. Both the minimum lumen diameter and the lumen cross-sectional area are important references for the degree of lumen stenosis.
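Several of these parameters follow directly from pixel counts once the masks and the physical pixel spacing are known; the sketch below is an illustration of the area-based ones (not the authors' measurement code; the diameter and thickness parameters additionally require contour geometry):

```python
import numpy as np

def area_parameters(lumen: np.ndarray, eem: np.ndarray,
                    mm_per_pixel: float) -> dict:
    """Compute area-based IVUS parameters from binary lumen and EEM
    masks, given the physical pixel spacing in millimetres."""
    lumen_csa = lumen.astype(bool).sum() * mm_per_pixel ** 2   # mm^2
    eem_csa = eem.astype(bool).sum() * mm_per_pixel ** 2       # mm^2
    plaque_csa = eem_csa - lumen_csa                           # plaque-plus-media area
    plaque_burden = 100.0 * plaque_csa / eem_csa               # % of EEM-CSA
    return {"Lumen-CSA": lumen_csa, "EEM-CSA": eem_csa,
            "PCSA": plaque_csa, "PB": plaque_burden}
```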

Statistical analysis

All statistical analyses were performed using R statistical and computing software (http://www.r-project.org). The statistical analysis of this study covered three aspects. First, detailed demographic statistics and hypothesis testing were carried out on the study cohort and the divided datasets. Second, for the segmentation results, the Dice similarity coefficient (DSC), intersection over union (IoU) and Hausdorff distance (HD) were used to measure the accuracy of the algorithms. Finally, for the IVUS quantitative measurement parameters, intraclass correlation coefficients (ICC) and Bland–Altman analysis were used to analyze the agreement of the automated segmentation measurements with the manual segmentation measurements [29].
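For reference, the three segmentation metrics can be computed from binary masks and contour point sets as follows; this is a generic sketch, with the HD computed via SciPy's directed Hausdorff distance and contours assumed to be given as (N, 2) point arrays:

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def dsc(pred: np.ndarray, gt: np.ndarray) -> float:
    """Dice similarity coefficient between binary masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    return 2.0 * inter / (pred.sum() + gt.sum())

def iou(pred: np.ndarray, gt: np.ndarray) -> float:
    """Intersection over union between binary masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    return np.logical_and(pred, gt).sum() / np.logical_or(pred, gt).sum()

def hausdorff(pred_pts: np.ndarray, gt_pts: np.ndarray) -> float:
    """Symmetric Hausdorff distance between two contours given as
    (N, 2) arrays of boundary points."""
    return max(directed_hausdorff(pred_pts, gt_pts)[0],
               directed_hausdorff(gt_pts, pred_pts)[0])
```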

Availability of data and materials

The datasets analyzed during the current study are available from the corresponding author on reasonable request.

Abbreviations

IVUS: Intravascular ultrasound
EEM: External elastic membrane
FCN: Fully convolutional network
CAD: Coronary artery disease
LAD: Left anterior descending
LCX: Left circumflex
RCA: Right coronary artery
ANOVA: Analysis of variance
DSC: Dice similarity coefficient
IoU: Intersection over union
HD: Hausdorff distance
MinLD: Minimum lumen diameter
MaxLD: Maximum lumen diameter
LEI: Lumen eccentricity index
Lumen-CSA: Lumen cross-sectional area
MinEEMD: Minimum EEM diameter
MaxEEMD: Maximum EEM diameter
EEM-CSA: EEM cross-sectional area
MinPT: Minimum plaque thickness
MaxPT: Maximum plaque thickness
PEI: Plaque eccentricity index
PCSA: Plaque cross-sectional area
PB: Plaque burden
SOTA: State of the art
ICC: Intraclass correlation coefficient
MLP: Multilayer perceptron
ASPP: Atrous spatial pyramid pooling
DAC: Dense atrous convolution
RMP: Residual multi-kernel pooling

References

  1. Nissen SE, Yock P. Intravascular ultrasound: novel pathophysiological insights and current clinical applications. Circulation. 2001;103(4):604–16.

  2. McDaniel MC, Eshtehardi P, Sawaya FJ, et al. Contemporary clinical applications of coronary intravascular ultrasound. JACC Cardiovasc Interv. 2011;4(11):1155–67.

  3. Wang R, Lei T, Cui R, et al. Medical image segmentation using deep learning: a survey. IET Image Proc. 2022;16(5):1243–67.

  4. Hatamizadeh A, Tang Y, Nath V, et al. Unetr: transformers for 3D medical image segmentation. In: Proceedings of the IEEE/CVF winter conference on applications of computer vision. 2022. p. 574–84.

  5. Liu X, Song L, Liu S, et al. A review of deep-learning-based medical image segmentation methods. Sustainability. 2021;13(3):1224.

  6. Du G, Cao X, Liang J, et al. Medical image segmentation based on U-Net: a review. J Imaging Sci Technol. 2020. https://doi.org/10.2352/J.ImagingSci.Technol.2020.64.2.020508.

  7. Hesamian MH, Jia W, He X, et al. Deep learning techniques for medical image segmentation: achievements and challenges. J Digit Imaging. 2019;32:582–96.

  8. Yang J, Tong L, Faraji M, et al. IVUS-Net: an intravascular ultrasound segmentation network. In: Smart multimedia: first international conference, ICSM 2018, Toulon, France, August 24–26, 2018, revised selected papers 1. Springer International Publishing; 2018. p. 367–77.

9. Tong J, Li K, Lin W, et al. Automatic lumen border detection in IVUS images using dictionary learning and kernel sparse representation. Biomed Signal Process Control. 2021;66:102489.

  10. Dong L, Jiang W, Lu W, et al. Automatic segmentation of coronary lumen and external elastic membrane in intravascular ultrasound images using 8-layer U-Net. Biomed Eng Online. 2021;20(1):1–9.

11. Du H, Ling L, Yu W, et al. Convolutional networks for the segmentation of intravascular ultrasound images: evaluation on a multicenter dataset. Comput Methods Programs Biomed. 2022;215:106599.

  12. Kim S, Jang Y, Jeon B, et al. Fully automatic segmentation of coronary arteries based on deep neural network in intravascular ultrasound images. In: Intravascular imaging and computer assisted stenting and large-scale annotation of biomedical data and expert label synthesis: 7th joint international workshop, CVII-STENT 2018 and third international workshop, LABELS 2018, held in conjunction with MICCAI 2018, Granada, Spain, September 16, 2018, proceedings 3. Springer International Publishing; 2018. p. 161–8.

  13. Balocco S, Gatta C, Ciompi F, et al. Standardized evaluation methodology and reference database for evaluating IVUS image segmentation. Comput Med Imaging Graph. 2014;38(2):70–90.

  14. Kirillov A, Mintun E, Ravi N, et al. Segment anything. arXiv preprint. 2023. arXiv:2304.02643.

  15. Ma J, Wang B. Segment anything in medical images. arXiv preprint. 2023. arXiv:2304.12306.

  16. Mazurowski MA, Dong H, Gu H, et al. Segment anything model for medical image analysis: an experimental study. arXiv preprint. 2023. arXiv:2304.10517.

  17. Lüddecke T, Ecker A. Image segmentation using text and image prompts. In: Proceedings of the IEEE/CVF conference on computer vision and pattern recognition. 2022. p. 7086–96.

  18. Wu J. PromptUNet: toward interactive medical image segmentation. arXiv preprint. 2023. arXiv:2305.10300.

  19. Khanna A, Londhe ND, Gupta S, et al. A deep residual U-Net convolutional neural network for automated lung segmentation in computed tomography images. Biocybern Biomed Eng. 2020;40(3):1314–27.

  20. Ronneberger O, Fischer P, Brox T. U-net: convolutional networks for biomedical image segmentation. In: Medical image computing and computer-assisted intervention—MICCAI 2015: 18th international conference, Munich, Germany, October 5-9, 2015, proceedings, Part III 18. Springer International Publishing; 2015. p. 234–41.

21. Chen LC, Papandreou G, Schroff F, et al. Rethinking atrous convolution for semantic image segmentation. arXiv preprint. 2017. arXiv:1706.05587.

  22. Chen LC, Papandreou G, Kokkinos I, et al. Deeplab: semantic image segmentation with deep convolutional nets, atrous convolution, and fully connected crfs. IEEE Trans Pattern Anal Mach Intell. 2017;40(4):834–48.

  23. Chollet F. Xception: deep learning with depthwise separable convolutions. In: Proceedings of the IEEE conference on computer vision and pattern recognition. 2017. p. 1251–8.

  24. Cao H, Wang Y, Chen J, et al. Swin-unet: Unet-like pure transformer for medical image segmentation. In: European conference on computer vision. Cham: Springer Nature Switzerland; 2022. p. 205–18.

  25. Liu Z, Lin Y, Cao Y, et al. Swin transformer: hierarchical vision transformer using shifted windows. In: Proceedings of the IEEE/CVF international conference on computer vision. 2021. p. 10012–22.

  26. Valanarasu JMJ, Patel VM. Unext: Mlp-based rapid medical image segmentation network. In: International conference on medical image computing and computer-assisted intervention. Cham: Springer Nature Switzerland; 2022. p. 23–33.

  27. Gu Z, Cheng J, Fu H, et al. Ce-net: context encoder network for 2D medical image segmentation. IEEE Trans Med Imaging. 2019;38(10):2281–92.

  28. Dijkstra J, Koning G, Reiber JHC. Quantitative measurements in IVUS images. Int J Card Imaging. 1999;15:513–22.

  29. Bartko JJ. The intraclass correlation coefficient as a measure of reliability. Psychol Rep. 1966;19(1):3–11.


Acknowledgements

The authors thank the Second Affiliated Hospital of Zhejiang University for providing the IVUS datasets and ArteryFlow Technology Co., Ltd for technical support.

Funding

This study was supported by National Natural Science Foundation of China (82170329).

Author information

Authors and Affiliations

Authors

Contributions

LD: project administration; WL: original draft writing and revising; XLu: data cleaning and model training; XLeng: data curation; JX: project administration; CL: clinical interpretation and project administration. All authors reviewed the manuscript.

Corresponding authors

Correspondence to Jianping Xiang or Changling Li.

Ethics declarations

Ethics approval and consent to participate

As a retrospective study, we only collected data from patients’ historical IVUS examinations for analysis and did not have contact with the patients themselves. We have applied for a waiver of informed consent during the ethical review at The Second Affiliated Hospital of Zhejiang University. Our study was in accordance with the Declaration of Helsinki.

Consent for publication

Not applicable.

Competing interests

No benefits in any form have been received or will be received from a commercial party related directly or indirectly to the subject of this article.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated in a credit line to the data.

About this article

Cite this article

Dong, L., Lu, W., Lu, X. et al. Comparison of deep learning-based image segmentation methods for intravascular ultrasound on retrospective and large image cohort study. BioMed Eng OnLine 22, 111 (2023). https://doi.org/10.1186/s12938-023-01171-2
