Fig. 7 | BioMedical Engineering OnLine

From: Deep learning-driven multi-view multi-task image quality assessment method for chest CT image


Flowchart of the three sub-evaluations in the position evaluation. a The original CT image sequence. b The YOLOv8 detection result on a, used for the arm position evaluation. c, d The lung contour and body contour segmentation results obtained from a by U-Net, used for the scan baseline position evaluation and the body position evaluation, respectively. The dashed rectangle at the bottom illustrates the implementation of the region measurement algorithm: e the CT image with the scan baseline; f the scan baseline extracted from e; g the lung contour mask extracted from c; h the Canny edge-detection result of the mask in g, with the detected region enclosed by a green rectangle; and i the overlay of f and h, which is the final algorithm output. The dashed rectangle on the right illustrates the implementation of the distance measurement algorithm: j the body contour mask extracted from d; k the Canny edge-detection result of the mask in j, where blue points mark the center points of the body contour and green points mark the center of a circle with a radius of 50 pixels.
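The two measurement steps in the caption (Canny edge detection of a contour mask, a bounding rectangle overlaid on the scan baseline, and a body-contour center point) can be illustrated with a minimal OpenCV sketch. This is not the authors' implementation: the function names, Canny thresholds, and the assumption of grayscale inputs are illustrative only.

```python
import cv2
import numpy as np


def region_measurement(baseline_img, lung_mask):
    """Sketch of the region measurement step (panels f-i):
    bound the lung contour with a green rectangle and overlay it
    on the extracted scan baseline image."""
    # Edge map of the binary lung contour mask (panel h);
    # thresholds 50/150 are placeholder values
    edges = cv2.Canny(lung_mask, 50, 150)
    ys, xs = np.nonzero(edges)
    pts = np.column_stack([xs, ys]).astype(np.int32)
    x, y, w, h = cv2.boundingRect(pts)

    # Draw the green rectangle on top of the baseline image (panel i)
    overlay = cv2.cvtColor(baseline_img, cv2.COLOR_GRAY2BGR)
    cv2.rectangle(overlay, (x, y), (x + w, y + h), (0, 255, 0), 2)
    return overlay


def body_center(body_mask):
    """Sketch of the distance measurement step (panels j-k):
    locate the center point of the body contour via image moments."""
    edges = cv2.Canny(body_mask, 50, 150)
    m = cv2.moments(edges, binaryImage=True)
    cx, cy = int(m["m10"] / m["m00"]), int(m["m01"] / m["m00"])
    return cx, cy
```

Under these assumptions, the center point returned by `body_center` would correspond to the blue points in panel k, and the distance from it to a reference point (e.g., the 50-pixel-radius circle center shown in green) could then be measured directly.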