- Research
- Open Access

# Variational based smoke removal in laparoscopic images

- Congcong Wang^{1}
- Faouzi Alaya Cheikh^{1}
- Mounir Kaaniche^{2}
- Azeddine Beghdadi^{2}
- Ole Jacob Elle^{3, 4}

**17**:139

https://doi.org/10.1186/s12938-018-0590-5

© The Author(s) 2018

**Received:** 22 August 2018 · **Accepted:** 11 October 2018 · **Published:** 19 October 2018

## Abstract

### Background

In laparoscopic surgery, image quality can be severely degraded by surgical smoke, which not only introduces errors for the image processing algorithms (used in image guided surgery), but also reduces the visibility of the observed organs and tissues. To overcome these drawbacks, this work aims to remove smoke in laparoscopic images using an image preprocessing method based on a variational approach.

### Methods

In this paper, we present the physical smoke model, where the degraded image is separated into two parts, *direct attenuation* and *smoke veil*, and propose an efficient variational-based desmoking method for laparoscopic images. To estimate the *smoke veil*, the proposed method relies on the observation that the *smoke veil* has low contrast and low inter-channel differences. A cost function is defined based on this prior knowledge and solved using an augmented Lagrangian method. The obtained *smoke veil* is then subtracted from the original degraded image, resulting in the *direct attenuation* part. Finally, the smoke free image is computed using a linear intensity transformation of the *direct attenuation* part.

### Results

The performance of the proposed method is evaluated quantitatively and qualitatively on three datasets: two public real smoked laparoscopic datasets and one generated synthetic dataset. No-reference and reduced-reference image quality assessment metrics are used with the two real datasets and show that the proposed method outperforms state-of-the-art ones. Besides, standard full-reference metrics are employed with the *synthetic dataset* and also indicate the good performance of the proposed method. Furthermore, qualitative visual inspection of the results shows that our method removes smoke effectively from laparoscopic images.

### Conclusion

All the obtained results show that the proposed approach reduces the smoke effectively while preserving the important perceptual information of the image. This provides a better visualization of the operation field for surgeons and improves the image guided laparoscopic surgery procedure.

## Keywords

- Laparoscopic images
- Smoke removal
- Dehazing
- Variational approach
- Image quality

## Background

In the last decade, technological advances in laparoscopic devices and medical imaging, together with the demand for minimally invasive approaches, have led to an increase in the number of laparoscopic surgeries [1]. During a laparoscopic surgery, a number of small incisions are made by the surgeon; a needle is then inserted to expand the abdomen with carbon dioxide gas, making room for other instruments. The surgeon uses specialized instruments, for example a telescope or an ultrasonic probe, to visualize the abdominal cavity. Thus, the video/images captured by the laparoscope constitute one of the most important intra-operative data modalities. High video/image quality is of vital importance for the operating surgeons and for computer-vision-based navigation systems [2–4].

However, artifacts that occur during laparoscopic surgery, including smoke, blood, dynamic illumination conditions and specular reflections [5], deteriorate image quality. In particular, smoke caused by laser ablation and electrocautery [6] significantly reduces the contrast and radiance information over large areas of the scene. Surgeons’ visibility inevitably suffers from this degradation [2]. Besides, the computer vision algorithms used in image guided navigation systems are designed mainly for clear images, so smoke degrades their performance, especially in heavy-smoke areas [7]. Therefore, when smoke is detected [8], smoke removal by image processing techniques becomes necessary in laparoscopic surgery, both to provide surgeons with a clear visualization of the operation field and to avoid degrading the performance of computer vision algorithms.

In this paper, we present the physical smoke model and propose an efficient variational-based desmoking method for laparoscopic images. Instead of using the widely employed atmospheric scattering model [9, 10] and estimating the transmission or depth map as well as the atmospheric light, we resort to a smoke model where the degraded image is separated into two parts: *direct attenuation* and *smoke veil*. The estimation of the *smoke veil* relies on two assumptions: the *smoke veil* has low contrast and low RGB inter-channel differences. The *direct attenuation* part is then obtained by subtracting the *smoke veil* from the original degraded image. Finally, the smoke free image is recovered from the *direct attenuation* part by a linear intensity transformation.

The remainder of this paper is organized as follows. In “Related work” section, a review of laparoscopic image desmoking methods as well as image dehazing is given. In “Physical model of smoke image acquisition” section, the retained physical model for smoke image is derived from the atmospheric scattering model. “Proposed smoke removal approach” section describes our proposed approach by defining the energy function and the optimization procedure for *smoke veil* estimation as well as the smoke free image recovery strategy. Finally, in “Experimental results” section, quantitative and qualitative results are presented and some conclusions are drawn in “Conclusion” section.

## Related work

The widely used atmospheric scattering model [9] expresses the observed hazy image \(\mathbf {I}\) as a combination of the scene radiance \(\mathbf {J}\) and the global atmospheric light \(\mathbf {A}\):

$$\mathbf {I}(x,y) = \mathbf {J}(x,y)\,t(x,y) + \mathbf {A}\,\big (1 - t(x,y)\big ), \qquad (1)$$

where *t* is the medium transmission map, considered to decrease exponentially with the light penetration depth (scene depth). A rough mapping between the 3D scene and the projected 2D representation of the transmission map can be expressed as follows:

$$t(x,y) = e^{-\beta d(x,y)}, \qquad (2)$$

where *d*(*x*, *y*) is the scene depth and \(\beta\) is the scattering coefficient [9, 18]. The dehazing process then reduces to estimating \(\mathbf {J}\) by computing the transmission *t* or the depth *d* and the global atmospheric light \(\mathbf {A}\).

Based on the above physical model, the recent desmoking methods have been developed by using different tools based on Bayesian inference [11, 12], statistical observation model called refined dark channel prior [13], fusion scheme strategy [14] and deep learning [15].

Indeed, in [11], the authors formulated a joint desmoking and denoising problem as a Bayesian inference problem based on a probabilistic graphical model. This work was then extended in [12] to desmoking, denoising and specularity removal based on learned color probability density functions (PDFs) and undegraded texture priors. In [13], an adapted dark-channel prior combined with a histogram equalization method is presented. Since directly applying the original dark channel prior dehazing method proposed in [17] would introduce color distortion as a result of its improper assumptions, Tchaka et al. modify the dark channel values by performing a thresholding operation or refining them with empirically selected parameters. A histogram equalization step is then applied to enhance the contrast. A frame-by-frame analysis shows that the method can reach a better visual quality, but a higher MSE (mean square error), compared to the original method presented in [17]. In [14], a visibility-driven fusion defogging framework is proposed. The atmospheric veil is first estimated by a bilateral of bilateral grid (BBG), inspired by the work presented in [19]. The visibility is then recovered by inverting the physical model. A contrast enhancement and luminance fusion scheme is employed to further correct the dark pixels and color distortion. No-reference image quality assessment metrics are used as objective measures to evaluate the obtained results, which show a promising performance of the method, although it is inefficient in the case of thick and dense smoke. Recently, a deep learning desmoking method has been proposed in [15]. Sabri et al. generate synthetic smoke with Perlin noise [20], which is widely used in computer graphics, and embed it linearly into clear non-smoke images. The synthetic dataset and a dehazing AOD-Net (All-in-One Dehazing Network) model [21] are then used for transfer learning. This method reaches 20 fps for 512 × 512 color videos but fails in the case of heterogeneous smoke with highly varied spatial density.

While there are few works related to smoke removal in laparoscopic images, a similar problem, referred to as image dehazing, has been widely studied in the literature [10]. Image dehazing algorithms can be simply divided into two categories: physical-model (Eq. 1) based image restoration approaches and non-physical-model based approaches.

For the first category, the parameters (e.g., transmission, global atmospheric light) of the physical model are estimated; the restored image is then obtained by inverting the physical model. Many image dehazing methods use the atmospheric scattering model and rely on the estimation of the transmission map *t* or the depth map of the images and the global atmospheric light [17, 19, 22]. He et al. propose the dark channel approach based on a statistical observation on outdoor haze-free images: for most haze-free natural images, pixel values are very low in at least one channel [17]. The transmission map *t* computed from this prior, together with an estimate of \(\mathbf {A}\) calculated from the detected most haze-opaque region of the image, is used to invert Eq. (1), resulting in a haze-free image. This is a well-known efficient approach, and many recent methods based on it have been proposed [13, 23]. Besides, some deep learning approaches have been proposed to better estimate the physical model’s parameters, as no hand-crafted features are required for this kind of approach. Cai et al. present a CNN (Convolutional Neural Network), DehazeNet, to estimate the transmission from input hazy images [24]. Later, an end-to-end AOD-Net was proposed to estimate a new parameter that integrates both *t* and \(\mathbf {A}\) [21]. Recently, a new architecture allowing to jointly estimate the transmission map, the atmospheric light and the haze-free image has been proposed in [25].
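As a concrete illustration of the dark channel prior described above, the sketch below computes the dark channel and estimates the atmospheric light \(\mathbf {A}\) from the most haze-opaque pixels. This is a minimal sketch of the idea in [17], not the authors' implementation; the patch size, the top-pixel fraction and the use of `scipy.ndimage.minimum_filter` are our assumptions.

```python
import numpy as np
from scipy.ndimage import minimum_filter

def dark_channel(image, patch_size=15):
    """Dark channel: per-pixel minimum over RGB, then a local minimum filter."""
    min_rgb = image.min(axis=2)                      # min over the color channels
    return minimum_filter(min_rgb, size=patch_size)  # min over a local patch

def estimate_atmospheric_light(image, dark, top_fraction=0.001):
    """A: mean color of the brightest dark-channel pixels (most haze-opaque region)."""
    n = max(1, int(dark.size * top_fraction))
    flat_idx = np.argsort(dark.ravel())[-n:]         # indices of the brightest dark-channel pixels
    rows, cols = np.unravel_index(flat_idx, dark.shape)
    return image[rows, cols].mean(axis=0)
```

As the paper notes, the statistical assumptions behind this prior do not hold for laparoscopic scenes, which is why the authors depart from it.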

Apart from the model-based approaches, some methods have been developed without estimating transmission or depth maps, for example by trying to improve the contrast [16, 26]. Tan et al. [16] enhance the hazy image directly by maximizing the local contrast under an airlight smoothness constraint. In [26], a variational contrast enhancement framework for image dehazing with a modified gray-world assumption is proposed. Later, an improved version was presented in [27], where a saturation term is added to the variational cost function, aiming to maximize contrast and saturation together. In [28], Galdran et al. further improved their work by enhancing faraway high-density-fog regions, which normally contain more fog, while preserving nearby low-density-fog regions. It is worth pointing out that these methods do not rely on a physical atmospheric model, but try to maximize contrast and saturation. In [29], a multi-scale fusion dehazing method is proposed by deriving white-balanced and contrast-enhanced inputs. The important features of the two inputs are filtered by luminance, chromaticity and saliency weight maps, and then fused by a multi-scale Laplacian and Gaussian pyramid strategy [30], similar to the scheme used in [31] for perceptual contrast enhancement. In [32], a gated fusion network is presented, in which three inputs are derived by white balance, contrast enhancement and gamma correction.

Although dehazing and desmoking are similar problems, haze density is related to scene depth, whereas smoke concentration is a local phenomenon that does not depend on the scene depth but rather on the thickness of the smoke. Moreover, in laparoscopic images, the light source is mounted on the instrument, so the illumination is not evenly distributed, and the organ surface is not a Lambertian surface. These properties violate the assumptions underlying Eq. (1), which makes it inappropriate to apply directly to laparoscopic images. Therefore, a robust and efficient desmoking approach is highly desirable. In this paper, we propose a smoke removal method based on the physical smoke model. The physical model for smoked laparoscopic images and the proposed method are presented in the following.

## Physical model of smoke image acquisition

In the smoke image acquisition model, *g* denotes the optical settings of the camera, \(\eta\) represents the nature of the illumination, \(\rho (\lambda )\) accounts for the scene element’s aperture and the scene point’s reflectance, \(d_{s}\) is the smoke thickness and \(d_{d}\) is the distance between the scene element and the imaging sensor (depth). Let \(\mathbf {J}_{s}\) represent the smoke free radiance and \(t_{s}\) the transmission through the smoke. The radiance attenuated by the smoke is then

$$\mathbf {L} = \mathbf {J}_{s}\,t_{s}, \qquad t_{s} = e^{-\beta d_{s}}, \qquad (3)$$

which is referred to as the *direct attenuation*. Following the atmospheric model’s definition introduced in [16, 17], the additive component \(\mathbf {F}\) is defined as

$$\mathbf {F} = \mathbf {A}_{s}\,(1 - t_{s}), \qquad (4)$$

which is referred to as the *smoke veil*, where \(\mathbf {A}_{s}\) plays the role of the atmospheric light for the smoke. The degraded smoke image is thus modeled as

$$\mathbf {I} = \mathbf {L} + \mathbf {F} = \mathbf {J}_{s}\,t_{s} + \mathbf {A}_{s}\,(1 - t_{s}). \qquad (5)$$

Two key differences distinguish this smoke model from the atmospheric one:

- 1.
For the atmospheric model in Eq. (1), \(\mathbf {A}\) is a constant and represents the global atmospheric light, which can be derived from the radiance of an infinitely distant object. For the smoke model, \(\mathbf {A}_{s}\) depends on the illumination property \(\eta\), which is not a constant as a result of the laparoscope light source, and on the distance \(d_{d1}\) at which the smoke appears. Thus the conventional approaches used to estimate \(\mathbf {A}\) cannot be used to estimate \(\mathbf {A}_{s}\).

- 2.
The transmission *t* in Eq. (1) depends on the depth of the scene, while \(t_{s}\) in Eq. (5) depends on the smoke thickness. Therefore, depth-related transmission estimation approaches are not suitable for the above smoke model.

## Proposed smoke removal approach

As discussed in the previous section, estimating \(t_{s}\) and \(\mathbf {A}_{s}\) is a challenging, severely ill-posed problem: according to Eq. (5), there is only one known variable, \(\mathbf {I}\), but three unknown variables (\(\mathbf {J}_{s}\), \(t_{s}\) and \(\mathbf {A}_{s}\)). In order to solve this under-constrained problem, instead of estimating \(\mathbf {A}_{s}\) and \(t_{s}\) separately, we propose to perform the task in two steps: estimation of the *smoke veil* \(\mathbf {F}\), and computation of the smoke free image \(\mathbf {J}_{s}\) from the *direct attenuation* \(\mathbf {L}\). In this section, these two steps are described and discussed.

### Smoke veil estimation

The additive *smoke veil* is a whitish or grayish veil which is mainly a function of the illumination properties and the smoke thickness. The illumination might not be evenly distributed, but it is smoothly distributed. Although the smoke thickness is not closely correlated with the depth of the scene, when there is a large depth jump in the scene, the smoke thickness is more likely to change. This observation leads us to our first assumption regarding the properties of the *smoke veil*: the *smoke veil* is smoothly distributed, except in regions exhibiting large scene depth changes.

The physical model assumes that the scattering and transmission properties are independent of wavelength. Therefore, the same *smoke veil* value is added to each of the RGB channels. This leads to our second assumption: the *smoke veil*’s RGB channel intensities are equal, which means they have low inter-channel differences (the differences between the R–G, G–B and B–R channels are low). Our task thus reduces to estimating a *smoke veil* that satisfies these two assumptions.

We formulate the *smoke veil* estimation as an ill-posed inverse problem and solve it by optimizing a variational model under the two assumptions described above. More precisely, an energy function is first defined and then minimized (i.e., optimized) via an augmented Lagrangian method, as we shall describe next.

#### Energy function

Based on the assumption that the *smoke veil*’s variation is smooth, which means it has low contrast and low RGB inter-channel differences, we propose to estimate the *smoke veil* by minimizing the following energy function:

$$E(\mathbf {F}) = \frac{\gamma }{2}\,\left\| \mathbf {F} - \mathbf {I}\right\| _{2}^{2} + \left\| \mathbf {F}_{TV}\right\| _{2}, \qquad (6)$$

where \(\mathbf {F}\) is the *smoke veil* to be estimated, \(\gamma\) is a scalar to adjust the weights between the two terms of the equation, and \(\left\| \mathbf {F}_{TV}\right\| _{2}\) is an isotropic total variation (TV) norm which is given by:

$$\left\| \mathbf {F}_{TV}\right\| _{2} = \sum _{i} \sqrt{\theta _{x}\,[\mathbf {D}_{x}\mathbf {F}]_{i}^{2} + \theta _{y}\,[\mathbf {D}_{y}\mathbf {F}]_{i}^{2} + \theta _{c}\,[\mathbf {D}_{c}\mathbf {F}]_{i}^{2}}, \qquad (7)$$

where *i* denotes the pixel’s index, \(\theta _{x}\), \(\theta _{y}\), \(\theta _{c}\) are three scalar parameters balancing the weights between the gradients of the color image and the inter-channel differences, and \(\mathbf {D}_{x}\), \(\mathbf {D}_{y}\), \(\mathbf {D}_{c}\) are the forward differential operators along the three dimensions, computed as:

$$[\mathbf {D}_{x}\mathbf {F}](x,y,c) = \mathbf {F}(x+1,y,c) - \mathbf {F}(x,y,c),$$
$$[\mathbf {D}_{y}\mathbf {F}](x,y,c) = \mathbf {F}(x,y+1,c) - \mathbf {F}(x,y,c),$$
$$[\mathbf {D}_{c}\mathbf {F}](x,y,c) = \mathbf {F}(x,y,c+1) - \mathbf {F}(x,y,c),$$

where (*x*, *y*, *c*) represents the pixel coordinates of the color image, with horizontal and vertical directions (*x*, *y*) and channel direction *c*. Using matrix-vector notation, \([\mathbf {D}_{d}\mathbf {F}]_i\), with \(d \in \{x,y,c\}\), denotes the *i*-th component of the one-dimensional vector obtained from \(\mathbf {D}_{d}\mathbf {F}\).

The first term in Eq. (6) is the data fidelity term, which aims to keep the similarity between the estimated *smoke veil* and the input degraded image. In our case, the parameter \(\gamma\) is set to a small value to enforce \(\mathbf {F}\) to be consistent with \(\mathbf {I}\) only for the large-scale structure, for example regions with large depth jumps. As shown in Fig. 3b, when there is a depth jump, the corresponding estimated *smoke veil* is not smooth in these regions.

The second term is the regularization term, which imposes our assumptions as constraints on the image. The TV norm is chosen for its convexity and edge-preserving ability [34]. The TV term is emphasized more when \(\gamma\) is small. This term enforces \(\mathbf {F}\) to fulfill our assumptions: smoothness or low contrast (low derivatives of \(\mathbf {F}\) with respect to the variables *x* and *y*) and low inter-channel differences (low derivative of \(\mathbf {F}\) with respect to the variable *c*).
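To make Eq. (6) concrete, the following sketch evaluates the energy for a candidate *smoke veil*. It is an illustrative implementation under assumptions not fixed by the text: forward differences replicate the last element at the borders (so the final difference is zero), and images are float arrays in [0, 1].

```python
import numpy as np

def smoke_veil_energy(F, I, gamma=1.0, theta=(1.0, 1.0, 1.0)):
    """Energy of Eq. (6): data fidelity plus isotropic TV over x, y and channel c.

    F, I: H x W x 3 float arrays; theta = (theta_x, theta_y, theta_c).
    """
    # Data fidelity: keeps F close to the large-scale structure of I.
    fidelity = 0.5 * gamma * np.sum((F - I) ** 2)

    # Forward differential operators D_x, D_y, D_c of Eq. (7)
    # (last element replicated, so the border difference is zero).
    Dx = np.diff(F, axis=1, append=F[:, -1:, :])   # horizontal gradient
    Dy = np.diff(F, axis=0, append=F[-1:, :, :])   # vertical gradient
    Dc = np.diff(F, axis=2, append=F[:, :, -1:])   # inter-channel difference

    tx, ty, tc = theta
    tv = np.sum(np.sqrt(tx * Dx**2 + ty * Dy**2 + tc * Dc**2))
    return fidelity + tv
```

A spatially constant veil with equal channels has zero TV energy, while any inter-channel imbalance is penalized through the \(\mathbf {D}_{c}\) term, matching the two assumptions.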

#### Optimization method
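The derivation of the augmented Lagrangian updates is not reproduced in this excerpt. As a hedged illustration of how a cost of the form of Eq. (6) can be minimized, the sketch below uses a standard ADMM / split-variable scheme: an auxiliary variable holds the forward differences, updated by isotropic shrinkage, and periodic boundary conditions (including channel wrap-around, our assumption) give the \(\mathbf {F}\)-subproblem a closed-form FFT solution. This is a generic stand-in for the paper's solver, not the authors' exact algorithm.

```python
import numpy as np

def grad(F):
    """Circular forward differences along x, y and channel c (stacked)."""
    return np.stack([np.roll(F, -1, axis=1) - F,
                     np.roll(F, -1, axis=0) - F,
                     np.roll(F, -1, axis=2) - F])

def div(Z):
    """Adjoint D^T of the circular forward-difference operator."""
    Dx, Dy, Dc = Z
    return ((np.roll(Dx, 1, axis=1) - Dx) +
            (np.roll(Dy, 1, axis=0) - Dy) +
            (np.roll(Dc, 1, axis=2) - Dc))

def estimate_smoke_veil(I, gamma=1.0, rho=5.0, n_iter=50):
    """ADMM minimization of  gamma/2 ||F - I||^2 + ||DF||_{2,1}  (cf. Eq. 6)."""
    H, W, C = I.shape
    # Eigenvalues of D^T D under periodic boundaries (diagonal in the 3D DFT basis).
    wx = np.fft.fftfreq(W)[None, :, None]
    wy = np.fft.fftfreq(H)[:, None, None]
    wc = np.fft.fftfreq(C)[None, None, :]
    lap = (np.abs(1 - np.exp(-2j * np.pi * wx)) ** 2 +
           np.abs(1 - np.exp(-2j * np.pi * wy)) ** 2 +
           np.abs(1 - np.exp(-2j * np.pi * wc)) ** 2)
    denom = gamma + rho * lap

    F = I.copy()
    Z = grad(F)                    # split variable Z ~ DF
    U = np.zeros_like(Z)           # scaled dual variable
    for _ in range(n_iter):
        # F-update: solve (gamma + rho D^T D) F = gamma I + rho D^T (Z - U) via FFT.
        rhs = gamma * I + rho * div(Z - U)
        F = np.real(np.fft.ifftn(np.fft.fftn(rhs) / denom))
        # Z-update: isotropic soft-thresholding of DF + U.
        V = grad(F) + U
        mag = np.sqrt((V ** 2).sum(axis=0, keepdims=True))
        Z = np.maximum(mag - 1.0 / rho, 0) * V / np.maximum(mag, 1e-12)
        # Dual update.
        U = U + grad(F) - Z
    return F
```

The penalty value `rho=5.0` mirrors the \(\rho =5\) reported for Eq. (10) in the experiments, though the correspondence is only approximate.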

### Smoke free image recovery

Once the *smoke veil* \(\mathbf {F}\) has been estimated, the *direct attenuation* \(\mathbf {L}\) is calculated as:

$$\mathbf {L}(c) = \mathbf {I}(c) - \alpha (c)\,\mathbf {F}(c), \qquad c \in \{R, G, B\}, \qquad (14)$$

where \(\alpha (c)\) is a weighting factor that controls how much of the estimated *smoke veil* is deducted from the input image. A higher value results in less smoke left in the *direct attenuation*, but causes darker pixel values, as shown in Fig. 2. The latter illustrates the restored smoke free images obtained with different \(\alpha\) values. The perceptual smoke density reaches lower levels when \(\alpha (c)=0.8\) and \(\alpha (c)=1\). However, this leads to much distortion, and many pixels become quite dark, as shown in Fig. 2g, h.

One of the main factors determining the *smoke veil* is the smoke thickness. Higher *smoke veil* values mean there is heavier smoke in the image, and so a higher \(\alpha\) value is required. Therefore, \(\alpha\) can simply be set to the mean value of the estimated *smoke veil* over each of the RGB channels. For the example illustrated in Fig. 2, the calculated \(\alpha\) values are 0.5747 for the R channel, 0.5745 for the G channel and 0.5745 for the B channel. The estimated smoke free image is displayed in Fig. 2e, which shows good visual quality.

Figure 3b illustrates the estimated *smoke veil*. When this part is eliminated from the original degraded image (Fig. 3a), we obtain the *direct attenuation*, which is a dim smoke free image, as shown in Fig. 3c. From Eq. (3), the object radiance is attenuated exponentially with the thickness of the smoke. In laparoscopic surgery, although the smoke thickness changes, the depth range of the scene is limited, which means there are no large jumps in the smoke thickness \(d_{s}\), and so the variation range of \(t_{s}\) is small. Therefore, instead of trying to estimate the actual values of the smoke thickness or \(t_{s}\), which is a challenging task, we apply a linear transformation that maps the R, G, B channel values of \(\mathbf L\) (Fig. 3c) to [0, 255], yielding the final smoke free image \(\mathbf {J}_{s}\) (Fig. 3d).
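The two recovery steps, subtracting the weighted veil and applying the per-channel linear stretch, can be sketched as follows, assuming float images in [0, 1] and \(\alpha (c)\) set to the per-channel mean of the estimated veil as described above.

```python
import numpy as np

def recover_smoke_free(I, F):
    """Recover the smoke free image: subtract the weighted veil (direct
    attenuation), then linearly stretch each channel to [0, 255].

    I, F: H x W x 3 float arrays (degraded image and estimated smoke veil).
    """
    J = np.empty_like(I)
    for c in range(3):
        alpha = F[..., c].mean()            # alpha(c): mean veil value per channel
        L = I[..., c] - alpha * F[..., c]   # direct attenuation for this channel
        lo, hi = L.min(), L.max()
        J[..., c] = 255.0 * (L - lo) / max(hi - lo, 1e-12)  # linear map to [0, 255]
    return J
```

The stretch plays the role of the (unknown) division by \(t_{s}\): since \(t_{s}\) varies little, an affine rescaling is assumed to be a sufficient approximation.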

## Experimental results

In this section, we present the experimental results to assess the performance of our proposed method. The employed datasets are first introduced. Then, a performance analysis of the proposed method is presented based on quantitative and qualitative results. The quantitative evaluation is carried out using no-reference and reduced-reference IQA (image quality assessment) metrics for two real degraded laparoscopic image datasets, *Dataset1* and *Dataset2*, while full-reference IQA metrics are used for a *synthetic dataset*.

In vivo procedure datasets [39, 40], taken from the Hamlyn Centre Laparoscopic/Endoscopic Video Dataset page [41], are used for validation. Real smoked images are selected manually from the original datasets: *Dataset1* has 96 smoked images and *Dataset2* contains 4031 images. Besides, as ground truth information for a smoked laparoscopic image is not available, we also evaluate the method on synthetic data, created by the method presented in [15]. Following [15], Perlin noise is employed to generate synthetic smoke, which is then linearly embedded into manually selected ground truth smoke free images. In this way, 100 test images are obtained, referred to as the *synthetic dataset*.
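A rough sketch of this synthetic smoke generation follows. Note that true Perlin gradient noise [20] is replaced here by a simple multi-octave value-noise stand-in, and the blending weight (`density`) is an assumption; the original pipeline of [15] differs in detail.

```python
import numpy as np

def pseudo_perlin(shape, octaves=4, seed=0):
    """Multi-octave smoothed value noise -- a simple stand-in for Perlin noise.
    Returns a field in [0, 1] used as a synthetic smoke mask."""
    rng = np.random.default_rng(seed)
    H, W = shape
    field = np.zeros(shape)
    for o in range(octaves):
        n = 2 ** (o + 2)                       # grid resolution for this octave
        coarse = rng.random((n, n))
        # Bilinear upsampling of the coarse grid to full resolution.
        ys = np.linspace(0, n - 1, H)
        xs = np.linspace(0, n - 1, W)
        y0 = np.floor(ys).astype(int); x0 = np.floor(xs).astype(int)
        y1 = np.minimum(y0 + 1, n - 1); x1 = np.minimum(x0 + 1, n - 1)
        fy = (ys - y0)[:, None]; fx = (xs - x0)[None, :]
        up = (coarse[np.ix_(y0, x0)] * (1 - fy) * (1 - fx) +
              coarse[np.ix_(y1, x0)] * fy * (1 - fx) +
              coarse[np.ix_(y0, x1)] * (1 - fy) * fx +
              coarse[np.ix_(y1, x1)] * fy * fx)
        field += up / 2 ** o                   # finer octaves get lower weight
    field -= field.min()
    return field / field.max()

def embed_smoke(clean, density=0.6, seed=0):
    """Linearly blend a whitish smoke layer into a clean image (cf. [15])."""
    smoke = pseudo_perlin(clean.shape[:2], seed=seed)[..., None] * density
    return clean * (1 - smoke) + smoke         # the smoke veil is white (1.0)
```

The clean frame then serves as the ground truth against which full-reference metrics are computed.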

In order to show the benefits of the proposed method, we compare it to the following recent approaches [13, 17, 27, 28], selected for their suitability for the desmoking task and the availability of their source code. The first one is the atmospheric model based image dehazing method with the dark channel prior [17], designated by DCP. It is important to note that a similar approach was investigated in [13] to remove smoke from laparoscopic images by adding thresholding or refining steps; in the following, this method is denoted by R-DCP. R-DCP is considered in the qualitative evaluation, but not in the quantitative one, because of its sensitivity to several parameters which have to be empirically selected for the input smoked images of the large experimental datasets. The third one, denoted by E-VAR, corresponds to an enhanced variational approach developed in [27]. Finally, the fourth one, designated by F-VAR, is a fusion-based variational technique [28]. E-VAR and F-VAR rely on a mild physical constraint, which makes them more suitable for the desmoking task.

### Influence of the parameters

Figure 4 shows the estimated *smoke veil* with different \(\gamma\) settings. A smaller value gives a smoother estimated *smoke veil* (Fig. 4b, c), while a higher value forces the *smoke veil* to be more similar to the original image (Fig. 4d–f). Therefore, a smaller \(\gamma\) should be chosen, as discussed in “Energy function” section. In our experiments, this parameter is set to 1.

Another important parameter in our approach is the \(\alpha\) value used in Eq. (14). Let us recall that the effect of this parameter on recovering the smoke free image has been shown in Fig. 2 and discussed in “Smoke free image recovery” section. Indeed, compared to the case \(\alpha (c)=1\), the obtained results show the benefits of weighting the *smoke veil* during the computation of the *direct attenuation* part. Following this analysis, in our experiments we set this parameter to the mean value of the estimated *smoke veil* over each of the RGB channels.

The other parameters, which had little influence, have been chosen as follows: \(\theta _{x}=\theta _{y}=\theta _{c}=1\) for Eq. (7) and \(\rho =5\) for Eq. (10).

### Quantitative evaluation on *Dataset1* and *Dataset2*

As ground-truth information for a smoked laparoscopic image is not available, we employ three no-reference IQA metrics and a reduced-reference one that compares the visibility of edges before and after smoke removal. To evaluate the smoke removal ability, the referenceless Fog Aware Density Evaluator (FADE), which has been used to evaluate perceptual fog density [42, 43], is employed; a lower FADE value means a lower perceptual fog density. Besides, a just-noticeable-blur based no-reference objective image sharpness metric (JNBM) [44] is used to evaluate perceptual sharpness; a higher value means higher perceptual sharpness or lower blurriness. Furthermore, we employ a metric proposed by Hautière et al. [45], which assesses the ability to restore edges (RE) that are not visible in \(\mathbf I\) but are visible in \(\mathbf J _{s}\) (obtained after smoke removal); a higher RE value means better edge restoration. Finally, in order to measure the global image contrast, we use the Mutual Information based Contrast Measure (MICM) recommended in [46]; a lower MICM value indicates higher and better contrast in the images.

The quantitative evaluation results are reported for *Dataset1* and *Dataset2*, and Fig. 5 illustrates the scores of the four metrics on *Dataset2*. All four metrics show better scores for our approach. In terms of the FADE metric, the DCP method removes smoke well; however, it sacrifices perceptual quality, as shown in Fig. 7b, as a result of its underlying constants being unsuitable for the desmoking purpose. E-VAR removes more smoke than F-VAR, whose results indicate that a high smoke density remains in the images. Our proposed method yields the lowest smoke density. The proposed approach removes the smooth smoke component of the image, resulting in a contrast-enhanced image, which achieves the best scores for JNBM, RE and MICM.

Quantitative evaluation results for *Dataset1* and *Dataset2*

### Quantitative evaluation on *synthetic dataset*

For the *synthetic dataset*, the full-reference metrics PSNR (peak signal-to-noise ratio) and SSIM (structural similarity index), which are widely used in the quantitative evaluation of dehazing approaches [21, 25, 32], are used. Besides, MAD (Most Apparent Distortion), recommended in [47], is also applied for validation.
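For reference, the sketch below computes PSNR and a simplified single-window SSIM. The `global_ssim` variant uses one global window instead of the standard local sliding window, so its absolute values differ from the usual SSIM implementation; it only illustrates the structure of the index.

```python
import numpy as np

def psnr(ref, test, data_range=255.0):
    """Peak signal-to-noise ratio in dB between a reference and a test image."""
    mse = np.mean((ref.astype(float) - test.astype(float)) ** 2)
    return float("inf") if mse == 0 else 10 * np.log10(data_range ** 2 / mse)

def global_ssim(ref, test, data_range=255.0):
    """Single-window (global) SSIM -- a simplified variant of the usual
    locally windowed index, enough to compare outputs coarsely."""
    x, y = ref.astype(float), test.astype(float)
    c1, c2 = (0.01 * data_range) ** 2, (0.03 * data_range) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2) /
            ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2)))
```

In practice, library implementations (e.g. scikit-image's windowed SSIM) would be used for reported scores; this sketch only documents what the metrics measure.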

The scores obtained on the *synthetic dataset* show that our proposed method outperforms the others in terms of all three full-reference image quality metrics.

### Qualitative visual results

Figure 6 shows qualitative results on the *synthetic dataset*, while Figs. 7 and 8 show results for images with different smoke densities from *Dataset1* and *Dataset2*.

Figure 6 illustrates examples of the ground truth smoke free images, the synthetic degraded images and the desmoked images produced by the different methods. DCP and E-VAR over-enhance the edges. Besides, although DCP and E-VAR show a good smoke removal ability, they both cause color shift. R-DCP gives a fine result, but it introduces some blur, for example on the instrument in the second row of Fig. 6f. The results obtained by our proposed method are more perceptually pleasant.

In Fig. 7, the original images contain low (first row), moderate and non-smooth (second row), moderate and smooth (third row) and high (last row) smoke density. In Fig. 8, low (first row), moderate (second and third rows), and heavy and non-smooth (last row) smoke density situations are shown. Our proposed method removes the smoke well with a pleasant perceptual visual quality, except in the non-smooth cases. DCP removes smoke well but sometimes leads to severe color shift. E-VAR gives a fine result, but with an over-enhancement effect and slightly worse smoke removal compared to our proposed method and DCP. F-VAR fails to remove smoke in some images. R-DCP fails to remove the smoke properly for moderate and heavy smoke images, and it lacks robustness when comparing the results across datasets (Figs. 7e, 8e). Our proposed method achieves good results except in the last row of Fig. 8, as dense and heterogeneous smoke violates the underlying assumptions of the physical model. None of the methods can handle dense and heterogeneous smoke well, as shown in the second row of Fig. 7 and the last row of Fig. 8.

Therefore, all the obtained results confirm the benefits of the proposed desmoking method for laparoscopic images.

## Conclusion

In this paper, a variational-based desmoking method is presented. The aim is to remove the smoke from the scene, which improves the image guided surgery conditions as well as the surgeons’ visibility. Instead of estimating the transmission and the global atmospheric light, as performed in natural image dehazing methods, we estimate the *smoke veil* by a variational method and then recover the smoke free image with a simple linear intensity transformation. Quantitative validation on real smoked laparoscopic datasets and a synthetic dataset, as well as qualitative evaluations, are performed. The obtained results show that the proposed approach reduces the smoke effectively while preserving the important perceptual information of the image. Further work on accelerating the proposed method using a GPU implementation is ongoing. Moreover, future research could improve the smoke free image recovery step by resorting to a fusion strategy and considering temporal information, in order to cope with heavy and heterogeneous smoke.

## Declarations

### Authors' contributions

The work presented in this paper was carried out in collaboration between all authors. CW conceived and carried out the main part of this work. MK, FAC, AB and OJE contributed to the discussion and analysis. FAC, AB and OJE supervised this research. All authors read and approved the final manuscript.

### Acknowledgements

We would like to thank all the colleagues who contributed valuable discussions to this study.

### Competing interests

The authors declare that they have no competing interests.

### Availability of data and materials

Two real datasets are public and can be downloaded, while the synthetic dataset can be easily generated. The datasets used and/or analysed during the current study can be provided by the corresponding author if requested.

### Consent for publication

Not applicable.

### Ethics approval and consent to participate

Not applicable.

### Funding

This work is funded by the Research Council of Norway through project no. 247689 IQ-MED: Image Quality enhancement in MEDical diagnosis, monitoring and treatment.

### Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

**Open Access**This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated.

## Authors’ Affiliations

## References

- Tsui C, Klein R, Garabrant M. Minimally invasive surgery: national trends in adoption and future directions for hospital strategy. Surg Endosc. 2013;27(7):2253–7.
- Stoyanov D. Surgical vision. Ann Biomed Eng. 2012;40(2):332–45.
- Palomar R, Cheikh FA, Edwin B, Beghdadi A, Elle OJ. Surface reconstruction for planning and navigation of liver resections. Comput Med Imaging Graph. 2016;53:30–42.
- Wang C, Alaya Cheikh F, Kaaniche M, Elle OJ. Liver surface reconstruction for image guided surgery. In: Medical imaging 2018: image-guided procedures, robotic interventions, and modeling, vol. 10576. Bellingham: International Society for Optics and Photonics; 2018. p. 105762.
- Sdiri B, Beghdadi A, Cheikh FA, Pedersen M, Elle OJ. An adaptive contrast enhancement method for stereo endoscopic images combining binocular just noticeable difference model and depth information. Electron Imaging. 2016;2016(13):1–7.
- Barrett WL, Garber SM. Surgical smoke: a review of the literature. Surg Endosc. 2003;17(6):979–87.
- Maier-Hein L, Groch A, Bartoli A, Bodenstedt S, Boissonnat G, Chang P-L, Clancy N, Elson DS, Haase S, Heim E. Comparative validation of single-shot optical techniques for laparoscopic 3-D surface reconstruction. IEEE Trans Med Imaging. 2014;33(10):1913–30.
- Leibetseder A, Primus MJ, Petscharnig S, Schoeffmann K. Real-time image-based smoke detection in endoscopic videos. In: Proceedings of the thematic workshops of ACM Multimedia. New York: ACM; 2017. p. 296–304.
- Narasimhan SG, Nayar SK. Vision and the atmosphere. Int J Comput Vis. 2002;48(3):233–54.
- Xu Y, Wen J, Fei L, Zhang Z. Review of video and image defogging algorithms and related studies on image restoration and enhancement. IEEE Access. 2016;4:165–88.
- Kotwal A, Bhalodia R, Awate SP. Joint desmoking and denoising of laparoscopy images. In: IEEE International Symposium on Biomedical Imaging (ISBI). New York: IEEE; 2016. p. 1050–4.
- Baid A, Kotwal A, Bhalodia R, Merchant SN, Awate SP. Joint desmoking, specularity removal, and denoising of laparoscopy images via graphical models and Bayesian inference. In: IEEE International Symposium on Biomedical Imaging (ISBI). New York: IEEE; 2017. p. 732–6.
- Tchaka K, Pawar VM, Stoyanov D. Chromaticity based smoke removal in endoscopic images. In: Proc. of SPIE, vol. 10133. 2017. p. 101331M.
- Luo X, McLeod AJ, Pautler SE, Schlachta CM, Peters TM. Vision-based surgical field defogging. IEEE Trans Med Imaging. 2017;36(10):2021–30.
- Bolkar S, Wang C, Alaya Cheikh F, Yildirim S. Deep smoke removal from minimally invasive surgery videos. In: 2018 IEEE International Conference on Image Processing (ICIP). New York: IEEE; 2018.
- Tan RT. Visibility in bad weather from a single image. In: IEEE Conference on Computer Vision and Pattern Recognition (CVPR). 2008. p. 1–8.
- He K, Sun J, Tang X. Single image haze removal using dark channel prior. IEEE Transactions Pattern Anal Mach Intell. 2011;33(12):2341–53.View ArticleGoogle Scholar
- McCartney EJ. Optics of the atmosphere: scattering by molecules and particles. New York: Wiley; 1976. p. 421.Google Scholar
- Tarel JP, Hautiere N. Fast visibility restoration from a single color or gray level image. In: IEEE international conference on computer vision (ICCV). 2009. p. 2201–8.Google Scholar
- Perlin K. Improving noise. In: ACM transactions on graphics (TOG), vol. 21. New York: ACM; 2002. p. 681–2.Google Scholar
- Li B, Peng X, Wang Z, Xu J, Feng D. Aod-net: all-in-one dehazing network. In: Proceedings of the IEEE international conference on computer vision (ICCV). 2017. p. 4770–8Google Scholar
- Zhu Q, Mai J, Shao L. A fast single image haze removal algorithm using color attenuation prior. IEEE Trans Image Process. 2015;24(11):3522–33.MathSciNetView ArticleGoogle Scholar
- Xu H, Guo J, Liu Q, Ye L. Fast image dehazing using improved dark channel prior. In: IEEE international conference on information science and technology (ICIST). 2012. p. 663–7.Google Scholar
- Cai B, Xu X, Jia K, Qing C, Tao D. Dehazenet: an end-to-end system for single image haze removal. IEEE Trans Image Process. 2016;25(11):5187–98.MathSciNetView ArticleGoogle Scholar
- Zhang H, Patel VM. Densely connected pyramid dehazing network. In: The IEEE conference on computer vision and pattern recognition (CVPR). 2018.Google Scholar
- Galdran A, Vazquez-Corral J, Pardo D, Bertalmío M. A variational framework for single image dehazing. In: European conference on computer vision (ECCV) workshops, vol. 3. 2014. p. 259–70 .Google Scholar
- Galdran A, Vazquez-Corral J, Pardo D, Bertalmío M. Enhanced variational image dehazing. SIAM J Imaging Sci. 2015;8(3):1519–46.MathSciNetView ArticleGoogle Scholar
- Galdran A, Vazquez-Corral J, Pardo D, Bertalmío M. Fusion-based variational image dehazing. IEEE Signal Process Lett. 2017;24(2):151–5.MATHGoogle Scholar
- Ancuti CO, Ancuti C. Single image dehazing by multi-scale fusion. IEEE Trans Image Process. 2013;22(8):3271–82.View ArticleGoogle Scholar
- Burt PJ, Adelson EH. The Laplacian pyramid as a compact image code. In: Readings in computer vision. 1987. p. 671–9.Google Scholar
- Saleem A, Beghdadi A, Boashash B. A distortion-free contrast enhancement technique based on a perceptual fusion scheme. Neurocomputing. 2017;226:161–7.View ArticleGoogle Scholar
- Ren W, Ma L, Zhang J, Pan J, Cao X, Liu W, Yang MH. Gated fusion network for single image dehazing. 2018. arXiv preprint arXiv:1804.00213
- Tian H, Li W, Ogunbona P, Wang L. Single image smoke detection. In: Asian conference on computer vision (ACCV). Berlin: Springer; 2014. p. 87–101.Google Scholar
- Chan T, Esedoglu S, Park F, Yip A. Recent developments in total variation image restoration. Math Models Comput Vis. 2005;17(2).Google Scholar
- Wahlberg B, Boyd S, Annergren M, Wang Y. An admm algorithm for a class of total variation regularized estimation problems. 2012. arXiv preprint arXiv:1203.1828
- Tao M, Yang J, He B. Alternating direction algorithms for total variation deconvolution in image reconstruction. TR0918, Department of Mathematics, Nanjing University. 2009.Google Scholar
- Chan SH, Khoshabeh R, Gibson KB, Gill PE, Nguyen TQ. An augmented Lagrangian method for total variation video restoration. IEEE Trans Image Process. 2011;20(11):3097–111.MathSciNetView ArticleGoogle Scholar
- Boyd S, Parikh N, Chu E, Peleato B, Eckstein J. Distributed optimization and statistical learning via the alternating direction method of multipliers. Found Trends® Mach Learn. 2011;3(1):1–122.MATHGoogle Scholar
- Giannarou S, Visentini-Scarzanella M, Yang GZ. Probabilistic tracking of affine-invariant anisotropic regions. IEEE Trans Pattern Anal Mach Intell. 2013;35(1):130–43.View ArticleGoogle Scholar
- Ye M, Johns E, Handa A, Zhang L, Pratt P, Yang GZ. Self-supervised siamese learning on stereo image pairs for depth estimation in robotic surgery. In: Hamlyn symposium on medical robotics. 2017.Google Scholar
- London IC. Hamlyn Centre Laparoscopic/Endoscopic Video Datasets. http://hamlyn.doc.ic.ac.uk/vision/. Accessed Oct 2017.
- Choi LK, You J, Bovik AC. Referenceless prediction of perceptual fog density and perceptual image defogging. IEEE Trans Image Process. 2015;24(11):3888–901.MathSciNetView ArticleGoogle Scholar
- Image & Video Quality Assessment at LIVE. http://live.ece.utexas.edu/research/fog/. Accessed 12 Oct 2017.
- Ferzli R, Karam LJ. A no-reference objective image sharpness metric based on the notion of just noticeable blur (jnb). IEEE Trans Image Process. 2009;18(4):717–28.MathSciNetView ArticleGoogle Scholar
- Hautière N, Tarel JP, Aubert D, Dumont E. Blind contrast enhancement assessment by gradient ratioing at visible edges. Image Anal Stereol. 2011;27(2):87–95.MathSciNetView ArticleGoogle Scholar
- Qureshi MA, Beghdadi A, Deriche M. Towards the design of a consistent image contrast enhancement evaluation measure. Signal Process Image Commun. 2017;58:212–27.View ArticleGoogle Scholar
- Pedersen M. Evaluation of 60 full-reference image quality metrics on the cid: Iq. In: 2015 IEEE international conference on image processing (ICIP). New York: IEEE; 2015. p. 1588–92.Google Scholar