We developed our algorithm to quantitatively measure the lipid layer thickness (LLT) by analyzing the lipid layer interference patterns on the tear film with the Lipiscanner 1.0, which consists of an LED panel (115 mm × 58 mm) covered with a polycarbonate diffuser for homogeneous illumination (Fig. 1). This simple device was used to observe the lipid layer of the tear film with an ophthalmic slit lamp microscope and a scientific complementary metal-oxide-semiconductor (sCMOS) camera. The patient’s head is placed in a fixed position on the head-chin rest of the slit lamp microscope, and white light from the LED is irradiated onto the lipid layer of the eye. The white LEDs provide a color temperature of ~6500 K. The color produced by the white light interference is used to assign LLTs to the pixels of a region of interest (ROI). The measurements presented in this paper represent the LLT of the precorneal tear film in the inferior iris region, which was illuminated by the white light.
Instrument setup
We used the Lipiscanner 1.0 to visualize the lipid layer by white light interference, and the captured videos were used to measure the lipid layer thickness (Fig. 1). In this setup, the LED light illuminated the inferior cornea of the patient and was reflected into the slit lamp microscope. The illuminating beam was not focused to a point but was spread over parts of the cornea by the diffuser. An sCMOS camera (Guppy Pro F-503, Allied Vision) was used to record video of the interference pattern of the lipid layer within the corneal region.
Patients with dry eye syndrome
This study followed the tenets of the Declaration of Helsinki and was approved by the Institutional Review Board of Chuncheon Sacred Heart Hospital. Twenty patients with dry eyes and fourteen healthy volunteers were included in this pilot study: six patients with hyposecretory MGD (low delivery of meibum) (patient group I), seven patients with dry eye syndrome without MGD (patient group II), seven patients with hypersecretory MGD (high delivery of meibum) (patient group III), and fourteen healthy volunteers as a control group (group IV). An ophthalmologist who is also a meibomian gland expert (Dr. Ho Sik Hwang) categorized the patients into groups I, II, and III after evaluating their medical histories [26], slit lamp examinations to measure tear break-up time [27], corneal staining, lid margin abnormalities, meibum volume and quantity, Schirmer’s test [28] to measure tear production, and meibography for morphological evaluation of the meibomian glands.
Image processing
One of the main challenges in processing the eye images in our setup is that the pupil often moves suddenly during image acquisition. To address this, we developed an algorithm that is robust even when the pupil location changes. For each patient, our algorithm processes an image sequence of 2.5 s duration (a subset of the original sequence, i.e. 75 images at 30 frames per second), and we discard images in which the pupil is not visible or is partially occluded (e.g. by blinking). The algorithm then starts from the darkest spot in the image (a point inside the pupil) and performs a region-growing process to extract the pupil region. We repeat this process with a different starting point to extract the iris region, from which we extract the ROI in which the interference colors exist. Finally, we compensate for colors that could be altered by the illumination or the ambient room light so that accurate measurements of the lipid layer thickness can be achieved.
The entire image processing part of the algorithm to measure the thickness of the lipid layer runs through six phases, as shown in Fig. 2.
Additional file 1: Movie S1 shows the procedure of the image processing algorithm.
Phase 1: Exclude unnecessary frames
First, we select the image frames of the raw video that are suitable for analysis and discard those that are not. The resulting extracted video must satisfy the following conditions: the center of the pupil in each frame must be near the center of the screen (to increase recognition accuracy), and there should be no change in image brightness or zoom level between frames.
We achieve this by filtering out frames that have no ROI, including cases in which the eye is closed. When eye blinking occurs, the light from the LED is strongly reflected from the eyelid, and the entire image becomes brighter. We define \({\text{B}}_{\text{opened}}\) as the brightness when the ROI is visible and \({\text{B}}_{\text{closed}}\) as the brightness when the eye is closed, so that \({\text{B}}_{\text{opened}} < \;{\text{B}}_{\text{closed}}\). Here, \({\text{B}}_{\text{opened}}\) is set to the mode of the full-frame brightness data, and \({\text{B}}_{\text{closed}}\) is set to the highest value of all brightness data. The threshold used to filter out frames containing eye blinks is \({\text{B}}_{\text{opened}} \; + \;0.33\left( {{\text{B}}_{\text{closed}} \; - \;{\text{B}}_{\text{opened}} } \right)\). The coefficient of 0.33 filters out the unnecessary frames while preventing false-positive frames from being excluded from the ROI data (see Additional file 5: Image S1).
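As an illustration, a minimal Python sketch of this blink filter is given below, assuming the frames are available as grayscale NumPy arrays; rounding the brightness values so that a mode is well defined is our own implementation choice.

```python
import numpy as np

def filter_blink_frames(frames):
    """Discard frames brightened by eyelid reflection during blinks.

    frames: list of grayscale images (2-D uint8 NumPy arrays).
    Keeps frames whose mean brightness is below the threshold
    B_opened + 0.33 * (B_closed - B_opened).
    """
    # Per-frame mean brightness; rounding makes the mode well defined.
    brightness = np.array([frame.mean() for frame in frames])
    values, counts = np.unique(np.round(brightness).astype(int),
                               return_counts=True)

    b_opened = values[np.argmax(counts)]  # mode: typical eye-open brightness
    b_closed = brightness.max()           # brightest frame: a blink

    threshold = b_opened + 0.33 * (b_closed - b_opened)
    return [f for f, b in zip(frames, brightness) if b < threshold]
```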
Phase 2: Find the pupil and iris regions
Under our current image acquisition conditions, the darkest region (21 × 21 pixels) of an image in our sequence is located inside the pupil and can be used to demarcate the pupil region. Starting from the centroid of the darkest region in the image, we apply the flood-fill algorithm with 8-connectivity to select all pixels that belong to the pupil [29]. The flood-fill algorithm starts at a point and selects a connected group of pixels in which the color (or brightness) difference between each examined pixel and the starting pixel is smaller than a selected threshold value (see Additional file 6: Image S2). When the flood-fill algorithm is applied to extract the pupil, however, the boundary of the resulting region may not be smooth and may contain a portion of the iris, which shifts the computed centroid. We therefore remove the unwanted iris pixels by blurring the image and re-applying the flood-fill algorithm with a different threshold value (see Additional file 7: Image S3).
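A minimal Python sketch of this brightness-based flood fill is shown below; OpenCV's cv2.floodFill provides an optimized equivalent, and the threshold is left as a free parameter.

```python
from collections import deque
import numpy as np

def flood_fill_region(gray, seed, threshold):
    """8-connected flood fill on a grayscale image.

    Selects the connected group of pixels whose brightness differs
    from the seed pixel by less than `threshold`, as described above.
    Returns a boolean mask of the selected region.
    """
    h, w = gray.shape
    seed_value = float(gray[seed])
    mask = np.zeros((h, w), dtype=bool)
    mask[seed] = True
    queue = deque([seed])
    while queue:
        r, c = queue.popleft()
        # Examine all 8 neighbours of the current pixel.
        for dr in (-1, 0, 1):
            for dc in (-1, 0, 1):
                nr, nc = r + dr, c + dc
                if (0 <= nr < h and 0 <= nc < w and not mask[nr, nc]
                        and abs(float(gray[nr, nc]) - seed_value) < threshold):
                    mask[nr, nc] = True
                    queue.append((nr, nc))
    return mask
```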
Once the image has been blurred, a second flood-fill pass with a lower threshold value captures the shape of the pupil with smooth boundaries and without including the iris. After the pupil region is successfully extracted, its centroid can be calculated, which approximates the location of the center of the eye. If the center of the eye obtained after this correction (blurring followed by flood fill) lies far from the darkest region in the pupil, we instead take the centroid of the darkest region as the center of the eye, even though we may then partially lose the ROI.
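The steps of this phase can be sketched as follows using OpenCV (the fixed-range flood fill compares each pixel against the seed, matching the definition above); the threshold and distance values are illustrative placeholders, not the settings used in our system.

```python
import cv2
import numpy as np

def locate_pupil_center(gray, fine_diff=10, max_shift=40):
    """Pupil extraction sketch (parameter values are illustrative only)."""
    h, w = gray.shape

    # Seed: centre of the darkest 21 x 21 region (a point inside the pupil).
    box = cv2.blur(gray.astype(np.float32), (21, 21))
    seed_rc = np.unravel_index(np.argmin(box), box.shape)
    seed_xy = (int(seed_rc[1]), int(seed_rc[0]))  # floodFill expects (x, y)

    # Blur, then flood-fill with a low threshold: a smooth pupil
    # region that excludes the iris.
    blurred = cv2.GaussianBlur(gray, (11, 11), 0)
    mask = np.zeros((h + 2, w + 2), np.uint8)
    flags = 8 | cv2.FLOODFILL_MASK_ONLY | cv2.FLOODFILL_FIXED_RANGE | (255 << 8)
    cv2.floodFill(blurred, mask, seed_xy, 0, fine_diff, fine_diff, flags)
    region = mask[1:-1, 1:-1] > 0

    # Centroid of the refined region approximates the eye centre ...
    rows, cols = np.nonzero(region)
    cy, cx = rows.mean(), cols.mean()

    # ... unless it drifted far from the darkest spot, in which case we
    # fall back on the darkest region (possibly losing part of the ROI).
    if np.hypot(cy - seed_rc[0], cx - seed_rc[1]) > max_shift:
        cy, cx = seed_rc
    return cy, cx
```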
To extract the iris region, we apply the same flood-fill algorithm again with a starting point outside the pupil (i.e. a point in the iris). We then perform Canny edge detection [30] on the resulting region to extract the boundary pixels. However, as the iris is partially covered by the eyelid most of the time, the result contains many false boundaries of the iris. To overcome this, we examine only the boundary pixels that have almost vertical edge orientations and lie within an empirically pre-defined distance range from the detected center of the eye (in our data, the diameter of the iris was approximately 480 pixels; however, this number depends on the camera zoom level and the patient’s eye, so in practice the iris size has to be determined from the image itself). We use this restriction because, to capture as much of the ROI as possible, we need the exact radius of the iris, of which we know only an approximate value in advance. Taking the average distance of these boundary pixels from the center of the eye gives our estimate of the radius of the iris.
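The following simplified sketch illustrates the radius estimation; for brevity it runs Canny on the whole frame rather than on the flood-filled iris region, tests edge verticality via the image gradient (a near-vertical edge has a mostly horizontal gradient), and uses an illustrative search band around the expected radius of ~240 pixels.

```python
import cv2
import numpy as np

def estimate_iris_radius(gray, center, r_min=200, r_max=280):
    """Estimate the iris radius from near-vertical Canny edge pixels.

    center: (row, col) eye centre from phase 2.
    r_min/r_max: assumed search band around the expected radius.
    """
    edges = cv2.Canny(gray, 50, 150)

    # Gradient components: |gx| >> |gy| marks a near-vertical edge.
    gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0)
    gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1)

    rows, cols = np.nonzero(edges)
    vertical = np.abs(gx[rows, cols]) > 2.0 * np.abs(gy[rows, cols])

    # Keep vertical edge pixels within the expected distance band.
    dist = np.hypot(rows - center[0], cols - center[1])
    keep = vertical & (dist > r_min) & (dist < r_max)

    # The average distance of the surviving pixels estimates the radius.
    return dist[keep].mean() if np.any(keep) else None
```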
Phase 3: Extract the region of interest
The ROI should be the region within the iris that shows interference colors, which appear in the bottom half of the iris image. Thus, we restrict our ROI to the area within 80% of the measured iris radius, to prevent the lower eyelid from being included in the ROI and to exclude the sclera in case the centroid of the pupil is inaccurate. We then crop away the region above the center of this circle, the region outside the circle, and, within the circle, the small regions on the left and right that are horizontally farther than 80% of the radius of the new inner circle from its center. This prevents saturated pixels from the white sclera of the eye from appearing in the ROI. After that, pixels with a brightness below the average brightness of the region are also removed. The resulting region is our ROI. Figure 3 shows some of the intermediate images of phases 2 and 3 and the final ROI.
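A compact sketch of this geometric masking, with the same 80% factors as above, could look as follows (the image and masks are NumPy arrays):

```python
import numpy as np

def extract_roi_mask(gray, center, iris_radius):
    """Phase 3 ROI: lower part of the inner circle at 80% of the iris
    radius, trimmed at the sides and thresholded by mean brightness."""
    h, w = gray.shape
    cy, cx = center
    rr, cc = np.mgrid[0:h, 0:w]

    inner_r = 0.8 * iris_radius
    mask = np.hypot(rr - cy, cc - cx) < inner_r   # inside the inner circle
    mask &= rr > cy                               # below the centre line
    mask &= np.abs(cc - cx) < 0.8 * inner_r       # trim the left/right edges

    # Remove pixels darker than the average brightness of the region.
    mask &= gray >= gray[mask].mean()
    return mask
```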
Phase 4: Subtract iris color from region of interest
The colors in the ROI originate from a combination of the white light interference and the iris color. We need to subtract the iris color component from the ROI; otherwise, it can affect the subsequent color analysis. We achieve this by finding the color of the iris in the part of the phase 3 circle that lies outside the ROI and then subtracting it from the ROI.
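A minimal sketch of this subtraction is shown below; estimating the iris color as the mean RGB of the sampled pixels is our own simplifying choice.

```python
import numpy as np

def subtract_iris_color(image, roi_mask, circle_mask):
    """Remove the iris colour component from the ROI.

    image: H x W x 3 RGB array; circle_mask: the phase 3 inner circle.
    The iris colour is sampled where the circle lies outside the ROI.
    """
    iris_color = image[circle_mask & ~roi_mask].mean(axis=0)

    corrected = image.astype(np.float32)
    # Subtract the mean iris colour from every ROI pixel, clipped at 0.
    corrected[roi_mask] = np.clip(corrected[roi_mask] - iris_color, 0, None)
    return corrected
```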
Phase 5: Correct for illumination colors
Depending on the color and brightness of the room lighting or the camera exposure value at the time of image acquisition, the colors in the image may differ from the actual ones. In addition, the color temperature of the white light that we used to calculate the theoretical color corresponding to each thickness differs from that of the LED light of the Lipiscanner. To compensate, we estimate how much the RGB values are biased away from white using the sclera of the patient’s eye captured under the same conditions, and apply the corresponding correction to the ROI.
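As an illustration, such a correction can be sketched as a simple per-channel gain that maps the observed sclera color to neutral white (a diagonal, von Kries-style model; one of several possible choices):

```python
import numpy as np

def correct_illumination(image, roi_mask, sclera_mask, target_white=255.0):
    """White-balance the ROI using the sclera as the white reference.

    image: H x W x 3 RGB array; sclera_mask marks sclera pixels captured
    under the same conditions as the ROI.
    """
    sclera_rgb = image[sclera_mask].mean(axis=0)  # observed sclera colour
    gain = target_white / sclera_rgb              # per-channel correction

    corrected = image.astype(np.float32)
    corrected[roi_mask] = np.clip(corrected[roi_mask] * gain, 0, 255)
    return corrected
```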
Phase 6: Assign thickness to all ROI pixels
We map each pixel’s color to a thickness value of the lipid layer using a lookup table represented by the three-dimensional solid curve in RGB space (Fig. 6). The lookup table is obtained by applying the Fresnel equations to the reflection-transmission model and plotting the color output against different lipid film thickness inputs. Color values that do not match any entry in the lookup table are approximated by the lookup-table color at the smallest Euclidean distance. In this way, the distribution of LLT variation within each frame and across image frames can be assessed, and the mean and standard deviation of the LLT can be calculated.
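A sketch of this nearest-neighbor lookup is given below; lut_colors and lut_thickness stand for the table precomputed from the Fresnel reflection-transmission model.

```python
import numpy as np

def assign_thickness(roi_pixels, lut_colors, lut_thickness):
    """Map ROI colours to lipid layer thickness by nearest neighbour
    in RGB space.

    roi_pixels: N x 3 corrected ROI colours.
    lut_colors: M x 3 theoretical interference colours.
    lut_thickness: M thickness values (e.g. in nm) matching lut_colors.
    Returns the per-pixel thicknesses and their mean and std.
    """
    # Squared Euclidean distance from every pixel to every LUT colour.
    diff = (roi_pixels[:, None, :].astype(np.float32)
            - lut_colors[None, :, :].astype(np.float32))
    nearest = np.argmin((diff ** 2).sum(axis=2), axis=1)

    llt = np.asarray(lut_thickness)[nearest]
    return llt, llt.mean(), llt.std()
```

For large pixel counts, a k-d tree over the lookup-table colors (e.g. scipy.spatial.cKDTree) would avoid materializing the full N × M distance matrix.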