TY - JOUR AU1 - Li, Zihan AU2 - Li, Chen AU3 - Yao, Yudong AU4 - Zhang, Jinghua AU5 - Rahaman, Md Mamunur AU6 - Xu, Hao AU7 - Kulwa, Frank AU8 - Lu, Bolin AU9 - Zhu, Xuemin AU10 - Jiang, Tao AB - 1 Introduction 1.1 Environmental Microorganisms Environmental Microorganisms (EMs) [1] have always been part of our environment. Some EMs bring us benefits, while others harm our health. Many researchers devote themselves to studying these microorganisms to improve our lives. Nowadays EMs are usually observed under a microscope; however, manual observation is error-prone. Image analysis is therefore of great significance for the study of EM images: it can help researchers analyze the types and forms of EMs. For example, Rotifera is a common EM that is widely distributed in lakes, ponds, rivers and other brackish water bodies; because of its extremely fast reproduction rate and high yield, it is important in the study of ecosystem structure, function and biological productivity. Arcella is another common EM. It mainly feeds on plant flagellates and single-celled algae, and oligotrophic water bodies are its most suitable living environment. Two EM image examples are shown in Fig 1. Fig 1. An example of EM images. https://doi.org/10.1371/journal.pone.0250631.g001 1.2 Application scenarios of Environmental Microorganisms Noise can be generated during the acquisition or transmission of digital images [2]. Image denoising can reduce the noise of EM images while preserving the details. Image segmentation is the technique and process of dividing an image into a number of specific regions with unique properties and extracting specific targets [3]; it can be used to separate microorganisms from the complex background of EM images. After that, feature extraction is performed. When the input data are too complex, large or redundant, they are converted into streamlined features (such as the commonly used 'feature vectors'). Feature extraction is the process of transforming redundant input data into such streamlined features [4]. For the segmented EMs, we usually extract shape features, color features or deep learning features, and these features are then used for image classification and image retrieval. Image classification is performed by a classifier trained on data with category labels: the extracted feature vectors are fed into the classifier and matched with the known data to assign each image to an EM class. Image retrieval takes a query image and searches for similar images: the feature vector of the query is extracted and its similarity to the feature vectors of the known data is calculated. 1.3 Contribution Environmental surveys are usually carried out outdoors, where conditions such as temperature and salinity are constantly changing. As EMs are very sensitive to these conditions, the quality of the observed EMs is easily affected, and it is difficult to collect sufficient EM images [5]. As a result, researchers who want to create EM datasets often run short of data. Currently, some EM datasets exist, but many of them are not open source. To the best of our knowledge, there are seven dedicated EM datasets.
This makes it difficult for EM researchers to obtain existing EM data, and collecting new data requires much time. For two of these datasets, only the types of microorganisms and the number of samples used in the experiments are known. The remaining five are our EMDS series. The seven databases are NMCR [6], CECC [7], EMDS-1 [8–10], EMDS-2 [8–12], EMDS-3 [1, 13, 14], EMDS-4 [15–18] and EMDS-5 [19, 20]. The Environmental Microorganism Data Set Fifth Version (EMDS-5) has been made available to other researchers as an open source dataset. In addition, EMDS-5 has several advantages over other datasets. EMDS-5 provides the corresponding Ground Truth (GT) images. Since preparing GT images takes a lot of time and human effort, many datasets do not provide them. GT images play an important role in image analysis: they serve as an important reference for evaluating image segmentation, because the result of segmentation can be judged by comparing the segmented image with the GT image. EMDS-5 also contains a variety of EM classes, providing sufficient data support for image classification and image retrieval. Multi-class classification experiments can be carried out on the multi-species EM images to obtain meaningful results, and the variety and amount of EM images likewise provide strong data support for image retrieval.
2 Dataset information of EMDS-5 EMDS-5 is made up of 1260 images of 21 EM classes. The 420 original EM images are collected partly under artificial light sources and partly under natural light sources with a 400× optical microscope. In addition, 840 GT images are manually prepared, including 420 single-object GT images and 420 multi-object GT images. Basic information on the 21 EM classes in EMDS-5 is given in Table 1, and an example of the 21 EM classes is shown in Fig 2. Fig 2. An example of 21 EM classes in EMDS-5. Single-object GT images (SGI), Multi-object GT images (MGI). https://doi.org/10.1371/journal.pone.0250631.g002 Table 1. Basic information of 21 EM classes in EMDS-5. Number of original images (NoOI), Number of single-object GT images (NoSGI), Number of multi-object GT images (NoMGI), Visible characteristics (VC).
https://doi.org/10.1371/journal.pone.0250631.t001 Three researchers from the University of Science and Technology Beijing (China) and the University of Heidelberg (Germany) provide the original image data of EMDS-5. Furthermore, the preparation of the EMDS-5 GT images is jointly completed by three researchers from Northeastern University (China), Johns Hopkins University (US) and Huazhong University of Science and Technology (China). All of them have research backgrounds in Environmental Engineering or Biological Information Engineering. The EMDS-5 GT images are manually labelled at pixel level according to two rules: Rule A: The area where an EM is located is labelled as foreground (1, white); all other areas are labelled as background (0, black). Rule B: Because the microscopic images in EMDS-5 are collected under optical microscopes, interference fringes are produced and result in unwanted edges in the EM images. Hence, when making GT images, the most complicated task is to determine the edges of an EM. First, each researcher labels the edges that she or he considers the clearest. Then, if their labelling results conflict, they hold a collective discussion to decide on a final solution. 3 Image processing evaluation using EMDS-5 3.1 Evaluation of image denoising methods We add a total of 13 kinds of noise to the original images and then denoise the noisy images with different methods. The chosen noise types fall into four groups: Poisson noise, multiplicative noise, Gaussian noise and salt-and-pepper noise. Gaussian noise is noise whose probability density function follows a normal distribution. Poisson noise is a noise model that follows the Poisson distribution. Multiplicative noise is caused by random variations in channel characteristics and is related to the signal by multiplication. Salt-and-pepper noise, also known as impulse noise, randomly changes some pixel values and appears as bright and dark dots generated by the image sensor, transmission channel, decoding process, etc. Multiplicative noise is divided, according to its variance, into multiplicative noise with a variance of 0.2 and multiplicative noise with a variance of 0.04. Gaussian noise is classified according to its mean and variance: Gaussian noise with mean 0 and variance 0.01, mean 0.5 and variance 0.01, mean 0 and variance 0.03, and mean 0.5 and variance 0.03, plus Position Gaussian noise and Brightness Gaussian noise. Similarly, impulse noise is divided into pepper noise, salt noise, salt-and-pepper noise with a density of 0.01 and salt-and-pepper noise with a density of 0.03. Altogether this yields 13 kinds of noise, each of which is added to the original images before different filters are applied for denoising. An example of the noisy EM images is shown in Fig 3. Fig 3. An example of different noisy EM images using EMDS-5 images. https://doi.org/10.1371/journal.pone.0250631.g003 We use nine different denoising methods and choose the similarity between the denoised image and the original image, together with their variance, as the evaluation indexes. The similarity index is expressed by Eq (1) [2]. (1) where A is the similarity, i1 is the denoised image, i is the original image, and N is the number of pixels. The closer the value of A is to 1, the better the denoising effect.
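The text does not specify the software used for adding noise and filtering; the following is a minimal Python sketch, assuming scikit-image and SciPy, of how a few of the 13 noise settings and a few of the nine filters could be reproduced. The file name is a placeholder, and the closeness score printed at the end is only an illustrative per-pixel agreement, not the exact similarity index of Eq (1).

```python
import numpy as np
from scipy import ndimage
from scipy.signal import wiener
from skimage import io, img_as_float, util

# Load one EMDS-5 original image as a grayscale float image in [0, 1]
# (the file name is a placeholder).
img = img_as_float(io.imread("EMDS5_example.png", as_gray=True))

# A few of the 13 noise settings described above; the others follow the same pattern.
noisy = {
    "GN m:0 v:0.01": util.random_noise(img, mode="gaussian", mean=0.0, var=0.01),
    "MN v:0.04":     util.random_noise(img, mode="speckle", var=0.04),  # multiplicative
    "PN":            util.random_noise(img, mode="poisson"),
    "SPN d:0.03":    util.random_noise(img, mode="s&p", amount=0.03),
}

# A few of the nine denoising filters (3x3 or 5x5 windows).
def denoise_all(x):
    return {
        "MF 3x3": ndimage.uniform_filter(x, size=3),        # arithmetic mean filter
        "MF 5x5": ndimage.uniform_filter(x, size=5),
        "WF 3x3": wiener(x, mysize=3),                      # Wiener filter
        "MaxF":   ndimage.maximum_filter(x, size=3),
        "MinF":   ndimage.minimum_filter(x, size=3),
        "TROF":   ndimage.rank_filter(x, rank=4, size=3),   # 2-D rank order filter
    }

for noise_name, x in noisy.items():
    for filt_name, y in denoise_all(x).items():
        # Illustrative closeness to the clean image (not the exact index of Eq (1)).
        score = 1.0 - np.mean(np.abs(np.clip(y, 0, 1) - img))
        print(f"{noise_name:14s} {filt_name:7s} similarity ~ {score:.4f}")
```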
We use the original image above as an example and list in a table the similarity between the denoised images and the original image for the various noises and filters. The names of the noises and filters in the table are abbreviated as follows: Types of noise (ToN), Denoising method (DM), Two-Dimensional Rank Order Filter (TROF), Mean Filter Window: 3 × 3 (MF: 3 × 3), Mean Filter Window: 5 × 5 (MF: 5 × 5), Wiener Filter Window: 3 × 3 (WF: 3 × 3), Wiener Filter Window: 5 × 5 (WF: 5 × 5), Maximum Filter (MaxF), Minimum Filter (MinF), Geometric Mean Filter (GMF), Arithmetic Mean Filter (AMF), Poisson noise (PN), Multiplicative noise variance: 0.2 (MN v: 0.2), Multiplicative noise variance: 0.04 (MN v: 0.04), Gaussian noise Variance: 0.01, Mean: 0 (GN m: 0, v: 0.01), Gaussian noise Variance: 0.01, Mean: 0.5 (GN m: 0.5, v: 0.01), Gaussian noise Variance: 0.03, Mean: 0 (GN m: 0, v: 0.03), Gaussian noise Variance: 0.03, Mean: 0.5 (GN m: 0.5, v: 0.03), Salt and pepper noise density: 0.01 (SPN d: 0.01), Salt and pepper noise density: 0.03 (SPN d: 0.03), Pepper noise (PpN), Brightness Gaussian noise (BGN), Position Gaussian noise (PGN), Salt noise (SN). The comparison of similarities between the denoised images and the original image using EMDS-5 is shown in Table 2. Table 2. A comparison of similarities between denoised images and original image using EMDS-5. (In [%].). https://doi.org/10.1371/journal.pone.0250631.t002 From the comparison in Table 2, we find that EMDS-5 supports a distinguishable evaluation of different denoising methods. For example, the maximum filter performs poorly on Gaussian noise and multiplicative noise, but still gives good results on salt-and-pepper noise and Poisson noise. In addition, the mean variance between the denoised image and the original image is an indicator of the stability of denoising methods. The mean variance is expressed by Eq (2) [2]. (2) where l(i, j) and B(i, j) represent corresponding pixels of the denoised image and the original image, and S represents the mean variance. The comparison of variances between the denoised images and the original image using EMDS-5 is shown in Table 3. Table 3. A comparison of variances between denoised images and original image using EMDS-5. (In [%].). https://doi.org/10.1371/journal.pone.0250631.t003 From the comparison in Table 3, we find that EMDS-5 can be used to test and evaluate image denoising methods effectively. For example, increasing the mean of the Gaussian noise results in a greater variance between the denoised images and the original images, indicating that the corresponding denoising results are less stable. In addition, we use two of the most widely used Image Quality Assessment (IQA) indicators, Peak Signal-to-Noise Ratio (PSNR) and Mean Structural Similarity (SSIM), to evaluate the quality of the denoised images. PSNR calculates the difference between the grey value of each pixel in the image to be evaluated and the corresponding pixel of the reference image; it assesses image quality with a statistical approach. We assume that the image to be evaluated is F, the reference image is R, and both are of size M × N.
The PSNR is expressed by Eq (3) [21]: PSNR = 10 \log_{10} \frac{255^2 \cdot MN}{\sum_{i=1}^{M} \sum_{j=1}^{N} \left( F(i,j) - R(i,j) \right)^2} (3) PSNR measures image quality by calculating the overall pixel error between the image to be evaluated and the reference image. The larger the PSNR value, the less the distortion between the image to be evaluated and the reference image, and the better the image quality. SSIM is a commonly used image quality evaluation method originally proposed in [22]. SSIM is composed of three comparison functions. The luminance comparison function is expressed by Eq (4): l(x, y) = \frac{2\mu_x \mu_y + C_1}{\mu_x^2 + \mu_y^2 + C_1} (4) The contrast comparison function is expressed by Eq (5): c(x, y) = \frac{2\sigma_x \sigma_y + C_2}{\sigma_x^2 + \sigma_y^2 + C_2} (5) The structure comparison function is expressed by Eq (6): s(x, y) = \frac{\sigma_{xy} + C_3}{\sigma_x \sigma_y + C_3} (6) where \sigma_{xy} is expressed by Eq (7): \sigma_{xy} = \frac{1}{N-1} \sum_{i=1}^{N} (x_i - \mu_x)(y_i - \mu_y) (7) Combining the three functions gives the SSIM index expressed by Eq (8): SSIM(x, y) = l(x, y) \cdot c(x, y) \cdot s(x, y) (8) where \mu_x and \mu_y are the mean pixel values of the image blocks x and y; \sigma_x and \sigma_y are the standard deviations of the pixel values; \sigma_{xy} is the covariance of x and y; and C_1, C_2, C_3 are small constants that avoid errors when the denominators approach 0. SSIM lies between 0 and 1, and the larger the SSIM, the smaller the difference between the two images. The comparison of PSNR between the denoised images and the original image using EMDS-5 is shown in Table 4. Table 4. A comparison of PSNR between denoised images and original image using EMDS-5. https://doi.org/10.1371/journal.pone.0250631.t004 The comparison of SSIM between the denoised images and the original image using EMDS-5 is shown in Table 5. Table 5. A comparison of SSIM between denoised images and original image using EMDS-5. (In [%].). https://doi.org/10.1371/journal.pone.0250631.t005 3.2 Evaluation of edge detection methods Edge detection is an important component of image preprocessing. In order to prove the effectiveness of EMDS-5 for edge detection evaluation, seven operators are used to detect edges in images from the EMDS-5 dataset. The seven operators are Canny, Laplacian of Gaussian (LoG), Prewitt, Roberts, Sobel, Zero cross and CNN, and an example of the edge detection results is shown in Fig 4. Fig 4. An example of seven edge detection results using EMDS-5 images. https://doi.org/10.1371/journal.pone.0250631.g004 For the edge detection results, we choose two evaluation metrics, PSNR and SSIM. We take the edge detection result obtained by the Sobel operator as the standard, compare the results obtained by the other edge detection methods with it, and evaluate the outcomes. A comparison of edge detection methods using EMDS-5 is shown in Table 6. Table 6. A comparison of edge detection methods using EMDS-5. Evaluation index (EI), Operator type (OT). https://doi.org/10.1371/journal.pone.0250631.t006 From Table 6, the PSNR evaluation index shows that the edge detection results obtained by the Prewitt operator are the most similar to the Sobel results. The SSIM evaluation index shows that the differences between the results of the other operators and those of the Sobel operator are also very small. By this comparison, we can see that EMDS-5 images can be used to test and evaluate various edge detection methods.
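As a reference for how such a comparison can be set up, the sketch below computes several of the edge maps listed above with scikit-image and SciPy and compares them to the Sobel result with PSNR and SSIM, as in Table 6. The operator parameters and the file name are assumptions, and the Zero-cross and CNN detectors are omitted, so the numbers will not match Table 6 exactly.

```python
from scipy import ndimage
from skimage import io, img_as_float, filters, feature
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

img = img_as_float(io.imread("EMDS5_example.png", as_gray=True))  # placeholder file name

# Edge maps from several of the operators listed above (default parameters,
# not necessarily those used for Table 6).
edges = {
    "Sobel":   filters.sobel(img),
    "Prewitt": filters.prewitt(img),
    "Roberts": filters.roberts(img),
    "LoG":     ndimage.gaussian_laplace(img, sigma=2.0),   # Laplacian of Gaussian
    "Canny":   feature.canny(img, sigma=2.0).astype(float),
}

# Use the Sobel edge map as the reference, as in Table 6.
ref = edges["Sobel"]
for name, e in edges.items():
    if name == "Sobel":
        continue
    psnr = peak_signal_noise_ratio(ref, e, data_range=1.0)
    ssim = structural_similarity(ref, e, data_range=1.0)
    print(f"{name:7s}  PSNR = {psnr:6.2f} dB   SSIM = {ssim:.4f}")
```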
4 Image segmentation evaluation using EMDS-5 4.1 Single-object image segmentation In order to prove the effectiveness of EMDS-5 for image segmentation evaluation, six typical image segmentation methods are compared on the EMDS-5 original images: GrabCut, Markov Random Field (MRF), Canny edge detection based, Watershed, Otsu thresholding and Region growing approaches. GrabCut is a common and classic semi-automatic segmentation method. MRF is a classical graph-based segmentation method. Otsu thresholding is a threshold-based image segmentation method. Region growing and the Watershed algorithm are classical region-based segmentation methods. An example of different single-object segmentation results is shown in Fig 5. Fig 5. An example of different single-object segmentation results using EMDS-5 images. https://doi.org/10.1371/journal.pone.0250631.g005 We compare the images obtained after segmentation with the corresponding GT images, where the five evaluation indexes in Table 7 are used to evaluate the segmentation results [23, 24]. Table 7. The image segmentation evaluation metrics used in this paper and their definitions. TP (True Positive), FN (False Negative), FP (False Positive). https://doi.org/10.1371/journal.pone.0250631.t007 In Table 7, Vpred represents the foreground predicted by the model and Vgt represents the foreground in a ground truth image.
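Table 7 defines the exact metric set used in the paper; as an illustration, the sketch below shows how typical TP/FP/FN-based indexes (Dice, Jaccard, precision, recall) can be computed by comparing a segmentation result with its GT image. The Otsu-based example, the file names and the assumption that the EM appears darker than the background are illustrative only.

```python
import numpy as np
from skimage import io, img_as_float, filters

def segmentation_scores(pred, gt):
    """Compare a binary segmentation with its GT mask (True = foreground EM region)."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    tp = np.logical_and(pred, gt).sum()
    fp = np.logical_and(pred, ~gt).sum()
    fn = np.logical_and(~pred, gt).sum()
    eps = 1e-9  # guards against division by zero for empty masks
    return {
        "Dice":      2 * tp / (2 * tp + fp + fn + eps),
        "Jaccard":   tp / (tp + fp + fn + eps),
        "Precision": tp / (tp + fp + eps),
        "Recall":    tp / (tp + fn + eps),
    }

# Example: evaluate an Otsu-threshold segmentation against a single-object GT image
# (file names are placeholders).
img = img_as_float(io.imread("EMDS5_original.png", as_gray=True))
gt = img_as_float(io.imread("EMDS5_single_object_GT.png", as_gray=True)) > 0.5
pred = img < filters.threshold_otsu(img)   # assumes the EM is darker than the background
print(segmentation_scores(pred, gt))
```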
The evaluation results of the sample images are shown in Table 8. Table 8. A comparison of single-object segmentation methods using EMDS-5. Image segmentation methods (ISM), Evaluation index (EI), Watershed algorithm (WA), Otsu thresholding (OT), Region growing (RG). (In [%].). https://doi.org/10.1371/journal.pone.0250631.t008 From Table 8, it is observed that the segmentation results of the GrabCut method differ considerably from the GT images, which leads to low evaluation scores. Among the other classic single-object image segmentation methods, the results of Otsu thresholding and MRF segmentation are closest to the GT images and achieve the best effect; the remaining segmentation methods show a certain gap compared with these two. Through the comparison of these image segmentation methods, we can conclude that EMDS-5 is effective for testing and evaluating image segmentation methods. 4.2 Multi-object image segmentation For multi-object image segmentation, we use two methods, k-means and U-Net, to test EMDS-5. k-means is an unsupervised learning approach (clustering) and U-Net is a supervised learning method (a deep convolutional neural network, DCNN). These two methods are representative classic methods in their respective fields. The structure of U-Net is shown in Fig 6 [19]. Fig 6. The structure of U-Net. https://doi.org/10.1371/journal.pone.0250631.g006 U-Net was initially a DCNN designed for microscopic image segmentation tasks. Strong data augmentation is at the core of U-Net, which allows more efficient use of the available annotated samples. In addition, U-Net's end-to-end architecture allows the network to retain shallow information. The structure of U-Net is symmetrical and contains both downsampling and upsampling paths [24]. The left half is the contracting path, a downsampling operation in which two 3 × 3 convolutions (unpadded convolutions) are repeated, each followed by a ReLU activation function, with a 2 × 2 max pooling layer for downsampling; the number of feature channels is doubled at each downsampling step, down to a minimum resolution of 32 × 32. The right half is the expansive path. A large number of feature channels are kept in the upsampling part, which allows the network to propagate contextual information to the high-resolution layers, so that the expansive path is more or less symmetrical to the contracting path, resulting in a U-shaped structure. Each layer in this path contains a 2 × 2 up-convolution for upsampling, which halves the number of feature channels, a fusion (concatenation) with the cropped feature map of the corresponding layer, and two 3 × 3 convolutions with ReLU activation functions [24]. Since unpadded convolution is used, boundary pixels are lost in each convolution, so cropping is necessary. In the last layer, each 64-component feature vector is mapped to the desired number of classes using a 1 × 1 convolution [25].
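For concreteness, the following is a much-reduced PyTorch sketch of the U-shaped encoder-decoder described above. It is not the network used for Table 9: it has fewer levels and uses padded 3 × 3 convolutions, so the skip connections can be concatenated without the cropping required by the unpadded convolutions of the original U-Net.

```python
import torch
import torch.nn as nn

def double_conv(c_in, c_out):
    # Two 3x3 convolutions with ReLU; padding=1 keeps the spatial size, unlike the
    # unpadded convolutions of the original U-Net, so no cropping is needed.
    return nn.Sequential(
        nn.Conv2d(c_in, c_out, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(c_out, c_out, 3, padding=1), nn.ReLU(inplace=True),
    )

class SmallUNet(nn.Module):
    """A reduced U-shaped network: contracting path, expansive path, skip connections."""
    def __init__(self, n_classes=2, base=32):
        super().__init__()
        self.enc1 = double_conv(1, base)
        self.enc2 = double_conv(base, base * 2)
        self.bottleneck = double_conv(base * 2, base * 4)
        self.pool = nn.MaxPool2d(2)                                      # 2x2 max pooling
        self.up2 = nn.ConvTranspose2d(base * 4, base * 2, 2, stride=2)   # 2x2 up-convolution
        self.dec2 = double_conv(base * 4, base * 2)
        self.up1 = nn.ConvTranspose2d(base * 2, base, 2, stride=2)
        self.dec1 = double_conv(base * 2, base)
        self.head = nn.Conv2d(base, n_classes, 1)                        # 1x1 convolution

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        b = self.bottleneck(self.pool(e2))
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))   # skip connection (fusion)
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))
        return self.head(d1)                                  # per-pixel class scores

# Shape check on a dummy grayscale image whose sides are divisible by 4.
out = SmallUNet()(torch.randn(1, 1, 256, 256))
print(out.shape)   # torch.Size([1, 2, 256, 256])
```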
Examples of different multi-object segmentation results are shown in Fig 7. Fig 7. An example of different multi-object segmentation results using EMDS-5. https://doi.org/10.1371/journal.pone.0250631.g007 For these two multi-object image segmentation methods, a comparison is shown in Table 9. Table 9. A comparison of multi-object segmentation methods using EMDS-5. Image segmentation methods (ISM), Evaluation index (EI). (In [%].). https://doi.org/10.1371/journal.pone.0250631.t009 It can be seen from Table 9 that the segmentation performance of U-Net is much higher than that of k-means for multi-object image segmentation, showing the effectiveness of EMDS-5 for the evaluation of multi-object image segmentation methods.
5 Feature extraction evaluation using EMDS-5 We use the GT images to localize the target EMs in the original images in order to test feature extraction methods. Since the GT images comprise single-object GT images and multi-object GT images, the feature extraction experiments are grouped into two types. An example of original images and target EM images extracted with the GT images is shown in Fig 8. First, we randomly select ten images from each EM class as the training set and the other ten as the test set.
Then, we extract and compare 12 features, including two color features (HSV (Hue, Saturation and Value) and RGB (Red, Green and Blue) features), three texture features (GLCM (Grey-Level Co-occurrence Matrix), HOG (Histogram of Oriented Gradients) and LBP (Local Binary Pattern) features), four geometric features (area, perimeter, long axis and short axis), seven invariant moment features (Hu moments), and two deep learning features (VGG16 and ResNet50 features). The color features extracted from the respective channels of the RGB and HSV color spaces are each tested as a single feature vector. Lastly, we use a Radial Basis Function Support Vector Machine (RBFSVM) classifier (supported by LIBSVM [26]) to test each feature and calculate its accuracy. The LIBSVM parameters are set as −s 0 −t 0 −c 2 −g 1 −b 1. Fig 8. An example of localized EMs by GT images. https://doi.org/10.1371/journal.pone.0250631.g008 5.1 Single-object feature extraction In Table 10, the accuracies of EM image classification using single-object features are compared. Table 10. Classification accuracy of single-object features by RBFSVM using EMDS-5. Feature type (FT), Accuracy (Acc), Geometric features (Geo), Hu moments (Hu). (In [%].). https://doi.org/10.1371/journal.pone.0250631.t010 5.2 Multi-object feature extraction In Table 11, the accuracies of EM image classification using multi-object features are compared. Table 11. Classification accuracy of multi-object features by RBFSVM using EMDS-5. Feature type (FT), Accuracy (Acc), Geometric features (Geo), Hu moments (Hu). (In [%].). https://doi.org/10.1371/journal.pone.0250631.t011 From Tables 10 and 11, we find that classifying EM images with the same RBFSVM classifier but different features yields different classification results, showing the effectiveness of EMDS-5 for feature extraction evaluation. In particular, because the VGG16 feature achieves the best performance, we choose it for the classification evaluation in the following section.
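The sketch below illustrates one way this pipeline could look in Python, using a torchvision VGG16 whose final 1000-dimensional output serves as the feature vector and a scikit-learn SVC with an RBF kernel (C=2, gamma=1) as an approximate stand-in for the LIBSVM call quoted above. The file lists, labels and pretrained weights are placeholders, not the actual EMDS-5 experiment configuration.

```python
import numpy as np
import torch
from PIL import Image
from sklearn.svm import SVC
from torchvision import models, transforms

# Pretrained VGG16; its final 1000-dimensional output is used as the feature vector,
# matching the "1 x 1000" description in the text.
vgg = models.vgg16(weights=models.VGG16_Weights.DEFAULT).eval()
prep = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])

def vgg16_feature(path):
    with torch.no_grad():
        x = prep(Image.open(path).convert("RGB")).unsqueeze(0)
        return vgg(x).squeeze(0).numpy()          # shape: (1000,)

# Placeholder lists; in practice they hold the ten training and ten test images
# per class selected above, with integer class labels.
train_paths, train_labels = ["train/Rotifera_01.png", "train/Arcella_01.png"], [0, 1]
test_paths, test_labels = ["test/Rotifera_11.png", "test/Arcella_11.png"], [0, 1]

X_train = np.stack([vgg16_feature(p) for p in train_paths])
X_test = np.stack([vgg16_feature(p) for p in test_paths])

# RBF-kernel SVM as an approximate stand-in for the LIBSVM setting quoted above.
clf = SVC(kernel="rbf", C=2, gamma=1, probability=True).fit(X_train, train_labels)
print("accuracy:", clf.score(X_test, test_labels))
```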
6 Image classification evaluation using EMDS-5 We use the features extracted from the EMDS-5 data to test the classification performance of different classifiers. As mentioned in Sec. 5, we use the extracted VGG16 features for the tests in this section. The VGG16 feature vector is taken from the 16th layer of the network and has a dimension of 1 × 1000. First, we randomly select ten images from each EM class as the training set and use another ten as the test set. Then, we select 14 commonly used classifiers for EM image classification, including four SVMs, three k-Nearest Neighbors (KNNs), three Random Forests (RFs), two VGG16 and two Inception-V3 classifiers. The extracted VGG16 features are combined with the classic classifiers for comparison, and the four deep learning classifiers are compared directly. For VGG16 and Inception-V3, we divide the data into training, validation and test sets, change the ratio of images among these three sets, and test the accuracy of each setting separately. The parameters of the four SVM classifiers are shown in Table 12. Table 12. The parameters of four SVM classifiers for EMDS-5 image classification (supported by LIBSVM). https://doi.org/10.1371/journal.pone.0250631.t012 Furthermore, a comparison of different classifiers for EM image classification using EMDS-5 is shown in Table 13. Table 13. A comparison of EM image classification results using EMDS-5. Accuracy (Acc), nTree (nT), VGG16 (Train: Validation: Test = 1: 1: 2) is VGG16: 1: 1: 2, VGG16 (Train: Validation: Test = 1: 2: 1) is VGG16: 1: 2: 1, Inception-V3 (Train: Validation: Test = 1: 1: 2) is I-V3: 1: 1: 2, Inception-V3 (Train: Validation: Test = 1: 2: 1) is I-V3: 1: 2: 1. (In [%].). https://doi.org/10.1371/journal.pone.0250631.t013 It can be seen from Table 13 that, when the same feature is used to test different classifiers, the classification results of the two deep learning networks are the best. From the comparison of the results of different classifiers, we can see that EMDS-5 images can be effectively applied to the testing and evaluation of various classification algorithms. 7 Image retrieval evaluation using EMDS-5 We use EMDS-5 for image retrieval. Because different features are used, we group the image retrieval methods into two categories: texture feature based and deep learning feature based image retrieval approaches. We use Average Precision (AP) [18] to evaluate the retrieval results. AP is derived from the field of information retrieval and is primarily used to evaluate ranked lists of retrieved samples. The definition of AP in our article is shown in Eq (9): AP = \frac{1}{M} \sum_{k=1}^{n} P(k) \, rel(k) (9) where n is the number of retrieved images, M is the number of relevant EM images, P(k) is the precision at cut-off position k in the ranked list, and rel(k) is an indicator that takes the value 1 if the EM image ranked at position k belongs to the target class and 0 otherwise. AP thus averages the precision at the positions of the target-class EM images. Our experiment is conducted on 21 types of EM images, so we apply the mean AP (mAP), obtained by averaging the APs, to summarize the APs of each class. During the retrieval process, we match the feature vector of the query image with the feature vectors of all the images in the EMDS-5 dataset, calculate the Euclidean distance between them, and then compute the mAP value over the retrieval results of the query image's class.
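As an illustration of Eq (9) and the Euclidean-distance matching just described, the sketch below ranks a small synthetic feature matrix for each query and computes AP and mAP. The leave-one-out protocol and the toy data are assumptions for demonstration; the paper applies the same idea to the EMDS-5 feature vectors.

```python
import numpy as np

def average_precision(ranked_labels, query_label):
    """AP of Eq (9): ranked_labels are the class labels of the database images
    sorted by increasing Euclidean distance to the query."""
    rel = (ranked_labels == query_label).astype(float)            # rel(k)
    M = rel.sum()
    if M == 0:
        return 0.0
    precision_at_k = np.cumsum(rel) / (np.arange(len(rel)) + 1)   # P(k)
    return float((precision_at_k * rel).sum() / M)

def retrieve_and_score(features, labels):
    """Leave-one-out retrieval over the whole feature matrix; returns mAP."""
    aps = []
    for i, q in enumerate(features):
        dists = np.linalg.norm(features - q, axis=1)   # Euclidean distance to every image
        order = np.argsort(dists)
        order = order[order != i]                      # drop the query itself
        aps.append(average_precision(labels[order], labels[i]))
    return float(np.mean(aps))

# Tiny synthetic example: 6 images, 2 classes, 3-dimensional feature vectors.
feats = np.array([[0, 0.0, 1], [0, 0.1, 1], [0, 0.2, 1],
                  [1, 0.0, 0], [1, 0.1, 0], [1, 0.2, 0]], dtype=float)
labels = np.array([0, 0, 0, 1, 1, 1])
print("mAP:", retrieve_and_score(feats, labels))   # 1.0 for this separable toy data
```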
We display the first 20 images in the retrieval results, in which the border of each correctly retrieved image is marked with a color. 7.1 Texture feature based image retrieval using EMDS-5 We extract a total of four texture features, GLCM, GGCM, HOG and LBP, to test the image retrieval evaluation function of EMDS-5. An example of the image retrieval results based on texture features is shown in Fig 9. Fig 9. An example of image retrieval results with GLCM using EMDS-5. https://doi.org/10.1371/journal.pone.0250631.g009 Furthermore, the retrieval results of the four texture features are compared in Fig 10. Fig 10. A comparison of image retrieval results with four texture features using EMDS-5. https://doi.org/10.1371/journal.pone.0250631.g010 7.2 Deep learning feature based image retrieval using EMDS-5 We first extract VGG16 features and ResNet50 features. The selected feature vectors are those of the last layer of the respective network, with a dimension of 1 × 1000. An example of retrieval results based on deep learning features is shown in Fig 11. Fig 11. An example of image retrieval results based on VGG16 feature using EMDS-5. https://doi.org/10.1371/journal.pone.0250631.g011 Furthermore, the image retrieval results with the two deep learning features are shown in Fig 12. Fig 12. A comparison of image retrieval results with two deep learning features using EMDS-5. https://doi.org/10.1371/journal.pone.0250631.g012 We calculate the variance of the mAP for texture feature based image retrieval and for deep learning feature based image retrieval. The variance of the retrieval results based on deep learning features is smaller, which shows that deep learning feature based retrieval is more stable. By comparing the results of the different retrieval methods, we can see that EMDS-5 images can be effectively applied to various image retrieval tests and evaluations.
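For reference, the sketch below shows how two of the texture descriptors used above, GLCM and LBP, can be computed with scikit-image; HOG is available in the same module, while GGCM would need a custom implementation. The offsets, angles, histogram sizes and file name are illustrative choices, not the settings used for Figs 9 and 10.

```python
import numpy as np
from skimage import io, img_as_ubyte
from skimage.feature import graycomatrix, graycoprops, local_binary_pattern

img = img_as_ubyte(io.imread("EMDS5_example.png", as_gray=True))   # placeholder file name

# GLCM descriptor: co-occurrence statistics over a few offsets and angles.
glcm = graycomatrix(img, distances=[1, 2],
                    angles=[0, np.pi / 4, np.pi / 2, 3 * np.pi / 4],
                    levels=256, symmetric=True, normed=True)
glcm_feat = np.hstack([graycoprops(glcm, p).ravel()
                       for p in ("contrast", "correlation", "energy", "homogeneity")])

# LBP descriptor: histogram of uniform local binary patterns.
lbp = local_binary_pattern(img, P=8, R=1, method="uniform")
lbp_feat, _ = np.histogram(lbp, bins=10, range=(0, 10), density=True)

print(glcm_feat.shape, lbp_feat.shape)   # e.g. (32,) and (10,)
```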
8 Conclusion and future work EMDS-5 is a microscopic image dataset containing 21 classes of EMs. It contains original images and GT images for each EM class; the GT images include single-object GT images and multi-object GT images, so each original image has two corresponding GT images. Each microorganism class has 20 original images, 20 single-object GT images and 20 multi-object GT images. EMDS-5 can be used to test denoising performance: we add 13 kinds of noise, such as Poisson noise and Gaussian noise, and use nine kinds of filters to test the denoising of the various noises. EMDS-5 can also evaluate edge detection methods: we adopt seven edge detection methods and use two evaluation indexes to assess the detection results, obtaining good results. In terms of image segmentation, EMDS-5 supports the evaluation of segmentation results thanks to its single-object and multi-object GT images, so the testing is done in two parts: single-object image segmentation and multi-object image segmentation. In the single-object part, we use six methods such as GrabCut and MRF to segment the original images; in the multi-object part, we use the k-means and U-Net methods. We extract nine features from the images in EMDS-5, such as RGB, HSV, GLCM and HOG, and use the LIBSVM classifier to evaluate the extracted features, randomly selecting ten images of each EM class as the training set and ten images as the test set. In terms of classification, we use VGG16 features to test different classifiers such as LIBSVM, KNN and RF. In terms of image retrieval, we distinguish between retrieval based on texture features and retrieval based on deep learning features: for texture features, we test GLCM, GGCM, HOG and LBP separately; for deep learning features, we use the VGG16 and ResNet50 features, taking the last layer of each network as the feature vector, and use mAP as the evaluation index to assess retrieval quality. In the future, we will expand the types of microorganisms and increase the number of images of each microorganism class. We hope to use the EMDS database to achieve more functions in the future. Acknowledgments We thank Prof. Dr. Beihai Zhou and Dr.
Fangshu Ma from the University of Science and Technology Beijing, PR China, Prof. Joanna Czajkowska from the Silesian University of Technology, Poland, and Prof. Yanling Zou from Freiburg University, Germany, for their previous cooperation in this work. We also thank Miss Zixian Li and Mr. Guoxian Li for their important discussion. TI - EMDS-5: Environmental Microorganism image dataset Fifth Version for multiple image analysis tasks JF - PLoS ONE DO - 10.1371/journal.pone.0250631 DA - 2021-05-12 UR - https://www.deepdyve.com/lp/public-library-of-science-plos-journal/emds-5-environmental-microorganism-image-dataset-fifth-version-for-YsHtH71blS SP - e0250631 VL - 16 IS - 5 DP - DeepDyve ER -