Different variants of U-Net-based neural network architectures for categorical semantic segmentation of telecentric reflected-light microscopic time-series images
A comparison between different U-Net neural network architectures for semantic segmentation
Ali Ghaznavi∗,
Renata Rychtáriková∗,
Petr Císař∗,
Mohammadmehdi Ziaei∗,
Dalibor Štys∗
(* indicates equal contribution)
Multi-class segmentation of unlabelled living cells in time-lapse light microscopy images is challenging due to the temporal behaviour and changes in cell life cycles and the complexity of these images. The deep-learning-based methods achieved promising outcomes and remarkable success in single- and multi-class medical and microscopy image segmentation. The main objective of this study is to develop a hybrid deep-learning-based categorical segmentation and classification method for living HeLa cells in reflected light microscopy images. A symmetric simple U-Net and three asymmetric hybrid convolution neural networks—VGG19-U-Net, Inception-U-Net, and ResNet34-U-Net—were proposed and mutually compared to find the most suitable architecture for multi-class segmentation of our datasets. The inception module in the Inception-U-Net contained kernels with different sizes within the same layer to extract all feature descriptors. The series of residual blocks with the skip connections in each ResNet34-U-Net’s level alleviated the gradient vanishing problem and improved the generalisation ability. The m-IoU scores of multi-class segmentation for our datasets reached 0.7062, 0.7178, 0.7907, and 0.8067 for the simple U-Net, VGG19-U-Net, Inception-U-Net, and ResNet34-U-Net, respectively. For each class and the mean value across all classes, the most accurate multi-class semantic segmentation was achieved using the ResNet34-U-Net architecture (evaluated as the m-IoU and Dice metrics).
The data were acquired with a reflected light microscope from living HeLa cells in different time-lapse experiments under the conditions described in the manuscript, and were divided into training, testing, and validation sets.
The labelled data were prepared manually for training the deep-learning-based methods.
The models were trained on four different hybrid CNN architectures (with an input size of 512 × 512) to achieve the best segmentation results reported in the manuscript.
The dataset is available at the links below:
[To download the dataset, use this link:] Click Here — the microscopic dataset web directory; the training, testing, and validation datasets are separately available in the linked repository.
[To download the trained models and supplementary data, use this link:] Click Here — trained models and other supplementary data; the Simple Unet, Vgg19-Unet, Inception-Unet, and ResNet34-Unet models are separately available in the linked repository.
We use the following deep neural network architectures for modelling the bright-field dataset with U-Net networks:
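For orientation, the sketch below shows a minimal symmetric U-Net in Keras, with an encoder, a bottleneck, and a decoder that concatenates the matching encoder feature maps (skip connections). It is an illustrative assumption, not the exact models from the paper (those are in the linked notebook and trained-model files); only two levels are shown, whereas the default depth here is 5.

```python
def build_simple_unet(img_size=512, n_classes=3, dropout_rate=0.05):
    """Minimal U-Net sketch: encoder/decoder with skip connections.

    Hypothetical simplification of the repo's models; `n_classes` and the
    single input channel are assumptions for illustration.
    """
    from tensorflow import keras  # lazy import; assumes TensorFlow is installed
    from tensorflow.keras import layers

    inputs = keras.Input((img_size, img_size, 1))
    x = inputs
    skips = []
    # Encoder: two downsampling levels shown (the default here is 5).
    for filters in (16, 32):
        x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
        x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
        skips.append(x)
        x = layers.MaxPooling2D(2)(x)
        x = layers.Dropout(dropout_rate)(x)
    # Bottleneck.
    x = layers.Conv2D(64, 3, padding="same", activation="relu")(x)
    # Decoder: upsample, then concatenate the matching encoder feature map.
    for filters, skip in zip((32, 16), reversed(skips)):
        x = layers.Conv2DTranspose(filters, 2, strides=2, padding="same")(x)
        x = layers.concatenate([x, skip])
        x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    # Per-pixel softmax over the classes for categorical segmentation.
    outputs = layers.Conv2D(n_classes, 1, activation="softmax")(x)
    return keras.Model(inputs, outputs)
```

The hybrid variants (VGG19-U-Net, Inception-U-Net, ResNet34-U-Net) replace this plain encoder with the respective backbone while keeping the U-Net decoder.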
Important hyperparameter setup:
image size = 512 × 512
number of layers (default) = 5
activation function = ReLU
number of epochs (default) = 200
batch size (default) = 8
early stopping patience = 30
learning rate (default) = 10e-3
dropout rate = 0.05
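The hyperparameters above can be collected into constants and wired into a Keras training run. This is a hypothetical sketch, not the notebook's exact code: the optimizer choice (Adam), the loss, and the `compile_and_train` helper are assumptions.

```python
# Constants mirroring the hyperparameter list above.
IMG_SIZE = 512
N_LAYERS = 5           # U-Net depth (default)
EPOCHS = 200
BATCH_SIZE = 8
EARLY_STOP_PATIENCE = 30
LEARNING_RATE = 10e-3  # as listed (i.e. 0.01); adjust if 1e-3 was intended
DROPOUT_RATE = 0.05

def compile_and_train(model, train_data, val_data):
    """Hypothetical Keras training run using the hyperparameters above."""
    from tensorflow import keras  # lazy import; assumes TensorFlow is installed
    model.compile(
        optimizer=keras.optimizers.Adam(learning_rate=LEARNING_RATE),
        loss="categorical_crossentropy",   # assumed loss for multi-class masks
        metrics=["accuracy"],
    )
    # Stop when validation loss has not improved for EARLY_STOP_PATIENCE epochs.
    early_stop = keras.callbacks.EarlyStopping(
        patience=EARLY_STOP_PATIENCE, restore_best_weights=True
    )
    return model.fit(
        train_data,
        validation_data=val_data,
        epochs=EPOCHS,
        batch_size=BATCH_SIZE,
        callbacks=[early_stop],
    )
```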
To run the script, open this notebook in Google Colab or Jupyter Notebook:
Unet+Vgg+inception_ResNet(Document_Colab).ipynb
We use these evaluation metrics for the experimental results:
Precision, Recall, Intersection over Union (IoU), Accuracy, Dice
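The IoU and Dice metrics used in the paper's evaluation can be computed per class from the confusion counts. A minimal NumPy sketch (function names are our own; the paper's m-IoU is the mean IoU across classes):

```python
import numpy as np

def confusion_counts(y_true, y_pred, cls):
    """True positives, false positives, false negatives for one class label."""
    tp = np.sum((y_true == cls) & (y_pred == cls))
    fp = np.sum((y_true != cls) & (y_pred == cls))
    fn = np.sum((y_true == cls) & (y_pred != cls))
    return tp, fp, fn

def iou(y_true, y_pred, cls):
    """Intersection over Union = TP / (TP + FP + FN).

    Note: divides by zero if the class is absent from both masks.
    """
    tp, fp, fn = confusion_counts(y_true, y_pred, cls)
    return tp / (tp + fp + fn)

def dice(y_true, y_pred, cls):
    """Dice coefficient = 2*TP / (2*TP + FP + FN)."""
    tp, fp, fn = confusion_counts(y_true, y_pred, cls)
    return 2 * tp / (2 * tp + fp + fn)

def mean_iou(y_true, y_pred, n_classes):
    """m-IoU: the IoU averaged over all class labels 0..n_classes-1."""
    return np.mean([iou(y_true, y_pred, c) for c in range(n_classes)])
```

For example, with `y_true = [0, 0, 1, 1]` and `y_pred = [0, 1, 1, 1]`, class 1 has TP=2, FP=1, FN=0, so its IoU is 2/3 and its Dice is 0.8.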
If you find our work useful in your research, please consider citing:
@article{unknown,
author = {Ghaznavi, Ali and Rychtarikova, Renata and Cisar, Petr and Ziaei, MohammadMehdi and Stys, Dalibor},
year = {2022},
month = {03},
pages = {},
doi = {},
title = {Hybrid deep-learning multi-class segmentation of HeLa cells in reflected light microscopy images}
}
- 20/04/2023: Created the repo.
- -----: Initial release.