臺灣博碩士論文加值系統 (National Digital Library of Theses and Dissertations in Taiwan)

Detailed Record

Author: 李柔依 (Rou-Yi Li)
Title: 應用深度學習於角膜內皮細胞影像識別
Title (English): Applying Deep Learning on Corneal Endothelial Cells Recognition
Advisor: 王大中 (Ta-Chung Wang)
Degree: Master's
Institution: National Cheng Kung University
Department: Institute of Civil Aviation
Discipline: Transport Services
Academic field: Aeronautics
Thesis type: Academic thesis
Year of publication: 2019
Graduation academic year: 107 (2018–2019)
Language: English
Pages: 48
Keywords: image recognition, deep learning, object detection, instance segmentation, convolutional neural network (CNN), Mask R-CNN, corneal endothelial cells
Abstract (translated from the Chinese): With changes in modern lifestyles, prolonged and intensive eye use has given rise to many corneal diseases. These diseases can cause deteriorating vision or even blindness, so that patients need corneal tissue replacement to restore their sight. Among corneal transplant procedures, replacement of the corneal endothelium is the most common; however, donated human corneal tissue is currently in short supply, so many experts are working on artificial methods of culturing corneal endothelial cells in the hope of shortening patients' waiting time for a transplant. When evaluating culture results, the growth of the endothelial cells is difficult to judge directly with the naked eye, so this study adopts deep learning to provide a more time- and labor-efficient automated recognition method. In addition, this study optimizes the model architecture so that the model can still achieve good recognition results despite the shortage of training data that most medical imaging tasks face in deep learning.

This study uses the deep learning model Mask R-CNN to recognize endothelial cell images; the model outputs the distribution of masks of healthy endothelial cells across the input image. The backbone that serves as the feature extractor in the model is a convolutional neural network. This study compares the results of several popular convolutional neural networks applied to corneal endothelial cell recognition and, building on the best result, improves the structure, designing a model best suited to evaluating corneal endothelial cell growth by increasing feature diversity without increasing the number of network parameters.
Abstract (English): With changing modern lifestyles, eye strain caused by prolonged use leads to corneal diseases, which impair vision and can even cause blindness; restoring vision then requires a corneal transplant. Among all types of corneal transplants, endothelium replacement accounts for a large proportion. However, owing to the shortage of human donor tissue, many patients wait a long time for a transplant, so experts are devoted to culturing endothelial cells to address this situation. Because culture results are difficult to evaluate with the naked eye, this research proposes a time- and labor-saving approach that applies deep learning to evaluate the results of endothelial cell culturing automatically. In addition, this research optimizes the architecture of the deep learning model to cope with the small number of labeled medical images available for training, so the proposed model can still achieve good results on a small dataset.

This research applies Mask R-CNN as the deep learning model to recognize human corneal endothelial cells (HCECs); the model outputs the distribution of masks corresponding to healthy HCECs in the input images. Moreover, this research implements several popular convolutional neural networks (CNNs) as the backbone of Mask R-CNN and compares their recognition results. Starting from the best-performing backbone, this research modifies the CNN architecture to increase feature diversity without adding parameters.
摘要 (Chinese Abstract) I
Abstract II
Table of Contents IV
List of Figures VI
List of Tables IX
Chapter 1 Introduction 1
1.1 Motivation and Objective 1
1.2 Literature Review 4
1.3 Outline 7
Chapter 2 Deep Learning 8
2.1 Convolutional Neural Network (CNN) 8
2.1.1 Convolution Layer 8
2.1.2 Pooling Layer 9
2.1.3 Activation Function 9
2.2 Convolutional Neural Network for Mask R-CNN 11
2.2.1 ResNet 11
2.2.2 ResNeXt 13
Chapter 3 Data Processing and Deep Learning Model 16
3.1 The Structure of Cornea and HCECs 16
3.2 Instance Segmentation 21
3.3 Mask R-CNN Architecture 22
3.3.1 Feature Pyramid Network (FPN) 23
3.3.2 RoIAlign 24
3.4 Training Data and Validation Data 26
3.5 Ground Truths of Masks 28
3.6 Model Adjustment 30
Chapter 4 Experiment and Result 34
4.1 Experiment Setup 34
4.2 Detection and Evaluation Metrics 36
4.3 Discussion and Result 38
Chapter 5 Conclusions and Future Work 43
5.1 Conclusions 43
5.2 Future Work 44
References 45