

Title (English): Colorectal Polyp Classification Based on Deep Learning Technique
Advisors (English): Chen, Wen-Jan; Lin, Guo-Shiang
Committee members (English): Lin, Jen-Yung; Huang, Ching-Chun; Lin, Guo-Shiang; Chen, Wen-Jan
Keywords (English): Colorectal polyps; Image recognition; Deep learning; Transfer learning; YOLO
Colorectal cancer ranks among the leading causes of cancer death worldwide, and Taiwan is no exception. The two common examinations for detecting colorectal cancer are the fecal occult blood test and colonoscopy; at present, detecting and removing polyps via colonoscopy is the most important means of preventing colorectal cancer. This thesis proposes a colorectal polyp classification method based on a deep neural network (DNN) for blue-laser-imaging (BLI) colonoscopy images. Since polyps can be treated as objects in an image, a one-stage object detection network, YOLO (You Only Look Once), was selected to develop a computer-aided diagnosis (CAD) system for detecting and classifying polyps. Using data augmentation and transfer learning, the YOLO network was modified and retrained to classify polyps into two classes: hyperplastic and adenomatous. To evaluate the performance of the proposed method, many colonoscopic images were collected for testing. On 541 colorectal polyp images outside the training set, the precision and recall reached 100% and 99%, respectively. In addition, the proposed system based on YOLO v4 outperformed both SSD and YOLO v3. The experimental results show that the proposed YOLO v4-based CAD method can not only detect but also classify colorectal polyps in BLI images.
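The precision and recall figures reported above follow directly from the detection counts on the test set. A minimal sketch in Python; the counts below are hypothetical, chosen only so the ratios reproduce the reported 100% precision and 99% recall on 541 test images, and are not the thesis's actual tallies:

```python
def precision_recall(tp, fp, fn):
    """Compute precision and recall from detection counts.

    tp: correctly detected polyps (true positives)
    fp: spurious detections (false positives)
    fn: missed polyps (false negatives)
    """
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    return precision, recall

# Hypothetical counts for illustration: 536 hits, 0 false alarms,
# 5 misses out of 541 images -> precision 100%, recall ~99%.
p, r = precision_recall(tp=536, fp=0, fn=5)
print(f"Precision: {p:.0%}, Recall: {r:.0%}")
```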

Chinese Abstract iii
Acknowledgements v
Table of Contents vi
List of Figures x
List of Tables xiii

Chapter 1 Introduction 1
1.1 Research Background and Motivation 1
1.2 Research Objectives 6
Chapter 2 Literature Review 7
2.1 Convolutional Neural Networks 8
2.2 Residual Networks 9
2.3 Blue-Laser-Imaging 10
2.4 Single Shot MultiBox Detector 11
2.5 Faster R-CNN 16
2.6 Cross Stage Partial Network 17
Chapter 3 You Only Look Once 19
3.1 Grid Cell 19
3.2 Bounding Box 20
3.3 Confidence 20
3.4 YOLO Architecture 20
3.5 IoU 22
3.6 Non-Maximum Suppression 26
3.7 Multi-scale Training 27
3.8 Cross Stage Partial Network 28
3.9 Loss Function 29
3.10 CSPDarknet-53 29
3.11 Mish 31
3.12 Mosaic Data Augmentation 32
Chapter 4 System Architecture 33
4.1 Pre-training Preparation 35
4.2 Data Augmentation 38
4.3 Dataset Partitioning 40
4.4 Pre-trained Model 41
4.5 Transfer Learning 42
Chapter 5 Experimental Results and Analysis 44
5.1 Experimental Environment 44
5.2 Evaluation Methods 46
5.3 System Performance Analysis 50
5.4 Performance Comparison 65
Chapter 6 Conclusions and Future Work 78
6.1 Conclusions 78
6.2 Future Work 78
References 79
[2]H. Chougrad, H. Zouaki, and O. Alami, "Deep convolutional neural networks for breast cancer screening," Computer Methods and Programs in Biomedicine, vol. 157, pp. 19-30, 2018.
[3]C. Szegedy, A. Toshev, and D. Erhan, "Deep neural networks for object detection," in Advances in neural information processing systems, 2013, pp. 2553-2561.
[4]A. Krizhevsky, I. Sutskever, and G. E. Hinton, "Imagenet classification with deep convolutional neural networks," in Advances in neural information processing systems, 2012, pp. 1097-1105.
[5]E. Ribeiro, A. Uhl, and M. Häfner, "Colonic polyp classification with convolutional neural networks," in 2016 IEEE 29th International Symposium on Computer-Based Medical Systems (CBMS), 2016: IEEE, pp. 253-258.
[6]Y. Shin, H. A. Qadir, L. Aabakken, J. Bergsland, and I. Balasingham, "Automatic colon polyp detection using region based deep CNN and post learning approaches," IEEE Access, vol. 6, pp. 40950-40962, 2018.
[7]Y. Tian, L. Z. Pu, R. Singh, A. D. Burt, and G. Carneiro, "One-stage five-class polyp detection and classification," in 2019 IEEE 16th International Symposium on Biomedical Imaging (ISBI 2019), 2019: IEEE.
[8]K. He, X. Zhang, S. Ren, and J. Sun, "Deep residual learning for image recognition," in Proceedings of the IEEE conference on computer vision and pattern recognition, 2016, pp. 770-778.
[9]H. Machida et al., "Narrow-band imaging in the diagnosis of colorectal mucosal lesions: a pilot study," Endoscopy, vol. 36, no. 12, pp. 1094-1098, 2004.
[10]K. Kaneko et al., "Effect of novel bright image enhanced endoscopy using blue laser imaging (BLI)," Endoscopy international open, vol. 2, no. 4, p. E212, 2014.
[11]K. Togashi et al., "A comparison of conventional endoscopy, chromoendoscopy, and the optimal-band imaging system for the differentiation of neoplastic and non-neoplastic colonic polyps," Gastrointestinal endoscopy, vol. 69, no. 3, pp. 734-741, 2009.
[12]N. Yoshida et al., "Improvement in the visibility of colorectal polyps by using blue laser imaging (with video)," Gastrointestinal Endoscopy, vol. 82, no. 3, pp. 542-549, 2015.
[13]N. Yoshida et al., "The ability of a novel blue laser imaging system for the diagnosis of invasion depth of colorectal neoplasms," Journal of gastroenterology, vol. 49, no. 1, pp. 73-80, 2014.
[14]N. Yoshida et al., "Ability of a novel blue laser imaging system for the diagnosis of colorectal polyps," Digestive Endoscopy, vol. 26, no. 2, pp. 250-258, 2014.
[15]W. Liu, D. Anguelov, D. Erhan, C. Szegedy, S. Reed, C.-Y. Fu, and A. C. Berg, "SSD: Single shot multibox detector," in European Conference on Computer Vision, Springer, Cham, 2016.
[16]S. Ren, K. He, R. Girshick, and J. Sun, "Faster R-CNN: Towards real-time object detection with region proposal networks," in Advances in Neural Information Processing Systems, 2015, pp. 91-99.
[17]C.-Y. Wang, H.-Y. Mark Liao, Y.-H. Wu, P.-Y. Chen, J.-W. Hsieh, and I.-H. Yeh, "CSPNet: A new backbone that can enhance learning capability of cnn," in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, 2020, pp. 390-391.
[18]J. Redmon, S. Divvala, R. Girshick, and A. Farhadi, "You only look once: Unified, real-time object detection," in Proceedings of the IEEE conference on computer vision and pattern recognition, 2016, pp. 779-788.
[19]J. Redmon and A. Farhadi, "YOLO9000: better, faster, stronger," in Proceedings of the IEEE conference on computer vision and pattern recognition, 2017, pp. 7263-7271.
[20]J. Redmon and A. Farhadi, "Yolov3: An incremental improvement," arXiv preprint arXiv:1804.02767, 2018.
[21]A. Bochkovskiy, C.-Y. Wang, and H.-Y. M. Liao, "YOLOv4: Optimal Speed and Accuracy of Object Detection," arXiv preprint arXiv:2004.10934, 2020.
[22]H. Rezatofighi, N. Tsoi, J. Gwak, A. Sadeghian, I. Reid, and S. Savarese, "Generalized intersection over union: A metric and a loss for bounding box regression," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2019, pp. 658-666.
[23]Z. Zheng, P. Wang, W. Liu, J. Li, R. Ye, and D. Ren, "Distance-IoU Loss: Faster and Better Learning for Bounding Box Regression," in AAAI, 2020, pp. 12993-13000.
[24]D. Misra, "Mish: A self regularized non-monotonic neural activation function," arXiv preprint arXiv:1908.08681, 2019.
[25]M. A. Tanner and W. H. Wong, "The calculation of posterior distributions by data augmentation," Journal of the American statistical Association, vol. 82, no. 398, pp. 528-540, 1987.
[26]S. J. Pan and Q. Yang, "A survey on transfer learning," IEEE Transactions on knowledge and data engineering, vol. 22, no. 10, pp. 1345-1359, 2010.
