臺灣博碩士論文加值系統 (National Digital Library of Theses and Dissertations in Taiwan)

Detailed Record

Author: 鍾佳恩
Author (English): Chung, Chia-En
Thesis Title: 以深度學習進行分段式學習並建立分類器: 以子宮頸癌前病變之內視鏡影像為例
Thesis Title (English): Segmenting learning with deep learning and building the classifier: an application to endoscope image classification for cervical precancerous lesions
Advisors: 盧鴻興, 林聖軒
Advisors (English): Lu, Henry Horng-Shing; Lin, Sheng-Hsuan
Committee Members: 黃禮珊, 謝文萍, 盧鴻興, 林聖軒
Committee Members (English): Huang, Li-Shan; Hsieh, Wen-Ping; Lu, Henry Horng-Shing; Lin, Sheng-Hsuan
Oral Defense Date: 2018-06-15
Degree: Master
University: National Chiao Tung University (國立交通大學)
Department: Institute of Statistics (統計學研究所)
Discipline: Mathematics and Statistics
Academic Field: Statistics
Thesis Type: Academic thesis
Year of Publication: 2018
Graduation Academic Year: 106
Language: English
Number of Pages: 42
Keywords (Chinese): 深度學習, 機器學習, 邏輯斯回歸, 卡方檢定, Hosmer-Lemeshow 檢定
Keywords (English): Deep learning, Machine learning, Logistic regression, Chi-square test, Hosmer-Lemeshow test
Usage statistics: cited 0 times, 241 views, 0 downloads, bookmarked 0 times
Abstract (translated from the Chinese 摘要): With the development of artificial intelligence, deep learning (deep neural network) techniques have been widely applied in research areas such as e-commerce, finance, engineering, science, and healthcare. In medicine, scientists have tried to use deep learning to build computer-aided diagnosis systems that help medical staff diagnose disease. However, although such systems can reach very high accuracy in image classification, scientists cannot observe the diagnostic process inside the system or give a reasonable explanation for the final result. In this study, we combine statistical analysis with deep learning to imitate the process by which medical staff diagnose patients: neural networks serve as an auxiliary method that learns specific features and estimates the probability of each feature that may appear in an image. Finally, we use these selected features as predictive factors to build a logistic regression model. Using data provided by the Kaggle data analysis platform, with identification of the stages of cervical precancerous lesions as the example, the model achieves an accuracy of 84.28% (+/- 3.1%) and an area under the curve (AUC) of 86.05% (+/- 2.3%) on this data set. In contrast, a convolutional neural network (CNN) model that takes the raw images as input and classifies the stage directly reaches only 46.85% (+/- 9.62%) accuracy and 62.33% (+/- 7.29%) AUC under the same 20 rounds of cross-validation. The approach presented here not only shows that a model that learns step by step with a clear purpose outperforms a CNN model that learns without any direction, but can also be explained with medical background knowledge. In addition, whereas medical staff previously needed cervical screening tests to confirm cell types, this classifier accounts for the probability of seeing squamous cells in the transformation zone. The diagnostic system can therefore assist medical staff in diagnosis and shorten the time needed to decide on the loop electrosurgical excision procedure (LEEP).
In the medical field, scientists have developed computer-aided diagnosis systems based on deep neural networks (DNN-CAD) for analyzing images to assist medical staff in diagnosing disease. However, although DNN-CAD can reach much higher accuracy than other algorithms, scientists cannot explain why a particular diagnosis is given. In this study, we combine statistical knowledge with deep neural networks to build an algorithm that simulates how medical staff diagnose patients. We treat the DNN as a supplementary method that learns specific features and estimates the probability that each feature appears in the image. Finally, we use these selected features as predictive factors to build a logistic regression model. The data are provided by Kaggle, a data analysis competition platform, and the task is to identify the stages of cervical precancerous lesions. On this data set, the model achieves an accuracy of 84.28% (+/- 3.1%) and an area under the curve (AUC) of 86.05% (+/- 2.3%), whereas a convolutional neural network (CNN) model trained directly on the original images achieves only 46.85% (+/- 9.62%) accuracy and 62.33% (+/- 7.29%) AUC, using the same CNN architecture and the same 20 rounds of cross-validation. The approach provided in this study not only shows that a model that learns purposefully, step by step, outperforms the CNN model, but can also be explained with medical background knowledge. Furthermore, the classifier takes into account the probability that squamous cells may be seen in the transformation zone (T-zone) as the chance of observing squamous cells through a cervical screening test.
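The two-stage pipeline summarized above (a deep neural network that estimates feature probabilities, followed by a logistic regression over those probabilities, evaluated with repeated cross-validation on accuracy and AUC) can be illustrated with a minimal sketch. This is not the author's implementation: the arrays feature_probs and stages are hypothetical placeholders for the DNN outputs and lesion-stage labels, the problem is reduced to two classes for brevity (the thesis itself works with a proportional odds setting), and only standard scikit-learn calls are used.

    # Minimal sketch (assumed names, not the thesis code): logistic regression on
    # DNN-estimated feature probabilities, scored by accuracy and AUC over 20 folds.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import accuracy_score, roc_auc_score
    from sklearn.model_selection import StratifiedKFold

    rng = np.random.default_rng(0)
    feature_probs = rng.uniform(size=(200, 5))  # placeholder: DNN-estimated probabilities of 5 image features
    stages = rng.integers(0, 2, size=200)       # placeholder: lesion-stage labels (binary here for simplicity)

    accs, aucs = [], []
    cv = StratifiedKFold(n_splits=20, shuffle=True, random_state=0)  # 20 folds, mirroring the 20 rounds reported
    for train_idx, test_idx in cv.split(feature_probs, stages):
        clf = LogisticRegression(max_iter=1000).fit(feature_probs[train_idx], stages[train_idx])
        accs.append(accuracy_score(stages[test_idx], clf.predict(feature_probs[test_idx])))
        aucs.append(roc_auc_score(stages[test_idx], clf.predict_proba(feature_probs[test_idx])[:, 1]))

    print(f"accuracy: {np.mean(accs):.3f} +/- {np.std(accs):.3f}")
    print(f"AUC:      {np.mean(aucs):.3f} +/- {np.std(aucs):.3f}")

Replacing the placeholder inputs with real per-image feature probabilities and ordinal stage labels would reproduce the kind of comparison reported in the abstract.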
Contents
摘要 (Abstract in Chinese)
Abstract
Acknowledgment
Chapter 1 General Introduction
1.1 Generalities
1.2 Motivation
1.3 Delimitation
Chapter 2 Medical background
2.1 Cervical cancer and cervical precancerous lesion
2.2 Structure of cervix
Chapter 3 Literature review on related methods
3.1 Deep neural network
3.2 Logistic regression
Chapter 4 Material and methodology
4.1 Introduction of data
4.2 Data preprocessing
4.2.1 Data cleaning
4.2.2 Image processing
4.3 Feature selection and creation
4.3.1 Feature selection
4.3.2 Deep neural network architecture
4.4 Verification and setup
4.4.1 Hypothesis test
4.4.2 Proportional odds model
Chapter 5 Performance and comparison
5.1 Performance
5.2 Comparison
Chapter 6 Discussion
Chapter 7 Conclusion
Reference
Appendix A
Appendix B
Appendix C
Appendix D
Appendix E