National Digital Library of Theses and Dissertations in Taiwan

Detailed Record

Author: Liang, Ding-Neng (梁定能)
Title (Chinese): 基於深度學習之牙齒辨識與根尖片排列
Title (English): Deep Learning for Teeth Recognition, Numbering and Automatically Place Periapical Films
Advisor: Chang, Ming-Feng (張明峰)
Committee: Chang, Ming-Feng; Wang, Tsai-Pei; Lin, Yun-Wei; Lin, Szu-Yin
Oral defense date: 2021-07-22
Degree: Master's
Institution: National Yang Ming Chiao Tung University
Department: Institute of Computer Science and Engineering
Discipline: Engineering / Electrical and Computer Engineering
Document type: Academic thesis
Year of publication: 2021
Academic year of graduation: 109
Language: Chinese
Pages: 54
Keywords: Deep Learning; Periapical Film; Teeth Recognition; Teeth Numbering; Deep Residual Convolutional Networks
Usage statistics:
  • Cited by: 0
  • Views: 105
  • Downloads: 12
  • Bookmarked: 0
The shape, number, and position of teeth in periapical radiographs are key evidence a dentist relies on when assessing a patient's condition. However, prolonged stress, fatigue, and lack of experience often lead to human error during diagnosis. Providing dentists with additional interpretive information to reduce the impact of such errors can therefore meaningfully improve the quality of patient care. Using dental images and annotations provided by UT Health, this thesis applies deep learning techniques to extract individual teeth from periapical films, to predict each tooth's number and type, and then, based on the predicted numbers, to place each film at the correct position in a full-mouth periapical series, with the goal of supporting dentists in giving patients more accurate interpretation and diagnosis.

We first apply image segmentation to the patients' periapical films to locate individual teeth. Because medical image data are scarce, we augment the training data during preprocessing by rotating the annotated bounding boxes. Ordinary convolutional neural networks do not handle rotation well, so adjusting the bounding-box angles reduces errors caused by tilted teeth. We then use a neural network model to predict each tooth's number and type, and correct problematic predictions based on the prediction results. Finally, using the corrected predictions together with dentist-provided mappings between placement slots and tooth numbers, and taking into account which teeth each slot covers and each slot's center point, we place the films onto the full-mouth periapical series and compare our placements with those made by dentists to evaluate the results.
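The rotation-based augmentation step can be sketched as follows. This is an illustrative simplification, not the thesis's implementation: the thesis rotates annotated bounding boxes by arbitrary angles, while this sketch uses right-angle rotations only, which NumPy supports without interpolation.

```python
import numpy as np

def augment_with_rotations(crop, angles=(0, 90, 180, 270)):
    """Return rotated copies of a tooth crop.

    `crop` is an H x W array (a cropped tooth region). Each right-angle
    rotation yields one extra training sample, multiplying the effective
    size of a small medical-imaging dataset.
    """
    return [np.rot90(crop, k=a // 90) for a in angles]

# Hypothetical 4x6 "tooth crop" standing in for real image data
crop = np.arange(24).reshape(4, 6)
variants = augment_with_rotations(crop)
assert len(variants) == 4
assert variants[1].shape == (6, 4)  # a 90-degree rotation swaps H and W
```

In the thesis's setting the rotation is applied to the bounding box before cropping, so the network sees teeth at many tilt angles without ever having to learn rotation invariance on its own.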

With rotation-based data augmentation and a multi-task training scheme that jointly learns the two related tasks of tooth numbering and tooth type classification, our tooth numbering accuracy reaches 87%; after correction based on the relative positions of teeth, it rises to 94.9%. The type accuracies for molars, premolars, canines, and anterior teeth reach 99%, 98%, 95%, and 99%, respectively, and the film placement accuracy reaches 97.5%.
The shape, number, and position of teeth are the main targets a dentist examines when screening for a patient's problems on periapical films. Excessive time pressure, fatigue, and limited experience often lead to human errors, so providing more interpretable information can help improve the quality of care. This thesis uses dental periapical films and corresponding annotations provided by UT Health. The overall pipeline, based on deep learning techniques, extracts important information from periapical radiographs, such as the shape and number of each tooth. Based on these predictions, we place each patient's radiographs at the corresponding positions of the panoramic view. Our results can support dentists in making more precise diagnoses for patients.

First, we apply U-Net to extract teeth from periapical films. It is generally hard for a convolutional neural network to recognize teeth at different tilt angles, so we include teeth at various tilt angles in the training data to reduce the error rate. Next, we use neural networks to identify each tooth's number and type, and then correct the results by considering the relative positions of the teeth within a film. Finally, we place a patient's images onto the panoramic view according to the corrected numbering, and compare our placements with those made by dentists to evaluate the results.
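The multi-task prediction step — one shared feature extractor feeding two classification heads, one for tooth number and one for tooth type — can be sketched with plain NumPy. This is a minimal sketch under stated assumptions: the features stand in for a CNN (e.g. ResNet) backbone output, the weights are random rather than trained, and the class counts (32 numbers, 4 types) are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

# Shared feature vector (stand-in for the CNN backbone output)
D, N_NUMBERS, N_TYPES = 128, 32, 4
features = rng.standard_normal(D)

# Two task-specific linear heads on the same shared features
W_num = rng.standard_normal((D, N_NUMBERS))
W_type = rng.standard_normal((D, N_TYPES))

p_number = softmax(features @ W_num)   # distribution over tooth numbers
p_type = softmax(features @ W_type)    # distribution over molar/premolar/canine/incisor

# Multi-task loss: sum of the two cross-entropies (labels here are hypothetical)
y_num, y_type = 14, 2
loss = -np.log(p_number[y_num]) - np.log(p_type[y_type])
```

Because the two tasks share the backbone, gradients from the easier type-classification task can help shape features that also improve the harder numbering task — the mechanism the abstract credits for the 87% initial numbering accuracy.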

By identifying the number and type of each tooth at the same time, our initial numbering accuracy is 87%, and the type accuracies for molar, premolar, incisor, and canine are 98%, 96%, 93%, and 96%, respectively. After the correction process, the tooth numbering accuracy reaches 94.9%, and the type accuracies for molar, premolar, incisor, and canine reach 99%, 98%, 99%, and 95%, respectively. Based on the corrected numbering, the accuracy of film placement reaches 97.5%.
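The correction step exploits the fact that teeth in one periapical film are adjacent, so their numbers should increase by one from left to right. A rule-based sketch of this idea (illustrative only; the thesis's exact correction rule is not reproduced here, and the input format is hypothetical):

```python
from collections import Counter

def correct_numbering(preds):
    """Correct per-tooth number predictions within one film.

    `preds` is a list of (x_center, predicted_number) for the teeth
    detected in a single periapical film. Since adjacent teeth should
    be numbered consecutively left to right, we pick the offset
    (number minus left-to-right rank) that most predictions agree on
    and renumber every tooth from that consensus.
    """
    order = sorted(range(len(preds)), key=lambda i: preds[i][0])
    offsets = Counter(preds[i][1] - rank for rank, i in enumerate(order))
    base = offsets.most_common(1)[0][0]
    corrected = [None] * len(preds)
    for rank, i in enumerate(order):
        corrected[i] = base + rank
    return corrected

# Three adjacent teeth; the middle detection's prediction (29) is inconsistent
film = [(10.0, 21), (55.0, 29), (30.0, 22)]
print(correct_numbering(film))  # → [21, 23, 22]: the outlier 29 becomes 23
```

This kind of consensus renumbering is one way a single misclassified tooth can be outvoted by its neighbors, which is consistent with the jump from 87% to 94.9% numbering accuracy reported above.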
Abstract (Chinese)
Abstract (English)
Acknowledgements
Table of Contents
List of Tables
List of Figures
1. Introduction
1.1 Background
1.2 Motivation
1.3 Objectives
2. Related Work
2.1 2D Convolutional Neural Network
2.2 Deep Residual Networks
2.3 U-Net
2.4 Related Studies
3. Methods
3.1 Data Preprocessing
3.1.1 Tooth Data
3.1.2 Data Augmentation
3.1.3 Tooth Numbering System and Tooth Types
3.1.4 Full-Mouth Periapical Series
3.1.5 Segmentation Data
3.2 Pipeline Design
3.2.1 Tooth Image Segmentation
3.2.2 Tooth Number Prediction
3.2.3 Tooth Number Correction
3.2.4 Full-Mouth Periapical Series Placement
4. Experimental Results
4.1 Tooth Type and Number Prediction and Correction
4.1.1 Annotated Image Data
4.1.2 Automatically Segmented Images
4.2 Full-Mouth Periapical Series Placement
4.3 Overall Comparison
5. Conclusion and Future Work
References
Appendix