Author: 宋怡萱
Author (English): Yi-Syuan Sung
Title (Chinese): 開發具深度學習應用於自動追蹤耳膜功能之數位耳鏡於中耳炎輔助系統
Title (English): Implementation of a Digital Otoscope with Deep Learning for an Automatic Tracking Function in Otitis Media Assisted System
Advisor: 林澂
Advisor (English): Chen Lin
Degree: Master's
Institution: National Central University
Department: Department of Biomedical Sciences and Engineering
Discipline: Engineering
Field: Biomedical Engineering
Thesis type: Academic thesis
Year of publication: 2019
Graduation academic year: 107 (ROC calendar)
Language: Chinese
Number of pages: 82
Keywords (Chinese): 中耳炎 (otitis media), 耳膜 (eardrum), 耳鏡 (otoscope)
Keywords (English): Otitis media (OM), Eardrum, Otoscope
Otitis media is a disease commonly seen in children, usually arising as a complication of the common cold. According to epidemiological statistics, up to 80% of children develop otitis media before the age of 5, and 46% of them have had acute otitis media more than three times, which makes the diagnosis of otitis media very challenging. However, parents easily confuse the symptoms of otitis media with those of an ordinary cold, fail to respond in time, and miss the golden window for treatment. If an otoscope-assisted diagnosis system were available at home, abnormalities of the eardrum could be observed promptly and treatment started in time.
This study proposes a semi-automatic eardrum tracking algorithm combined with a certified digital otoscope for use in a home-care setting. Because non-professional users lack a medical background and knowledge of anatomy, the system provides a user-guidance interface that helps the user capture a complete image of the eardrum. An outline of the eardrum is drawn on the screen so the user can recognize its shape, and an arrow indicates the direction in which the otoscope should be moved; once the eardrum occupies a sufficient proportion of the total image area, a complete eardrum image is captured. Our results show that the semi-automatic eardrum tracking algorithm captures eardrum images with 90.43% accuracy: 95.66% for normal eardrum images, 84.92% for acute otitis media (AOM), 87.88% for chronic otitis media (COM), and 84.11% for otitis media with effusion (OME). For back-end image recognition, we also introduce deep learning, using the FCN-AlexNet and FCN-VGG16 semantic segmentation models to optimize eardrum image segmentation so that the computer automatically learns to obtain the best eardrum image for feature extraction and automatic classification.
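The abstract gives no code, but the capture rule it describes (segment a candidate eardrum region, compare its area with the whole frame, and point a guide arrow toward the region's centroid) can be illustrated with a short sketch. The following Python/OpenCV snippet is only an illustration under stated assumptions and is not the thesis implementation: the HSV threshold, the 0.3 area-ratio cutoff, and the function name analyze_frame are hypothetical.

# Minimal sketch (not the thesis implementation) of the capture decision described
# above: segment a candidate eardrum region, compare its area with the whole frame,
# and derive an arrow direction from the contour centroid. The HSV threshold and
# AREA_RATIO_THRESHOLD are illustrative assumptions.
import cv2

AREA_RATIO_THRESHOLD = 0.3  # assumed fraction of the frame treated as "complete eardrum"

def analyze_frame(frame_bgr):
    h, w = frame_bgr.shape[:2]
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    # Assumed threshold: bright, low-saturation pixels as eardrum candidates.
    mask = cv2.inRange(hsv, (0, 0, 80), (180, 120, 255))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return {"capture": False, "direction": None}
    eardrum = max(contours, key=cv2.contourArea)            # largest candidate region
    area_ratio = cv2.contourArea(eardrum) / float(h * w)    # eardrum area vs. whole frame
    m = cv2.moments(eardrum)
    if m["m00"] == 0:
        return {"capture": False, "direction": None}
    cx, cy = m["m10"] / m["m00"], m["m01"] / m["m00"]       # contour centroid
    dx, dy = cx - w / 2, cy - h / 2                         # offset from the image center
    if abs(dx) > abs(dy):
        direction = "right" if dx > 0 else "left"
    else:
        direction = "down" if dy > 0 else "up"
    return {"capture": area_ratio >= AREA_RATIO_THRESHOLD,
            "area_ratio": area_ratio,
            "direction": direction}

In this sketch the arrow direction is simply the side of the frame on which the candidate centroid lies; the thesis additionally describes using image brightness differences to guide the user, which is not reproduced here.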
The smart otoscope is combined with a mobile app to provide a guided interface for photographing the eardrum, so that users can operate the otoscope efficiently and capture high-quality eardrum images. This lets users perform timely detection and continuous monitoring of the inside of the ear at home with a portable device. Adding machine learning allows the computer to learn automatically and help diagnose eardrum diseases, assisting physicians in giving appropriate treatment, reducing recurrence, and avoiding hearing impairment and delayed language development in children.
Otitis media is defined as an infection of the middle ear. Acute otitis media (AOM) is one of the most common infections in children under 15 years of age. According to epidemiological studies, more than 60% of children have an episode of otitis media before one year of age, more than 80% have at least one episode by the age of 5, and 46% of them have had acute otitis media more than three times. The diagnosis of otitis media in children is therefore very challenging. However, many parents confuse otitis media with a common cold, and only half of the patients with otitis media develop a fever. When children cannot describe the symptoms of otitis media, parents often ignore them, and even physicians other than otorhinolaryngologists can misjudge them, losing the golden window for treatment. An otoscope-assisted diagnosis system at home would allow timely observation of whether the eardrum is abnormal.
Therefore, we propose a semi-automatic eardrum tracking function implemented in the device, which guides the user to capture the complete eardrum based on an eardrum illustration. We sketch the outline of the eardrum on the screen so that the user can recognize its shape, and we add a guide arrow that shows the user which direction to move to find the eardrum. Finally, the decision to capture is made according to the ratio of the eardrum area to the total image area. Our results demonstrate that this semi-automatic eardrum tracking algorithm captures the complete eardrum with 90.43% accuracy over all images: 95.66% for normal images, 84.92% for AOM images, 87.88% for COM images, and 84.11% for OME images. In the back-end image recognition analysis, we also apply deep learning, using the FCN-AlexNet and FCN-VGG16 models to optimize eardrum image segmentation, so that the computer automatically learns to obtain the best and most complete eardrum image for feature extraction and automatic classification of the eardrum.
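For the deep-learning segmentation step, the general shape of an FCN-AlexNet-style model can be sketched as follows. This is a minimal sketch under assumptions, not the thesis model: it reuses torchvision's AlexNet convolutional layers as the backbone, scores the classes (eardrum vs. background) with a 1x1 convolution, and upsamples with bilinear interpolation, whereas the original FCN uses learned transposed convolutions; the class name FCNAlexNetSketch is hypothetical.

# Minimal sketch (not the thesis code) of an FCN-style segmentation model built on
# an AlexNet backbone, in the spirit of the FCN-AlexNet model mentioned above.
import torch
import torch.nn as nn
import torch.nn.functional as F
import torchvision

class FCNAlexNetSketch(nn.Module):
    def __init__(self, num_classes=2):              # eardrum vs. background
        super().__init__()
        backbone = torchvision.models.alexnet()      # convolutional feature extractor
        self.features = backbone.features            # fully convolutional layers only
        self.score = nn.Conv2d(256, num_classes, kernel_size=1)  # 1x1 class scores

    def forward(self, x):
        size = x.shape[-2:]
        feats = self.features(x)                     # coarse feature map
        scores = self.score(feats)                   # per-pixel class scores (coarse)
        # Upsample the scores back to the input resolution for dense prediction.
        return F.interpolate(scores, size=size, mode="bilinear", align_corners=False)

# Usage: logits for a batch of otoscope frames (assumed 3x224x224 RGB tensors).
model = FCNAlexNetSketch()
logits = model(torch.randn(1, 3, 224, 224))          # shape: (1, 2, 224, 224)
mask = logits.argmax(dim=1)                          # predicted eardrum mask

An FCN-VGG16 variant would follow the same pattern with a VGG16 backbone; training details, loss, and data pipeline are described in the thesis itself and are not reproduced here.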
The smart otoscope is combined with a mobile app that provides an eardrum capture guide interface, so that the user can operate the otoscope efficiently and obtain high-quality eardrum photographs. The smart otoscope helps parents detect and continuously monitor the internal structures of the ear in time. Through machine learning, the system can help diagnose eardrum conditions so that appropriate treatment can be given to reduce recurrence, avoiding hearing loss and delayed language development in children.
Chinese Abstract I
English Abstract II
Acknowledgments IV
Table of Contents V
List of Figures VII
List of Tables X
Chapter 1 Introduction 1
1-1 Research Background and Motivation 1
1-2 Research Objectives 3
1-3 Thesis Organization 4
Chapter 2 Literature Review 5
2-1 The Eardrum and Classification of Otitis Media 5
2-2 Image Processing 10
2-2-1 Color Space Conversion 12
2-2-2 Image Binarization 15
2-2-3 FloodFill Algorithm 17
2-2-4 Contour Extraction and Shape Description 18
2-3 Background on Deep Learning 20
2-3-1 Deep Learning in Computer Vision 21
2-3-2 Convolutional Neural Network (CNN) 22
2-3-3 Fully Convolutional Network (FCN) Semantic Segmentation Model 25
Chapter 3 Methods 27
3-1 Data Collection Methods and Procedures 27
3-2 Semi-Automatic Eardrum Tracking Method 29
3-2-1 Image Preprocessing 29
3-2-2 Distinguishing the Eardrum from the Ear Canal 31
3-2-3 Contour Extraction and Capture 31
3-3 Eardrum Capture Guidance Method 33
3-3-1 Computing the Centroid of the Eardrum Contour 33
3-3-2 Using Image Brightness Differences to Indicate Eardrum Direction 34
3-4 Deep-Learning-Based Image Tracking 35
3-4-1 Selecting Frames from Video 35
3-4-2 Segmentation Labeling 38
3-4-3 FCN-AlexNet Semantic Segmentation Model 39
3-4-4 FCN-VGG16 Semantic Segmentation Model 44
Chapter 4 Results and Discussion 48
4-1 Semi-Automatic Eardrum Capture Results 48
4-2 Eardrum Capture Guidance Interface 50
4-2-1 Eardrum Outline Diagram 50
4-2-2 Arrow Guide Indicator 51
4-2-3 Recognition Indicator 52
4-3 Eardrum Recognition Analysis Results 53
4-3-1 Comparison of the Eardrum Guidance Function and Deep-Learning Optimization 60
Chapter 5 Conclusions and Future Work 64
5-1 Conclusions 64
References 65