National Digital Library of Theses and Dissertations in Taiwan (臺灣博碩士論文加值系統)

Detailed Record
Author: Yong-Jhih Huang (黃雍智)
Title (Chinese): 基於集成式深度學習建構濾波後之心音頻譜與倒譜預測模型
Title (English): Applying Deep Learning and Ensemble Learning to Construct Spectrum and Cepstrum of Filtered Phonocardiogram Prediction Model
Advisor: 蔡孟勳
Oral defense committee: 曾新穆, 楊谷洋, 楊谷章
Oral defense date: 2018-07-09
Degree: Master
Institution: National Chung Hsing University (國立中興大學)
Department: Department of Information Management
Discipline: Computing
Field: General Computing
Document type: Academic thesis
Year of publication: 2018
Academic year of graduation: 106 (2017–2018)
Language: Chinese
Pages: 56
Keywords (Chinese): 冠狀動脈疾病, 心音圖, 濾波器, 卷積神經網路, 集成學習
Keywords (English): Coronary Artery Disease, Phonocardiogram, Filter, Convolutional Neural Network, Ensemble Learning
Times cited: 0 · Views: 212 · Downloads: 0 · Bookmarked: 1
Coronary artery disease, also called ischemic heart disease, is a common chronic disease: myocardial dysfunction or lesions caused by insufficient blood supply to the heart. It kills countless people worldwide every year and has in recent years ranked first among the global top ten causes of death. Cardiac auscultation remains one of the key examinations for diagnosing heart disease, and many heart conditions can be identified effectively by a physician's auscultation; however, auscultation depends on the physician's subjective experience. This study therefore uses phonocardiograms to build an automatic classification model that provides objective diagnostic results, assisting physicians in clinical heart-sound diagnosis, with the further aim of deployment in rural areas lacking medical resources.
This study proposes an automatic phonocardiogram classification method that combines filters with deep learning and ensemble learning. The method proceeds as follows. First, the phonocardiograms are filtered with Savitzky-Golay and Butterworth filters. Second, the filtered signals are converted into spectrograms and cepstra using the short-time Fourier transform and the discrete cosine transform. Third, convolutional neural networks are trained to build phonocardiogram prediction models. Fourth, two ensemble strategies are used to build ensemble classifiers for prediction. Fifth, the numbers of positive and negative samples are balanced to raise the model's sensitivity. The experimental results show that the proposed method reaches a level comparable to other challenge models and is highly competitive: on the hold-out test set, MAcc is 86.04% (86.46% sensitivity, 85.63% specificity), and under 10-fold cross-validation, MAcc is 89.81% (91.73% sensitivity, 87.91% specificity).
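The first two steps above (Savitzky-Golay and Butterworth filtering, then STFT and DCT to obtain a spectrogram and a cepstrum) can be sketched with SciPy. All concrete parameter values here (sampling rate, filter order, window length, band edges, FFT size) are illustrative assumptions, not the thesis's actual settings.

```python
import numpy as np
from scipy.fft import dct
from scipy.signal import savgol_filter, butter, filtfilt, stft

def pcg_features(x, fs=2000):
    """Turn a raw phonocardiogram into a log-spectrogram and a cepstrum.

    Parameters are placeholders for illustration; the thesis's own
    filter orders, windows, and cutoffs may differ.
    """
    # Step 1a: Savitzky-Golay smoothing to suppress high-frequency noise.
    x = savgol_filter(x, window_length=11, polyorder=3)
    # Step 1b: Butterworth band-pass; 25-400 Hz is a typical heart-sound
    # band (an assumption, not taken from the thesis).
    b, a = butter(4, [25, 400], btype="bandpass", fs=fs)
    x = filtfilt(b, a, x)
    # Step 2a: short-time Fourier transform -> log-magnitude spectrogram.
    _, _, Z = stft(x, fs=fs, nperseg=256)
    log_spec = np.log(np.abs(Z) + 1e-10)
    # Step 2b: DCT of the log spectrum along frequency -> cepstral features.
    cepstrum = dct(log_spec, axis=0, norm="ortho")
    return log_spec, cepstrum
```

The resulting 2-D feature maps are what a convolutional network can then consume as image-like inputs.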
Coronary artery disease, also known as ischemic heart disease, is a common chronic disease: cardiac dysfunction caused by insufficient blood supply to the heart. It kills countless people around the world every year and in recent years has ranked first among the world's top ten causes of death. Cardiac auscultation is still an important examination for diagnosing heart disease, and many heart diseases can be diagnosed effectively by auscultation; however, auscultation relies on the subjective experience of physicians. To provide objective diagnoses and assist physicians with clinical heart-sound diagnosis, this study uses phonocardiograms to build an automatic classification model.

This study proposes an automatic classification approach for phonocardiograms using deep learning and ensemble learning with filters. The steps of the approach are as follows. First, Savitzky-Golay and Butterworth filters are applied to the phonocardiograms. Second, the phonocardiograms are converted into spectrograms and cepstrums using the short-time Fourier transform and the discrete cosine transform. Third, convolutional neural networks are trained to build phonocardiogram classification models. Fourth, two ensemble strategies are used to build ensemble models. Lastly, the numbers of positive and negative samples are balanced to increase the sensitivity of the model. The experimental results show that the proposed method is very competitive: the phonocardiogram classification model achieves 86.04% MAcc (86.46% sensitivity, 85.63% specificity) on the hold-out test set and 89.81% MAcc (91.73% sensitivity, 87.91% specificity) under 10-fold cross-validation.
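MAcc as reported here follows the PhysioNet/CinC 2016 challenge convention: the arithmetic mean of sensitivity and specificity. A minimal check against the abstract's figures:

```python
def macc(sensitivity: float, specificity: float) -> float:
    """Mean accuracy (MAcc), PhysioNet/CinC 2016 style:
    the arithmetic mean of sensitivity and specificity."""
    return (sensitivity + specificity) / 2.0

# Hold-out figures from the abstract: (86.46 + 85.63) / 2 = 86.045,
# consistent with the reported 86.04% MAcc.
holdout = macc(86.46, 85.63)

# 10-fold figures: (91.73 + 87.91) / 2 = 89.82, close to the reported
# 89.81% (the small gap likely comes from per-fold averaging).
cv10 = macc(91.73, 87.91)
```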
Acknowledgments
Abstract (Chinese)
Abstract (English)
Table of Contents
List of Figures
List of Tables
Chapter 1  Introduction
  1.1 Research Background
  1.2 Motivation and Objectives
  1.3 Contributions
  1.4 Thesis Organization
Chapter 2  Literature Review
  2.1 Introduction to Heart Sounds
    2.1.1 What Are Heart Sounds
  2.2 The Phonocardiogram
  2.3 Related Work on Phonocardiogram Research
  2.4 Deep Learning Literature
    2.4.1 LeNet-5
    2.4.2 AlexNet
    2.4.3 VGGNet
  2.5 Data Source
Chapter 3  Methodology
  3.1 Method Overview
  3.2 Data Preprocessing
    3.2.1 Filters
    3.2.2 Z-score Standardization
    3.2.3 One-Hot Encoding
  3.3 Signal Processing and Feature Engineering
    3.3.1 Spectrogram
    3.3.2 Mel Spectrogram
    3.3.3 Mel-Frequency Cepstral Coefficients
  3.4 Deep Learning and Neural Network Architectures
  3.5 Ensemble Learning
    3.5.1 Sample-Based Ensembles
    3.5.2 Feature-Based Ensembles
Chapter 4  Experimental Results and Analysis
  4.1 Experimental Design and Procedure
    4.1.1 Experimental Procedure
    4.1.2 Experimental Environment
    4.1.3 Classification Model Evaluation
  4.2 Signal Processing and Feature Extraction
    4.2.1 Savitzky-Golay Filtering Results
    4.2.2 Butterworth Filtering Results
    4.2.3 Spectrogram and Cepstrum Feature Maps
  4.3 Deep Learning and Ensemble Learning Results
    4.3.1 Sample-Based Ensemble without Filters
    4.3.2 Sample-Based Ensemble with the SG Filter
    4.3.3 Sample-Based Ensemble with the Butterworth Filter
    4.3.4 Feature-Based Ensemble with the SG Filter
  4.4 Comparison with Other Studies
Chapter 5  Conclusion and Future Work
  5.1 Future Work
References