臺灣博碩士論文加值系統 (National Digital Library of Theses and Dissertations in Taiwan)

Detailed Record

Author: 徐偉恩
Author (English): WEI EN HSU
Title: 應用深度學習自動計算心胸比率於臨床判別
Title (English): Application of Deep Learning for Automated Cardiothoracic Ratio Calculation and Cardiomegaly Detection
Advisors: 莊政宏、王昭能
Advisors (English): CHUANG, CHENG-HUNG; WANG, CHAO-NENG
Committee members: 許承瑜、程大川、莊政宏、王昭能
Committee members (English): HSU, CHENG-YU; CHENG, DA-CHUAN; CHUANG, CHENG-HUNG; WANG, CHAO-NENG
Oral defense date: 2020-06-12
Degree: Master's
Institution: 亞洲大學 (Asia University)
Department: 資訊工程學系 (Computer Science and Information Engineering)
Discipline: Engineering
Field: Electrical and Computer Engineering
Thesis type: Academic thesis
Year of publication: 2020
Graduation academic year: 108
Language: Chinese
Number of pages: 59
Keywords (Chinese): Faster RCNN、物件偵測、胸腔 X 光、深度學習、醫學影像
Keywords (English): Faster RCNN, object detection, chest X-ray, deep learning, medical image
Usage statistics:
  • Cited by: 0
  • Views: 2355
  • Downloads: 1
  • Bookmarks: 0
Abstract:
Chest X-ray examination is one of the most frequently performed clinical examinations and a standard item in routine health check-ups. At present, however, chest X-rays are interpreted mainly by radiologists or attending physicians, and in a busy clinical setting findings can occasionally be missed. The aim of this study is therefore to build an automated image-interpretation system that assists clinicians in reading chest X-rays.

In chest X-ray interpretation, calculation of the cardiothoracic ratio (CTR) is a basic item. Currently, a physician draws a vertical reference line through the heart, measures the distance between the left and right borders of the heart, and divides it by the width of the thorax to obtain the CTR. The CTR helps determine whether a patient has cardiomegaly or hypertensive heart disease. A CTR greater than 0.6 indicates left ventricular hypertrophy and is also a prognostic indicator: patients with a CTR greater than 0.6 have higher mortality than those with a CTR below 0.6, and they are more prone to cardiovascular events such as myocardial infarction or stroke. Left ventricular hypertrophy is likewise a prognostic indicator for cardiovascular disease and heart failure, with mortality rising in patients whose CTR exceeds 0.6.

In dialysis patients, a CTR greater than 0.6 is an independent prognostic indicator, so the therapeutic goal is to bring the CTR down. The Taiwan Society of Nephrology requires that all dialysis patients be examined once a year. If the CTR exceeds 0.6, as much excess fluid as possible is removed (the patient is asked to drink less water and the ultrafiltration volume during dialysis is increased); if this management succeeds, the CTR falls. Conversely, if a patient's CTR is below 0.4, the ultrafiltration volume can be reduced. Sometimes a patient's CTR is below 0.6 but keeps rising across the semi-annual images, which also indicates fluid overload and calls for stricter fluid restriction or a larger ultrafiltration volume during dialysis.

This study uses Faster R-CNN to replace the tedious manual measurement of the CTR, automates the subsequent calculation steps, and uses statistical tools to verify that the error of the model-predicted CTR lies within an acceptable range.
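To make the calculation concrete, the following is a minimal sketch (Python; the box format, variable names, numbers, and threshold handling are illustrative assumptions, not code from the thesis) of how a CTR could be derived from the heart and thorax bounding boxes produced by a detector such as Faster R-CNN, and how Bland-Altman limits of agreement could be computed between model-predicted and physician-measured ratios.

import numpy as np

def cardiothoracic_ratio(heart_box, thorax_box):
    # Boxes are (x_min, y_min, x_max, y_max) in image pixel coordinates.
    # The bounding-box widths stand in for the manual caliper measurement:
    # maximal horizontal heart width divided by maximal internal thoracic width.
    heart_width = heart_box[2] - heart_box[0]
    thorax_width = thorax_box[2] - thorax_box[0]
    return heart_width / thorax_width

def bland_altman_limits(predicted, measured):
    # Bias (mean difference) and 95% limits of agreement between
    # model-predicted and physician-measured CTR values.
    diff = np.asarray(predicted, dtype=float) - np.asarray(measured, dtype=float)
    bias = diff.mean()
    sd = diff.std(ddof=1)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# Hypothetical detector output for one chest X-ray (pixel coordinates).
heart = (210, 260, 390, 430)
thorax = (120, 150, 480, 560)
ctr = cardiothoracic_ratio(heart, thorax)          # 180 / 360 = 0.50
print(f"CTR = {ctr:.2f}; exceeds 0.6 threshold: {ctr > 0.6}")

# Hypothetical paired measurements for a small validation set.
preds = [0.52, 0.61, 0.47, 0.58, 0.63]
docs = [0.50, 0.63, 0.45, 0.57, 0.60]
bias, (low, high) = bland_altman_limits(preds, docs)
print(f"bias = {bias:.3f}; 95% limits of agreement = ({low:.3f}, {high:.3f})")

The thesis then checks whether such limits of agreement fall within a clinically acceptable error band; the numbers above are invented purely to make the snippet runnable.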
Table of Contents
Table of Contents
List of Figures
List of Tables
Abstract (Chinese)
Abstract (English)
Chapter 1  Introduction
1.1 Research Background
1.2 Research Motivation
1.3 Research Objectives
Chapter 2  Literature Review
2.1 Medical Imaging
2.2 R-CNN, Fast R-CNN, Faster R-CNN
2.3 Combining Deep Learning with Clinical Experience
Chapter 3  Methods
3.1 Dataset Source
3.1.1 Data Statistics
3.1.2 Training Data Annotation
3.1.3 Medical Image Processing
3.2 Faster R-CNN
3.3 Stage 1: VGG16
3.3.1 Introduction to Convolutional Neural Networks (CNN)
3.3.2 Convolution Layer (conv)
3.3.3 Activation Functions
3.3.4 Pooling Layer
3.3.5 Fully Connected Layers (FC)
3.4 Stage 2: RPN
3.4.1 Anchors
3.4.2 Intersection over Union (IoU)
3.4.3 RPN Program Flow
3.5 Stage 3: Region of Interest Pooling (RoI Pooling)
3.5.1 Non-Maximum Suppression (NMS)
3.5.2 RoI Pooling Program Flow
3.6 Stage 4: FC Classification and Regression Layers (Fast R-CNN)
3.7 Faster R-CNN Loss Functions
3.7.1 Common Loss (Cost) Functions
3.7.2 Cross Entropy
3.7.3 Smooth L1 Function
3.7.4 RPN Classification Loss
3.7.5 Fast R-CNN Classification Loss
3.7.6 RPN and Fast R-CNN Bounding-Box Regression Loss
3.8 Training the Network
3.9 Clinical Evaluation of Model Predictions (Bland-Altman)
Chapter 4  Empirical Analysis
4.1 Network Performance Evaluation Results
4.2 Agreement Analysis
4.3 Correlation Analysis
Chapter 5  Conclusions and Suggestions
References

List of Figures
Figure 1.1. Number of dialysis patients at year end, ROC years 97–105 (2008–2016). Source: National Health Insurance Administration, Ministry of Health and Welfare
Figure 1.2. Number of new dialysis patients, ROC years 97–105 (2008–2016). Source: National Health Insurance Administration, Ministry of Health and Welfare
Figure 1.3. Number of dialysis patients, ROC years 97–105 (2008–2016). Source: National Health Insurance Administration, Ministry of Health and Welfare
Figure 3.1. Experimental flowchart
Figure 3.2. Preview of the NIH Chest X-ray Dataset
Figure 3.3. Disease category statistics of the dataset (PA views)
Figure 3.4. Disease category statistics of the dataset (AP views)
Figure 3.5. Statistics of no-finding cases in the dataset
Figure 3.6. Expanding a color image into its channels
Figure 3.7. Grayscale image of each channel
Figure 3.8. Converting a single-channel image to three channels
Figure 3.9. Data-debugging flowchart
Figure 3.10. Faster R-CNN network architecture
Figure 3.11. The convolution operation
Figure 3.12. The max pooling process
Figure 3.13. Architecture of a convolutional neural network
Figure 3.14. Illustration of the nine anchor sizes
Figure 3.15. Illustration of anchors tiling the feature map and being mapped back to the original image
Figure 3.16. Intuitive illustration of the reshape step
Figure 3.17. Proposal boxes after refinement and pruning
Figure 3.18. NMS pseudocode; source: [73]
Figure 3.19. Plots of the three functions
Figure 3.20. Illustration of a Bland-Altman plot
Figure 4.1. RPN bounding-box loss
Figure 4.2. RPN classification loss
Figure 4.3. Fast R-CNN bounding-box loss
Figure 4.4. Fast R-CNN classification loss
Figure 4.5. Total loss of the full Faster R-CNN
Figure 4.6. Bland-Altman plot
Figure 4.7. Scatter plot
Figure 4.8. Ground-truth boxes for the heart and thorax
Figure 4.9. Predicted boxes for the heart and thorax

List of Tables
Table 3.1. Plots and formulas of common activation functions
Table 4.1. Best metrics with VGG16 as the backbone
Table 4.2. Best metrics with ResNet as the backbone
Table 4.3. Bland-Altman analysis of model predictions versus physicians
Table 4.4. Correlation coefficients


References
1.H. Abe, et al., Computer-aided diagnosis in chest radiography: results of large-scale observer tests at the 1996-2001 RSNA scientific assemblies. Radiographics, 2003. 23(1): p. 255-65.
2.S. Katsuragawa and K. Doi, Computer-aided diagnosis in chest radiography. Comput Med Imaging Graph, 2007. 31(4-5): p. 212-23.
3.B. van Ginneken, B. M. ter Haar Romeny, and M. A. Viergever, Computer-aided diagnosis in chest radiography: a survey. IEEE Trans Med Imaging, 2001. 20(12): p. 1228-41.
4.G. Coppini, et al., A computer-aided diagnosis approach for emphysema recognition in chest radiography. Med Eng Phys, 2013. 35(1): p. 63-73.
5.B. van Ginneken, L. Hogeweg, and M. Prokop, Computer-aided diagnosis in chest radiography: beyond nodules. Eur J Radiol, 2009. 72(2): p. 226-30.
6.E. Bohn, et al., Predicting risk of mortality in dialysis patients: a retrospective cohort study evaluating the prognostic value of a simple chest X-ray. BMC Nephrol, 2013. 14: p. 263.
7.K. H. Chen, et al., Cardiothoracic ratio, malnutrition, inflammation, and two-year mortality in non-diabetic patients on maintenance hemodialysis. Kidney Blood Press Res, 2008. 31(3): p. 143-51.
8.R. R. Quinn, et al., Predicting the risk of 1-year mortality in incident dialysis patients: accounting for case-mix severity in studies using administrative data. Med Care, 2011. 49(3): p. 257-66.
9.R. Yotsueda, et al., Cardiothoracic Ratio and All-Cause Mortality and Cardiovascular Disease Events in Hemodialysis Patients: The Q-Cohort Study. Am J Kidney Dis, 2017. 70(1): p. 84-92.
10.T. H. Yen, et al., Cardiothoracic ratio, inflammation, malnutrition, and mortality in diabetes patients on maintenance hemodialysis. Am J Med Sci, 2009. 337(6): p. 421-8.
11.R. S. Loomba, et al., Cardiothoracic ratio for prediction of left ventricular dilation: a systematic review and pooled analysis. Future Cardiol, 2015. 11(2): p. 171-5.
12.K. Kajimoto, et al., Sex Differences in Left Ventricular Cavity Dilation and Outcomes in Acute Heart Failure Patients With Left Ventricular Systolic Dysfunction. Can J Cardiol, 2018. 34(4): p. 477-484.
13.Alex Krizhevsky, Ilya Sutskever, and Geoffrey E. Hinton, ImageNet Classification with Deep Convolutional Neural Networks. NIPS, 2012.
14.Jia Deng, et al., ImageNet: A Large-Scale Hierarchical Image Database. CVPR09, 2009.
15.D. H. Hubel and T. N. Wiesel, Receptive fields, binocular interaction, and functional architecture in the cat’s visual cortex. J. physiology (London), 1962.
16.Kunihiko Fukushima, Neocognitron: A self-organizing neural network model for a mechanism of pattern recognition unaffected by shift in position. Biology, 1980.
17.Y. LeCun, L. Bottou, and Y. Bengio, Gradient-Based Learning Applied to Document Recognition. IEEE, 1998.
18.J. Bouvrie, Notes on Convolutional Neural Networks. 2006.
19.C Szegedy, et al., Going deeper with convolutions. IEEE, 2015.
20.C. Szegedy, et al., Rethinking the Inception Architecture for Computer Vision. IEEE, 2016.
21.J. Redmon, et al., You Only Look Once: Unified, Real-Time Object Detection. IEEE, 2016.
22.J. Redmon and A. Farhadi, YOLO9000: Better, Faster, Stronger. IEEE, 2017.
23.J. Redmon and A. Farhadi, YOLOv3: An Incremental Improvement. 2018.
24.S. Ren, K. He, R. Girshick, and J. Sun, Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks. NIPS, 2015.
25.M. Hayden and P. J. Nacher, History and physical principles of MRI. 2016.
26.T. M. Blodgett, C. C. Meltzer, and D. W. Townsend, PET/CT: Form and Function. Radiology, 2007.
27.A. Assmus, Early History of X Rays. 1995.
28.JF. Havlice and JC. Taenzer, Medical Ultrasonic Imaging: An Overview of Principles and Instrumentation. Proc. IEEE, 1979.
29.JS. Burchfield, M. Xie, and JA. Hill, Pathological ventricular remodeling: mechanisms: part 1 of 2. Circulation, 2013.
30.J. Canny, A Computational Approach to Edge Detection. IEEE, 1986.
31.T. Ojala, M. Pietikainen, and D. Harwood, Performance Evaluation of Texture Measures with Classification Based on Kullback Discrimination of Distributions. IEEE, 1994.
32.R. M. Haralick, K. Shanmugam, and IH. Dinstein, Textural features for image classification. IEEE Trans. Med. Imaging, 1973.
33.A. Hunter, et al., Elements of Morphology: Standard Terminology for the Ear. Am. J. Med. Genet., 2009.
34.JE. Allanson, et al., Elements of Morphology: Standard Terminology for the Head and Face. Am. J. Med. Genet., 2009.
35.JE. Allanson, et al., Elements of Morphology: Introduction. Am. J. Med. Genet., 2009.
36.Y. Furukawa and J. Ponce, Accurate, Dense, and Robust Multi-View Stereopsis. CVPR, 2007.
37.Z. Lei and Z. Yi, Big data analysis by infinite deep neural networks. J Res Dev, 2016.
38.ZH. Ling, et al., Deep Learning for Acoustic Modeling in Parametric Speech Generation. IEEE Trans. Signal Process., 2015.
39.D. Amodei, et al., Deep Speech 2: End-to-End Speech Recognition in English and Mandarin. 2016.
40.A. Hannun, et al., Deep speech: Scaling up end-to-end speech recognition. arXiv:1412.5567v2, 2014.
41.T. Young, D. Hazarika, and S. Poria, Recent Trends in Deep Learning Based Natural Language Processing. IEEE, 2018.
42.R. Collobert, et al., Natural Language Processing (Almost) from Scratch. JMLR, 2011.
43.S. Sun, C. Luo, and J. Chen, A review of natural language processing techniques for opinion mining systems. Inf Fusion, 2017.
44.M. Narvekar and P. Fargose, Weather Forecasting Using Artificial Neural Network. IJCA, 2015.
45.SD. Sawaitul, KP. Wagh, and PN. Chatur, Classification and Prediction of Future Weather by using Back Propagation Algorithm-An Approach. IJETAE, 2012.
46.AG. Salman, B. Kanigoro, and Y. Heryadi, Weather forecasting using deep learning techniques. ICACSIS 2015, 2015.
47.B. Lyu and A. Haque, Deep Learning Based Tumor Type Classification Using Gene Expression Data. Proc. ACM Int. Conf. Bioinf. Comput. Biol. Health Informat. (BCB), 2018.
48.P. Danaee, R. Ghaeini, and DA. Hendrix, A deep learning approach for cancer detection and relevant gene identification. PSB, 2017.
49.Y. Chen, et al., Gene expression inference with deep learning. Bioinformatics, 2016. 32(12): p. 1832–1839.
50.C. Tu, et al., Network representation learning: an overview. 2017.
51.D. Shen, G. Wu, and HI. Suk, Deep Learning in Medical Image Analysis. Annu. Rev. Biomed. Eng, 2017.
52.JG. Lee, et al., Deep Learning in Medical Imaging: General Overview. KJR, 2017.
53.A. Hosny, C. Parmar, and J. Quackenbush, Artificial intelligence in radiology. Nat Rev Cancer, 2018.
54.JZ. Cheng, et al., Computer-Aided Diagnosis with Deep Learning Architecture: Applications to Breast Lesions in US Images and Pulmonary Nodules in CT Scans. Scientific reports, 2016.
55.J. Shiraishi, et al., Computer-Aided Diagnosis and Artificial Intelligence in Clinical Imaging. Seminars in nuclear medicine, 2011.
56.HC. Shin, et al., Deep Convolutional Neural Networks for Computer-Aided Detection: CNN Architectures, Dataset Characteristics and Transfer Learning. IEEE Trans. Med. Imaging, 2016: p. 1285-1298.
57.O. Ronneberger, P. Fischer, and T. Brox, U-Net: Convolutional Networks for Biomedical Image Segmentation. Medical Image Computing and Computer-Assisted Intervention–MICCAI 2015, 2015.
58.R. Girshick, et al., Rich feature hierarchies for accurate object detection and semantic segmentation. CVPR, 2014.
59.J.R.R. Uijlings, et al., Selective Search for Object Recognition. IJCV, 2013.
60.CC. Chang and CJ. Lin, LIBSVM: A Library for Support Vector Machines. ACM Trans Intell Syst Technol, 2011.
61.R. Girshick, Fast R-CNN. ICCV, 2015.
62.K. He, et al., Spatial Pyramid Pooling in Deep Convolutional Networks for Visual Recognition. IEEE Trans. Pattern Anal. Mach. Intell, 2015.
63.R. Sa, et al., Intervertebral disc detection in X-ray images using faster R-CNN. EMBC (2017), 2017.
64.B. Pardamean, TW. Cenggoro, and R. Rahutomo, Transfer learning from chest X-ray pre-trained convolutional neural network for learning mammogram data. Procedia Comput. Sci., 135 (2018), 2018: p. 400-407.
65.A. Ismail, T. Rahmat, and S. Aliman, Chest X-ray image classification using Faster R-CNN. Malaysian J. Comput. Sci., 2019.
66.X. Wang, et al., ChestX-ray8: Hospital-scale Chest X-ray Database and Benchmarks on Weakly-Supervised Classification and Localization of Common Thorax Diseases. CVPR, 2017.
67.A. Dutta and A. Zisserman, The VIA Annotation Software for Images, Audio and Video. Proceedings of the 27th ACM International Conference on Multimedia, 2019.
68.K. Simonyan and A. Zisserman, Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556, 2014.
69.K. He, et al., Deep Residual Learning for Image Recognition. CVPR, 2016.
70.P. Murugan, Feed forward and backward run in deep convolution neural network. arXiv preprint arXiv:1711.03278, 2017.
71.D. C. Ciresan, et al., Flexible, high performance convolutional neural networks for image classification. IJCAI Proceedings-International Joint Conference on Artificial Intelligence, 2011. 22: p. 1237.
72.J. Schmidhuber, Deep learning in neural networks: An overview. Neural networks, 2015. 61: p. 85–117.
73.A. Neubeck and L. Van Gool, Efficient Non-Maximum Suppression. IEEE, 2006.
74.N. Bodla, et al., Improving Object Detection With One Line of Code. CVPR, 2017.
75.S. Ruder, An overview of gradient descent optimization algorithms. arXiv preprint arXiv:1609.04747, 2016.
76.JM. Bland and DG. Altman, Statistical methods for assessing agreement between two methods of clinical measurement. The lancet, 1986.

