
National Digital Library of Theses and Dissertations in Taiwan

Detailed Record
Author: 魏全奕
Author (English): WEI, CYUAN-YI
Title: 基於人工智慧判斷胸部X光影像肺部病灶區域之研究
Title (English): Study on the Detection of Lung Lesions in Chest X-ray Images Based on Artificial Intelligence
Advisor: 張軒庭
Advisor (English): CHANG, HSUAN-TING
Committee Members: 石勝文, 李宗錞, 何前程, 歐芷瑩
Committee Members (English): SHIH, SHENG-WEN; LEE, TSUNG-CHUN; HO, CHIAN-CHENG; Ou, Chih-Ying
Oral Defense Date: 2022-06-23
Degree: Master's
Institution: National Yunlin University of Science and Technology (國立雲林科技大學)
Department: Electrical Engineering
Discipline: Engineering
Field: Electrical Engineering and Computer Science
Document Type: Academic thesis
Publication Year: 2022
Graduation Academic Year: 110
Language: Chinese
Pages: 107
Keywords (Chinese): 人工智慧; 胸部X光影像; 肺結核; 深度學習; U-Net; 語義分割; 堆疊集成
Keywords (English): Artificial intelligence; Chest X-rays; Tuberculosis; Deep learning; U-Net; Semantic segmentation; Stacking ensemble
Usage statistics:
  • Cited by: 0
  • Views: 173
  • Downloads: 28
  • Bookmarked: 0
This thesis proposes a deep-learning-based method for chest X-ray (CXR) image analysis. The network mainly adopts the U-Net architecture to detect morphological lesion regions of tuberculosis, with added improvement mechanisms including an additive attention gate, dense connections, and a pyramid scene parsing (PSP) module. Because the dataset is small, image augmentation is applied to increase the number of image features and image enhancement is applied to strengthen the lesion features. The model architectures evaluated are U-Net, Attention U-Net, U-Net++, Attention U-Net++, and PSP Attention U-Net++; their parameters are optimized and compared, the best settings are selected according to each model's test results, and an ensemble method combines the model outputs into the predicted lesion segmentation images. The backbone networks use weights pre-trained on the ImageNet dataset. The experiments use 736 training images, 9 validation images, and 12 test images. The experimental results show that the proposed stacking ensemble method achieves a best mean intersection-over-union (Mean IoU) of 0.747, mean precision of 0.947, mean recall of 0.784, mean F1-score of 0.853, and accuracy of 1.0, outperforming any single deep learning network model. The model's predictions can serve as an auxiliary reference for physicians when identifying lesion regions in chest X-ray images.
This thesis presents a deep learning (DL) network-based approach for analyzing chest X-ray (CXR) images. In the proposed method, we mainly adopt the U-Net architecture to detect tuberculosis morphological lesion areas in CXR images. To improve accuracy, we also utilize mechanisms such as attention gates, dense connections, and pyramid spatial pooling (the PSP module). Because the amount of training data is small, we use data augmentation and feature enhancement to increase the number of images and the strength of features in the CXR images. The model architectures used in this study are U-Net, Attention U-Net, U-Net++, Attention U-Net++, and PSP Attention U-Net++, which are optimized and compared based on the test results of each model to find the best parameters. The final stage uses an ensemble approach that combines the various model outputs to obtain the predicted lesion segmentation masks. The backbone networks use weights pre-trained on the ImageNet dataset. The experimental dataset contains 736 training images, 9 validation images, and 12 test images. The experimental results show that the proposed stacking ensemble model achieves a maximum mean intersection-over-union (Mean IoU) of 0.747, mean precision of 0.947, mean recall of 0.784, mean F1-score of 0.853, and accuracy of 1.0, all better than those of any single deep learning network model. The proposed method can serve clinicians as a computer-aided diagnosis of tuberculosis lesions in lung X-ray images.
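The segmentation metrics reported in the abstract (Mean IoU, precision, recall, F1-score) can all be derived from the per-pixel confusion counts of a predicted mask against a ground-truth mask. A minimal NumPy sketch, not taken from the thesis (the function name and interface are illustrative):

```python
import numpy as np

def segmentation_metrics(pred, target):
    """Compute IoU, precision, recall, and F1 for one pair of binary masks.

    pred, target: arrays of the same shape; nonzero pixels count as lesion.
    """
    pred = pred.astype(bool)
    target = target.astype(bool)
    tp = np.logical_and(pred, target).sum()    # lesion pixels found
    fp = np.logical_and(pred, ~target).sum()   # false alarms
    fn = np.logical_and(~pred, target).sum()   # lesion pixels missed
    iou = tp / (tp + fp + fn) if (tp + fp + fn) else 1.0
    precision = tp / (tp + fp) if (tp + fp) else 1.0
    recall = tp / (tp + fn) if (tp + fn) else 1.0
    f1 = (2 * precision * recall / (precision + recall)
          if (precision + recall) else 0.0)
    return iou, precision, recall, f1
```

Averaging these per-image values over a test set yields the "mean" figures the abstract reports; the empty-mask guards return the conventional values when an image contains no lesion pixels at all.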
Abstract (Chinese)
Abstract (English)
Acknowledgments
Table of Contents
List of Tables
List of Figures
Chapter 1 Introduction
1.1 Research Motivation
1.2 Research Objectives
1.3 Research Methods
1.4 Thesis Organization
Chapter 2 Related Techniques and Research
2.1 Chest X-ray Images
2.2 Identification of Tuberculosis Lesion Regions
2.2.1 Infiltrations/Bronchiectasis
2.2.2 Opacity/Consolidation
2.3 Convolutional Neural Networks
2.3.1 Convolutional Layer
2.3.2 Activation Function Layer
2.3.3 Pooling Layer
2.3.4 Deconvolution Layer
2.3.5 Fully Connected Layer
2.3.6 Semantic Segmentation
2.4 U-Net Variant Architectures
2.4.1 U-Net Architecture
2.4.2 U-Net++ Architecture
2.4.3 Attention Gate Mechanism
2.4.4 Pyramid Scene Parsing Module
2.5 Residual Networks
2.6 Loss Functions
2.7 Transfer Learning
2.8 Dropout Layer
2.9 Ensemble Learning
Chapter 3 Research Methods
3.1 System Architecture and Flowchart
3.2 Image Sources and Image Labels
3.3 Image Preprocessing
3.3.1 DICOM-to-PNG Format Conversion
3.3.2 Lung ROI and Lung Segmentation
3.3.3 Bone Suppression
3.3.4 Adaptive Histogram Equalization
3.3.5 Preprocessing Combination Classes
3.4 Data Augmentation
3.5 Image Postprocessing
Chapter 4 Experimental Results and Discussion
4.1 Experimental Environment
4.2 Experiment Overview
4.2.1 Evaluation Functions for Lesion Image Classification
4.2.2 Evaluation Functions for Multi-class Semantic Segmentation
4.3 Lung Segmentation Experiments
4.4 Comparison of Networks with Different Preprocessing Methods
4.4.1 U-Net Lesion Detection Results with Different Preprocessing Methods
4.4.2 Attention U-Net Lesion Detection Results with Different Preprocessing Methods
4.4.3 U-Net++ Lesion Detection Results with Different Preprocessing Methods
4.4.4 Attention U-Net++ Lesion Detection Results with Different Preprocessing Methods
4.4.5 PSP Attention U-Net++ Lesion Detection Results with Different Preprocessing Methods
4.5 Comparison of Networks with Different Loss Functions
4.5.1 U-Net Lesion Detection Results with Different Loss Functions
4.5.2 Attention U-Net Lesion Detection Results with Different Loss Functions
4.5.3 U-Net++ Lesion Detection Results with Different Loss Functions
4.5.4 Attention U-Net++ Lesion Detection Results with Different Loss Functions
4.5.5 PSP Attention U-Net++ Lesion Detection Results with Different Loss Functions
4.6 Cross-Validation Experiments
4.7 Ensemble Network Model Experiments
4.8 Ablation Study
4.9 Discussion of Experimental Results
Chapter 5 Conclusions
References
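Section 3.3.4 of the outline applies adaptive histogram equalization (CLAHE) to boost lesion contrast before training. As an illustration of the underlying idea, here is a NumPy sketch of plain global histogram equalization: a simpler technique than the contrast-limited adaptive variant the thesis uses, which additionally works on local tiles and clips the histogram (function name is illustrative, not from the thesis):

```python
import numpy as np

def equalize_histogram(img):
    """Global histogram equalization for an 8-bit grayscale image.

    Maps gray levels through the normalized cumulative histogram so that
    the output intensities spread over the full 0-255 range.
    """
    hist = np.bincount(img.ravel(), minlength=256)  # per-level pixel counts
    cdf = hist.cumsum()                             # cumulative distribution
    cdf_min = cdf[cdf > 0][0]                       # first nonzero CDF value
    # Build a lookup table that stretches the CDF onto [0, 255].
    lut = np.round((cdf - cdf_min) / (cdf[-1] - cdf_min) * 255).astype(np.uint8)
    return lut[img]
```

CLAHE differs by computing such a mapping per image tile and clipping each tile's histogram before the CDF step, which limits noise amplification in nearly uniform regions such as the lung fields.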