National Digital Library of Theses and Dissertations in Taiwan (臺灣博碩士論文加值系統)

Detailed Record

Researcher: 陳珮綺
Researcher (English): CHEN, PEI-CHI
Thesis Title: 乳房超音波可解釋人工智慧輔助檢測系統
Thesis Title (English): Breast Ultrasound Assisted Examination System with Explainable AI
Advisor: 林斯寅
Advisor (English): LIN, SZU-YIN
Committee Members: 葉向原、李鍾斌、陳昀暄、曾曉珽
Committee Members (English): YEH, HSIANG-YUAN; LI, CHUNG-PIN; CHEN, YUN-HSUAN; TSENG, HSIAO-TING
Oral Defense Date: 2023-07-26
Degree: Master's
Institution: National Ilan University (國立宜蘭大學)
Department: In-service Master's Program in Multimedia Networking, Communication, and Digital Learning (多媒體網路通訊數位學習碩士在職專班)
Discipline: Computing (電算機學門)
Field: Networking (網路學類)
Thesis Type: Academic thesis
Year of Publication: 2023
Graduation Academic Year: 111 (ROC calendar; 2022–2023)
Language: English
Pages: 70
Keywords (Chinese): 乳癌、超音波、人工智慧、卷積神經網路、VGG16-bn、ResNet-50、DenseNet-121、可解釋人工智慧、Grad-CAM、SHAP、遠端醫療
Keywords (English): breast cancer, ultrasound, artificial intelligence, convolutional neural network, VGG16-bn, ResNet-50, DenseNet-121, interpretable artificial intelligence, Grad-CAM, SHAP, telemedicine
Usage statistics:
  • Cited by: 0
  • Views: 131
  • Downloads: 38
  • Bookmarked: 0
In 2021, the World Health Organization (WHO) announced that breast cancer remains the most common cancer among women. Although artificial intelligence (AI) has flourished in recent years and AI-based breast examination already achieves very high accuracy, clinical practice still relies on physicians' expertise and experience, and AI's involvement is very limited. Medicine is an evidence-based discipline: we must open the black box of AI and demonstrate that its judgments are correct and stable before smart healthcare can move forward.
The system proposed in this study uses U-Net to segment suspicious regions in breast ultrasound images and the convolutional neural networks (CNNs) VGG16-bn, ResNet-50, and DenseNet-121 to classify them. It also includes a quantitative-analysis component that immediately measures the size and area of a suspicious region. Compared with similar examination systems, its major advantage is the inclusion of explainable AI (XAI): Grad-CAM and SHAP are built in to visualize the AI's decisions, as sketched below.
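To make the pipeline concrete, the following is a minimal sketch of how the segmentation and quantitative-analysis steps could fit together. It is our illustration, not the thesis code: `unet` (a trained PyTorch U-Net returning per-pixel logits) and `pixel_spacing_mm` (the probe calibration) are assumptions, and scikit-image >= 0.19 is assumed for the region-property names.

```python
# Minimal sketch, assuming PyTorch and scikit-image >= 0.19; `unet` is a
# hypothetical trained model returning (1, 1, H, W) logits.
import numpy as np
import torch
from skimage.measure import label, regionprops

def measure_suspicious_regions(unet, image, pixel_spacing_mm=1.0, threshold=0.5):
    """Segment a grayscale breast ultrasound image (H, W) scaled to [0, 1]
    and measure each suspicious region found in the predicted mask."""
    with torch.no_grad():
        x = torch.from_numpy(image).float()[None, None]       # (1, 1, H, W)
        mask = (torch.sigmoid(unet(x))[0, 0] > threshold).numpy()

    regions = []
    for r in regionprops(label(mask)):                        # one entry per blob
        regions.append({
            "axis_major_mm": r.axis_major_length * pixel_spacing_mm,
            "axis_minor_mm": r.axis_minor_length * pixel_spacing_mm,
            "area_mm2": r.area * pixel_spacing_mm ** 2,
        })
    return mask, regions
```

`regionprops` reports the same quantities listed in Table 24 (major axis, minor axis, and area of each suspicious region).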
U-Net segmentation performed well, reaching a Dice coefficient of 94%, which in turn enabled good classification. The three classification algorithms performed similarly, each reaching at least 98% accuracy. The visualization results show that Grad-CAM is better suited to explaining breast ultrasound images: its heat maps highlight the regions the CNN attends to, with red marking the strongest response. SHAP performed poorly; its visualizations were unclear and difficult to interpret.
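For reference, the Dice coefficient quoted above is the overlap measure 2|A ∩ B| / (|A| + |B|) between the predicted and ground-truth masks; a minimal NumPy version (our sketch, not the thesis code) looks like this:

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-7):
    """Dice = 2|A ∩ B| / (|A| + |B|) for two binary masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

# Example: masks overlapping on 8 of (8 + 12) pixels give Dice = 0.8.
a = np.array([[0, 1, 1, 0]] * 4)
b = np.array([[0, 1, 1, 1]] * 4)
print(round(dice_coefficient(a, b), 3))  # 0.8
```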
The proposed auxiliary examination system is suitable for telemedicine: a local physician can transmit the system's results over the network to a specialized medical unit that takes over the patient's further treatment. Ultrasound offers a high detection rate, is portable, and is far cheaper than X-ray equipment, and its simple operation means even non-specialists can perform it.
Research on breast cancer is extensive, yet AI-assisted examination is still far from widespread, and millions of women worldwide suffer from the disease. The results of this study show that AI performs well, and we sincerely hope that the adoption of AI in assisted medical care will accelerate.
Keywords: breast cancer, ultrasound, artificial intelligence, convolutional neural network, VGG16-bn, ResNet-50, DenseNet-121, explainable artificial intelligence, Grad-CAM, SHAP, telemedicine
In 2021, the World Health Organization announced that breast cancer (BC) remained the most common form of cancer in women. The accuracy of artificial intelligence (AI)-based breast examination is already very high. However, clinics still rely on doctors’ expertise and experience, and AI involvement is very limited. Accordingly, we must open the black box of AI to increase confidence and trust in its use in the medical field.
In this research, U-Net is used for segmentation, and the convolutional neural networks (CNNs) VGG16-bn, ResNet-50, and DenseNet-121 are employed for classification. Quantitative analysis measures the size and area of suspicious regions instantly. A major advantage over similar examination systems is the inclusion of "explainable AI (XAI)": gradient-weighted class activation mapping (Grad-CAM) and SHapley Additive exPlanations (SHAP) are incorporated to visualize AI decision-making. The proposed auxiliary examination system is suitable for telemedicine.
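As a rough sketch of the classification setup, the three torchvision backbones named above can be adapted to the three classes of the BUSI dataset [31] (normal, benign, malignant) by replacing their final layers. This is our illustration under stated assumptions, not the authors' code; whether ImageNet pre-training was used is an assumption, and the actual hyperparameters are those listed in Table 15.

```python
# Minimal sketch, assuming PyTorch/torchvision >= 0.13 and the three-class
# BUSI dataset [31]; head replacement is our assumption, not the thesis code.
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 3  # normal / benign / malignant

def build_classifier(name: str) -> nn.Module:
    if name == "vgg16_bn":
        net = models.vgg16_bn(weights="IMAGENET1K_V1")
        net.classifier[6] = nn.Linear(net.classifier[6].in_features, NUM_CLASSES)
    elif name == "resnet50":
        net = models.resnet50(weights="IMAGENET1K_V1")
        net.fc = nn.Linear(net.fc.in_features, NUM_CLASSES)
    elif name == "densenet121":
        net = models.densenet121(weights="IMAGENET1K_V1")
        net.classifier = nn.Linear(net.classifier.in_features, NUM_CLASSES)
    else:
        raise ValueError(name)
    return net

# Build all three backbones for a side-by-side comparison.
nets = {n: build_classifier(n) for n in ("vgg16_bn", "resnet50", "densenet121")}
```

Only the final fully connected layer differs between the three architectures, which is what makes side-by-side comparisons such as those in Tables 21–23 straightforward.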
U-Net image segmentation provided good results, with a Dice coefficient of 94%. The classification performance of the three algorithms was similar, with accuracy of at least 98%. Visualization results revealed that Grad-CAM was better suited to interpreting breast ultrasound images. SHAP performed poorly: its visual presentation was unclear, making interpretation difficult.
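For readers unfamiliar with Grad-CAM, the heat maps of Figure 30 come from weighting a convolutional layer's activations by the spatially averaged gradients of the class score [39]. A compact sketch follows; it is our code, with `model` and `target_layer` (e.g. `net.layer4[-1]` for ResNet-50) as assumptions rather than the thesis's published implementation.

```python
# Minimal Grad-CAM sketch following Selvaraju et al. [39]; assumes PyTorch >= 1.8.
import torch
import torch.nn.functional as F

def grad_cam(model, x, target_layer, class_idx=None):
    """Return an (H, W) heat map in [0, 1] for a (1, C, H, W) input."""
    feats, grads = {}, {}
    fh = target_layer.register_forward_hook(lambda m, i, o: feats.update(a=o))
    bh = target_layer.register_full_backward_hook(lambda m, gi, go: grads.update(a=go[0]))
    try:
        model.eval()
        logits = model(x)
        if class_idx is None:
            class_idx = int(logits.argmax(dim=1))  # explain the predicted class
        model.zero_grad()
        logits[0, class_idx].backward()
        # Global-average-pool the gradients to get one weight per channel.
        weights = grads["a"].mean(dim=(2, 3), keepdim=True)
        cam = F.relu((weights * feats["a"]).sum(dim=1, keepdim=True))
        cam = F.interpolate(cam, size=x.shape[2:], mode="bilinear",
                            align_corners=False)[0, 0]
        return (cam - cam.min()) / (cam.max() - cam.min() + 1e-7)
    finally:
        fh.remove()
        bh.remove()
```

Red regions in the resulting heat map mark where the CNN's evidence for the predicted class is strongest, matching the description of the Grad-CAM visualizations above.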
The results of this research show that AI can be an excellent tool for the early detection and diagnosis of BC, and the authors sincerely hope that the pace of AI-assisted medical care can be accelerated.
Keywords: breast cancer, ultrasound, artificial intelligence, convolutional neural network, VGG16-bn, ResNet-50, DenseNet-121, interpretable artificial intelligence, Grad-CAM, SHAP, telemedicine
Contents
摘要 (Chinese Abstract) I
Abstract II
Acknowledgments III
Contents IV
List of Tables VII
List of Figures VIII
List of Abbreviations X
Chapter 1 Introduction 1
1-1 Motivation for the Research 1
1-1-1 About Cancer 1
1-1-2 Breast Cancer 4
1-1-3 Screening Methods for BC 5
1-2 Background of the Research 7
1-2-1 Artificial Intelligence 7
1-2-2 Medical AI 8
1-2-3 Explainable Artificial Intelligence 10
1-3 Purpose of the Research 11
Chapter 2 Related Work 12
2-1 Cancer and BC 12
2-1-1 Cancer 12
2-1-2 BC 13
2-2 Deep Learning 13
2-3 Segmentation 14
2-3-1 Thresholding Method 14
2-3-2 U-Net 16
2-3-3 U-Net for Digital Breast Tomosynthesis 19
2-4 Classification 20
2-5 XAI 22
2-5-1 SHAP 25
2-5-2 Grad-CAM 25
Chapter 3 Design and Method 28
3-1 System Design Architecture 28
3-2 System Processes 29
3-3 System Equipment for the Experiment 29
3-4 Dataset 30
3-5 Pre-processing of Data 33
3-5-1 Data Augmentation 33
3-5-2 Data Allocation 37
3-6 Segmentation and Classification 38
3-6-1 Segmentation 38
3-6-2 Classification 40
3-7 Interpretable AI 43
3-8 Quantitative Analysis 44
Chapter 4 Results 45
4-1 Segmentation 45
4-2 Classification 47
4-2-1 Binary Classification 47
4-2-2 Multiclassification 49
4-2-3 Performance of the Model for Testing Data 50
4-2-4 Comparison of Models 52
4-3 Explainable AI 54
4-3-1 Grad-CAM 54
4-3-2 SHAP 56
4-4 Quantitative Analysis 59
4-5 Comparison with Previous Relevant Studies 60
Chapter 5 Conclusion and Discussion 61
5-1 Segmentation 61
5-2 Classification 62
5-3 XAI 63
5-4 Quantitative Analysis 63
5-5 Purpose of the Research 64
Chapter 6 Research Constraints and Future Prospects 65
References 68


List of Tables
Table 1 The incidences of the top six types of cancer in 2020. (WHO) 1
Table 2 The top five types of cancers that cause death in 2020. (WHO) 2
Table 3 Top 10 causes of death in 2020. (HPA) 2
Table 4 Top 10 cancers in Taiwan in 2020. (HPA) 3
Table 5 Comparison of BU and mammography. 6
Table 6 The TNM staging system, the most widely used cancer staging scheme. 13
Table 7 Results obtained by expanded training and general training. [22] 18
Table 8 Results of 8 CNN classification models. [24] 21
Table 9 Results of 12 CNN classification models. [25] 22
Table 10 Computer specifications for the experiment. 30
Table 11 Division of the dataset into three categories. [31] 30
Table 12 Depth ranges of the LOGIQ E9 ultrasonic system. 31
Table 13 The four data augmentation methods used. 34
Table 14 Parameters of the U-Net model. 39
Table 15 Hyperparameters of VGG16_bn, ResNet-50, and DenseNet-121. 42
Table 16 Confusion Matrix 47
Table 17 Definitions of the four primary indexes of the confusion matrix. 48
Table 18 Definitions of the four second-level indexes. 48
Table 19 Definitions of the third-level indexes. 49
Table 20 Definition of the macro and weighted averages. 49
Table 21 Comparison of accuracy. 52
Table 22 Comparison of precision, recall, and F1-score. 52
Table 23 Results of the model performance. 53
Table 24 Major axis, minor axis, and area of the suspicious regions. 59


List of Figures
Figure 1 The cross-sectional surface of the breast. 4
Figure 2 The relationship between AI, ML, and DL. 7
Figure 3 The trend and focus of medical care in the future. 8
Figure 4 A brief overview of the research conducted by Ge-Ge Wu et al. [3] 9
Figure 5 Experimental architecture proposed by Jaeil Kim et al. 11
Figure 6 FELs proposed by Ismail Yaqub Maolood et al. [18] 15
Figure 7 FEL provides good results for BUI. [18] 16
Figure 8 U-Net Architecture [21]. 17
Figure 9 U-Net-based framework proposed by Guo et al. [22] 17
Figure 10 Comparison of expansion training and general training. [22] 18
Figure 11 Classification of XAI methods [27]. 23
Figure 12 DL models combined with XAI proposed by Fajin Dong et al. [28] 24
Figure 13 AUC values for the model with coarse and fine ROIs. [28] 24
Figure 14 An overview of Grad-CAM. [29] 26
Figure 15 Examples of Grad-CAM visualization. 27
Figure 16 Breast Ultrasound Detection System with Explainable AI. 28
Figure 17 De-identified original BUIs. 32
Figure 18 Removal of the boundaries of the original BUIs. 32
Figure 19 Using MATLAB to determine the “ground truth” for segmentation. 32
Figure 20 Classification and number of images: benign (200); on the left is the 35
Figure 21 Classification and number of images: benign (232); on the left is the 35
Figure 22 5° rotation range and a 10% shift range of benign (1). 36
Figure 23 A typical overfitting. 37
Figure 24 Original architecture of U-Net. 39
Figure 25 Examples of the segmentation result. 45
Figure 26 Dice of the testing dataset. 46
Figure 27 Confusion matrix of VGG16_bn. 50
Figure 28 Confusion matrix of ResNet-50. 51
Figure 29 Confusion matrix of DenseNet-121. 51
Figure 30 Heat maps visualized by Grad-CAM. 55
Figure 31 The visualization of SHAP (malignant 14). 57
Figure 32 The visualization of SHAP (benign 64). 58
Figure 33 Segmentation by U-Net compared to the ground truth. 62
Figure 34 Breast Ultrasound Detection System with Explainable AI for telemedicine. 65
References

[1] T. J. Key, N. E. Allen, E. A. Spencer, and R. C. Travis, "The effect of diet on risk of cancer," The Lancet, vol. 360, no. 9336, pp. 861-868, 2002.
[2] I. S. Fentiman, A. Fourquet, and G. N. Hortobagyi, "Male breast cancer," The Lancet, vol. 367, no. 9510, pp. 595-604, 2006.
[3] G.-G. Wu et al., "Artificial intelligence in breast ultrasound," World Journal of Radiology, vol. 11, no. 2, p. 19, 2019.
[4] J. Kim et al., "Weakly-supervised deep learning for ultrasound diagnosis of breast cancer," Scientific Reports, vol. 11, no. 1, p. 24382, 2021.
[5] R. A. Weinberg, "How cancer arises," Scientific American, vol. 275, no. 3, pp. 62-70, 1996.
[6] D. M. Gress et al., "Principles of cancer staging," AJCC Cancer Staging Manual, vol. 8, pp. 3-30, 2017.
[7] S. E. Singletary and J. L. Connolly, "Breast cancer staging: working with the sixth edition of the AJCC Cancer Staging Manual," CA: A Cancer Journal for Clinicians, vol. 56, no. 1, pp. 37-47, 2006.
[8] T. Sørlie et al., "Gene expression patterns of breast carcinomas distinguish tumor subclasses with clinical implications," Proceedings of the National Academy of Sciences, vol. 98, no. 19, pp. 10869-10874, 2001.
[9] M. J. Ellis et al., "Whole-genome analysis informs breast cancer response to aromatase inhibition," Nature, vol. 486, no. 7403, pp. 353-360, 2012.
[10] L. Deng and D. Yu, "Deep learning: methods and applications," Foundations and Trends® in Signal Processing, vol. 7, no. 3-4, pp. 197-387, 2014.
[11] Y. Bengio, "Learning deep architectures for AI," Foundations and Trends® in Machine Learning, vol. 2, no. 1, pp. 1-127, 2009.
[12] S. Albawi, T. A. Mohammed, and S. Al-Zawi, "Understanding of a convolutional neural network," in 2017 International Conference on Engineering and Technology (ICET), 2017: IEEE, pp. 1-6.
[13] K. O'Shea and R. Nash, "An introduction to convolutional neural networks," arXiv preprint arXiv:1511.08458, 2015.
[14] G. E. Dahl, T. N. Sainath, and G. E. Hinton, "Improving deep neural networks for LVCSR using rectified linear units and dropout," in 2013 IEEE International Conference on Acoustics, Speech and Signal Processing, 2013: IEEE, pp. 8609-8613.
[15] G. E. Hinton, "Deep belief networks," Scholarpedia, vol. 4, no. 5, p. 5947, 2009.
[16] H. Lee, R. Grosse, R. Ranganath, and A. Y. Ng, "Convolutional deep belief networks for scalable unsupervised learning of hierarchical representations," in Proceedings of the 26th Annual International Conference on Machine Learning, 2009, pp. 609-616.
[17] N. Otsu, "A threshold selection method from gray-level histograms," IEEE Transactions on Systems, Man, and Cybernetics, vol. 9, no. 1, pp. 62-66, 1979.
[18] I. Y. Maolood, Y. E. A. Al-Salhi, and S. Lu, "Thresholding for medical image segmentation for cancer using fuzzy entropy with level set algorithm," Open Medicine, vol. 13, no. 1, pp. 374-383, 2018.
[19] J. A. Sethian and P. Smereka, "Level set methods for fluid interfaces," Annual Review of Fluid Mechanics, vol. 35, no. 1, pp. 341-372, 2003.
[20] P. Jaganathan and R. Kuppuchamy, "A threshold fuzzy entropy based feature selection for medical database classification," Computers in Biology and Medicine, vol. 43, no. 12, pp. 2222-2229, 2013.
[21] O. Ronneberger, P. Fischer, and T. Brox, "U-Net: Convolutional networks for biomedical image segmentation," in International Conference on Medical Image Computing and Computer-Assisted Intervention, 2015: Springer, pp. 234-241.
[22] Y. Guo, X. Duan, C. Wang, and H. Guo, "Segmentation and recognition of breast ultrasound images based on an expanded U-Net," PLoS ONE, vol. 16, no. 6, p. e0253202, 2021.
[23] X. Lai, W. Yang, and R. Li, "DBT masses automatic segmentation using U-Net neural networks," Computational and Mathematical Methods in Medicine, vol. 2020, 2020.
[24] M. Masud, A. E. E. Rashed, and M. S. Hossain, "Convolutional neural network-based models for diagnosis of breast cancer," Neural Computing and Applications, pp. 1-12.
[25] İ. Pacal, "Deep learning approaches for classification of breast cancer in ultrasound (US) images," Journal of the Institute of Science and Technology, vol. 12, no. 4, pp. 1917-1927.
[26] M. Van Lent, W. Fisher, and M. Mancuso, "An explainable artificial intelligence system for small-unit tactical behavior," 2004.
[27] Y. Zhang, Y. Weng, and J. Lund, "Applications of explainable artificial intelligence in diagnosis and surgery," Diagnostics, vol. 12, no. 2, p. 237, 2022.
[28] F. Dong et al., "One step further into the blackbox: a pilot study of how to build more confidence around an AI-based decision system of breast nodule assessment in 2D ultrasound," European Radiology, 2021.
[29]"."
[30] M. Masud, A. E. Eldin Rashed, and M. S. Hossain, "Convolutional neural network-based models for diagnosis of breast cancer," Neural Computing and Applications, pp. 1-12.
[31] W. Al-Dhabyani, M. Gomaa, H. Khaled, and A. Fahmy, "Dataset of breast ultrasound images," Data in Brief, vol. 28, p. 104863, Feb. 2020, doi: 10.1016/j.dib.2019.104863.
[32] C. Cortes, M. Mohri, and A. Rostamizadeh, "L2 regularization for learning kernels."
[33] X. Glorot and Y. Bengio, "Understanding the difficulty of training deep feedforward neural networks."
[34] K. Simonyan and A. Zisserman, "Very deep convolutional networks for large-scale image recognition," arXiv preprint arXiv:1409.1556, 2014.
[35] S. Ioffe and C. Szegedy, "Batch normalization: Accelerating deep network training by reducing internal covariate shift," in International Conference on Machine Learning, 2015: PMLR, pp. 448-456.
[36] K. He, X. Zhang, S. Ren, and J. Sun, "Deep residual learning for image recognition," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2016, pp. 770-778.
[37] G. Huang, Z. Liu, L. Van Der Maaten, and K. Q. Weinberger, "Densely connected convolutional networks," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2017, pp. 4700-4708.
[38] D. Masters and C. Luschi, "Revisiting small batch training for deep neural networks," arXiv preprint arXiv:1804.07612, 2018.
[39] R. R. Selvaraju, M. Cogswell, A. Das, R. Vedantam, D. Parikh, and D. Batra, "Grad-CAM: Visual explanations from deep networks via gradient-based localization," in Proceedings of the IEEE International Conference on Computer Vision, 2017, pp. 618-626.
[40] S. M. Lundberg and S.-I. Lee, "A unified approach to interpreting model predictions," Advances in Neural Information Processing Systems, vol. 30, 2017.