
National Digital Library of Theses and Dissertations in Taiwan


Detailed Record

Author: 黃泓叡
Author (English): Horng-Ruey Huang
Title: 乳房超音波影像之自動BI-RADS分級與電腦輔助診斷
Title (English): Automatic BI-RADS Grading and Computer-aided Diagnosis of Breast Ultrasound Images
Advisor: 張瑞峰 (Ruey-Feng Chang)
Committee members: 羅崇銘, 陳鴻豪
Oral defense date: 2019-07-30
Degree: Master's
Institution: National Taiwan University
Department: Graduate Institute of Computer Science and Information Engineering
Discipline: Engineering
Field: Electrical Engineering and Computer Science
Thesis type: Academic thesis
Year of publication: 2019
Academic year of graduation: 107 (2018-2019)
Language: English
Pages: 68
Keywords (Chinese, translated): breast cancer; ultrasound imaging; computer-aided diagnosis; Breast Imaging Reporting and Data System (BI-RADS); convolutional neural network; generative adversarial network
DOI: 10.6342/NTU201903834
Record statistics:
  • Cited by: 0
  • Views: 271
  • Downloads: 0
  • Bookmarked: 0
(Abstract in Chinese, translated) Breast cancer is a common cancer and the leading cause of cancer death among women. However, early screening and improved treatment can raise the survival rate. In clinical examination, ultrasound (US) imaging is frequently used to evaluate whether a breast tumor is benign or malignant. The Breast Imaging Reporting and Data System (BI-RADS) defines five lexicons for mass tissue in ultrasound images, which are used to assign a BI-RADS grade assessing the degree of malignancy. We therefore propose an automatic BI-RADS grading system for tumor diagnosis. First, a generative adversarial network (GAN)-based segmentation method separates each ultrasound image into different regions that provide different image information. Then, convolutional neural networks (CNNs) predict each lexicon, from which the BI-RADS grade and the benignity or malignancy of the tumor are assessed. The proposed system was evaluated on 335 biopsy-proven tumors, comprising 148 benign and 187 malignant tumors. Before pathological verification, the final BI-RADS grades of the tumors were BI-RADS 3 for 90 cases, BI-RADS 4 for 114 cases, and BI-RADS 5 for 131 cases. The proposed system achieved accuracies of 71.64% (240/335) in BI-RADS grading and 85.97% (288/335) in tumor diagnosis. For comparison, we further applied CNN models with different input images to BI-RADS grading and malignancy assessment. Using CNN models with the original ultrasound images, the accuracies were 60.00% (201/335) for BI-RADS grading and 78.51% (263/335) for tumor diagnosis. The proposed system can therefore provide radiologists with accurate BI-RADS grades and diagnostic results, and it outperforms CNN models applied directly to different input images in both BI-RADS grading and tumor diagnosis.
Breast cancer is a common cancer and the leading cause of cancer death in women worldwide. However, early examination and improved treatment can increase the survival rate. In clinical examination, ultrasound (US) images are commonly used to evaluate the malignancy of breast tumors. The Breast Imaging Reporting and Data System (BI-RADS) defines five lexicons for mass tissue in ultrasound images that are used to assess the BI-RADS grade and thereby evaluate tumor malignancy. Hence, we propose an automatic BI-RADS grading system for tumor diagnosis. First, we adopt a generative adversarial network (GAN)-based segmentation method to separate each US image into different regions that provide different image information. Then, we predict each lexicon with convolutional neural network (CNN) models to assess the BI-RADS grade and evaluate tumor malignancy. A total of 335 biopsy-proven tumors, including 148 benign and 187 malignant tumors, were used to evaluate the proposed system. The final BI-RADS grade assessment of all tumors before biopsy was BI-RADS 3 for 90 cases, BI-RADS 4 for 114 cases, and BI-RADS 5 for 131 cases. The accuracy of the proposed system was 71.64% (240/335) for BI-RADS grade assessment and 85.97% (288/335) for tumor diagnosis. We further applied CNN models directly to different input images to assess the BI-RADS grade and evaluate tumor malignancy for comparison: with the original US images, the CNN models achieved accuracies of 60.00% (201/335) in BI-RADS grade assessment and 78.51% (263/335) in tumor diagnosis. In conclusion, the proposed system can provide accurate BI-RADS grades and diagnostic results to radiologists, and it performs better than CNN models applied directly to different input images in both BI-RADS grade assessment and tumor diagnosis.
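The pipeline the abstract describes (per-lexicon prediction followed by BI-RADS grade assessment and a benign/malignant call) can be sketched roughly as follows. This is a minimal, hypothetical illustration: the descriptor vocabularies are abbreviated from the BI-RADS ultrasound lexicon, and the suspicious-descriptor counting rule and the grade thresholds are illustrative stand-ins, not the thesis's trained CNN models or its actual grading logic.

```python
# Hypothetical sketch: lexicon predictions -> BI-RADS grade -> diagnosis.
# The counting rule below is a toy stand-in for the thesis's method.

# Abbreviated BI-RADS ultrasound lexicon descriptor vocabularies.
LEXICONS = {
    "shape": ["oval", "round", "irregular"],
    "orientation": ["parallel", "not parallel"],
    "margin": ["circumscribed", "not circumscribed"],
    "echo_pattern": ["anechoic", "hypoechoic", "complex"],
    "posterior_features": ["none", "enhancement", "shadowing"],
}

# Descriptors conventionally associated with suspicion of malignancy.
SUSPICIOUS = {"irregular", "not parallel", "not circumscribed", "shadowing"}

def assess_grade(findings: dict) -> int:
    """Map predicted lexicon descriptors to a BI-RADS grade (3, 4, or 5)
    by counting suspicious descriptors (illustrative rule only)."""
    n = sum(1 for value in findings.values() if value in SUSPICIOUS)
    if n == 0:
        return 3   # probably benign
    if n <= 2:
        return 4   # suspicious
    return 5       # highly suggestive of malignancy

def diagnose(grade: int) -> str:
    """BI-RADS 4 or higher is conventionally referred for biopsy."""
    return "suspicious for malignancy" if grade >= 4 else "probably benign"

if __name__ == "__main__":
    # In the proposed system these values would come from the per-lexicon
    # CNN classifiers applied to the GAN-segmented image regions.
    findings = {
        "shape": "irregular",
        "orientation": "not parallel",
        "margin": "not circumscribed",
        "echo_pattern": "hypoechoic",
        "posterior_features": "shadowing",
    }
    grade = assess_grade(findings)
    print(grade, diagnose(grade))  # four suspicious descriptors -> grade 5
```

In the actual system, each descriptor would be the output of a CNN classifier rather than a hand-set value, and the grade assignment would be learned or rule-based per the thesis's Chapter 3 rather than a simple count.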
Thesis Committee Certification i
Acknowledgements ii
Chinese Abstract iii
Abstract iv
Table of Contents vi
List of Figures vii
List of Tables x
Chapter 1 Introduction 1
Chapter 2 Material 5
Chapter 3 Method 7
3.1 Tumor Segmentation 8
3.1.1 GAN-Based Segmentation Method 8
3.1.2 Training Details 10
3.2 Lexicon Prediction 14
3.2.1 BI-RADS Lexicons 15
3.2.2 Image Fusion Method 20
3.2.3 Classifier 22
3.3 BI-RADS Grade Assessment and Tumor Diagnosis 23
Chapter 4 Experiment Results 25
4.1 Comparison of Single Lexicon Prediction 25
4.1.1 Shape Lexicon 26
4.1.2 Orientation Lexicon 29
4.1.3 Margin Lexicon 32
4.1.4 Echo Pattern Lexicon 35
4.1.5 Posterior Features Lexicon 38
4.2 Comparison of BI-RADS Grade Assessment 43
4.3 Comparison of Tumor Diagnosis 46
4.3.1 Results of Using BI-RADS Grade 47
4.3.2 Results of Using CNN Models/NN Approach 51
Chapter 5 Discussion and Conclusions 57
References 63