
臺灣博碩士論文加值系統 (National Digital Library of Theses and Dissertations in Taiwan)

Detailed Record

Author: Bo-Hao Su (蘇柏豪)
Title: 使用三維卷積神經網路對星狀細胞瘤進行分級:探討資料增量與多對比磁共振影像的影響
Title (English): Grading astrocytoma by using 3D convolutional neural network: investigation on data augmentation and multi-contrast MRI
Advisor: Chuang, Tzu-chao (莊子肇)
Degree: Master's
Institution: National Sun Yat-sen University (國立中山大學)
Department: Department of Electrical Engineering (電機工程學系研究所)
Discipline: Engineering
Field: Electrical and Information Engineering
Document type: Academic thesis
Year of publication: 2023
Graduating academic year: 111 (2022–2023)
Language: Chinese
Pages: 58
Keywords (Chinese): 星狀細胞瘤、三維卷積神經網路、資料增量、顯影後T1權重影像、磁化率權重影像
Keywords (English): Astrocytoma; 3D convolutional neural network; Data augmentation; Contrast-enhanced T1-weighted imaging; Susceptibility-weighted imaging
Record statistics:
  • Citations: 0
  • Views: 84
  • Ratings: (none)
  • Downloads: 12
  • Bookmarks: 0
According to the World Health Organization's grading standard, brain tumors are classified into four grades; a higher grade indicates greater malignancy and a lower patient survival rate. In recent years, convolutional neural networks (CNNs) have been widely applied to medical imaging, particularly to MRI-based tumor segmentation and malignancy grading, with 2D CNNs being the most common. However, tumors are irregular three-dimensional structures, so a 3D CNN has the potential to capture more spatial information and thereby improve grading accuracy. Yet a 3D CNN takes a single patient's whole three-dimensional volume as input, rather than single two-dimensional slices as a 2D CNN does, so it is more prone to data insufficiency; since clinical data are difficult to collect, data augmentation strategies are particularly important.

The imaging data of this study came from 65 astrocytoma patients: 15 of WHO grade II, 5 of grade III, and 45 of grade IV. For each patient, three-dimensional susceptibility-weighted imaging (SWI) and contrast-enhanced T1-weighted imaging (CE-T1WI) were acquired on a 1.5 T MRI scanner. Before being fed to the network for training, both image sets underwent rigid registration, resampling, signal normalization, and appropriate data augmentation, and were then passed to a 3D U-Net or 2D U-Net to classify tumors as high-grade or low-grade malignancy. Five-fold cross-validation was used during training and testing; in addition, each model was evaluated over five random splits, and the Dice similarity coefficient (DSC) of tumor segmentation and the tumor-grading accuracy were averaged to assess model performance.

This thesis addresses three questions. First, the effect of adding augmentation methods is examined by comparing models trained with two methods (rotation and flipping: Aug2), three methods (rotation, flipping, and scaling: Aug3), or four methods (rotation, flipping, scaling, and brightness adjustment: Aug4) against a model trained without image augmentation (Aug0). Second, the effect of different input-image combinations on a two-channel 3D model is examined, i.e., whether combining CE-T1WI and SWI as the network input (Model-Mixed) improves accuracy over Model-T1 or Model-SWI trained on a single image contrast. Third, the performance of 3D and 2D CNNs is compared.

The results show that the tumor-segmentation DSC for Aug0, Aug2, Aug3, and Aug4 is 0.61, 0.68, 0.70, and 0.72, respectively, with mean grading accuracies of 0.840, 0.862, 0.880, and 0.898; both improve as more augmentation methods are added, and Aug4, with the most augmentation strategies, performs best. In the comparison of input images, the DSC of Model-Mixed, Model-T1, and Model-SWI is 0.72, 0.71, and 0.44, with mean grading accuracies of 0.898, 0.874, and 0.818; Model-Mixed, which uses both image contrasts, outperforms the single-contrast models throughout. Finally, the 2D and 3D CNNs differ little in DSC and grading accuracy: 0.66 versus 0.68 and 0.886 versus 0.862, respectively. This study shows that rotation, flipping, scaling, and brightness adjustment as data-augmentation strategies, together with combining the two image contrasts CE-T1WI and SWI, help improve the accuracy of a 3D CNN; even with only 65 cases and a markedly imbalanced distribution of patients across malignancy grades, a grading accuracy of 89.8% was achieved.
The malignancy of brain tumors is classified into four grades according to the World Health Organization's grading standard; a higher grade indicates greater malignancy and a lower patient survival rate. In recent years, convolutional neural networks (CNNs) have been widely used in medical imaging, especially for tumor segmentation and malignancy grading based on magnetic resonance imaging (MRI), with 2D CNNs being the most common. However, since tumors usually have an irregular 3D structure, a 3D CNN has the potential to capture more spatial information and thereby improve grading accuracy. A 3D CNN takes the whole volume of a single patient as its input, whereas a 2D CNN takes a single two-dimensional slice, so the 3D model faces the challenge of insufficient data more acutely. Because clinical data are difficult to collect, data augmentation becomes particularly important.

The neuroimaging data of this study were obtained from 65 patients with astrocytoma: 15 diagnosed as WHO grade II, 5 as grade III, and 45 as grade IV. Each patient underwent 3D susceptibility-weighted imaging (SWI) and 3D contrast-enhanced T1-weighted imaging (CE-T1WI) at 1.5 Tesla. Both sets of images were rigidly registered, resampled, signal-normalized, and appropriately augmented before model training, then fed into a 3D U-Net or 2D U-Net to classify tumor malignancy as high-grade or low-grade. Five-fold cross-validation was performed during training and validation; in addition, random sampling was repeated five times for each model, and the Dice similarity coefficient (DSC) of tumor segmentation and the accuracy of tumor grading were averaged to evaluate model performance.
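The segmentation metric above can be made concrete. A minimal sketch of the Dice similarity coefficient on binary masks, assuming NumPy arrays (the function and variable names are illustrative, not the thesis's code):

```python
import numpy as np

def dice_similarity(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice similarity coefficient: 2|A∩B| / (|A| + |B|) for binary masks."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    total = pred.sum() + truth.sum()
    if total == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * intersection / total

# Two overlapping 3D masks of 8 voxels each, sharing 4 voxels
a = np.zeros((4, 4, 4), dtype=bool)
b = np.zeros((4, 4, 4), dtype=bool)
a[1:3, 1:3, 1:3] = True
b[2:4, 1:3, 1:3] = True
print(dice_similarity(a, b))  # → 0.5
```

A DSC of 1.0 means identical masks and 0.0 means no overlap, which is why the thesis reports it alongside grading accuracy.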

Three topics are investigated in this thesis. First, to examine the impact of data augmentation, models trained with two methods (rotation and flipping: Aug2), three methods (rotation, flipping, and scaling: Aug3), or four methods (rotation, flipping, scaling, and brightness adjustment: Aug4) are compared with a model trained without any image augmentation (Aug0). Second, CE-T1WI and SWI are jointly used to train a two-channel CNN (Model-Mixed) to test whether it improves grading accuracy over models trained with single-contrast images (Model-T1 or Model-SWI). Third, the performance of 3D and 2D CNNs is compared under identical augmentation.
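The four augmentation families can be sketched on a 3D volume in plain NumPy. The thesis does not state its exact rotation angles, scale range, or brightness factors, so the ranges below (and a discrete 90° rotation standing in for arbitrary rotation) are assumptions for illustration only:

```python
import numpy as np

def center_crop_or_pad(vol, shape):
    """Crop or zero-pad a volume symmetrically back to a target shape."""
    out = np.zeros(shape, dtype=vol.dtype)
    src, dst = [], []
    for s, t in zip(vol.shape, shape):
        if s >= t:
            a = (s - t) // 2
            src.append(slice(a, a + t)); dst.append(slice(0, t))
        else:
            a = (t - s) // 2
            src.append(slice(0, s)); dst.append(slice(a, a + s))
    out[tuple(dst)] = vol[tuple(src)]
    return out

def zoom_nn(vol, factor):
    """Nearest-neighbour rescaling of a 3D volume by one factor."""
    idx = [np.minimum((np.arange(max(1, round(s * factor))) / factor).astype(int), s - 1)
           for s in vol.shape]
    return vol[np.ix_(*idx)]

def random_augment(vol, rng):
    """One random pass of rotation, flipping, scaling, and brightness adjustment."""
    out = np.rot90(vol, k=rng.integers(0, 4), axes=(0, 1))       # rotation in the axial plane
    if rng.random() < 0.5:
        out = np.flip(out, axis=2)                               # random flip
    out = center_crop_or_pad(zoom_nn(out, rng.uniform(0.9, 1.1)), vol.shape)  # scaling
    return out * rng.uniform(0.9, 1.1)                           # brightness adjustment

rng = np.random.default_rng(0)
vol = rng.standard_normal((32, 32, 16)).astype(np.float32)
aug = random_augment(vol, rng)
print(aug.shape)  # → (32, 32, 16)
```

Keeping the output shape fixed, as the crop/pad step does here, lets augmented volumes feed the same network input layer as the originals.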

Results show that the DSC of tumor segmentation is 0.61, 0.68, 0.70, and 0.72 for Aug0, Aug2, Aug3, and Aug4, respectively, with corresponding grading accuracies of 0.840, 0.862, 0.880, and 0.898. Both indices increase as more augmentation strategies are applied, and Aug4 performs best. In addition, the DSC of Model-Mixed, Model-T1, and Model-SWI is 0.72, 0.71, and 0.44, with corresponding grading accuracies of 0.898, 0.874, and 0.818; Model-Mixed, which combines two image contrasts, outperforms the single-contrast models. Finally, the DSC and grading accuracy obtained by the 2D and 3D CNNs differ little: 0.66 versus 0.68 and 0.886 versus 0.862, respectively. In conclusion, this study demonstrates that data augmentation strategies such as rotation, flipping, scaling, and brightness adjustment, together with the combination of features extracted from CE-T1WI and SWI, can improve the accuracy of a 3D CNN. Despite the small sample size (65 patients) and the imbalanced distribution across malignancy grades, the proposed 3D CNN still achieved 89.8% accuracy in grading astrocytoma.
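The two-channel input behind Model-Mixed amounts to stacking the co-registered contrasts along a channel axis. A minimal sketch, assuming preprocessed volumes already on the same grid (variable names and shapes are illustrative):

```python
import numpy as np

# Hypothetical preprocessed, rigidly co-registered volumes of one patient
ce_t1 = np.random.default_rng(1).random((64, 64, 32), dtype=np.float32)
swi = np.random.default_rng(2).random((64, 64, 32), dtype=np.float32)

# Channel-first layout (channels, D, H, W), as a two-channel 3D U-Net
# would consume per sample: channel 0 = CE-T1WI, channel 1 = SWI
mixed_input = np.stack([ce_t1, swi], axis=0)
print(mixed_input.shape)  # → (2, 64, 64, 32)
```

Single-contrast models (Model-T1, Model-SWI) correspond to passing only one of the two volumes, with the first convolution expecting one channel instead of two.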
Thesis Approval Form
Acknowledgements
Abstract (Chinese)
Abstract (English)
Table of Contents
List of Figures
List of Tables
Chapter 1: Introduction
1.1 Research Background
1.2 Motivation and Objectives
Chapter 2: Imaging Data and Methods
2.1 Sources of Imaging Data
2.2 Image Preprocessing
2.3 Data Augmentation
2.4 Network Architecture
2.5 Training Procedure and Performance Evaluation
2.5.1 Effect of Different Combinations of Augmentation Methods on Astrocytoma Grading
2.5.2 Effect of Input-Image Combinations on Astrocytoma Grading
2.5.3 Comparison of 2D and 3D CNN Models
Chapter 3: Experimental Results
3.1 Results of Different Combinations of Augmentation Methods on Astrocytoma Grading
3.2 Results of Input-Image Combinations on Astrocytoma Grading
3.3 Comparison of 2D and 3D CNN Models
Chapter 4: Discussion and Conclusions
4.1 Discussion of the Results of Different Augmentation Method Combinations on Astrocytoma Grading
4.2 Discussion of the Results of Input-Image Combinations on Astrocytoma Grading
4.3 Discussion of the Comparison between 2D and 3D CNN Models
4.4 Limitations and Conclusions
References