
National Digital Library of Theses and Dissertations in Taiwan


Detailed Record

Author: CHANG, YUAN-FU (張元阜)
Title: U-Net 3D rotation image enhancement method for blood vessel segmentation in TOF-MRA images
Advisors: SHIH, TZU-CHING (施子卿); TSAI, FENG-SHENG (蔡豐聲)
Committee members: PENG, SHIN-LEI (彭馨蕾); WEN, CHING-FENG (溫慶豐); DONG, JIAN-LANG (董建郎)
Oral defense date: 2024-01-03
Degree: Master's
Institution: China Medical University
Department: Master's Program, Department of Biomedical Imaging and Radiological Science
Discipline: Medicine and Health
Field: Medical Technology and Laboratory Science
Document type: Academic thesis
Year of publication: 2024
Academic year of graduation: 112
Language: Chinese
Pages: 33
Chinese keywords: background threshold; image enhancement; Dice coefficient
English keywords: 3D U-Net; TOF-MRA
Usage statistics:
  • Cited: 0
  • Views: 68
  • Downloads: 15
  • Bookmarked: 0
Clinical TOF-MRA images require manual delineation of the vascular structures, a very time-consuming process. This study used a 3D U-Net model to segment cerebral blood vessels with deep learning, and proposed a 3D rotation image enhancement method that applies mathematical rotation matrices to translate and rotate images. Using labeled images, we compared the effects of different background thresholds, vessel thresholds, and 3D rotation image enhancement on the model's predictions. Two types of training data were used: 136 simulated images and 48 TOF-MRA images, from which 729 and 900 patches of 32×32×32 voxels, respectively, were extracted for model training. For the first type, the simulated vascular-tree dataset, the best mean Dice coefficient was 0.9652 without 3D rotation image enhancement and 0.9499 with it. For the second type, clinical TOF-MRA images, the best mean Dice coefficient was 0.6965 without enhancement and 0.7619 with it. For whole-brain TOF-MRA prediction, the best mean Dice coefficient was 0.7918 without enhancement and 0.8184 with it. With the 3D rotation image enhancement method, the 3D U-Net's segmentation of simulated vascular-tree images showed no significant change, whereas its segmentation of clinical TOF-MRA images consistently improved.
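The rotation-matrix-based 3D enhancement described in the abstract can be sketched as follows. This is an illustrative implementation only, not the author's code: the function name `rotate_volume`, the axis conventions, and the use of `scipy.ndimage.affine_transform` are assumptions.

```python
import numpy as np
from scipy.ndimage import affine_transform

def rotate_volume(vol, angles_deg, order=1):
    """Rotate a 3D volume about its center using a composed rotation matrix.

    angles_deg: (ax, ay, az) rotation angles about the x, y, z axes, in degrees.
    """
    ax, ay, az = np.deg2rad(angles_deg)
    # Elementary rotation matrices about each axis.
    Rx = np.array([[1, 0, 0],
                   [0, np.cos(ax), -np.sin(ax)],
                   [0, np.sin(ax),  np.cos(ax)]])
    Ry = np.array([[ np.cos(ay), 0, np.sin(ay)],
                   [0, 1, 0],
                   [-np.sin(ay), 0, np.cos(ay)]])
    Rz = np.array([[np.cos(az), -np.sin(az), 0],
                   [np.sin(az),  np.cos(az), 0],
                   [0, 0, 1]])
    R = Rz @ Ry @ Rx
    # affine_transform maps output coordinates to input coordinates, so we
    # pass the inverse rotation (the transpose, for a rotation matrix) and
    # an offset that makes the rotation act about the volume center.
    R_inv = R.T
    center = (np.array(vol.shape) - 1) / 2.0
    offset = center - R_inv @ center
    return affine_transform(vol, R_inv, offset=offset, order=order)
```

Applying such rotations (and translations, via an extra offset term) to each training patch and its label yields the augmented samples; the same transform must be applied to image and label with matching interpolation settings.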
In clinical TOF-MRA imaging, manually delineating the blood vessels is a time-intensive process. This study addressed this challenge by employing a 3D U-Net model within a deep learning framework to segment cerebral blood vessels. A 3D rotation image enhancement method was also introduced, using mathematical rotation matrices to translate and rotate images. The study systematically evaluated labeled images under diverse conditions, investigating the impact of different background thresholds, blood vessel thresholds, and 3D rotation image enhancement on the model's predictive performance. Two types of training data were used: 136 computer-simulated images and 48 clinical TOF-MRA images, from which 729 and 900 cubic patches of 32×32×32 voxels, respectively, were extracted to train the model. First, for the vascular-tree simulation images, the best average Dice coefficient was 0.9652 without 3D rotation image enhancement and 0.9499 with it. Second, for clinical TOF-MRA images, the best average Dice coefficient improved from 0.6965 without enhancement to 0.7619 with it. Finally, for whole-brain TOF-MRA images, the best average Dice coefficient rose from 0.7918 without enhancement to 0.8184 with it. The 3D U-Net model showed no significant change in segmentation performance on the simulated vascular-tree images when the 3D rotation image enhancement method was applied, whereas for clinical TOF-MRA images the segmentation performance consistently improved.
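The Dice coefficient used as the evaluation metric above has a standard definition (twice the overlap divided by the total foreground of both masks); a minimal sketch, with the function name `dice_coefficient` chosen here for illustration:

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-7):
    """Dice similarity coefficient between two binary segmentation masks.

    Returns 2|A ∩ B| / (|A| + |B|); eps guards against empty masks.
    """
    pred = np.asarray(pred, dtype=bool)
    target = np.asarray(target, dtype=bool)
    intersection = np.logical_and(pred, target).sum()
    return 2.0 * intersection / (pred.sum() + target.sum() + eps)
```

For example, masks overlapping in half their foreground voxels score 0.5, identical masks score 1, and disjoint masks score 0, matching the 0-to-1 scale of the Dice values reported in the abstract.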
Acknowledgments I
Chinese Abstract II
Abstract III
Table of Contents V
List of Figures VII
List of Tables VIII
Chapter 1 Introduction 1
1.1 Research Objectives 1
1.2 Literature Review 2
Chapter 2 Methods 7
2.1 Datasets 7
2.1.1 Type 1: Simulated vascular-tree images 7
2.1.2 Type 2: Clinical TOF-MRA images 8
2.2 Model Architecture 10
2.2.1 Experimental Setup 10
2.2.2 Model Training 11
2.3 Evaluation 15
Chapter 3 Results 16
3.1 Simulated Images 16
3.2 TOF-MRA Images 16
3.2.1 Model Training 16
3.2.2 Whole-Brain Prediction 18
Chapter 4 Conclusions 26
References 29

