
National Digital Library of Theses and Dissertations in Taiwan


Detailed Record

Graduate Student: 黃政憲 (HUANG, CHENG-HSIEN)
Title: 放射治療中基於深度學習的自動器官分割技術與解決方案
Title (English): Deep Learning Based Automatic Organ Segmentation Method and Integrated Solution Applied in Radiotherapy
Advisor: 劉偉名 (LIU, WEI-MIN)
Committee Members: 劉耿豪 (LIU, GENG-HAO), 劉偉名 (LIU, WEI-MIN), 張建禕 (Chang, Chein-I), 林金樹 (Lin, Chin-Su)
Oral Defense Date: 2019-07-22
Degree: Master's
University: 國立中正大學 (National Chung Cheng University)
Department: Graduate Institute of Computer Science and Information Engineering
Discipline: Engineering
Field: Electrical Engineering and Computer Science
Document Type: Academic thesis
Publication Year: 2019
Graduation Academic Year: 107
Language: Chinese
Pages: 56
Keywords (Chinese): 深度學習, 自動器官分割, 電腦斷層影像
Keywords (English): Deep learning, Automatic organ segmentation, Computed Tomography
Statistics:
  • Cited by: 0
  • Views: 685
  • Downloads: 7
  • Bookmarked: 0
In radiotherapy, delineating the organs at risk (OARs) is a time-consuming and labor-intensive yet very important task, and the organ contours must be drawn accurately.
In medical image analysis, images are highly complex and diverse, so traditional rule-based methods often fail to meet clinical requirements, while deep learning techniques have gradually broken through this bottleneck in recent years. In this thesis, we study deep learning segmentation techniques and apply them to actual clinical work.
In our experiments, we used a public liver segmentation dataset (3Dircadb) to validate several types of segmentation models, and we propose a fusion model. It uses a Convolutional LSTM layer to learn the spatial correlation between slices of a CT volume, and an attention mechanism to suppress irrelevant features in the complex image content and focus on useful information about the target organ; it achieved the highest Dice score on the test set. We also used the 2015 MICCAI public organ segmentation dataset to segment the stomach, showing that the fusion model still performs best on tasks where organ boundaries are harder to discriminate.
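The attention idea used in the fusion model can be illustrated with a minimal additive attention gate in the style of Attention U-Net: a coarse gating signal re-weights skip-connection features so that irrelevant regions are suppressed. This is a generic sketch, not the thesis's actual architecture; the scalar weights wx, wg, psi stand in for learned 1×1 convolutions.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def attention_gate(x, g, wx=1.0, wg=1.0, psi=1.0):
    """Additive attention gate: a coarse gating signal g re-weights the
    skip-connection feature map x so the decoder attends to the target
    organ. wx, wg, psi are illustrative stand-ins for learned weights."""
    q = np.maximum(wx * x + wg * g, 0.0)  # ReLU of the summed projections
    alpha = sigmoid(psi * q)              # per-pixel attention in (0, 1)
    return x * alpha                      # irrelevant regions are damped

# Feature responses: strong on the organ's diagonal, weak elsewhere
x = np.array([[4.0, 0.1], [0.1, 4.0]])
g = np.array([[4.0, -9.0], [-9.0, 4.0]])  # gating signal from a coarser layer
out = attention_gate(x, g)
```

Where both the features and the gating signal agree (the diagonal), the output stays close to the input; where the gating signal is negative, the response is halved or worse, which is the "focus on the target organ" behavior described above.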
The final experimental results show that, for clinical applications of deep learning, the diversity of the training dataset is crucial: for patients with special conditions, additional data with similar characteristics must be added for the model to predict accurately. Pre-training on a public dataset can improve test results, but a model applied directly to a different dataset still performs poorly.
We propose three feasible solutions to the practical problems of applying this technology in radiotherapy. Finally, the contours predicted by the deep learning model were successfully written into the DICOM-RT format compatible with the radiation oncology system of Dalin Tzu Chi Hospital, allowing physicians to fine-tune the predicted contours directly and saving clinical working time.

In radiotherapy procedures, delineating the organs at risk (OARs) is a time-consuming and laborious yet very important task that must be done accurately.
In the field of medical image analysis, image content is highly complex and variable, and traditional rule-based methods often fail to meet clinical requirements. In this work, we study deep learning segmentation techniques and apply them to clinical imaging systems.
In the experiments, we used the public liver segmentation dataset 3Dircadb to evaluate various types of segmentation models. Inspired by these models, we propose a fusion model that uses a Convolutional LSTM layer to learn the spatial correlation between slices of a CT volume. We also use an attention mechanism to suppress irrelevant features in the complex image content and focus on useful information about the target organ. This model achieved the highest Dice score on the test set. In addition, the 2015 MICCAI public organ segmentation dataset was used to segment the stomach; we show that the proposed fusion model still performs best even when the organ boundaries are more difficult to discriminate.
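The Dice score used as the evaluation metric above is the standard overlap measure 2|A∩B| / (|A| + |B|) between the predicted and reference masks. A minimal sketch of the generic formula (not the thesis's exact evaluation code):

```python
import numpy as np

def dice_score(pred, target, eps=1e-6):
    """Dice similarity coefficient 2|A∩B| / (|A| + |B|) between two
    binary segmentation masks; eps avoids division by zero when both
    masks are empty."""
    pred = np.asarray(pred, dtype=bool)
    target = np.asarray(target, dtype=bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

# One of two predicted pixels matches the single reference pixel:
print(dice_score([[1, 1], [0, 0]], [[1, 0], [0, 0]]))  # 2*1/(2+1) ≈ 0.667
```

A score of 1.0 means perfect overlap and 0.0 means no overlap; the same quantity, subtracted from 1, is commonly used as a training loss for unbalanced segmentation.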
The final experimental results show that the diversity of the training dataset is crucial for clinical applications of deep learning. For patients with special disease conditions, extra data with similar characteristics is required to make predictions more accurate. Pre-trained model parameters can improve test results, but performance remains poor when a model is applied directly to a different dataset.
We also propose three feasible solutions for applying deep learning methods in radiotherapy practice. They allowed us to write the predicted contours into the DICOM-RT format, a standard data format compatible with most medical imaging systems, including the radiation oncology system of Dalin Tzu Chi Hospital. Clinicians can therefore fine-tune the predicted contours directly, saving time for other clinical work.
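For context on the DICOM-RT step: an RT Structure Set stores each planar contour as a flat ContourData list of patient-coordinate triplets [x1, y1, z1, x2, y2, z2, ...]. The sketch below converts boundary pixels of one predicted axial mask slice into that layout, assuming a simple axis-aligned geometry where origin and spacing would come from the CT's ImagePositionPatient and PixelSpacing tags; the function name is hypothetical, and real data would additionally need the image orientation matrix.

```python
def mask_boundary_to_contour_data(boundary_points, origin, spacing, z_mm):
    """Flatten (row, col) boundary pixels of one axial slice into the
    [x1, y1, z1, x2, y2, z2, ...] list DICOM-RT stores in ContourData.
    Assumes an axis-aligned slice: x = origin_x + col * spacing_x, and
    y = origin_y + row * spacing_y, all in millimetres."""
    ox, oy = origin
    sx, sy = spacing
    data = []
    for row, col in boundary_points:
        data.extend([ox + col * sx, oy + row * sy, z_mm])
    return data

# Two boundary pixels at 1 mm spacing, on the slice located at z = 5 mm
contour = mask_boundary_to_contour_data([(0, 0), (0, 1)],
                                        origin=(0.0, 0.0),
                                        spacing=(1.0, 1.0),
                                        z_mm=5.0)
# contour == [0.0, 0.0, 5.0, 1.0, 0.0, 5.0]
```

Because treatment planning systems read this coordinate list rather than pixel masks, converting predictions into it is what lets physicians adjust the deep learning contours inside their existing tools.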


Table of Contents VI
List of Figures IX
List of Tables XI
Chapter 1 Introduction 1
1.1 Research Background 1
1.2 Motivation and Objectives 3
1.3 Contributions 3
1.4 Thesis Outline 4
Chapter 2 Related Work 6
2.1 Traditional Image Processing Methods 6
2.1.1 Automatic Model 6
2.1.2 Interactive Model 7
2.2 Deep Learning Methods 7
2.2.1 2D Model 8
2.2.2 3D Model 10
2.2.3 Sequential Model 12
2.2.4 Fusion Model 12
2.3 Performance Benchmark 13
Chapter 3 Experimental and Analysis Methods 15
3.1 Clinical Application Workflow 15
3.2 DICOM Data Processing 17
3.2.1 DICOM Reading and Writing 17
3.3 Deep Learning 19
3.3.1 2D Model 19
3.3.1-I U-Net Model 19
3.3.1-II DeepLab_v3+ Model 20
3.3.2 3D Model 21
3.3.2-I Volumetric ConvNet Model 21
3.3.3 Sequential Model 22
3.3.3-I Sensor 3D 22
3.3.4 Fusion Model 27
3.3.5 Loss Function 29
3.4 Clinical Application Solutions 30
3.4.1 Solution 1 31
3.4.1-I Usage Scenario 31
3.4.1-II Device Description 31
3.4.1-III Caveats 32
3.4.2 Solution 2 33
3.4.2-I Usage Scenario 33
3.4.2-II Device Description 33
3.4.2-III Caveats 34
3.4.3 Solution 3 35
3.4.3-I Usage Scenario 35
3.4.3-II Device Description 35
3.4.3-III Caveats 35
3.4.4 Summary of Solutions 36
Chapter 4 Experimental Results 37
4.1 Experimental Environment 37
4.2 Datasets 38
4.2.1 3Dircadb 38
4.2.2 Dalin Tzu Chi Hospital Radiation Oncology Dataset 39
4.2.3 Dataset Characteristics 39
4.3 Data Preprocessing 40
4.4 Evaluation Metrics 41
4.5 Model Validation 43
4.5.1 Test Results on the 3Dircadb Dataset 43
4.5.2 Test Results on the Dalin Tzu Chi Radiation Oncology Dataset 44
4.5.3 Test Results on the MICCAI-Stomach Dataset 49
Chapter 5 Conclusion and Future Work 51
References 53


