
National Digital Library of Theses and Dissertations in Taiwan


Detailed Record

Author: 劉彥均 (Yan-Jun Liu)
Title (Chinese): 結合資料擴增與映射資訊的循環生成對抗式網路:運用於自動染色體物件辨識模型的跨院驗證
Title (English): A CycleGAN with data augmentation and mapping information: application of the automatic chromosome detection model on cross-hospital validation
Advisors: 戴佳原 (Jia-Yuan Dai), 郭至恩 (Chih-En Kuo)
Committee Members: 梁勝富 (Sheng-Fu Liang), 江振國 (Chen-Kuo Chiang), 凃瀞珽 (Ching-Ting Tu)
Oral Defense Date: 2024-06-18
Degree: Master's
Institution: National Chung Hsing University
Department: Department of Applied Mathematics
Discipline: Mathematics and Statistics
Field: Mathematics
Thesis Type: Academic thesis
Year of Publication: 2024
Academic Year of Graduation: 112 (2023-2024)
Language: Chinese
Pages: 47
Keywords: cross-hospital chromosome classification, deep learning, style transfer, generative adversarial networks, data augmentation, mapping information
Usage Statistics:
  • Cited by: 0
  • Views: 15
  • Downloads: 0
  • Bookmarks: 0
Abstract (translated from Chinese):
In deep-learning-assisted medical image diagnosis, hospitals differ in their imaging technology, equipment, and operating staff, so their images differ as well. A deep learning model trained on a single dataset cannot be applied directly at every hospital. Improving model generalization so that one model works across hospitals, and thereby saving each hospital manpower and time, is therefore an important issue.
For the task of recognizing chromosomes in cross-hospital medical images, this study proposes a cycle-consistent generative adversarial network that combines data augmentation with mapping information (DAMI-CycleGAN). The model fuses techniques from three generative adversarial networks: CycleGAN, DistanceGAN, and GcGAN, to perform style transfer of chromosome images between hospitals. Style transfer reduces the performance loss that a source-domain chromosome object detection model suffers under cross-hospital validation, which reduces the dependence on annotating data from each hospital and makes deployment of the automatic detection model more efficient.
The experiments have two parts. The first validates the effectiveness of DAMI-CycleGAN's style transfer: a trained style classifier judges whether chromosome images from Linkou Chang Gung Memorial Hospital, after style transfer, are classified as Taichung Veterans General Hospital images. The second uses ablation experiments to verify DAMI-CycleGAN's contribution to performance; we also evaluate DAMI-CycleGAN separately on easy and difficult images to assess clinical applicability. In the first part, the style classifier judged all style-transferred Linkou Chang Gung images as having the Taichung Veterans General style, confirming that DAMI-CycleGAN can successfully convert the target-domain style into the source-domain style. In the second part, the model without fine-tuning on target-domain images reached an mAP50 of 79.73%, while DAMI-CycleGAN raised the detector's mAP50 to 93.79% (a gain of 14.06 points). Against the 18.18-point improvement headroom of a fine-tuned model over the baseline, this amounts to recovering about 77% (14.06 / 18.18) of the possible gain. Compared with the other ablation variants, its accuracy also improved more stably during training.
We further compared different style transfer strategies: style-transferring the Taichung Veterans General training set and retraining the detector performed slightly better than style-transferring the Linkou Chang Gung test set. In summary, DAMI-CycleGAN shows potential for improving the generalization of chromosome object detection models and offers an effective solution for cross-hospital chromosome detection. In the future, DAMI-CycleGAN could be applied to other medical images to speed model deployment.
Abstract (English):
In deep-learning-assisted medical image diagnosis, differences in imaging techniques, equipment, and operators across hospitals produce differences in the images themselves. A deep learning model trained on a single dataset therefore cannot be applied in every hospital setting. Improving the generalization of these models so that they remain applicable across hospitals, and thereby reducing each hospital's manpower and time costs, is an important issue.
This study proposes a cycle-consistent generative adversarial network that combines data augmentation and mapping information (DAMI-CycleGAN). DAMI-CycleGAN integrates techniques from three generative adversarial networks: CycleGAN, DistanceGAN, and GcGAN, to achieve style transfer of chromosome images between different hospitals. Style transfer reduces the performance loss that a source-domain chromosome object detection model suffers during cross-hospital validation, thereby reducing the dependence on data annotation from each hospital and improving the efficiency of deploying the automatic chromosome object detection model.
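The abstract names the three GAN techniques DAMI-CycleGAN combines but does not give its exact formulation. As a rough sketch, the three generator-side constraints in their standard published forms look like the following; all function names are illustrative, and the thesis's actual loss weights and transforms may differ:

```python
import numpy as np

def dami_cyclegan_losses(G_AB, G_BA, batch_A, rot90):
    """Sketch of the three loss terms the abstract says DAMI-CycleGAN combines.

    G_AB / G_BA map hospital domain A to B and back; rot90 is a fixed
    geometric transform used by the GcGAN term. Illustrative only.
    """
    fake_B = G_AB(batch_A)

    # CycleGAN: A -> B -> A should reconstruct the input (cycle consistency).
    cycle = np.abs(G_BA(fake_B) - batch_A).mean()

    # DistanceGAN: the distance between two input samples should be
    # preserved by the mapping (one-sided, no inverse pass required).
    d_in = np.abs(batch_A[0] - batch_A[1]).mean()
    d_out = np.abs(fake_B[0] - fake_B[1]).mean()
    distance = abs(d_in - d_out)

    # GcGAN: translating a rotated image should equal rotating the
    # translated image (geometry consistency).
    geometry = np.abs(G_AB(rot90(batch_A)) - rot90(fake_B)).mean()

    return cycle, distance, geometry
```

In a full training loop these terms would be weighted and added to the usual adversarial losses; the weighting scheme is not specified in the abstract.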
The experiments consist of two main parts. The first part validates the effectiveness of DAMI-CycleGAN's style transfer: a trained style classifier evaluates whether chromosome images from Linkou Chang Gung Memorial Hospital, after style transfer by DAMI-CycleGAN, are classified as resembling those from Taichung Veterans General Hospital. The second part uses ablation experiments to verify DAMI-CycleGAN's impact on performance; in addition, we evaluate DAMI-CycleGAN separately on simple and difficult images to explore its clinical applicability. The results of the first part show that the style classifier classified all style-transferred Linkou Chang Gung images as having the Taichung Veterans General style, confirming that DAMI-CycleGAN successfully converts the target-domain style into the source-domain style. The results of the second part show that without fine-tuning on target-domain images the detector's mAP50 is 79.73%, and DAMI-CycleGAN raises it to 93.79% (an improvement of 14.06 points). Relative to the 18.18-point improvement headroom that fine-tuning offers over the baseline, this corresponds to recovering about 77% (14.06 / 18.18) of the possible gain. Moreover, compared with the other ablation variants, DAMI-CycleGAN's accuracy improves more stably during training.
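The 77% figure is simply the mAP50 gain expressed as a share of the fine-tuning headroom; the arithmetic can be checked directly with the numbers quoted in the abstract:

```python
baseline_map50 = 79.73  # source-trained detector, no target fine-tuning
dami_map50 = 93.79      # same detector after DAMI-CycleGAN style transfer
headroom = 18.18        # gain available from full fine-tuning on target data

gain = dami_map50 - baseline_map50       # mAP50 gain in percentage points
recovered = gain / headroom * 100        # share of the fine-tuning headroom
print(round(gain, 2), round(recovered))  # 14.06 77
```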
In addition, we compared different style transfer strategies. Style-transferring the Taichung Veterans General Hospital training set and retraining the object detection model yields slightly better results than style-transferring the Linkou Chang Gung Memorial Hospital test set. In summary, DAMI-CycleGAN demonstrates potential for enhancing the generalization ability of chromosome object detection models, providing an effective solution for cross-hospital chromosome detection. In the future, DAMI-CycleGAN could be applied to other medical images to accelerate model deployment.
Abstract (Chinese) ii
Abstract (English) iv
Table of Contents vi
List of Figures viii
List of Tables x
Chapter 1. Introduction 1
1.1 Motivation 1
1.2 Literature Review 2
1.3 Objectives 3
Chapter 2. Data Sources and Methods 4
2.1 Data Sources 4
2.2 Domain Adaptation Methods for Object Detection 9
2.2.1 Domain-Invariant Feature Learning 9
2.2.2 Mean Teacher 10
2.2.3 Image Style Transfer 11
2.3 CycleGAN with Data Augmentation and Mapping Information 12
2.3.1 CycleGAN 12
2.3.2 DistanceGAN 15
2.3.3 GcGAN (Geometry-Consistent GAN) 18
2.3.4 DAMI-CycleGAN 21
2.4 Evaluation Metrics 23
Chapter 3. Experimental Results 26
3.1 Training Parameters and Experimental Setup 26
3.2 Performance Evaluation of the Style Transfer Model 27
3.3 Object Detection Performance with Style Transfer 34
Chapter 4. Discussion 39
4.1 Ablation Study 39
4.2 Style Transfer Strategies 41
Chapter 5. Conclusion 45
References 46
Electronic Full Text: available online from 2026-08-31