臺灣博碩士論文加值系統 (National Digital Library of Theses and Dissertations in Taiwan)

Author: 王景鴻
Author (romanized): WANG, JING-HONG
Title: 基於視覺轉換器之針型共焦雷射顯微內視鏡胰臟囊性病變視訊分類
Title (English): Classification of Pancreatic Cystic Lesions in Needle-Based Confocal Laser Endomicroscopy Videos Based on Vision Transformer
Advisor: 張軒庭
Advisor (romanized): CHANG, HSUAN-TING
Committee: 許志仲, 彭徐鈞, 何前程
Committee (romanized): HSU, CHIH-CHUNG; PENG, SYU-JYUN; HO, CHIAN-CHENG
Oral defense date: 2024-07-04
Degree: Master's
Institution: National Yunlin University of Science and Technology
Department: Department of Electrical Engineering
Discipline: Engineering
Field: Electrical and Computer Engineering
Document type: Academic thesis
Publication year: 2024
Graduation academic year: 112 (2023-2024)
Language: Chinese
Pages: 69
Keywords (Chinese): 醫學影像分類; 針型共焦雷射顯微內視鏡; 胰臟囊性病變; 視覺轉換器; 多實例學習; 幀差異
Keywords (English): Medical image classification; Needle-based confocal laser endomicroscopy; Pancreatic cystic lesions; Vision Transformer; Multiple Instance Learning; Frame difference
The goal of this study is to improve the accuracy with which a deep-learning Vision Transformer (ViT) model classifies pancreatic cystic lesions into five categories in needle-based confocal laser endomicroscopy videos. To address problems in the video dataset such as blur, indistinct features, and overlaid text, we apply manual frame selection, Contrast Limited Adaptive Histogram Equalization (CLAHE), and region-of-interest (ROI) cropping, and we augment the data by rotating frames in 15-degree increments. The model output is modified to have two multilayer perceptron (MLP) heads, one for classification (CLS) and one for regression (REG). The classification head keeps the model trained to predict the class label of each frame, while the regression head uses bag-level labels provided by the multiple instance learning (MIL) concept. The label values are set by considering the local lesion distribution of each frame, and a shuffle technique lets the model train on both original and recombined frames, depending on the training combination.
In the test stage, ROI cropping again removes unnecessary information from each frame. To further improve classification accuracy, CLAHE is applied to every frame of each video for contrast and feature enhancement, and frame differencing is used to remove fast-moving, less informative frames, preventing these unimportant frames from causing a video to be misclassified.
Finally, different loss functions and hyperparameters are used in three training combinations: one uses only the classification loss function, while the others combine it with the regression loss function. On the 18 test videos, the three combinations achieve classification accuracies of 67%, 67%, and 78%, respectively.


In this study, we aim to improve the classification accuracy for five categories of pancreatic cystic lesions in needle-based confocal laser endomicroscopy videos using a Vision Transformer (ViT) model. We use manual selection, contrast-limited adaptive histogram equalization (CLAHE), and region-of-interest (ROI) schemes to mitigate the blur, unclear features, and overlaid text in the video dataset. In addition, rotation in 15-degree increments is used to augment the data. The model output is adjusted to use two multilayer perceptrons, one for classification and one for regression. The classification head maintains the model training and predicts frame-level class labels, while the regression head uses the multiple instance learning concept to provide bag-level labels. The label value is set by considering the local lesion distribution of each frame, combined with a shuffle scheme; the model then trains on original frames and reorganized frames according to the training combination.
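The ROI and rotation-augmentation steps above can be sketched as follows; the function names, the 0-345 degree angle range, and the circular field-of-view assumption are ours for illustration, not taken from the thesis:

```python
import math

def rotation_angles(step_deg=15):
    """Enumerate the rotation angles used for data augmentation,
    one every 15 degrees (24 augmented copies per frame)."""
    return list(range(0, 360, step_deg))

def circular_roi_mask(h, w, radius=None):
    """Boolean mask keeping only a circular field of view, so that
    overlaid text and borders outside the probe image are removed.
    Assumes the ROI is a centered circle, which is an illustrative
    simplification of the actual ROI scheme."""
    cy, cx = h / 2.0, w / 2.0
    r = radius if radius is not None else min(h, w) / 2.0
    return [[math.hypot(y - cy, x - cx) <= r for x in range(w)]
            for y in range(h)]
```

Each training frame would then be masked and rotated by every angle in the list before CLAHE is applied.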
In the test stage, the same ROI scheme removes unnecessary information from each frame to improve classification accuracy. In addition, we apply CLAHE to each frame to enhance image contrast and features. We also remove unimportant frames with large frame differences, which correspond to fast probe motion, to avoid misjudgments that could cause a video to be categorized into the wrong class.
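A minimal sketch of the frame-difference filtering idea, assuming a mean-absolute-difference measure between consecutive grayscale frames and a fixed threshold (both are our assumptions; the thesis's exact measure and threshold are not given here):

```python
def mean_abs_diff(frame_a, frame_b):
    """Mean absolute intensity difference between two grayscale frames,
    given as flat lists of equal length."""
    return sum(abs(a - b) for a, b in zip(frame_a, frame_b)) / len(frame_a)

def filter_fast_frames(frames, threshold):
    """Keep a frame only if its difference from the previous frame is
    below the threshold, i.e. drop fast-moving, less informative frames
    before video-level classification."""
    kept = [frames[0]]
    for prev, cur in zip(frames, frames[1:]):
        if mean_abs_diff(prev, cur) < threshold:
            kept.append(cur)
    return kept
```

For example, a frame whose mean intensity jumps by more than the threshold relative to its predecessor is discarded, so only stable frames contribute to the final class decision.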
Finally, different loss functions and hyperparameters are used for three training combinations: one uses only the classification loss function, while the others combine it with the regression loss function. On the 18 test videos, classification accuracies of 67%, 67%, and 78% are measured, respectively.
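One way the classification and regression heads could be trained jointly is a weighted sum of a cross-entropy term and a squared-error term on the bag-level lesion ratio; the weight `alpha` and the function names below are hypothetical, sketched only to illustrate the combined-loss idea:

```python
import math

def cross_entropy(probs, true_idx):
    """Classification loss for one frame: negative log-probability
    assigned to the true class (probs is a 5-way softmax output)."""
    return -math.log(probs[true_idx])

def combined_loss(probs, true_idx, pred_ratio, bag_ratio, alpha=1.0):
    """Hypothetical joint objective: frame-level cross-entropy plus a
    weighted squared error between the regression head's predicted
    lesion ratio and the MIL bag-level label."""
    reg = (pred_ratio - bag_ratio) ** 2
    return cross_entropy(probs, true_idx) + alpha * reg
```

With a uniform 5-class prediction and a perfect ratio estimate, only the cross-entropy term contributes; a ratio error adds its squared magnitude scaled by `alpha`.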

Abstract (Chinese)
Abstract (English)
Acknowledgments
Table of Contents
List of Tables
List of Figures
Chapter 1 Introduction
1.1 Research Motivation
1.2 Research Objectives
1.3 Research Methods
1.4 Thesis Organization
Chapter 2 Background and Related Work
2.1 Confocal Laser Endomicroscopy
2.2 Needle-Based Confocal Laser Endomicroscopy
2.3 Five Diagnostic Criteria of Pancreatic Cystic Lesions
2.4 The Transformer Model
2.5 The Vision Transformer (ViT) Model
2.5.1 Self-Attention and Multi-Head Attention
Chapter 3 Methods
3.1 Video Sources
3.2 System Architecture and Workflow
3.3 Selection of Training Videos and Frames
3.4 Data Preprocessing
3.4.1 Mask Labeling
3.4.2 CLAHE Feature Enhancement
3.4.3 ROI and Rotation Data Augmentation
3.4.4 Patch Splitting and Shuffle
3.4.5 Hard Labels / Soft Labels
3.4.6 Computing the Lesion Ratio (LR)
3.4.7 Multiple Instance Learning (MIL)
3.5 Vision Transformer (ViT) Output Adjustment
3.6 Frame Difference
Chapter 4 Experimental Results and Discussion
4.1 Experimental Environment
4.2 Experimental Data
4.3 Experimental Setup and Parameters
4.4 ViT-b Test Results
4.4.1 Test Results of ViT-b Training Combination One
4.4.2 Test Results of ViT-b Training Combination Two
4.4.3 Test Results of ViT-b Training Combination Three
4.4.4 All Test Results with Frame Difference
4.4.5 Comparison and Analysis of Classification Accuracy with [44]
4.5 Discussion of Experimental Results
Chapter 5 Conclusions and Future Work
References


[1] Analysis of the 2023 (ROC year 112) cause-of-death statistics. Retrieved April 13, 2024, from the Department of Statistics, Ministry of Health and Welfare: https://dep.mohw.gov.tw/DOS/lp-5069-113-xCat-y112.html.
[2] T. Viriyasaranon, J. W. Chen, Y. H. Koh, J. H. Cho, M. K. Jung, S. H. Kim, H. J. Kim, W. J. Lee, J. H. Choi, and S. M. Woo, “Annotation-Efficient Deep Learning Model for Pancreatic Cancer Diagnosis and Classification Using CT Images: A Retrospective Diagnostic Study,” Cancers (Basel), vol. 15, no. 13, Jul. 2023.
[3] X. Li, R. Guo, J. Lu, T. Chen, and X. Qian, “Causality-Driven Graph Neural Network for Early Diagnosis of Pancreatic Cancer in Non-Contrast Computerized Tomography,” IEEE Trans Med Imaging, vol. 42, no. 6, pp. 1656–1667, Jun. 2023.
[4] A. H. Shnawa, G. Mohammed, M. R. Hadi, K. Ibrahim, M. M. Adnan, and W. Hameed, “Optimal Elman Neural Network for Pancreatic Cancer Classification Using Computed Tomography Images,” in 6th Iraqi International Conference on Engineering Technology and its Applications, IICETA 2023, Institute of Electrical and Electronics Engineers Inc., pp. 689–695, 2023.
[5] K. Si, Y. Xue, X. Yu, X. Zhu, Q. Li, W. Gong, T. Liang, S. Duan, “Fully end-to-end deep-learning-based diagnosis of pancreatic tumors,” Theranostics, vol. 11, no. 4, pp. 1982–1990, 2021.
[6] H. Li, M. Reichert, K. Lin, N. Tselousov, R. Braren, D. Fu, R. Schmid, J. Li, B. Menze, and K. Shi, “Differential diagnosis for pancreatic cysts in CT scans using densely-connected convolutional networks,” in 2019 41st Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), IEEE, pp. 2095-2098, 2019.
[7] K.-L. Liu, T. Wu, P.-T Chen, Y.M. Tsai, H. Roth, M.-S. Wu, W.-C. Liao, W. Wang, “Deep learning to distinguish pancreatic cancer tissue from non-cancerous pancreatic tissue: a retrospective study with cross-racial external validation,” Lancet Digit Health, vol. 2, no. 6, pp. e303–e313, Jun. 2020.
[8] F. P. Salanitri, G. Bellitto, S. Palazzo, I. Irmakci, M. Wallace, C. Bolan, M. Engels, S. Hoogenboom, M. Aldinucci, U. Bagci, D. Giordano, C. Spampinato, “Neural Transformers for Intraductal Papillary Mucosal Neoplasms (IPMN) Classification in MRI images,” in Proceedings of the Annual International Conference of the IEEE Engineering in Medicine and Biology Society, EMBS, IEEE, pp. 475–479, 2022.
[9] N. Xiao, Z. Li, S. Chen, L. Zhao, Y. Yang, H. Xie, Y. Liu, Y. Quan, and J. Duan, “Contrast-enhanced CT Image Synthesis of Thyroid Based on Transformer and Texture Branching,” 5th International Conference on Artificial Intelligence and Big Data (ICAIBD), IEEE, Chengdu, China, pp. 94–100, 2022.
[10] J. Butke, T. Frick, F. Roghmann, S. F. El-Mashtoly, K. Gerwert, and A. Mosig, “End-to-end Multiple Instance Learning for Whole-Slide Cytopathology of Urothelial Carcinoma,” In MICCAI Workshop on Computational Pathology, 57-68, 2021.
[11] H. Li, F. Yang, Y. Zhao, X. Xing, J. Zhang, M. Gao, J. Huang, L. Wang, J. Yao, “DT-MIL: deformable transformer for multi-instance learning on histopathological image,” In International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI), pp. 206–216, 2021.
[12] Z. Shao, H. Bian, Y. Chen, Y. Wang, J. Zhang, X. Ji, and Y. Zhang, “TransMIL: Transformer based Correlated Multiple Instance Learning for Whole Slide Image Classification,” Advances in Neural Information Processing Systems, vol. 34, pp. 2136–2147, Jun. 2021.
[13] C. Hou, Q. Sun, W. Wang, and J. Zhang, “Shuffle Attention Multiple Instances Learning for Breast Cancer Whole Slide Image Classification,” in 2022 IEEE International Conference on Image Processing (ICIP), IEEE, pp. 466–470, Oct. 2022.
[14] T. Zhang, Y. Feng, Y. Feng, Y. Zhao, Y. Lei, N. Ying, Z. Yan, Y. He, G. Zhang, “Shuffle instances-based vision transformer for pancreatic cancer ROSE image classification,” arXiv:2208.06833, Aug. 2022.
[15] B. Napoleon, M. Palazzo, A. I. Lemaistre, F. Caillol, L. Palazzo, A. Aubert, L. Buscail, F. Maire, B. M. Morellon, B. Pujol, and M. Giovannini, “Needle-based confocal laser endomicroscopy of pancreatic cystic lesions: a prospective multicenter validation study in patients with definite diagnosis,” Endoscopy, vol. 51, no. 9, pp. 825–835, Sep. 2019.
[16] S. G. Krishna, W. R. Brugge, J. M. Dewitt, P. Kongkam, B. Napoleon, C. Robles-Medranda, D. Tan, S. El-Dika, S. McCarthy, J. Walker, M. E. Dillhoff, A. Manilchuk, C. Schmidt, B. Swanson, Z. K. Shah, P. A. Hart, and D. L. Conwell, “Needle-based confocal laser endomicroscopy for the diagnosis of pancreatic cystic lesions: an international external interobserver and intraobserver study (with videos),” Gastrointestinal Endoscopy, vol. 86, no. 4, pp. 644–654, Oct. 2017.
[17] H. Neumann, R. Kiesslich, M. B. Wallace, and M. F. Neurath, “Confocal laser endomicroscopy: technical advances and clinical applications,” Gastroenterology, vol. 139, no. 2, pp. 388-392, 1 Aug. 2010.
[18] S. S. Chauhan, B. K. Abu Dayyeh, Y. M. Bhat, K. T. Gottlieb, J. H. Hwang, S. Komanduri, V. Konda, S. K. Lo, M. A. Manfredi, J. T. Maple, F. M. Murad, U. D. Siddiqui, S. Banerjee, and M. B. Wallace, “Confocal laser endomicroscopy,” Gastrointest Endosc, vol. 80, no. 6, pp. 928–938, Dec. 2014.
[19] A. Villard, I. Breuskin, O. Casiraghi, S. Asmandar, C. Laplace-Builhe, M. Abbaci, A. Moya Plana, “Confocal laser endomicroscopy and confocal microscopy for head and neck cancer imaging: Recent updates and future perspectives,” Oral Oncology, vol. 127. Elsevier Ltd, Apr. 01, 2022.
[20] M.S. Bhutani, P. Koduru, V. Joshi, J.G. Karstensen, A. Saftoiu, P. Vilmann, M. Giovannini, “EUS-Guided Needle-Based Confocal Laser Endomicroscopy: A Novel Technique With Emerging Applications,” Gastroenterol Hepatol (N Y), vol. 11, no. 4, pp. 235–40, Apr. 2015.
[21] M.-I. Costache, S. Iordache, J. Karstensen, A. Saftoiu, and P. Vilmann, “Endoscopic ultrasound-guided fine needle aspiration: From the past to the future,” Endosc Ultrasound, vol. 2, no. 2, p. 77, 2013.
[22] S. Hao, W. Ding, Y. Jin, Y. Di, F. Yang, H. He, H. Li, C. Jin, D. Fu, and L. Zhong, “Appraisal of EUS-guided needle-based confocal laser endomicroscopy in the diagnosis of pancreatic lesions: A single Chinese center experience,” Endosc Ultrasound, vol. 9, no. 3, pp. 180–186, May-Jun 2020.
[23] P. Kongkam, R. Pittayanon, P. Sampatanukul, P. Angsuwatcharakon, S. Aniwan, P. Prueksapanich, V. Sriuranpong, P. Navicharern, S. Treeprasertsuk, P. Kullavanijaya, and R. Rerknimitr, “Endoscopic ultrasound-guided needle-based confocal laser endomicroscopy for diagnosis of solid pancreatic lesions (ENES): a pilot study,” Endoscopy International Open, vol. 4, no. 1, E17-E23, Jan. 2016.
[24] B. Napoleon, S. G. Krishna, B. Marco, D. Carr-Locke, K. J. Chang, À. Ginès, F. G. Gress, A. Larghi, K. W. Oppong, L. Palazzo, P. Kongkam, C. Robles-Medranda, D. Sejpal, D. Tan, and W. R. Brugge, “Confocal endomicroscopy for evaluation of pancreatic cystic lesions: a systematic review and international Delphi consensus report,” Endoscopy International Open, vol. 8, no. 11, E1566-E1581, Nov. 2020.
[25] J. Guo, M. S. Bhutani, M. Giovannini, Z. Li, Z. Jin, A. Yang, G. Xu, G. Wang, S. Sun, “Can endoscopic ultrasound-guided needle-based confocal laser endomicroscopy replace fine-needle aspiration for pancreatic and mediastinal diseases?” Endoscopic Ultrasound, vol. 6, no. 6, pp. 376–381, Nov. 01, 2017.
[26] Advanced endoscopy: ERCP and EUS. From the GutWorks website: https://www.gutworks.com.au/endoscopy-ercp-eus-procedure-murdoch-perth.
[27] E. A. F. Piñeros, H. J. Cardona, K. Karia, A. Sethi, and M. Kahaleh, “Utility of Probe-based (Cellvizio) Confocal Laser Endomicroscopy in Gastroenterology,” Rev Col Gastroenterol, vol.30, no.3, pp.298-314, 2015.
[28] M. G. Keane, N. Wehnert, M. Perez-Machado, G. K. Fusai, D. Thorburn, K. W. Oppong, N. Carroll, A. J. Metz, and S. P. Pereira, “A prospective trial of CONfocal endomicroscopy in CYSTic lesions of the pancreas: CONCYST-01,” Endoscopy International Open, vol. 7, no. 9, E1117-E1122, Sep. 2019.
[29] N. D. Pilonis, W. Januszewicz, and M. di Pietro, “Confocal laser endomicroscopy in gastro-intestinal endoscopy: Technical aspects and clinical applications,” Translational Gastroenterology and Hepatology, vol. 7. AME Publishing Company, Jan. 01, 2022.
[30] S. G. Krishna, B. Swanson, P. A. Hart, S. El-Dika, J. P. Walker, S. T. McCarthy, A. Malli, Z. K. Shah, and D. L. Conwell, “Validation of diagnostic characteristics of needle based confocal laser endomicroscopy in differentiation of pancreatic cystic lesions,” Endoscopy International Open, vol. 4, no. 11, pp. E1124-E1135, Nov. 2016.
[31] S. G. Krishna, R. M. Modi, A. K. Kamboj, B. J. Swanson, P. A. Hart, M. E. Dillhoff, A. Manilchuk, C. R. Schmidt, D. L. Conwell, “In vivo and ex vivo confocal endomicroscopy of pancreatic cystic lesions: A prospective study,” World Journal of Gastroenterology, vol. 23, no. 18. Baishideng Publishing Group Co, pp. 3338–3348, May 14, 2017.
[32] S. G. Krishna and J. H. Lee, “Appraisal of needle-based confocal laser endomicroscopy in the diagnosis of pancreatic cysts,” World Journal of Gastroenterology, vol. 22, no. 4. Baishideng Publishing Group Inc, pp. 1701–1710, Jan. 28, 2016.
[33] R. M. Modi, A. K. Kamboj, B. Swanson, D. L. Conwell, and S. G. Krishna, “Novel technique for diagnosis of mucinous cystic neoplasms: in vivo and ex vivo confocal laser endomicroscopy,” VideoGIE, vol. 2, no. 3, pp. 55–56, Mar. 2017.
[34] W. Chen, N. Ahmed, and S. G. Krishna, “Pancreatic Cystic Lesions: A Focused Review on Cyst Clinicopathological Features and Advanced Diagnostics,” Diagnostics, vol. 13, no. 1. Multidisciplinary Digital Publishing Institute (MDPI), Jan. 01, 2023.
[35] A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, L. Kaiser, I. Polosukhin, “Attention Is All You Need,” Neural Information Processing Systems, pp. 5998-6008, Dec. 2017.
[36] A. Dosovitskiy, L. Beyer, A. Kolesnikov, D. Weissenborn, X. Zhai, T. Unterthiner, M. Dehghani, M. Minderer, G. Heigold, S. Gelly, J. Uszkoreit, and N. Houlsby, “An image is worth 16x16 words: Transformers for image recognition at scale,” arXiv preprint arXiv:2010.11929, 2020.
[37] H.-Y. Lee (李宏毅), Machine Learning 2021 Spring course materials: https://speech.ee.ntu.edu.tw/~hylee/ml/2021-spring.php.
[38] T. G. Dietterich, R. H. Lathrop, and T. Lozano-Pérez, “Solving the multiple instance problem with axis-parallel rectangles,” Artificial Intelligence, vol. 89, no. 1–2, pp. 31–71, 1997.
[39] M. Ilse, J. M. Tomczak, and M. Welling, “Attention-based Deep Multiple Instance Learning,” In: International conference on machine learning. PMLR. p. 2127-2136, 2018.
[40] G. Quellec, G. Cazuguel, B. Cochener, and M. Lamard, “Multiple-Instance Learning for Medical Image and Video Analysis,” IEEE Reviews in Biomedical Engineering, vol. 10. Institute of Electrical and Electronics Engineers, pp. 213–234, 2017.
[41] K. Uehara, W. Uegami, H. Nosato, M. Murakawa, J. Fukuoka, and H. Sakanashi, “Evidence Dictionary Network Using Multiple Instance Contrastive Learning for Explainable Pathological Image Analysis,” in Proceedings of the International Symposium on Biomedical Imaging (ISBI), IEEE, pp. 1–5, 2023.
[42] Z. Sha and J. Li, “MITformer: A Multiinstance Vision Transformer for Remote Sensing Scene Classification,” IEEE Geoscience and Remote Sensing Letters, vol. 19, pp. 1–5, 2022.
[43] W.-J. Zhang and Z.-H. Zhou, “Multi-Instance Learning with Distribution Change,” Proceedings of the AAAI Conference on Artificial Intelligence, vol. 28, no. 1, Jun. 2014.
[44] 周易凱, “Classification of Pancreatic Cystic Carcinogenesis in Confocal Laser Endomicroscopy Videos Based on Computer Vision and Deep Learning Networks,” Master’s thesis, Department of Electrical Engineering, National Yunlin University of Science and Technology, 2022.
