臺灣博碩士論文加值系統 (National Digital Library of Theses and Dissertations in Taiwan)

Detailed Record

Author: 陳怡君 (CHEN, YI-CHUN)
Title: Intra-operative tumor margin evaluation in breast-conserving surgery
Advisor: 黃育仁 (HUANG, YU-LEN)
Committee members: 黃育仁 (HUANG, YU-LEN), 吳士駿 (WU, SHYH-TSUN), 張嘉仁 (CHANG, CHIA-JEN)
Oral defense date: 2019-06-17
Degree: Master's
Institution: 東海大學 (Tunghai University)
Department: Computer Science and Information Engineering
Discipline: Engineering
Field: Electrical and Computer Engineering
Thesis type: Academic thesis
Publication year: 2019
Graduation academic year: 107 (2018–2019)
Language: English
Pages: 50
Keywords (Chinese, translated): breast cancer, breast-conserving surgery, tumor margin evaluation, mammography, deep learning, image segmentation
Keywords (English): breast cancer, breast-conserving therapy, tumor margin evaluation, specimen mammography, deep learning, image segmentation
Usage statistics:
  • Cited by: 0
  • Views: 312
  • Downloads: 8
  • Bookmarked: 0
Breast cancer is the most commonly diagnosed cancer in women, but with early detection and timely treatment its cure rate can be improved substantially. Breast-conserving therapy (BCT) followed by postoperative radiotherapy is the treatment of choice for early-stage breast cancer: it preserves the appearance of the breast while keeping the likelihood of recurrence low. For any malignant tumor, however, a positive resection margin increases the risk of local recurrence after BCT, so reducing the number of positive margins requires giving surgeons real-time intra-operative information on their presence. This thesis proposes an intra-operative tumor margin evaluation scheme based on specimen mammography in breast-conserving surgery. The proposed method first applies image thresholding to extract the region of interest, then segments the tumor using three conventional methods (multi-thresholding, K-means clustering, and region growing) and two deep learning networks, and finally evaluates the width of the normal-tissue margin surrounding the tumor. With this information, surgeons can better ensure clean margins when performing breast-conserving surgery. A total of 30 cases were evaluated, and the results were compared with tumor contours drawn manually by physicians and with the pathology reports. The experimental results show that the deep learning networks produce segmentations more consistent with the pathology reports than the conventional methods do. With the aid of deep learning techniques, the proposed scheme has the potential to serve as an intra-operative measurement system.
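The pipeline the abstract describes — threshold out the specimen ROI, cluster intensities to isolate the tumor, then measure the surrounding margin — can be sketched as follows. This is a minimal NumPy illustration on a synthetic image, not the thesis's implementation: the two-cluster scalar K-means and the brute-force distance measurement are simplifications of the methods detailed in Chapter 2, and all intensities and sizes are invented for the example.

```python
import numpy as np

def kmeans_1d(values, k=2, iters=20):
    """Lloyd's algorithm on scalar intensities; centers seeded at min/max."""
    centers = np.linspace(values.min(), values.max(), k)
    for _ in range(iters):
        labels = np.argmin(np.abs(values[:, None] - centers[None, :]), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = values[labels == j].mean()
    return labels, centers

def boundary(mask):
    """Mask pixels with at least one 4-neighbor outside the mask."""
    p = np.pad(mask, 1)  # pad with False
    interior = p[:-2, 1:-1] & p[2:, 1:-1] & p[1:-1, :-2] & p[1:-1, 2:]
    return mask & ~interior

# Synthetic 64x64 "specimen image": dark background, mid-gray specimen
# disc of radius 28, bright tumor core of radius 10 (illustrative only).
yy, xx = np.mgrid[:64, :64]
r2 = (yy - 32) ** 2 + (xx - 32) ** 2
img = np.where(r2 < 10 ** 2, 0.9, np.where(r2 < 28 ** 2, 0.4, 0.0))

# Step 1: a global threshold separates the specimen ROI from the background.
roi = img > 0.1

# Step 2: K-means on ROI intensities; the brightest cluster is taken as tumor.
labels, centers = kmeans_1d(img[roi])
seg = np.zeros_like(roi)
seg[roi] = labels == np.argmax(centers)

# Step 3: margin width = smallest distance (in pixels) from any tumor pixel
# to the specimen boundary; a known pixel density converts this to mm.
tumor_px = np.argwhere(seg).astype(float)
edge_px = np.argwhere(boundary(roi)).astype(float)
dists = np.linalg.norm(tumor_px[:, None, :] - edge_px[None, :, :], axis=-1)
margin_px = dists.min()
print(f"margin width: {margin_px:.1f} px")
```

For this synthetic disc the measured margin comes out just under the nominal 28 − 10 = 18 pixels; in the thesis the same quantity is computed on specimen mammograms and validated against physician-drawn contours and pathology reports.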
Abstract (Chinese) i
Abstract (English) ii
Acknowledgements iii
List of Figures v
List of Tables viii
CHAPTER 1 INTRODUCTION 1
CHAPTER 2 MATERIAL AND METHODS 4
2.1 Data acquisition 4
2.2 Flow chart of the proposed method 5
2.3 Measurement of pixel density 6
2.4 Specimen boundary detection and ROI extraction 7
2.5 Tumor boundary detection 11
2.5.1 Multi-thresholding 11
2.5.2 K-means clustering 14
2.5.3 Region growing 16
2.5.4 U-Net 18
2.5.5 SegNet 20
2.6 Margin width evaluation 22
CHAPTER 3 RESULTS 23
CHAPTER 4 CONCLUSIONS 38
References 39

[1]R. K. Benda, N. P. Mendenhall, D. S. Lind, J. C. Cendan, B. F. Shea, L. C. Richardson, et al., "Breast-conserving therapy (BCT) for early-stage breast cancer," J Surg Oncol, vol. 85, pp. 14-27, Jan 2004.
[2]K. K. Hunt, B. D. Smith, and E. A. Mittendorf, "The Controversy Regarding Margin Width in Breast Cancer: Enough is Enough," Annals of Surgical Oncology, vol. 21, pp. 701-703, Mar 2014.
[3]F. T. Nguyen, A. M. Zysk, E. J. Chaney, J. G. Kotynek, U. J. Oliphant, F. J. Bellafiore, et al., "Intraoperative evaluation of breast tumor margins with optical coherence tomography," Cancer Res, vol. 69, pp. 8790-6, Nov 15 2009.
[4]F. Schnabel, S. K. Boolbol, M. Gittleman, T. Karni, L. Tafra, S. Feldman, et al., "A randomized prospective study of lumpectomy margin assessment with use of MarginProbe in patients with nonpalpable breast malignancies," Ann Surg Oncol, vol. 21, pp. 1589-95, May 2014.
[5]D. W. Shipp, E. A. Rakha, A. A. Koloydenko, R. D. Macmillan, I. O. Ellis, and I. Notingher, "Intra-operative spectroscopic assessment of surgical margins during breast conserving surgery," Breast Cancer Res, vol. 20, p. 69, Jul 9 2018.
[6]M. Koller, S. Q. Qiu, M. D. Linssen, L. Jansen, W. Kelder, J. de Vries, et al., "Implementation and benchmarking of a novel analytical framework to clinically evaluate tumor-specific fluorescent tracers," Nat Commun, vol. 9, p. 3739, Sep 18 2018.
[7]M. Adhi and J. S. Duker, "Optical coherence tomography--current and future applications," Curr Opin Ophthalmol, vol. 24, pp. 213-21, May 2013.
[8]M. Terashima, H. Kaneda, and T. Suzuki, "The role of optical coherence tomography in coronary intervention," Korean J Intern Med, vol. 27, pp. 1-12, Mar 2012.
[9]T. H. Tsai, J. G. Fujimoto, and H. Mashimo, "Endoscopic Optical Coherence Tomography for Clinical Gastroenterology," Diagnostics (Basel), vol. 4, pp. 57-93, May 5 2014.
[10]J. T. McCormick, A. J. Keleher, V. B. Tikhomirov, R. J. Budway, and P. F. Caushaj, "Analysis of the use of specimen mammography in breast conservation therapy," Am J Surg, vol. 188, pp. 433-6, Oct 2004.
[11]R. C. Gonzalez and R. E. Woods, Digital Image Processing (4th Edition): Pearson, 2017.
[12]E. R. Dougherty and J. T. Astola, An Introduction to Nonlinear Image Processing: SPIE Press, 1994.
[13]J. A. Hartigan and M. A. Wong, "Algorithm AS 136: A K-Means Clustering Algorithm," Journal of the Royal Statistical Society. Series C (Applied Statistics), vol. 28, pp. 100-108, 1979.
[14]R. Adams and L. Bischof, "Seeded region growing," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 16, pp. 641-647, 1994.
[15]O. Ronneberger, P. Fischer, and T. Brox, "U-Net: Convolutional Networks for Biomedical Image Segmentation," in MICCAI, 2015, pp. 234-241.
[16]V. Badrinarayanan, A. Kendall, and R. Cipolla, "SegNet: A Deep Convolutional Encoder-Decoder Architecture for Image Segmentation," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 39, pp. 2481-2495, 2017.
[17]A. Elmoufidi, K. El Fahssi, S. Jai-andaloussi, A. Sekkaki, Q. Gwenole, and M. Lamard, "Anomaly classification in digital mammography based on multiple-instance learning," Iet Image Processing, vol. 12, pp. 320-328, Mar 2018.
[18]H. Zhao, J. Shi, X. Qi, X. Wang, and J. Jia, "Pyramid Scene Parsing Network," arXiv preprint arXiv:1612.01105, 2016.
[19]G. Lin, A. Milan, C. Shen, and I. Reid, "RefineNet: Multi-Path Refinement Networks for High-Resolution Semantic Segmentation," arXiv preprint arXiv:1611.06612, 2016.
[20]L.-C. Chen, G. Papandreou, I. Kokkinos, K. Murphy, and A. L. Yuille, "DeepLab: Semantic Image Segmentation with Deep Convolutional Nets, Atrous Convolution, and Fully Connected CRFs," arXiv preprint arXiv:1606.00915, 2016.
[21]Z. Meng, Z. Fan, Z. Zhao, and F. Su, "ENS-Unet: End-to-End Noise Suppression U-Net for Brain Tumor Segmentation," in 2018 40th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), 2018, pp. 5886-5889.
[22]J. Chang, X. Zhang, J. Chang, M. Ye, D. Huang, P. Wang, et al., "Brain Tumor Segmentation Based on 3D Unet with Multi-Class Focal Loss," in 2018 11th International Congress on Image and Signal Processing, BioMedical Engineering and Informatics (CISP-BMEI), 2018, pp. 1-5.
[23]X. Li, H. Chen, X. Qi, Q. Dou, C. Fu, and P. Heng, "H-DenseUNet: Hybrid Densely Connected UNet for Liver and Tumor Segmentation From CT Volumes," IEEE Transactions on Medical Imaging, vol. 37, pp. 2663-2674, 2018.
[24]J. Long, E. Shelhamer, and T. Darrell, "Fully Convolutional Networks for Semantic Segmentation," arXiv preprint arXiv:1411.4038, 2014.
[25]J. Tang, J. Li, and X. Xu, "Segnet-based gland segmentation from colon cancer histology images," in 2018 33rd Youth Academic Annual Conference of Chinese Association of Automation (YAC), 2018, pp. 1078-1082.
[26]T. Tran, O. Kwon, K. Kwon, S. Lee, and K. Kang, "Blood Cell Images Segmentation using Deep Learning Semantic Segmentation," in 2018 IEEE International Conference on Electronics and Communication Engineering (ICECE), 2018, pp. 13-16.
[27]K. Simonyan and A. Zisserman, "Very Deep Convolutional Networks for Large-Scale Image Recognition," arXiv preprint arXiv:1409.1556, 2014.
[28]H. Salehinejad, J. Barfett, S. Valaee, and T. Dowdell, "Training Neural Networks with Very Little Data -- A Draft," 2017.
[29]M. M. Deza and E. Deza, Encyclopedia of Distances: Springer, 2009.
[30]P. Anbeek, K. L. Vincken, M. J. van Osch, R. H. Bisschops, and J. van der Grond, "Probabilistic segmentation of white matter lesions in MR imaging," Neuroimage, vol. 21, pp. 1037-44, Mar 2004.
[31]Y. Dgani, H. Greenspan, and J. Goldberger, "Training a neural network based on unreliable human annotation of medical images," in 2018 IEEE 15th International Symposium on Biomedical Imaging (ISBI 2018), 2018, pp. 39-42.

