National Digital Library of Theses and Dissertations in Taiwan

Detailed Record

Researcher: 藍偉任
Researcher (English): Wei-Ren Lan
Thesis Title: 應用卷積神經網絡於支氣管超音波影像診斷
Thesis Title (English): Endobronchial Ultrasound Images Diagnosis Using Convolutional Neural Network
Advisor: 張瑞峰
Oral Defense Date: 2017-07-19
Degree: Master's
Institution: National Taiwan University
Department: Graduate Institute of Biomedical Electronics and Bioinformatics
Discipline: Engineering
Academic Field: Biomedical Engineering
Thesis Type: Academic thesis
Publication Year: 2017
Graduation Academic Year: 105 (2016/2017)
Language: English
Number of Pages: 31
Keywords (Chinese): 肺癌, 支氣管超音波, 卷積神經網絡, 遷移學習
Keywords (English): lung cancer, EBUS, convolutional neural network, transfer learning
Usage counts:
  • Cited by: 0
  • Views: 388
  • Downloads: 0
  • Bookmarked: 0
In the United States, lung cancer is the cancer with the highest death toll, and early treatment can effectively improve the survival rate. Because endobronchial ultrasound (EBUS) imaging is real-time, low-radiation, and has good detection capability, and because it can be combined with needle aspiration, it is commonly used to examine lung diseases and to diagnose lung lesions as benign or malignant; in recent years it has become one of the important diagnostic tools for lung cancer. At present, however, EBUS images of lesions are interpreted mainly by physicians who subjectively integrate image features. Computer-aided diagnosis has used gray-scale image features for classification, but it still requires a physician to first select an image region for analysis, so it is only a semi-automated aid. The main purpose of this study is therefore to achieve fully automated assistance with a convolutional neural network. First, each EBUS image is resized to the input size required by the network, and the training data are then augmented by rotating and flipping the images. The CaffeNet used in this work transfers the parameters of a model pre-trained on ImageNet, and its parameters are then optimized by training on the EBUS data. Next, 4096-dimensional features are extracted from the seventh fully connected layer, and an SVM classifier is used to classify lesions as benign or malignant. This study uses 164 cases, comprising 56 benign and 108 malignant lesions. The results show that classification with features from the CNN with transfer learning discriminates better than classification with GLCM (gray-level co-occurrence matrix) features, reaching an accuracy of 85.4% (140/164), a sensitivity of 87.0% (94/108), a specificity of 82.1% (46/56), and an area under the ROC curve of 0.8705. These results suggest that convolutional neural networks have great potential for benign/malignant classification of EBUS images.
In the United States, lung cancer is the leading cause of cancer death, and the survival rate can be increased by early detection. In recent years, endobronchial ultrasonography (EBUS) has been used to differentiate between benign and malignant lesions and to guide transbronchial needle aspiration, because it is real-time, radiation-free, and performs well. However, the diagnosis depends on the subjective judgement of physicians. One study classified lung lesions using gray-scale textures of EBUS images, but it was a semi-automated system that still required experts to select part of the lesion first. The main purpose of this study was therefore to achieve fully automated assistance using a convolutional neural network (CNN). First, each EBUS image was resized to the input size of the CNN, and the training data were then augmented by rotating and flipping the images. The parameters of a model previously trained on ImageNet were transferred to the CaffeNet used to classify the lung lesions, and the CaffeNet was then fine-tuned on the EBUS training data. Features of 4096 dimensions were extracted from the 7th fully connected layer, and a support vector machine (SVM) was used to differentiate benign from malignant lesions. The system was validated on 164 cases, including 56 benign and 108 malignant lesions. According to the experimental results, classification with features from the CNN with transfer learning performed better than the conventional method with gray-level co-occurrence matrix (GLCM) features: the accuracy, sensitivity, specificity, and area under the ROC curve reached 85.4% (140/164), 87.0% (94/108), 82.1% (46/56), and 0.8705, respectively. These results indicate that CNNs have potential for diagnosing EBUS images.
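As a hedged illustration of the pipeline summarized above, the following Python sketch wires together the three stages the abstract describes: augmenting each EBUS image by rotation and flipping, extracting the 4096-dimensional fc7 activation from a fine-tuned CaffeNet via pycaffe, and classifying those features with an SVM. The file names ('deploy.prototxt', 'caffenet_ebus.caffemodel'), the linear kernel, the 0.5 decision threshold, and the load_dataset() helper are hypothetical placeholders, not details taken from the thesis.

import numpy as np
import caffe
from sklearn.svm import SVC
from sklearn.metrics import confusion_matrix, roc_auc_score

# Load the fine-tuned CaffeNet; both file paths are assumed placeholders.
net = caffe.Net('deploy.prototxt', 'caffenet_ebus.caffemodel', caffe.TEST)
net.blobs['data'].reshape(1, 3, 227, 227)    # single-image batches

def augment(image):
    # Expand one EBUS image into rotated and flipped copies (Sec. 3.1);
    # the exact set of transforms used in the thesis is not specified here.
    variants = []
    for k in range(4):                        # 0/90/180/270-degree rotations
        rotated = np.rot90(image, k)
        variants.append(rotated)
        variants.append(np.fliplr(rotated))   # mirrored copy of each rotation
    return variants

def fc7_features(image):
    # image: preprocessed 227x227x3 array (mean-subtracted, BGR channel
    # order), as CaffeNet expects. Returns the 4096-D fc7 activation.
    net.blobs['data'].data[0] = image.transpose(2, 0, 1)  # HWC -> CHW
    net.forward()
    return net.blobs['fc7'].data[0].copy()

# X_*: (n_cases, 4096) fc7 feature matrices; y_*: 0 = benign, 1 = malignant.
X_train, y_train, X_test, y_test = load_dataset()  # hypothetical helper
clf = SVC(kernel='linear', probability=True).fit(X_train, y_train)

scores = clf.predict_proba(X_test)[:, 1]             # P(malignant)
pred = (scores >= 0.5).astype(int)
tn, fp, fn, tp = confusion_matrix(y_test, pred).ravel()
print('accuracy    %.3f' % ((tp + tn) / float(len(y_test))))
print('sensitivity %.3f' % (tp / float(tp + fn)))    # recall on malignant
print('specificity %.3f' % (tn / float(tn + fp)))    # recall on benign
print('AUC         %.3f' % roc_auc_score(y_test, scores))

Given per-case fc7 features and a train/test split, the four figures the abstract reports (accuracy, sensitivity, specificity, AUC) fall directly out of the confusion matrix and the ROC curve as computed here.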
Thesis Committee Certification i
Acknowledgements ii
Abstract (Chinese) iii
Abstract iv
List of Figures vii
List of Tables viii
Chapter 1 Introduction 1
Chapter 2 Material 5
Chapter 3 EBUS Images Diagnosis System Using Convolutional Neural Network 6
3.1. Data Augmentation 7
3.2. Feature Extraction based on Fine-tuned CNN 8
3.2.1. Convolutional Neural Network 9
3.2.2. Fine-tuning the CNN 13
3.2.3. Feature Extraction 13
3.3. Classification 14
3.3.1. SVM 14
Chapter 4 Experiment Results and Discussion 16
4.1. Experiment Environment 16
4.2. Results 17
4.3. Discussion 25
Chapter 5 Conclusion and Future Work 27
References 28