
臺灣博碩士論文加值系統 (National Digital Library of Theses and Dissertations in Taiwan)


Detailed Record

Author: 黃珮楨
Author (English): Pei-Chen Huang
Title: 即時在全視野數位病理切片上自動分割肺癌
Title (English): Real Time Automatic Lung Tumor Segmentation in Whole-slide Histopathological Images
Advisor: 王靖維
Advisor (English): Ching-Wei Wang
Committee: 白孟宜、趙載光
Committee (English): Meng-Yi Bai
Oral Defense Date: 2019-07-26
Degree: Master's
Institution: National Taiwan University of Science and Technology
Department: Graduate Institute of Biomedical Engineering
Discipline: Engineering
Field: Biomedical Engineering
Thesis Type: Academic thesis
Publication Year: 2019
Graduation Academic Year: 107 (ROC calendar; AY 2018/19)
Language: English
Number of Pages: 46
Keywords: Artificial Intelligence, Deep Learning, Fully Convolutional Neural Networks, Adaptive Learning, Medical Imaging, Digital Pathology, Cancer Detection, Lung, Image Classification, Image Segmentation
Statistics:
  • Cited: 0
  • Views: 161
  • Downloads: 0
  • Bookmarked: 0
Abstract: For the diagnosis and classification of cancer, pathologists visually scan large numbers of glass slides under the microscope, a time-consuming and tedious task. Nowadays, glass slides are increasingly converted to whole-slide images (WSIs), enabling computer-based analysis of pathological images. In this study, an adaptive deep-learning fully convolutional network (FCN) framework is developed to automatically segment lung tumor tissue in whole-slide images in real time. The proposed FCN reduces the number of convolutional layers to fit within the GPU memory available for training, and uses single-stream 32s upsampling to avoid overly fragmented segmentation results and to improve accuracy. The improved FCN model greatly accelerates training, and adaptive learning further increases efficiency: training takes only 9 hours with adaptive learning, versus 41 hours without it. The proposed approach also cuts inference time by about 78%, to roughly 50 seconds per WSI; for a 2.16 GB slide, pixel-level segmentation can be further reduced from 50 to 35 seconds by upgrading the GPU to an RTX 2080 Ti. In evaluation, the proposed method is compared with four benchmark approaches: SqueezeNet, ResNet, VGGNet, and AlexNet. The experimental results show that the proposed method achieves an average area under the ROC curve (AUC) of 0.99, much better than SqueezeNet (0.91), ResNet (0.89), VGGNet (0.88), and AlexNet (0.91). In addition, the proposed method achieves a true positive rate of 0.751 at a false positive rate of 0.05, higher than SqueezeNet (0.564), ResNet (0.486), VGGNet (0.443), and AlexNet (0.506). Furthermore, with a Dice coefficient of 0.751, the accuracy of the proposed method is comparable to that of pathologists, whose average Dice coefficient is 0.76.
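For concreteness, the sketch below illustrates the two ideas the abstract leans on: single-stream 32s upsampling (one bilinear jump from the stride-32 score map back to input resolution, with no skip-connection fusion) and the Dice coefficient used for evaluation. This is a minimal PyTorch sketch under assumed settings, not the authors' released code; the layer widths, the two-class (tumor vs. background) output, and the names FCN32s and dice_coefficient are illustrative.

import torch
import torch.nn as nn
import torch.nn.functional as F

class FCN32s(nn.Module):
    """FCN-32s-style segmenter: five stride-2 pools give a stride-32
    feature map, a 1x1 convolution scores it per class, and a single
    bilinear step restores input resolution (no skip fusion)."""
    def __init__(self, num_classes=2):
        super().__init__()
        widths = [3, 32, 64, 128, 256, 512]  # illustrative, not the thesis config
        layers = []
        for c_in, c_out in zip(widths[:-1], widths[1:]):
            layers += [nn.Conv2d(c_in, c_out, kernel_size=3, padding=1),
                       nn.ReLU(inplace=True),
                       nn.MaxPool2d(2)]      # 2^5 pools = overall stride 32
        self.backbone = nn.Sequential(*layers)
        self.score = nn.Conv2d(widths[-1], num_classes, kernel_size=1)

    def forward(self, x):
        coarse = self.score(self.backbone(x))  # stride-32 class scores
        # single-stream 32s upsampling: one step back to full resolution
        return F.interpolate(coarse, size=x.shape[2:],
                             mode="bilinear", align_corners=False)

def dice_coefficient(pred, target, eps=1e-7):
    """Dice = 2|A∩B| / (|A| + |B|) on binary masks; the thesis reports
    0.751 for the proposed method vs. 0.76 for pathologists."""
    pred, target = pred.float(), target.float()
    inter = (pred * target).sum()
    return (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)

Thresholding the tumor-class probability of such a network at a sweep of operating points is what produces the ROC curve, and hence the AUC and the true positive rate at a fixed false positive rate reported above.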
Abstract
Acknowledgement
Table of Contents
List of Tables
List of Figures
1 Introduction
1.1 Contribution
1.2 Thesis Organization
2 Related Work
2.1 AlexNet
2.2 VGGNet
2.3 ResNet
2.4 SqueezeNet
2.5 Transfer learning
2.6 Fully Convolutional Networks
3 Methodology
3.1 Data Set
3.2 Adaptive learning framework
3.3 The proposed Fully Convolutional Network
3.4 Test methods
4 Experiments and Results
4.1 Evaluation Metrics
4.1.1 True positive rate
4.1.2 Area Under Curve (AUC)
4.1.3 Dice coefficient
4.2 Comparison with Benchmark Functions
4.3 Lung Carcinoma WSIs Segmentation Results
4.4 Computing Time
5 Discussion
6 Conclusion and Future Work
6.1 Conclusion
6.2 Future Work
References