臺灣博碩士論文加值系統 (National Digital Library of Theses and Dissertations in Taiwan)

Author: 古明章 (Gu, Ming-Zhang)
Title: 深度遷移學習於胸部 X 光影像之多標籤分類
Title (English): Deep Transfer Learning on Multi-Label Classification of Chest X-ray Images
Advisor: 黃冠華 (Huang, Guan-Hua)
Oral defense date: 2021-08-30
Degree: Master's
Institution: 國立陽明交通大學 (National Yang Ming Chiao Tung University)
Department: Institute of Statistics
Discipline: Mathematics and Statistics
Field: Statistics
Document type: Academic thesis
Year of publication: 2021
Academic year of graduation: 110
Language: Chinese
Number of pages: 47
Keywords (Chinese): 胸部 X 光影像、深度學習、遷移學習、多標籤分類
Keywords (English): chest X-rays; deep learning; transfer learning; multi-label classification
The data used in this study are 1,630 chest X-ray images with disease labels provided by E-Da Hospital, where each image may contain multiple abnormal findings; this is therefore a typical multi-label classification problem. We implement deep transfer learning using three source datasets with different characteristics: ImageNet, CheXpert, and the NIH Chest X-ray dataset. In addition, we apply three different methods of combining datasets in the hope of obtaining better performance. Across these combinations, we compare a total of 16 models. The experimental results show that ResNet-50 generally performs better on a single source dataset, while DenseNet-121 performs better on combined datasets. Among the source datasets, NIH Chest X-ray performs best. Among the dataset-combination methods, standard deep transfer learning is the best, although the co-training approach appears quite promising. Finally, the study shows that combined datasets outperform single datasets.
In this research, E-Da Hospital provided 1,630 chest X-ray images with disease labels, each of which may contain multiple abnormal findings; this is therefore a typical multi-label classification problem. We use three source datasets with different characteristics, ImageNet, CheXpert, and NIH Chest X-ray, to implement transfer learning. Additionally, we use three different methods to combine the datasets in the hope of better performance. Across the various combinations above, we compare a total of 16 models. The results show that ResNet-50 performs better for a single source dataset, while DenseNet-121 is better suited to combined datasets. Regarding the selection of source data, the NIH Chest X-ray dataset performs best. Among the methods for combining datasets, regular transfer learning is the best, but the co-training method also seems promising. Finally, the research shows that combined datasets yield better performance than single datasets.
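Because each X-ray can carry several findings at once, the final layer must score every label independently rather than pick a single class. A common realization of this, consistent with the "activation function of the final layer" and "loss function" items in the table of contents, is a per-label sigmoid with binary cross-entropy. The sketch below illustrates that idea in plain Python; the logits, threshold, and label values are hypothetical examples, not figures from the thesis.

```python
import math

def sigmoid(z):
    """Map one raw network output (logit) to a per-label probability."""
    return 1.0 / (1.0 + math.exp(-z))

def bce_loss(probs, labels):
    """Binary cross-entropy averaged over labels. Each label is an
    independent yes/no decision, which is what makes the problem
    multi-label rather than multi-class."""
    eps = 1e-12  # guard against log(0)
    total = 0.0
    for p, y in zip(probs, labels):
        total -= y * math.log(p + eps) + (1 - y) * math.log(1 - p + eps)
    return total / len(labels)

# Hypothetical logits for one image over three findings.
logits = [2.0, -1.5, 0.3]
probs = [sigmoid(z) for z in logits]
preds = [1 if p >= 0.5 else 0 for p in probs]  # independent 0.5 thresholds
labels = [1, 0, 1]  # hypothetical ground truth

print(preds)  # prints [1, 0, 1]: several labels can be active at once
print(round(bce_loss(probs, labels), 4))
```

Unlike a softmax output, the per-label probabilities here do not compete with each other, so any subset of findings can be predicted for the same image.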
Abstract (Chinese) i
Abstract ii
List of Tables v
List of Figures vi
Chapter 1 Introduction 1
Chapter 2 Methods 3
2.1 Preprocessing 4
2.1.1 Image Augmentation 4
2.2 Multi-label classification 6
2.2.1 Activation function of the final layer 7
2.2.2 Loss function 7
2.3 Backbone model 8
2.3.1 ResNet [1] 8
2.3.2 DenseNet [2] 9
2.4 Transfer Learning 11
2.4.1 Source data and target data 11
2.4.2 Model fine-tuning and Layer transfer 11
Chapter 3 Materials and Implementation 13
3.1 Dataset 13
3.1.1 Source data 13
3.1.2 Target data 16
3.2 Model structure 17
3.2.1 Model adjustment and hyperparameters 17
3.2.2 Different approaches for transfer learning 18
3.3 Evaluation 23
3.3.1 Stratified K-fold cross-validation 23
3.3.2 Metrics 24
Chapter 4 Data analysis results 28
4.1 Backbone model selection 33
4.2 Source data selection 35
4.3 Combination method selection 40
4.4 Future work 45
Chapter 5 Conclusion 46
References 47
[1] He, K., Zhang, X., Ren, S., & Sun, J. (2016). Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (pp. 770-778).
[2] Huang, G., Liu, Z., Van Der Maaten, L., & Weinberger, K. Q. (2017). Densely connected convolutional networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (pp. 4700-4708).
[3] Irvin, J., Rajpurkar, P., Ko, M., Yu, Y., Ciurea-Ilcus, S., Chute, C., ... & Ng, A. Y. (2019, July). CheXpert: A large chest radiograph dataset with uncertainty labels and expert comparison. In Proceedings of the AAAI Conference on Artificial Intelligence (Vol. 33, No. 01, pp. 590-597).
[4] Allaouzi, I., & Ahmed, M. B. (2019). A novel approach for multi-label chest X-ray classification of common thorax diseases. IEEE Access, 7, 64279-64288.
[5] Sorower, M. S. (2010). A literature survey on algorithms for multi-label learning. Oregon State University, Corvallis, 18, 1-25.
[6] Fawcett, T. (2006). An introduction to ROC analysis. Pattern Recognition Letters, 27(8), 861-874.
[7] He, K., Fan, H., Wu, Y., Xie, S., & Girshick, R. (2020). Momentum contrast for unsupervised visual representation learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 9729-9738).
[8] Rajan, D., Thiagarajan, J. J., Karargyris, A., & Kashyap, S. (2021, February). Self-training with improved regularization for sample-efficient chest X-ray classification. In Medical Imaging 2021: Computer-Aided Diagnosis (Vol. 11597, p. 115971S). International Society for Optics and Photonics.