Taiwan National Digital Library of Theses and Dissertations


Detailed Record

Author: 傅艾峰 (Elvin Nur Furqon)
Title: Generating Clear Sampling and Managing Outliers to Enhance Skin Lesion Classification using Pseudo Skin Image Generator (PSIG-Net)
Advisor: 林柏江 (Po-Chiang Lin)
Oral Defense Committee: 林智揚 (Chih-Yang Lin), 王文俊 (Wen-June Wang)
Oral Defense Date: 2024-05-07
Degree: Master's
Institution: Yuan Ze University
Department: Department of Electrical Engineering (Group B)
Discipline: Engineering
Field: Electrical and Computer Engineering
Thesis Type: Academic thesis
Publication Year: 2024
Graduation Academic Year: 112 (ROC calendar; 2023–2024)
Language: English
Pages: 56
Keywords (Chinese): 皮損分類, 黑色素瘤, 連體網, 生成對抗網絡, 虛擬影像
Keywords (English): skin lesion classification, melanoma, Siamese network, generative adversarial network, pseudo image
DOI: 10.1016/j.bspc.2024.106112
ORCID: orcid.org/0000-0002-6529-6709
Instagram: tynclause
Facebook: Elvin Nur Furqon
Usage statistics:
  • Cited: 0
  • Views: 10
  • Rating: (none)
  • Downloads: 0
  • Bookmarked: 0
Reducing death rates caused by melanoma and improving the accuracy of melanoma diagnosis depend significantly on accurate automated classification of dermoscopic skin lesion images. Although deep learning models have advanced to cope with similarities between and among different types of skin lesions, ambiguous characteristics in the training data continue to impede model performance, especially when handling overlapping data points. To reduce this confusion, this thesis proposes a novel framework called the Pseudo Skin Image Generator network (PSIG-Net), which artificially augments the training dataset and eliminates problematic instances. With an emphasis on managing low-variation cases, the proposed approach uses a generative adversarial network (GAN) to produce artificial samples that mimic the characteristics of the real classes. These artificial samples go through careful processing steps that control their resemblance to the real data while filtering outliers. A Siamese architecture manages the distance between the pseudo and original data clusters, using density-based distance measures to retain only the closest matches to the real samples. Comprehensive experiments demonstrate significant performance improvements: PSIG-Net achieved an accuracy of 0.958 on the ISIC-2017 dataset and 0.980 on the ISIC-2018 dataset, surpassing comparable state-of-the-art methods evaluated on these challenging skin lesion datasets.
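The outlier-control idea in the abstract — keeping only those pseudo samples that lie in a dense neighbourhood of the real-sample cluster — can be sketched in a few lines. This is a minimal illustration of the density-based filtering concept, not the thesis's actual Stage-2 pipeline; the embedding dimension, the `k` nearest-neighbour count, and the distance `threshold` are all assumed values, and the embeddings would in practice come from the Siamese network rather than be given directly.

```python
import numpy as np

def filter_pseudo_samples(real_emb, pseudo_emb, k=5, threshold=1.0):
    """Keep only pseudo embeddings close (density-wise) to the real cluster.

    A pseudo point survives if the mean distance to its k nearest
    real neighbours falls below `threshold`; the rest are treated
    as outliers and discarded.
    """
    kept = []
    for i, p in enumerate(pseudo_emb):
        # Euclidean distances from this pseudo point to every real point.
        d = np.linalg.norm(real_emb - p, axis=1)
        # Mean distance over the k nearest real neighbours.
        mean_knn = np.sort(d)[:k].mean()
        if mean_knn < threshold:
            kept.append(i)
    return kept

# Toy usage: real samples cluster near the origin; half of the pseudo
# samples land in that cluster, the other half far away.
rng = np.random.default_rng(0)
real = rng.normal(0.0, 0.1, size=(50, 8))
close = rng.normal(0.0, 0.1, size=(5, 8))
far = rng.normal(10.0, 0.1, size=(5, 8))
pseudo = np.vstack([close, far])
kept = filter_pseudo_samples(real, pseudo, k=5, threshold=1.0)
print(kept)  # only the indices of the in-cluster pseudo samples
```

A density criterion of this kind (mean k-nearest-neighbour distance) is one simple stand-in for the density-based measures the abstract mentions; a full implementation might instead use DBSCAN-style core-point reachability over the joint real-and-pseudo embedding space.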
Chinese Abstract (摘要) iii
ABSTRACT iv
ACKNOWLEDGMENT v
List of Figures viii
List of Tables ix
1 Introduction 1
2 Related Work 6
2.1 Generative Adversarial Network 6
2.2 Siamese Network 14
2.3 EfficientNet for Classification 16
2.4 Skin Lesion Melanoma 20
3 Method 26
3.1 Public Dataset 27
3.2 Stage-1: Generative Pseudo Samples Module 29
3.3 Stage-2: Controlling the Outlier Pseudo Samples 32
4 Experiment and Result 37
4.1 Experiment and Metric 37
4.2 Performance Evaluation with Pseudo Sample 39
4.3 Results and Discussions 42
4.4 Ablation Study 46
5 Conclusion 49
References 51
