臺灣博碩士論文加值系統 (National Digital Library of Theses and Dissertations in Taiwan)


Detailed Record

Author: 周哲宇 (Che-Yu Chou)
Title: 整合錯誤更正碼技術之自動化編碼簿學習 (Automated Codebook Learning with Error Correcting Output Code Technique)
Advisor: 陳弘軒 (Hung-Hsuan Chen)
Degree: Master's
Institution: National Central University (國立中央大學)
Department: Computer Science and Information Engineering (資訊工程學系)
Discipline: Engineering
Academic Field: Electrical Engineering and Computer Science
Thesis Type: Academic thesis
Publication Year: 2024
Graduation Academic Year: 112 (2023–2024)
Language: Chinese
Pages: 68
Keywords (Chinese): 對比學習、自監督式學習、錯誤更正碼、對抗攻擊
Keywords (English): Contrastive Learning, Self-Supervised Learning, Error Correcting Output Codes, Adversarial Attacks
Abstract

Error Correcting Output Codes (ECOC) is a technique for solving multi-class classification problems. Its core idea is to design a codebook that maps each class to a unique codeword and to use these codewords as the labels the model learns. In ECOC-based models, the design of the codebook is therefore crucial. In past research, codebooks were mostly designed by hand, built from known coding techniques, or generated randomly. These approaches not only require an extra codebook-generation step before training; the resulting codebooks are also not necessarily well suited to an arbitrary dataset. This thesis proposes three ECOC models with automated codebook learning, built on a contrastive-learning framework. These models need no codebook before training: the codebook is learned automatically from the characteristics of the dataset, which resolves the issues above. We compare the three ECOC models against two baseline models on four open datasets and evaluate their strengths, weaknesses, and limitations. We further test whether ECOC models with automated codebook learning can resist adversarial attacks, and discuss directions for future improvement.
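The core ECOC mechanism described in the abstract — each class mapped to a unique codeword, with classification by nearest codeword — can be sketched in a few lines of NumPy. This is a generic illustration with a fixed hand-made codebook, not the thesis's method (which learns the codebook automatically); all names here are illustrative.

```python
import numpy as np

# Hand-made codebook for 4 classes with codeword length 6. Its minimum
# pairwise Hamming distance is 3, so any single flipped bit is correctable.
CODEBOOK = np.array([
    [0, 0, 0, 0, 0, 0],  # class 0
    [1, 1, 1, 0, 0, 0],  # class 1
    [1, 0, 0, 1, 1, 0],  # class 2
    [0, 1, 0, 1, 0, 1],  # class 3
])

def encode(labels: np.ndarray) -> np.ndarray:
    """ECOC training targets: replace each class label with its codeword."""
    return CODEBOOK[labels]          # fancy indexing returns a copy

def decode(bits: np.ndarray) -> np.ndarray:
    """Assign each bit vector to the class with the nearest codeword."""
    dists = (bits[:, None, :] != CODEBOOK[None, :, :]).sum(axis=2)
    return dists.argmin(axis=1)      # Hamming-distance argmin per row

y = np.array([2, 0, 3])
out = encode(y)                      # what the network would be trained to emit
out[0, 0] ^= 1                       # suppose the model gets one bit of class 2 wrong
print(decode(out))                   # -> [2 0 3]: the bit error is corrected
```

The error tolerance comes entirely from the codebook's minimum Hamming distance, which is why codebook design (and, in this thesis, learning it from the data) matters.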
Table of Contents

Abstract (Chinese)
Abstract
Acknowledgments
Table of Contents
1. Introduction
2. Related Work
  2.1 Adversarial Attack
  2.2 Defenses against Adversarial Attacks
  2.3 Error Correcting Output Code
  2.4 SimCLR
3. Models and Methods
  3.1 ACL: Pretraining-Based Automated Codebook Learning
    3.1.1 Pretraining model
    3.1.2 Fine-tuning model
  3.2 Loss Function Design
    3.2.1 Contrastive learning loss
    3.2.2 Classification loss
    3.2.3 Loss between the predicted codeword and the correct class codeword
    3.2.4 Error-correcting-code loss
  3.3 ACL-CFPC: Codebook Retraining of the Fine-Tuned Model
  3.4 ACL-TFC: Model Training Based on the ACL-CFPC Codebook
  3.5 Comparison of the Three Automated Codebook Learning Models
4. Experimental Design and Analysis of Results
  4.1 Datasets and Adversarial Example Generation
  4.2 Experimental Environment and Model Parameter Settings
  4.3 Evaluated Models and Evaluation Methods
  4.4 Experimental Results and Analysis
    4.4.1 Comparison between the ACL model and the baseline models
    4.4.2 Comparison of models trained from random initial weights
  4.5 Comparison of ACL with Other ECOC Models
  4.6 Effectiveness of Automated Codebook Learning across Dataset Characteristics
  4.7 Ablation Studies
    4.7.1 ACL models with different codeword lengths and loss functions on CLEAN
    4.7.2 ACL models with different codeword lengths and loss functions under FGSM
    4.7.3 ACL models with different codeword lengths and loss functions under PGD
    4.7.4 Experiments on ϵ and codeword length
    4.7.5 Experiments on ϵ and the loss functions
5. Conclusion
  5.1 Conclusions
  5.2 Future Work
References
Appendix A: Experiment Code
Appendix B: Detailed ACL Codebook Generation Process