臺灣博碩士論文加值系統 (National Digital Library of Theses and Dissertations in Taiwan)

Detailed Record
Author: 蔡承泰 (TSAI, CHENG-TAI)
Title: 穩健型點對點聯邦學習的設計與實現 (PeerSecure Federated Learning: The Design and Implementation of Robust Peer-to-Peer Federated Learning)
Advisor: 鄭伯炤 (CHENG, BO-CHAO)
Committee Members: LI, JUNG-SHIAN; LIN, HUI-TANG; CHEN, CHIA-MEI; CHEN, HUAN; CHENG, BO-CHAO
Oral Defense Date: 2024-07-24
Degree: Master's
Institution: National Chung Cheng University
Department: Graduate Institute of Communications Engineering
Discipline: Engineering
Field: Electrical and Computer Engineering
Thesis Type: Academic thesis
Year of Publication: 2024
Graduation Academic Year: 112 (2023-2024)
Language: Chinese
Number of Pages: 46
Keywords: Peer-to-Peer Federated Learning; Data Poisoning Attacks; Byzantine Attacks; Cosine Similarity; Logistic Regression; Network Security
Statistics:
  • Cited: 0
  • Views: 23
  • Downloads: 0
  • Saved to bookmark lists: 0
Peer-to-Peer Federated Learning (P2P FL) evolved from conventional Federated Learning (FL) to remove FL's reliance on a central server for aggregating model updates. P2P FL is fully decentralized: model aggregation takes place locally, and each client has greater autonomy in deciding which peer updates to adopt. However, P2P FL also faces security threats such as data poisoning attacks and Byzantine attacks, so a key challenge is keeping the model resilient under these attacks without letting them degrade its performance. We propose PSFL (PeerSecure FL System), an attack-resistant P2P FL system. PSFL compares the cosine similarity of incoming gradients against the local model, combined with a logistic regression model, to identify peer gradients that are consistent with the local model; it aggregates only those gradients and then verifies whether the new local model's performance has improved. These filtering, aggregation, and verification steps strengthen the model's resistance to attacks while improving the local model's accuracy.
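The filter–aggregate–verify loop described in the abstract can be sketched minimally as follows. This is an illustrative assumption, not the thesis's actual implementation: function names and the similarity threshold are hypothetical, and the Filter's logistic-regression scoring step is simplified here to a fixed cosine-similarity cutoff.

```python
import numpy as np

def cosine_similarity(a, b):
    # Cosine similarity between two flattened gradient vectors.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def filter_peers(local_grad, peer_grads, threshold=0.5):
    # Filter step (simplified): keep only peer gradients whose direction is
    # close to the local gradient. The thesis additionally scores candidates
    # with a logistic regression model, which is omitted in this sketch.
    return [g for g in peer_grads if cosine_similarity(local_grad, g) >= threshold]

def aggregate(local_grad, accepted):
    # Aggregator step: average the local gradient with the accepted peers.
    return np.mean([local_grad] + accepted, axis=0)

def verify(old_accuracy, new_accuracy):
    # Verifier step: adopt the aggregated model only if validation
    # accuracy did not drop; otherwise the recovery mechanism would
    # roll back to the previous local model.
    return new_accuracy >= old_accuracy
```

A poisoned or Byzantine gradient typically points away from the honest update direction, so its cosine similarity to the local gradient is low or negative and it is dropped before aggregation; the verification step catches anything the filter misses.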
Acknowledgements
Abstract (Chinese)
Abstract (English)
Table of Contents
List of Figures
List of Tables
Chapter 1: Introduction
  1.1 Research Background
    1.1.1 Overview of Federated Learning
    1.1.2 Overview of P2P Federated Learning
    1.1.3 Challenges Facing P2P Federated Learning
  1.2 Research Motivation
  1.3 Problem Statement
  1.4 Thesis Organization
Chapter 2: Related Work
  2.1 Data Poisoning Attacks Against Federated Learning Systems [13]
  2.2 Defending Against Label-Flipping Attacks in Federated Learning [15]
  2.3 Decentralized Federated Learning in Byzantine, Non-IID Environments [7]
  2.4 Robust Federated Learning via Uncertainty-Aware Inward and Outward Inspection [6]
  2.5 Comparison of Related Work
Chapter 3: Methodology
  3.1 Overview
  3.2 PSFL System Architecture and Workflow
  3.3 PSFL Components
    3.3.1 Filter
      3.3.1.1 Cosine Similarity
      3.3.1.2 Logistic Regression
      3.3.1.3 Recovery Mechanism
    3.3.2 Aggregator
    3.3.3 Verifier
  3.4 PSFL Walkthrough Example
Chapter 4: Experiments and Results
  4.1 Experimental Setup
  4.2 Experimental Results
    4.2.1 Performance without Attack
    4.2.2 Performance under Byzantine Attack
    4.2.3 Performance under Label-Flipping Attack
Chapter 5: Conclusion and Future Work
References
Appendix 1: Training Data Distribution of Each Node
About the Author

[1] H. B. McMahan, E. Moore, D. Ramage, S. Hampson, and B. A. y Arcas, “Communication-Efficient Learning of Deep Networks from Decentralized Data,” arXiv (Cornell University), Jan. 2016, doi: 10.48550/arxiv.1602.05629.
[2] H. Hellström et al., “Wireless for Machine Learning,” arXiv (Cornell University), Jan. 2020, doi: 10.48550/arxiv.2008.13492.
[3] A. Hard et al., “Federated Learning for Mobile Keyboard Prediction,” arXiv (Cornell University), Jan. 2018, doi: 10.48550/arxiv.1811.03604.
[4] S. I. Popoola, R. Ande, B. Adebisi, G. Gui, M. Hammoudeh, and O. Jogunola, “Federated Deep Learning for Zero-Day Botnet Attack Detection in IoT-Edge Devices,” IEEE Internet of Things Journal, vol. 9, no. 5, pp. 3930–3944, Mar. 2022, doi: 10.1109/JIOT.2021.3100755.
[5] N. Bouacida and P. Mohapatra, “Vulnerabilities in Federated Learning,” IEEE Access, vol. 9, pp. 63229–63249, 2021, doi: 10.1109/ACCESS.2021.3075203.
[6] N. Heydaribeni, R. Zhang, T. Javidi, C. Nita-Rotaru, and F. Koushanfar, “SureFED: Robust Federated Learning via Uncertainty-Aware Inward and Outward Inspection,” arXiv (Cornell University), Jan. 2023, doi: 10.48550/arxiv.2308.02747.
[7] J. Verbraeken, M. de Vos, and J. Pouwelse, “Bristle: Decentralized Federated Learning in Byzantine, Non-i.i.d. Environments,” arXiv preprint, Oct. 2021.
[8] A. G. Roy, S. Siddiqui, S. Pölsterl, N. Navab, and C. Wachinger, “BrainTorrent: A Peer-to-Peer Environment for Decentralized Federated Learning,” arXiv preprint, May 2019.
[9] G. Lu, Z. Xiong, R. Li, N. Mohammad, Y. Li, and W. Li, “DEFEAT: A decentralized federated learning against gradient attacks,” High-Confidence Computing, 2023, 100128.
[10] T. Wink and Z. Nochta, “An Approach for Peer-to-Peer Federated Learning,” in Proc. 51st Annual IEEE/IFIP International Conference on Dependable Systems and Networks Workshops (DSN-W), Taipei, Taiwan, 2021, pp. 150–157, doi: 10.1109/DSN-W52860.2021.00034.
[11] H. Wang, L. Muñoz-González, M. Z. Hameed, D. Eklund, and S. Raza, “SparSFA: Towards robust and communication-efficient peer-to-peer federated learning,” Computers & Security, vol. 129, 103182.
[12] M. Fang, X. Cao, J. Jia, and N. Gong, “Local Model Poisoning Attacks to Byzantine-Robust Federated Learning,” in Proc. 29th USENIX Security Symposium, 2020. https://www.usenix.org/conference/usenixsecurity20/presentation/fang
[13] V. Tolpegin, S. Truex, M. E. Gursoy, and L. Liu, “Data Poisoning Attacks Against Federated Learning Systems,” Springer, Sep. 2020.
[14] P. Blanchard, E. M. El Mhamdi, R. Guerraoui, and J. Stainer, “Machine Learning with Adversaries: Byzantine Tolerant Gradient Descent,” in Advances in Neural Information Processing Systems (NIPS), 2017.
[15] N. M. Jebreel, J. Domingo-Ferrer, D. Sánchez, and A. Blanco-Justicia, “Defending against the Label-flipping Attack in Federated Learning,” arXiv (Cornell University), Jan. 2022, doi: 10.48550/arxiv.2207.01982.
[16] N. V. Chawla, K. W. Bowyer, L. O. Hall, and W. P. Kegelmeyer, “SMOTE: Synthetic Minority Over-sampling Technique,” Journal of Artificial Intelligence Research, vol. 16, pp. 321–357, Jun. 2002, doi: 10.1613/jair.953.
[17] Y. LeCun, C. Cortes, and C. J. Burges, “MNIST handwritten digit database,” 2010. http://yann.lecun.com/exdb/mnist
[18] D. Alistarh, Z. Allen-Zhu, F. Ebrahimianghazani, and J. Li, “Byzantine-Resilient Non-Convex Stochastic Gradient Descent,” arXiv (Cornell University), Jan. 2020, doi: 10.48550/arxiv.2012.14368.

Electronic Full Text (publicly available online from 2029-08-01)