[1] H. B. McMahan, E. Moore, D. Ramage, S. Hampson, and B. A. y Arcas, "Communication-Efficient Learning of Deep Networks from Decentralized Data," arXiv preprint, Jan. 2016, doi: 10.48550/arXiv.1602.05629.
[2] H. Hellström et al., "Wireless for Machine Learning," arXiv preprint, Jan. 2020, doi: 10.48550/arXiv.2008.13492.
[3] A. Hard et al., "Federated Learning for Mobile Keyboard Prediction," arXiv preprint, Jan. 2018, doi: 10.48550/arXiv.1811.03604.
[4] S. I. Popoola, R. Ande, B. Adebisi, G. Gui, M. Hammoudeh, and O. Jogunola, "Federated Deep Learning for Zero-Day Botnet Attack Detection in IoT-Edge Devices," IEEE Internet of Things Journal, vol. 9, no. 5, pp. 3930–3944, Mar. 2022, doi: 10.1109/JIOT.2021.3100755.
[5] N. Bouacida and P. Mohapatra, "Vulnerabilities in Federated Learning," IEEE Access, vol. 9, pp. 63229–63249, 2021, doi: 10.1109/ACCESS.2021.3075203.
[6] N. Heydaribeni, R. Zhang, T. Javidi, C. Nita-Rotaru, and F. Koushanfar, "SureFED: Robust Federated Learning via Uncertainty-Aware Inward and Outward Inspection," arXiv preprint, Jan. 2023, doi: 10.48550/arXiv.2308.02747.
[7] J. Verbraeken, M. de Vos, and J. Pouwelse, "Bristle: Decentralized Federated Learning in Byzantine, Non-i.i.d. Environments," arXiv preprint, Oct. 21, 2021.
[8] A. G. Roy, S. Siddiqui, S. Pölsterl, N. Navab, and C. Wachinger, "BrainTorrent: A Peer-to-Peer Environment for Decentralized Federated Learning," arXiv preprint, May 16, 2019.
[9] G. Lu, Z. Xiong, R. Li, N. Mohammad, Y. Li, and W. Li, "DEFEAT: A decentralized federated learning against gradient attacks," High-Confidence Computing, 2023, Art. no. 100128.
[10] T. Wink and Z. Nochta, "An Approach for Peer-to-Peer Federated Learning," in Proc. 51st Annual IEEE/IFIP International Conference on Dependable Systems and Networks Workshops (DSN-W), Taipei, Taiwan, 2021, pp. 150–157, doi: 10.1109/DSN-W52860.2021.00034.
[11] H. Wang, L. Muñoz-González, M. Z. Hameed, D. Eklund, and S. Raza, "SparSFA: Towards robust and communication-efficient peer-to-peer federated learning," Computers & Security, vol. 129, 2023, Art. no. 103182.
[12] M. Fang, X. Cao, J. Jia, and N. Gong, "Local Model Poisoning Attacks to Byzantine-Robust Federated Learning," in Proc. 29th USENIX Security Symposium, 2020. [Online]. Available: https://www.usenix.org/conference/usenixsecurity20/presentation/fang
[13] V. Tolpegin, S. Truex, M. E. Gursoy, and L. Liu, "Data Poisoning Attacks Against Federated Learning Systems," in Proc. European Symposium on Research in Computer Security (ESORICS), Springer, Sep. 2020.
[14] P. Blanchard, E. M. El Mhamdi, R. Guerraoui, and J. Stainer, "Machine Learning with Adversaries: Byzantine Tolerant Gradient Descent," in Advances in Neural Information Processing Systems (NIPS), 2017.
[15] N. M. Jebreel, J. Domingo-Ferrer, D. Sánchez, and A. Blanco-Justicia, "Defending against the Label-flipping Attack in Federated Learning," arXiv preprint, Jan. 2022, doi: 10.48550/arXiv.2207.01982.
[16] N. V. Chawla, K. W. Bowyer, L. O. Hall, and W. P. Kegelmeyer, "SMOTE: Synthetic Minority Over-sampling Technique," Journal of Artificial Intelligence Research, vol. 16, pp. 321–357, Jun. 2002, doi: 10.1613/jair.953.
[17] Y. LeCun, C. Cortes, and C. J. Burges, "MNIST handwritten digit database," 2010. [Online]. Available: http://yann.lecun.com/exdb/mnist
[18] D. Alistarh, Z. Allen-Zhu, F. Ebrahimianghazani, and J. Li, "Byzantine-Resilient Non-Convex Stochastic Gradient Descent," arXiv preprint, Jan. 2020, doi: 10.48550/arXiv.2012.14368.