2024
Authors
Biadgligne, Y; Smaili, K;
Publication
Interciencia
Abstract
2025
Authors
Biadgligne, Y; Baghoussi, Y; Li, K; Jorge, A;
Publication
Advances in Computational Intelligence - 18th International Work-Conference on Artificial Neural Networks, IWANN 2025, A Coruña, Spain, June 16-18, 2025, Proceedings, Part I
Abstract
Federated Learning (FL) enables decentralized model training while preserving data privacy but remains susceptible to poisoning attacks. Malicious clients can manipulate local data or model updates, threatening FL's reliability, especially in privacy-sensitive domains like healthcare and finance. While client-side optimization algorithms play a crucial role in training local models, their resilience to such attacks is underexplored. This study empirically evaluates the robustness of three widely used optimization algorithms (SGD, Adam, and RMSProp) against label-flipping attacks (LFAs) in image classification tasks using Convolutional Neural Networks (CNNs). Through 900 individual runs in both federated and centralized learning (CL) settings, we analyze their performance under Independent and Identically Distributed (IID) and Non-IID data distributions. Results reveal that SGD is the most resilient, achieving the highest accuracy in 87% of cases, while Adam performs best in 13%. Additionally, centralized models outperform FL on CIFAR-10, whereas FL excels on Fashion-MNIST, highlighting the impact of dataset characteristics on adversarial robustness.
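For illustration, a label-flipping attack of the kind studied in this paper can be simulated by corrupting a fraction of a malicious client's labels before local training. The sketch below is not the authors' implementation; it assumes NumPy-style integer class labels, and the function name flip_labels and the flip_fraction parameter are hypothetical.

```python
import numpy as np

def flip_labels(labels, num_classes, flip_fraction, rng=None):
    """Simulate a label-flipping attack on one client's training labels.

    A randomly chosen subset (flip_fraction of the dataset) has each
    label replaced by a different, randomly drawn class. Illustrative
    sketch only; not the implementation evaluated in the paper.
    """
    if rng is None:
        rng = np.random.default_rng(0)
    poisoned = labels.copy()
    n_flip = int(flip_fraction * len(labels))
    idx = rng.choice(len(labels), size=n_flip, replace=False)
    for i in idx:
        # Offset in [1, num_classes) guarantees the new label differs.
        offset = rng.integers(1, num_classes)
        poisoned[i] = (poisoned[i] + offset) % num_classes
    return poisoned

# Example: poison 30% of a client's CIFAR-10-style labels (10 classes).
clean = np.array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9])
print(flip_labels(clean, num_classes=10, flip_fraction=0.3))
```

In a federated simulation, a poisoned client would train its local model on these corrupted labels before sending its update to the server, which is the setting in which the paper compares SGD, Adam, and RMSProp.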