
Details

  • Name

    Yohannes Biadgligne
  • Position

    Research Assistant
  • Since

    29 January 2024
Publications

2025

Resilience Under Attack: Benchmarking Optimizers Against Poisoning in Federated Learning for Image Classification Using CNN

Authors
Biadgligne, Y; Baghoussi, Y; Li, K; Jorge, A;

Publication
Advances in Computational Intelligence - 18th International Work-Conference on Artificial Neural Networks, IWANN 2025, A Coruña, Spain, June 16-18, 2025, Proceedings, Part I

Abstract
Federated Learning (FL) enables decentralized model training while preserving data privacy but remains susceptible to poisoning attacks. Malicious clients can manipulate local data or model updates, threatening FL's reliability, especially in privacy-sensitive domains like healthcare and finance. While client-side optimization algorithms play a crucial role in training local models, their resilience to such attacks is underexplored. This study empirically evaluates the robustness of three widely used optimization algorithms (SGD, Adam, and RMSProp) against label-flipping attacks (LFAs) in image classification tasks using Convolutional Neural Networks (CNNs). Through 900 individual runs in both federated and centralized learning (CL) settings, we analyze their performance under Independent and Identically Distributed (IID) and Non-IID data distributions. Results reveal that SGD is the most resilient, achieving the highest accuracy in 87% of cases, while Adam performs best in 13%. Additionally, centralized models outperform FL on CIFAR-10, whereas FL excels on Fashion-MNIST, highlighting the impact of dataset characteristics on adversarial robustness.
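
The label-flipping attack studied above is easy to illustrate. Below is a minimal, hypothetical Python sketch of how a malicious FL client might poison its local labels before training; the class remapping (c → (c + 1) mod num_classes) and the flip_fraction parameter are assumptions for illustration, not the paper's exact attack configuration.

    import numpy as np

    def label_flipping_attack(labels, num_classes, flip_fraction=1.0, seed=0):
        """Illustrative label-flipping attack: remap a fraction of a
        client's local labels before local training. The mapping
        c -> (c + 1) % num_classes is an assumption, not the paper's."""
        rng = np.random.default_rng(seed)
        poisoned = labels.copy()
        n_flip = int(flip_fraction * len(labels))
        idx = rng.choice(len(labels), size=n_flip, replace=False)
        poisoned[idx] = (poisoned[idx] + 1) % num_classes
        return poisoned

    # Example: poison half of a toy batch of 10-class labels
    # (Fashion-MNIST and CIFAR-10 both have 10 classes).
    clean = np.array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9])
    print(label_flipping_attack(clean, num_classes=10, flip_fraction=0.5))

In a benchmark like the one described, such a routine would be applied only on the malicious clients' shards, after which each optimizer (SGD, Adam, RMSProp) trains the local CNN on the poisoned data as usual.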

2024

Boosting English-Amharic machine translation using corpus augmentation and Transformer

Authors
Biadgligne, Y; Smaili, K;

Publication
Interciencia

Abstract
The Transformer-based neural machine translation (NMT) model has been very successful in recent years and has become a new mainstream method. However, using it for low-resourced languages requires large amounts of data and efficient model configuration (hyper-parameter tuning) mechanisms. The scarcity of parallel texts is a bottleneck for high-quality (N)MT, especially for under-resourced languages like Amharic. This paper therefore presents an attempt to improve English-Amharic MT by introducing three different vanilla Transformer architectures with different hyper-parameter values. To obtain additional training material, offline token-level corpus augmentation was applied to the previously collected English-Amharic parallel corpus. Compared to previous work on Amharic MT, the best of the three Transformer models achieved state-of-the-art BLEU scores. We obtained this result by employing corpus augmentation techniques and hyper-parameter tuning.
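
As a rough illustration of offline token-level corpus augmentation, one common variant perturbs the source side of each parallel sentence pair (e.g. random token dropping and adjacent-token swapping) while keeping the target side unchanged. This is a sketch under assumptions; the drop/swap probabilities and the exact operations are illustrative, not the paper's procedure.

    import random

    def augment_token_level(src_tokens, p_drop=0.1, p_swap=0.1, seed=None):
        """Illustrative offline token-level augmentation for one source
        sentence of a parallel corpus: randomly drop tokens, then swap
        adjacent tokens. Probabilities are assumed values."""
        rng = random.Random(seed)
        # Drop tokens with probability p_drop; never return an empty sentence.
        tokens = [t for t in src_tokens if rng.random() >= p_drop] or list(src_tokens)
        # Swap adjacent tokens with probability p_swap.
        for i in range(len(tokens) - 1):
            if rng.random() < p_swap:
                tokens[i], tokens[i + 1] = tokens[i + 1], tokens[i]
        return tokens

    # Example: augment the English source of an English-Amharic pair.
    src = "the cat sat on the mat".split()
    print(" ".join(augment_token_level(src, seed=42)))

Each augmented source sentence is paired with the original target sentence, enlarging the parallel corpus offline before Transformer training begins.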