Details

  • Name

    Yassine Baghoussi
  • Since

    05 August 2025
Publications

2025

Resilience Under Attack: Benchmarking Optimizers Against Poisoning in Federated Learning for Image Classification Using CNN

Authors
Biadgligne, Y; Baghoussi, Y; Li, K; Jorge, A;

Publication
Advances in Computational Intelligence - 18th International Work-Conference on Artificial Neural Networks, IWANN 2025, A Coruña, Spain, June 16-18, 2025, Proceedings, Part I

Abstract
Federated Learning (FL) enables decentralized model training while preserving data privacy but remains susceptible to poisoning attacks. Malicious clients can manipulate local data or model updates, threatening FL's reliability, especially in privacy-sensitive domains like healthcare and finance. While client-side optimization algorithms play a crucial role in training local models, their resilience to such attacks is underexplored. This study empirically evaluates the robustness of three widely used optimization algorithms (SGD, Adam, and RMSProp) against label-flipping attacks (LFAs) in image classification tasks using Convolutional Neural Networks (CNNs). Through 900 individual runs in both federated and centralized learning (CL) settings, we analyze their performance under Independent and Identically Distributed (IID) and Non-IID data distributions. Results reveal that SGD is the most resilient, achieving the highest accuracy in 87% of cases, while Adam performs best in 13%. Additionally, centralized models outperform FL on CIFAR-10, whereas FL excels on Fashion-MNIST, highlighting the impact of dataset characteristics on adversarial robustness. © 2025 Elsevier B.V., All rights reserved.
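
The core of the benchmark, a federated round in which a fraction of clients flip their labels before training locally, can be illustrated with a minimal sketch. This is not the paper's code: the toy softmax model, client count, attack ratio, and all hyperparameters are assumptions standing in for the CNN setup described above.

```python
# Hedged sketch: one federated training loop with label-flipping attackers.
import numpy as np

rng = np.random.default_rng(0)
N_CLIENTS, N_MALICIOUS = 10, 3          # assumed attack ratio
N_CLASSES, DIM = 10, 64                 # toy stand-in for CNN features

def flip_labels(y, n_classes):
    """Label-flipping attack: map each label c to (n_classes - 1 - c)."""
    return (n_classes - 1) - y

def local_sgd(w, X, y, lr=0.1, epochs=1):
    """One client's local update: softmax regression trained with SGD."""
    for _ in range(epochs):
        logits = X @ w
        p = np.exp(logits - logits.max(axis=1, keepdims=True))
        p /= p.sum(axis=1, keepdims=True)
        onehot = np.eye(N_CLASSES)[y]
        grad = X.T @ (p - onehot) / len(y)   # cross-entropy gradient
        w = w - lr * grad
    return w

def fedavg(updates, sizes):
    """Server aggregation: size-weighted average of client models (FedAvg)."""
    total = sum(sizes)
    return sum(w * (n / total) for w, n in zip(updates, sizes))

w_global = np.zeros((DIM, N_CLASSES))
for rnd in range(5):                     # a few federated rounds
    updates, sizes = [], []
    for c in range(N_CLIENTS):
        X = rng.normal(size=(100, DIM))
        y = rng.integers(0, N_CLASSES, size=100)
        if c < N_MALICIOUS:              # poisoned clients flip their labels
            y = flip_labels(y, N_CLASSES)
        updates.append(local_sgd(w_global.copy(), X, y))
        sizes.append(len(y))
    w_global = fedavg(updates, sizes)
print("global model norm after training:", np.linalg.norm(w_global))
```

Swapping `local_sgd` for an Adam or RMSProp update is what the study's optimizer comparison would vary, with the aggregation step unchanged.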

2024

Kernel Corrector LSTM

Authors
Tuna, R; Baghoussi, Y; Soares, C; Mendes-Moreira, J;

Publication
ADVANCES IN INTELLIGENT DATA ANALYSIS XXII, PT II, IDA 2024

Abstract
Forecasting methods are affected by data quality issues in two ways: 1. the data become harder to predict, and 2. the issues may negatively affect the model when it is updated with new data. The latter issue is usually addressed by pre-processing the data to remove those issues. An alternative approach, Corrector LSTM (cLSTM), has recently been proposed: a Read & Write Machine Learning (RW-ML) algorithm that changes the data while learning in order to improve its predictions. Although promising results have been reported, cLSTM is computationally expensive, as it uses a meta-learner to monitor the hidden states of the LSTM. We propose a new RW-ML algorithm, Kernel Corrector LSTM (KcLSTM), that replaces the meta-learner of cLSTM with a simpler method: Kernel Smoothing. We empirically evaluate the forecasting accuracy and training time of the new algorithm and compare it with cLSTM and LSTM. Results indicate that KcLSTM decreases the training time while maintaining competitive forecasting accuracy.
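
A minimal sketch can make the kernel-smoothing correction step concrete. The paper couples this with an LSTM's training loop; only the data-rewriting step is shown here, and the Gaussian bandwidth and outlier threshold are assumptions.

```python
# Hedged sketch of the kernel-smoothing idea behind KcLSTM: smooth the
# training series with a Gaussian kernel and rewrite points that deviate
# strongly from the smoothed curve.
import numpy as np

def gaussian_kernel_smooth(y, bandwidth=3.0):
    """Nadaraya-Watson smoother over the time index."""
    t = np.arange(len(y))
    weights = np.exp(-0.5 * ((t[:, None] - t[None, :]) / bandwidth) ** 2)
    weights /= weights.sum(axis=1, keepdims=True)
    return weights @ y

def correct_series(y, bandwidth=3.0, z_thresh=3.0):
    """Replace points whose residual is an outlier by the smoothed value."""
    smooth = gaussian_kernel_smooth(y, bandwidth)
    resid = y - smooth
    z = (resid - resid.mean()) / (resid.std() + 1e-12)
    corrected = y.copy()
    mask = np.abs(z) > z_thresh          # suspect training points
    corrected[mask] = smooth[mask]       # "write" step: rewrite the data
    return corrected

rng = np.random.default_rng(1)
series = np.sin(np.linspace(0, 6 * np.pi, 200)) + 0.1 * rng.normal(size=200)
series[[40, 120]] += 3.0                 # inject two anomalies
cleaned = correct_series(series)
print("points rewritten:", int((cleaned != series).sum()))
```

The appeal over cLSTM's meta-learner is visible in the sketch: the correction is a single matrix-vector product plus a thresholding pass, with no second model to train.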

2024

Corrector LSTM: built-in training data correction for improved time-series forecasting

Authors
Baghoussi, Y; Soares, C; Moreira, JM;

Publication
Neural Comput. Appl.

Abstract
Traditional recurrent neural networks (RNNs) are essential for processing time-series data. However, they function as read-only models, lacking the ability to directly modify the data they learn from. In this study, we introduce the corrector long short-term memory (cLSTM), a Read & Write LSTM architecture that not only learns from the data but also dynamically adjusts it when necessary. The cLSTM model leverages two key components: (a) predicting LSTM’s cell states using Seasonal Autoregressive Integrated Moving Average (SARIMA) and (b) refining the training data based on discrepancies between actual and forecasted cell states. Our empirical validation demonstrates that cLSTM surpasses read-only LSTM models in forecasting accuracy across the Numenta Anomaly Benchmark (NAB) and M4 Competition datasets. Additionally, cLSTM exhibits superior performance in anomaly detection compared to hierarchical temporal memory (HTM) models. © The Author(s) 2024.
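cLSTM's correction signal can be sketched in isolation: fit a SARIMA model to a recorded cell-state trajectory and flag time steps where the actual state diverges from the forecast. The trace below is synthetic and the SARIMA orders are assumptions; in cLSTM the states come from the LSTM being trained.

```python
# Hedged sketch of cLSTM's two components: (a) model the cell-state
# trajectory with SARIMA, (b) flag discrepancies between actual and
# forecasted states as training points to refine.
import numpy as np
from statsmodels.tsa.statespace.sarimax import SARIMAX

rng = np.random.default_rng(2)
t = np.arange(300)
cell_state = np.sin(2 * np.pi * t / 24) + 0.05 * rng.normal(size=t.size)
cell_state[150] += 1.5                   # a step the LSTM "got wrong"

# (a) SARIMA fit over the recorded cell states (orders are assumptions)
model = SARIMAX(cell_state, order=(1, 0, 1), seasonal_order=(1, 0, 1, 24))
fitted = model.fit(disp=False)
expected = fitted.fittedvalues

# (b) large actual-vs-forecast discrepancies mark the data a Read & Write
# model would adjust before the next training pass
disc = np.abs(cell_state - expected)
suspect = np.where(disc > 3 * disc.std())[0]
print("time steps flagged for correction:", suspect)
```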

2024

Enhancing Forecasting using Read & Write Recurrent Neural Networks

Authors
Yassine Baghoussi;

Publication

Abstract

2023

Interpreting What is Important: An Explainability Approach and Study on Feature Selection

Authors
Rodrigues, EM; Baghoussi, Y; Mendes-Moreira, J;

Publication
PROGRESS IN ARTIFICIAL INTELLIGENCE, EPIA 2023, PT I

Abstract
Machine learning models are widely used in time series forecasting. One way to reduce their computational cost and increase their efficiency is to select only the relevant exogenous features to feed into the model. With this intention, we study five feature selection methods: the Pearson correlation coefficient, Boruta, Boruta-Shap, IMV-LSTM, and LIME. A new method focused on interpretability, SHAP-LSTM, is proposed, which uses a deep learning model's training process as part of the feature selection algorithm. The methods were compared on two different datasets, showing results comparable to using all features at a lower computational cost. On both datasets, SHAP-LSTM showed competitive results, performing comparatively better on data with a higher presence of rarely occurring categorical features.
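
The selection logic shared by the SHAP-based methods, rank features by mean absolute SHAP value and keep the top-k, can be sketched as follows. The paper builds this around an LSTM (SHAP-LSTM); to keep the sketch self-contained and fast it uses a tree model with shap.TreeExplainer instead, and the synthetic data and choice of k are assumptions.

```python
# Hedged sketch of SHAP-driven feature selection: train a model, average
# absolute SHAP values per feature, keep the top-k features.
import numpy as np
import shap
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(3)
X = rng.normal(size=(500, 10))
y = 3 * X[:, 0] - 2 * X[:, 4] + 0.1 * rng.normal(size=500)  # 2 relevant features

model = RandomForestRegressor(n_estimators=50, random_state=0).fit(X, y)
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)            # (n_samples, n_features)

importance = np.abs(shap_values).mean(axis=0)     # mean |SHAP| per feature
top_k = np.argsort(importance)[::-1][:3]          # keep the 3 best (assumed k)
print("selected feature indices:", sorted(int(i) for i in top_k))
```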