Details
Name
Leonardo Machado Ferreira
Role
External Research Collaborator
Since
18th November 2024
Nationality
Portugal
Centre
Telecommunications and Multimedia
Contacts
+351 222 094 000
leonardo.m.ferreira@inesctec.pt
2025
Authors
Ferreira, Leonardo; Gonçalves, Tiago; Neto, Pedro C.; Sequeira, Ana; Mamede, Rafael; Oliveira, Mafalda
Publication
Abstract
This study investigates the use of SHAP (SHapley Additive exPlanations) values as an explainable artificial intelligence (xAI) technique applied to a facial attribute classification task. We analyse the consistency of SHAP value distributions across diverse classifier architectures that share the same feature extractor, revealing that the key features driving attribute classification remain stable regardless of classifier architecture. Our findings highlight the challenges in interpreting SHAP values at the individual sample level, as their reliability depends on the model's ability to learn distinct class-specific features; models exploiting inter-class correlations yield less representative SHAP explanations. Furthermore, pixel-level SHAP analysis reveals that superior classification accuracy does not necessarily equate to meaningful semantic understanding; notably, despite FaceNet exhibiting lower performance than CLIP, it demonstrated a more nuanced grasp of the underlying class attributes. Finally, we address the computational scalability of SHAP, demonstrating that KernelExplainer becomes infeasible for high-dimensional pixel data, whereas DeepExplainer and GradientExplainer offer more practical alternatives with trade-offs. Our results suggest that SHAP is most effective for small to medium feature sets or tabular data, providing interpretable and computationally manageable explanations.
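To illustrate the scalability point made in the abstract, below is a minimal sketch (not the authors' code) of how SHAP's model-specific explainers are applied to pixel inputs. The tiny convolutional model and random tensors are hypothetical placeholders, not the FaceNet/CLIP pipelines studied in the paper; only the shap API calls (GradientExplainer, DeepExplainer, KernelExplainer) are the library's real interface.

```python
# Minimal sketch: model-specific SHAP explainers on high-dimensional pixel data.
# Assumes PyTorch and the shap package; the model and data are toy placeholders.
import torch
import torch.nn as nn
import shap

# Hypothetical stand-in for a facial-attribute classifier; the paper's actual
# backbones (FaceNet, CLIP) are not reproduced here.
model = nn.Sequential(
    nn.Conv2d(3, 8, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.Flatten(),
    nn.Linear(8 * 32 * 32, 2),  # logits for a binary facial attribute
)
model.eval()

# Background (reference) distribution and samples to explain: 32x32 RGB images.
background = torch.randn(20, 3, 32, 32)
to_explain = torch.randn(4, 3, 32, 32)

# GradientExplainer approximates SHAP values via expected gradients and stays
# tractable for pixel inputs because it reuses the model's backward pass.
explainer = shap.GradientExplainer(model, background)
shap_values = explainer.shap_values(to_explain)

# DeepExplainer (DeepLIFT-style) is the other model-specific alternative:
# deep_explainer = shap.DeepExplainer(model, background)

# KernelExplainer, by contrast, is model-agnostic and perturbs features one
# coalition at a time; even this toy input has 3 * 32 * 32 = 3072 "features",
# so it needs thousands of model evaluations per explained sample, which is
# what makes it infeasible at pixel level.
```

The contrast in the comments mirrors the abstract's conclusion: the perturbation-based KernelExplainer scales with the number of input features, so it remains practical only for small to medium feature sets or tabular data, while the gradient-based explainers trade model-agnosticism for tractability on images.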