2025
Authors
Albuquerque, C; Neto, PC; Gonçalves, T; Sequeira, AF;
Publication
HCI for Cybersecurity, Privacy and Trust - 7th International Conference, HCI-CPT 2025, Held as Part of the 27th HCI International Conference, HCII 2025, Gothenburg, Sweden, June 22-27, 2025, Proceedings, Part II
Abstract
Face recognition technology, despite its advancements and increasing accuracy, still presents significant challenges in explainability and ethical concerns, especially when applied in sensitive domains such as surveillance, law enforcement, and access control. The opaque nature of deep learning models undermines transparency and user trust and can conceal bias. Concurrently, the proliferation of web applications presents a unique opportunity to develop accessible and interactive tools for demonstrating and analysing these complex systems. Such tools can facilitate the exploration of model decisions across varied images, aiding bias mitigation and enhancing users' trust by allowing them to see the model in action and understand its reasoning. We propose an explainable face recognition web application designed to support enrolment, identification, authentication, and verification while providing visual explanations through pixel-wise importance maps to clarify the model's decision-making process. The system is built in compliance with the European Union General Data Protection Regulation, ensuring data privacy and user control over personal information. The application is also designed for scalability, capable of efficiently managing large datasets. Load tests conducted on databases containing up to 1,000,000 images confirm its efficiency. This scalability ensures robust performance and a seamless user experience even with database growth. © The Author(s), under exclusive license to Springer Nature Switzerland AG 2025.
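The abstract mentions pixel-wise importance maps for explaining verification decisions but does not describe the method used. One common technique for producing such maps is occlusion-based attribution: grey out each image patch in turn and measure the drop in similarity between the probe's embedding and the enrolled reference embedding. The sketch below illustrates that idea only; the `embed` function is a hypothetical stand-in (a fixed random projection) for the paper's actual face-embedding network, whose architecture is not given in the abstract.

```python
import numpy as np

def embed(img: np.ndarray) -> np.ndarray:
    """Hypothetical stand-in for a face-embedding network.
    A real system would use a deep model (e.g. a CNN); a fixed
    random projection keeps this sketch self-contained."""
    rng = np.random.default_rng(0)              # fixed "weights"
    W = rng.standard_normal((64, img.size))
    v = W @ img.ravel()
    return v / np.linalg.norm(v)                # unit-norm embedding

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity of two unit-norm embeddings."""
    return float(a @ b)

def occlusion_map(probe: np.ndarray, ref_emb: np.ndarray,
                  patch: int = 8, stride: int = 8) -> np.ndarray:
    """Pixel-wise importance map: occlude each patch of the probe
    image and record how much the similarity to the enrolled
    reference embedding drops. Larger drop = more important region."""
    base = cosine(embed(probe), ref_emb)
    h, w = probe.shape
    heat = np.zeros_like(probe, dtype=float)
    for y in range(0, h - patch + 1, stride):
        for x in range(0, w - patch + 1, stride):
            occluded = probe.copy()
            occluded[y:y + patch, x:x + patch] = probe.mean()
            drop = base - cosine(embed(occluded), ref_emb)
            heat[y:y + patch, x:x + patch] = drop
    return heat
```

In a verification flow, `ref_emb` would come from the enrolment step; the resulting heat map can then be overlaid on the probe image in the web interface.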