
Publications by Pedro Henriques Abreu

2019

A Data Visualization Approach for Intersection Analysis using AIS Data

Authors
Pereira, RC; Abreu, PH; Polisciuc, E; Machado, P;

Publication
Proceedings of the 14th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications, VISIGRAPP 2019, Volume 3: IVAPP, Prague, Czech Republic, February 25-27, 2019.


2019

Analyzing the Footprint of Classifiers in Adversarial Denial of Service Contexts

Authors
Martins, N; Cruz, JM; Cruz, T; Abreu, PH;

Publication
Progress in Artificial Intelligence, 19th EPIA Conference on Artificial Intelligence, EPIA 2019, Vila Real, Portugal, September 3-6, 2019, Proceedings, Part II.


2021

FAWOS: Fairness-Aware Oversampling Algorithm Based on Distributions of Sensitive Attributes

Authors
Salazar, T; Santos, MS; Araújo, H; Abreu, PH;

Publication
IEEE Access


2022

The impact of heterogeneous distance functions on missing data imputation and classification performance

Authors
Santos, MS; Abreu, PH; Fernández, A; Luengo, J; Santos, JAM;

Publication
Engineering Applications of Artificial Intelligence


2024

Data-Centric Federated Learning for Anomaly Detection in Smart Grids and Other Industrial Control Systems

Authors
Perdigão, D; Cruz, T; Simões, P; Abreu, PH;

Publication
NOMS 2024 IEEE Network Operations and Management Symposium, Seoul, Republic of Korea, May 6-10, 2024


2025

Guidelines for designing visualization tools for group fairness analysis in binary classification

Authors
Cruz, A; Salazar, T; Carvalho, M; Maças, C; Machado, P; Abreu, PH;

Publication
Artificial Intelligence Review

Abstract
The use of machine learning in decision-making has become increasingly pervasive across fields from healthcare to finance, enabling systems to learn from data and improve their performance over time. The transformative impact of these technologies warrants considerations that demand modern solutions through responsible artificial intelligence: the incorporation of ethical principles into the creation and deployment of AI systems. Fairness is one such principle, ensuring that machine learning algorithms do not produce biased outcomes or discriminate against any group of the population with respect to sensitive attributes, such as race or gender. In this context, visualization techniques can help identify data imbalances and disparities in model performance across different demographic groups. However, there is a lack of guidance towards clear and effective representations that support entry-level users in fairness analysis, particularly given that approaches to fairness visualization can vary significantly. The goal of this work is therefore to present a comprehensive analysis of current tools for visualizing and examining group fairness in machine learning, with a focus on both data and binary classification model outcomes. These visualization tools are reviewed and discussed, concluding with the proposition of a focused set of visualization guidelines directed towards improving the comprehensibility of fairness visualizations.
