
Publications by Pedro Henriques Abreu

2019

A Data Visualization Approach for Intersection Analysis using AIS Data

Authors
Pereira, RC; Abreu, PH; Polisciuc, E; Machado, P;

Publication
Proceedings of the 14th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications, VISIGRAPP 2019, Volume 3: IVAPP, Prague, Czech Republic, February 25-27, 2019.

2019

Analyzing the Footprint of Classifiers in Adversarial Denial of Service Contexts

Authors
Martins, N; Cruz, JM; Cruz, T; Abreu, PH;

Publication
Progress in Artificial Intelligence, 19th EPIA Conference on Artificial Intelligence, EPIA 2019, Vila Real, Portugal, September 3-6, 2019, Proceedings, Part II.

2021

FAWOS: Fairness-Aware Oversampling Algorithm Based on Distributions of Sensitive Attributes

Authors
Salazar, T; Santos, MS; Araújo, H; Abreu, PH;

Publication
IEEE Access

2022

The impact of heterogeneous distance functions on missing data imputation and classification performance

Authors
Santos, MS; Abreu, PH; Fernández, A; Luengo, J; Santos, JAM;

Publication
Engineering Applications of Artificial Intelligence

2024

Data-Centric Federated Learning for Anomaly Detection in Smart Grids and Other Industrial Control Systems

Authors
Perdigão, D; Cruz, T; Simões, P; Abreu, PH;

Publication
NOMS 2024 IEEE Network Operations and Management Symposium, Seoul, Republic of Korea, May 6-10, 2024

2025

Guidelines for designing visualization tools for group fairness analysis in binary classification

Authors
Cruz, A; Salazar, T; Carvalho, M; Maças, C; Machado, P; Abreu, PH;

Publication
Artificial Intelligence Review

Abstract
The use of machine learning in decision-making has become increasingly pervasive across various fields, from healthcare to finance, enabling systems to learn from data and improve their performance over time. The transformative impact of these new technologies warrants several considerations that demand the development of modern solutions through responsible artificial intelligence: the incorporation of ethical principles into the creation and deployment of AI systems. Fairness is one such principle, ensuring that machine learning algorithms do not produce biased outcomes or discriminate against any group of the population with respect to sensitive attributes, such as race or gender. In this context, visualization techniques can help identify data imbalances and disparities in model performance across different demographic groups. However, there is a lack of guidance towards clear and effective representations that support entry-level users in fairness analysis, particularly when considering that the approaches to fairness visualization can vary significantly. In this regard, the goal of this work is to present a comprehensive analysis of current tools directed at visualizing and examining group fairness in machine learning, with a focus on both data and binary classification model outcomes. These visualization tools are reviewed and discussed, concluding with the proposition of a focused set of visualization guidelines directed towards improving the comprehensibility of fairness visualizations.
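The abstract notes that visualization can surface disparities in model performance across demographic groups. As a rough illustration only, not taken from the paper or from any of the tools it reviews, the Python sketch below computes two per-group quantities that group-fairness views for binary classification commonly chart: the selection rate and the true positive rate, split on a sensitive attribute. All data, names, and metric choices here are illustrative assumptions.

```python
# Minimal sketch (hypothetical, not the paper's method): per-group metrics
# that group-fairness visualizations for binary classifiers typically summarize.
from collections import defaultdict


def group_rates(y_true, y_pred, groups):
    """Return per-group selection rate and true positive rate."""
    stats = defaultdict(lambda: {"n": 0, "pos_pred": 0, "tp": 0, "actual_pos": 0})
    for yt, yp, g in zip(y_true, y_pred, groups):
        s = stats[g]
        s["n"] += 1                                  # group size
        s["pos_pred"] += int(yp == 1)                # positive predictions
        s["actual_pos"] += int(yt == 1)              # actual positives
        s["tp"] += int(yt == 1 and yp == 1)          # true positives
    return {
        g: {
            "selection_rate": s["pos_pred"] / s["n"],
            "tpr": s["tp"] / s["actual_pos"] if s["actual_pos"] else float("nan"),
        }
        for g, s in stats.items()
    }


if __name__ == "__main__":
    # Toy synthetic example with a sensitive attribute taking values "A" and "B".
    y_true = [1, 0, 1, 1, 0, 0, 1, 0]
    y_pred = [1, 0, 1, 0, 0, 1, 0, 0]
    groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
    for g, m in group_rates(y_true, y_pred, groups).items():
        print(f"group {g}: selection_rate={m['selection_rate']:.2f}, TPR={m['tpr']:.2f}")
```

Plotting these per-group values side by side (for example, as paired bar charts) is one simple way to expose the kind of group-level disparities the surveyed tools are designed to make visible.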
