2019
Authors
Pereira, RC; Abreu, PH; Polisciuc, E; Machado, P;
Publication
Proceedings of the 14th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications, VISIGRAPP 2019, Volume 3: IVAPP, Prague, Czech Republic, February 25-27, 2019.
2019
Authors
Martins, N; Cruz, JM; Cruz, T; Abreu, PH;
Publication
Progress in Artificial Intelligence, 19th EPIA Conference on Artificial Intelligence, EPIA 2019, Vila Real, Portugal, September 3-6, 2019, Proceedings, Part II.
2021
Authors
Salazar, T; Santos, MS; Araújo, H; Abreu, PH;
Publication
IEEE Access
2022
Authors
Santos, MS; Abreu, PH; Fernández, A; Luengo, J; Santos, JAM;
Publication
Engineering Applications of Artificial Intelligence
2024
Authors
Perdigão, D; Cruz, T; Simões, P; Abreu, PH;
Publication
NOMS 2024 - IEEE Network Operations and Management Symposium, Seoul, Republic of Korea, May 6-10, 2024
2025
Authors
Cruz, A; Salazar, T; Carvalho, M; Maças, C; Machado, P; Abreu, PH;
Publication
Artificial Intelligence Review
Abstract
The use of machine learning in decision-making has become increasingly pervasive across fields from healthcare to finance, enabling systems to learn from data and improve their performance over time. The transformative impact of these technologies raises considerations that demand modern solutions through responsible artificial intelligence: the incorporation of ethical principles into the creation and deployment of AI systems. Fairness is one such principle, ensuring that machine learning algorithms do not produce biased outcomes or discriminate against any group of the population with respect to sensitive attributes such as race or gender. In this context, visualization techniques can help identify data imbalances and disparities in model performance across demographic groups. However, there is little guidance on clear and effective representations that support entry-level users in fairness analysis, particularly since approaches to fairness visualization vary significantly. The goal of this work is therefore to present a comprehensive analysis of current tools for visualizing and examining group fairness in machine learning, with a focus on both data and binary classification model outcomes. These visualization tools are reviewed and discussed, concluding with a focused set of visualization guidelines aimed at improving the comprehensibility of fairness visualizations.
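As a concrete illustration of the group-fairness notion the abstract refers to (this sketch is not taken from the paper; the data, group labels, and metric choice are illustrative assumptions), one of the simplest group-fairness measures for a binary classifier is the demographic parity difference: the gap in positive-prediction rates between demographic groups, which is exactly the kind of disparity the surveyed visualization tools are meant to expose.

```python
# Minimal sketch of a group-fairness metric: demographic parity difference.
# Toy predictions and group labels below are illustrative only.

def demographic_parity_difference(y_pred, groups):
    """Largest gap in positive-prediction rate between any two groups.

    y_pred: binary predictions (0/1) from a classifier.
    groups: sensitive-attribute label for each prediction.
    """
    rates = {}
    for g in set(groups):
        preds = [p for p, grp in zip(y_pred, groups) if grp == g]
        rates[g] = sum(preds) / len(preds)  # positive rate within group g
    vals = sorted(rates.values())
    return vals[-1] - vals[0]

# Hypothetical predictions for two demographic groups "a" and "b":
# group "a" receives a positive outcome 75% of the time, group "b" only 25%.
y_pred = [1, 1, 0, 1, 0, 0, 1, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
print(demographic_parity_difference(y_pred, groups))  # 0.5
```

A value of 0 would indicate parity; values near 1 indicate that one group almost always receives the positive outcome while another almost never does, a disparity that bar or dot plots per group can make immediately visible.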