Publications by LIAAD

2025

Fairness Analysis in Causal Models: An Application to Public Procurement

Authors
Teixeira, S; Nogueira, AR; Gama, J;

Publication
MACHINE LEARNING AND PRINCIPLES AND PRACTICE OF KNOWLEDGE DISCOVERY IN DATABASES, ECML PKDD 2023, PT II

Abstract
Data-driven decision models based on Artificial Intelligence (AI) have been widely used in the public and private sectors. These models present challenges and are intended to be fair, effective and transparent in areas of public interest. Bias, fairness and government transparency significantly impact the functioning of a democratic society. They shape the relationship between a government and its citizens, influencing trust, accountability, and the equitable treatment of individuals and groups. Data-driven decision models can be biased at several stages of the process, contributing to injustices. The purpose of our research is to understand fairness in the use of causal discovery for public procurement. By analysing Portuguese public contracts data, we aim i) to predict the place of execution of public contracts using the PC algorithm with the sp-mi, smc-χ² and mc-χ² conditional independence tests; and ii) to analyse and compare fairness in those scenarios using the Predictive Parity Rate, Proportional Parity, Demographic Parity and Accuracy Parity metrics. By addressing fairness concerns, we seek to promote responsible data-driven decision models. We conclude that, in our case, fairness metrics provide an assessment that is more local than global due to the causality pathways. We also observe that Proportional Parity is the metric with the lowest variance and among those with the highest precision, which reinforces the observation that the Agency category is the one furthest apart in terms of group proportions.
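The abstract names the group fairness metrics compared in the study but does not include code. Below is a minimal sketch, not the authors' implementation, of how such per-group quantities can be computed with pandas from binary predictions and a grouping attribute; the column names, the `group_fairness_report` helper, and the exact metric formulations are illustrative assumptions, since definitions differ between fairness toolkits.

```python
import pandas as pd

def group_fairness_report(df: pd.DataFrame,
                          group_col: str,
                          y_true: str,
                          y_pred: str) -> pd.DataFrame:
    """Per-group quantities behind the parity metrics named in the abstract.

    One common formulation (definitions vary between toolkits):
      Demographic Parity  -> count of positive predictions per group
      Proportional Parity -> rate of positive predictions per group
      Predictive Parity   -> precision per group
      Accuracy Parity     -> accuracy per group
    A metric satisfies parity when its value is (approximately) equal
    across groups.
    """
    rows = []
    for g, part in df.groupby(group_col):
        pred_pos = part[y_pred] == 1
        rows.append({
            group_col: g,
            "positive_count": int(pred_pos.sum()),              # Demographic Parity
            "positive_rate": pred_pos.mean(),                   # Proportional Parity
            "precision": (part.loc[pred_pos, y_true] == 1).mean()
                         if pred_pos.any() else float("nan"),   # Predictive Parity
            "accuracy": (part[y_pred] == part[y_true]).mean(),  # Accuracy Parity
        })
    return pd.DataFrame(rows)

# Hypothetical usage: 'agency' plays the role of the grouping attribute
# discussed in the abstract.
example = pd.DataFrame({
    "agency": ["A", "A", "B", "B", "B"],
    "y_true": [1, 0, 1, 1, 0],
    "y_pred": [1, 0, 0, 1, 1],
})
print(group_fairness_report(example, "agency", "y_true", "y_pred"))
```

Parity with respect to each metric would then be judged by how close the corresponding column is across groups, for example across agency categories.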

2025

One-Class Learning for Data Stream Through Graph Neural Networks

Authors
Gôlo, MPS; Gama, J; Marcacini, RM;

Publication
INTELLIGENT SYSTEMS, BRACIS 2024, PT IV

Abstract
In many data stream applications there is a normal concept, and the objective is to distinguish normal from abnormal concepts by training only with instances of the normal concept. This scenario is known in the literature as one-class learning (OCL) for data streams. In this scenario, we highlight two main gaps: (i) the lack of methods based on graph neural networks (GNNs) and (ii) the lack of interpretable methods. We introduce OPENCAST (One-class graPh autoENCoder for dAta STream), a new OCL method for data streams based on GNNs. Our method learns representations while encapsulating the instances of interest within a hypersphere. OPENCAST achieved state-of-the-art results for data streams in the OCL scenario, outperforming seven other methods. Furthermore, it learns low-dimensional representations, which makes both the representation learning process and the results interpretable.
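OPENCAST itself is built on a graph autoencoder, which the abstract does not detail. As a rough illustration of the hypersphere-encapsulation idea only, the sketch below uses a plain MLP encoder with a Deep-SVDD-style objective in PyTorch; the names (`HypersphereOneClass`, `train_and_score`) and all hyperparameters are hypothetical and are not taken from the paper.

```python
import torch
import torch.nn as nn

class HypersphereOneClass(nn.Module):
    """Minimal one-class learner: map inputs to a low-dimensional space and
    pull normal-concept instances toward a fixed hypersphere center."""

    def __init__(self, in_dim: int, emb_dim: int = 2):
        super().__init__()
        # A plain MLP encoder stands in for OPENCAST's graph autoencoder.
        self.encoder = nn.Sequential(
            nn.Linear(in_dim, 16), nn.ReLU(), nn.Linear(16, emb_dim)
        )
        self.center = torch.zeros(emb_dim)  # hypersphere center c

    def distances(self, x: torch.Tensor) -> torch.Tensor:
        # Squared distance of each embedding to the center.
        return ((self.encoder(x) - self.center) ** 2).sum(dim=1)

def train_and_score(model, normal_x, epochs: int = 100, lr: float = 1e-2):
    """Train only on normal instances; larger distances mean more abnormal."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        opt.zero_grad()
        loss = model.distances(normal_x).mean()  # shrink the enclosing sphere
        loss.backward()
        opt.step()
    return model.distances(normal_x).detach()

# Hypothetical usage: train on normal-concept instances only, then flag a new
# instance as abnormal when its distance exceeds a radius taken from a high
# quantile of the training distances.
normal_x = torch.randn(64, 8)
model = HypersphereOneClass(in_dim=8)
train_scores = train_and_score(model, normal_x)
radius = train_scores.quantile(0.95)
new_x = torch.randn(1, 8)
is_abnormal = model.distances(new_x).item() > radius.item()
```

A streaming variant would update the encoder incrementally as new normal-concept instances arrive, which is outside the scope of this sketch.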

2025

Interpretable Rules for Online Failure Prediction: A Case Study on the Metro do Porto dataset

Authors
Jakobs, M; Veloso, B; Gama, J;

Publication
CoRR

Abstract

2025

A Deep Learning Framework for Medium-Term Covariance Forecasting in Multi-Asset Portfolios

Authors
Reis, P; Serra, AP; Gama, J;

Publication
CoRR

Abstract

2025

On-device edge learning for IoT data streams: a survey

Authors
Lourenço, A; Rodrigo, J; Gama, J; Marreiros, G;

Publication
CoRR

Abstract

2025

In-context learning of evolving data streams with tabular foundational models

Authors
Lourenço, A; Gama, J; Xing, EP; Marreiros, G;

Publication
CoRR

Abstract
