Details

  • Name

    Sónia Carvalho Teixeira
  • Role

    External Student
  • Since

    1st April 2015
Publications

2026

Ethical Considerations in the Context of AI-Driven Misinformation Detection

Authors
Ettore Barbagallo; Guillaume Gadek; Géraud Faye; Nina Khairova; Chirag Arora; Dilhan Thilakarathne; Karen Joisten; Sónia Teixeira; Juan M. Durán; Manuel Barrantes;

Publication
Handbook of Human-AI Collaboration

Abstract
Abstract Misinformation poses one of the most urgent challenges of our society and raises the question of how to deal with it and manage its rapid spread. To address this problem, a promising approach relies on AI-based misinformation detection. This chapter of the book offers a critical analysis of the ethical implications associated with the design, deployment, and use of misinformation detectors (MDs). Designing and deploying an MD—an AI system that automatically identifies misinformation—is a complex undertaking that requires an interdisciplinary approach, as the challenges faced by MD designers and deployers encompass not only technical aspects, but also linguistic, sociological, political, and especially ethical dimensions. Our analysis is ethics-oriented and follows two main lines of inquiry: (1) Ethics by Design, which focuses on issues related to the design process of an MD, and (2) Ethics of Impact, which addresses the intended and unintended effects of MD deployment and use.

2025

Unveiling Fairness and Performance of Causal Discovery

Authors
Teixeira, S; Nogueira, AR; Gama, J;

Publication
DSAA

Abstract
Data-driven decision models based on Artificial Intelligence (AI) are increasingly adopted across domains. However, these models are susceptible to bias that can result in unfair or discriminatory outcomes. Recent research has explored causal discovery methods as a promising way to understand and improve fairness in decision-making systems. In this work, we investigate how different conditional independence tests used in constraint-based causal discovery algorithms, specifically the PC algorithm, affect fairness and performance. We perform an empirical evaluation on several datasets, including Portuguese public contracts, COMPAS, and the German Credit dataset. Using seven conditional independence tests, we assess model behavior under fairness (demographic parity, accuracy parity, equalized odds and predictive rate parity) and performance (accuracy, F1-score, AUC) metrics. Our findings reveal that some tests, due to their statistical properties, fail to expose unfairness detectable via causal structures, even when performance metrics appear acceptable. Furthermore, we highlight significant differences in computational efficiency among the tests, with χ²-Adf, sp-mi, and sp-χ² being the least efficient. This study underscores the need for careful selection of conditional independence tests in causal discovery to ensure both fairness and reliability in data-driven decision systems. © 2025 IEEE.
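Two of the group-fairness metrics named in the abstract, demographic parity and accuracy parity, can be sketched in a few lines. This is an illustrative sketch only: the function names, the toy data, and the protected-group labels are hypothetical, not taken from the paper.

```python
import numpy as np

def demographic_parity(y_pred, group):
    """Positive-prediction rate per protected group."""
    return {g: y_pred[group == g].mean() for g in np.unique(group)}

def accuracy_parity(y_true, y_pred, group):
    """Prediction accuracy per protected group."""
    return {g: (y_true[group == g] == y_pred[group == g]).mean()
            for g in np.unique(group)}

# Toy example: binary predictions for two groups, A and B.
y_true = np.array([1, 0, 1, 1, 0, 1, 0, 0])
y_pred = np.array([1, 0, 1, 0, 0, 1, 1, 0])
group  = np.array(["A", "A", "A", "A", "B", "B", "B", "B"])

dp = demographic_parity(y_pred, group)        # positive rate per group
ap = accuracy_parity(y_true, y_pred, group)   # accuracy per group
```

A model satisfies the metric when the per-group values are (approximately) equal; the paper compares how different conditional independence tests inside the PC algorithm shift these values.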

2025

A Multidimensional Approach to Ethical AI Auditing

Authors
Sónia Teixeira; Atia Cortés; Dilhan Thilakarathne; Gianmarco Gori; Marco Minici; Monowar Bhuyan; Nina Khairova; Tosin Adewumi; Devvjiit Bhuyan; Jack O'Keefe; Carmela Comito; João Gama; Virginia Dignum;

Publication
Proceedings of the AAAI/ACM Conference on AI Ethics and Society

Abstract
The increasing integration of Artificial Intelligence (AI) across various sectors of society raises complex ethical challenges requiring systematic and scalable oversight mechanisms. While tools such as AIF360 and Aequitas address specific dimensions, namely fairness, there remains a lack of comprehensive frameworks capable of auditing multiple ethical principles simultaneously. This paper introduces a multidimensional AI auditing tool designed to evaluate systems across key dimensions: fairness, explainability, robustness, transparency, bias, sustainability, and legal compliance. Unlike existing tools, our framework enables simultaneous assessment of these dimensions, supporting more holistic and accountable AI deployment. We demonstrate the tool’s applicability through use cases and discuss its implications for building trust and aligning AI development with fundamental ethical standards.

2025

Fairness Analysis in Causal Models: An Application to Public Procurement

Authors
Teixeira, S; Nogueira, AR; Gama, J;

Publication
MACHINE LEARNING AND PRINCIPLES AND PRACTICE OF KNOWLEDGE DISCOVERY IN DATABASES, ECML PKDD 2023, PT II

Abstract
Data-driven decision models based on Artificial Intelligence (AI) have been widely used in the public and private sectors. These models present challenges and are intended to be fair, effective and transparent in public interest areas. Bias, fairness and government transparency are aspects that significantly impact the functioning of a democratic society. They shape the relationship between the government and its citizens, influencing trust, accountability, and the equitable treatment of individuals and groups. Data-driven decision models can be biased at several process stages, contributing to injustices. Our research purpose is to understand fairness in the use of causal discovery for public procurement. By analysing Portuguese public contracts data, we aim i) to predict the place of execution of public contracts using the PC algorithm with sp-mi, smc-χ² and mc-χ² conditional independence tests; ii) to analyse and compare the fairness in those scenarios using Predictive Parity Rate, Proportional Parity, Demographic Parity and Accuracy Parity metrics. By addressing fairness concerns, we seek to enhance responsible data-driven decision models. We conclude that, in our case, fairness metrics make an assessment more local than global due to causality pathways. We also observe that the Proportional Parity metric is the one with the lowest variance among all metrics and the one with the highest precision, and this reinforces the observation that the Agency category is the one that is furthest apart in terms of the proportion of the groups.
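The χ²-family conditional independence tests the abstract mentions all answer the same question the PC algorithm keeps asking: is X independent of Y given Z? A didactic approximation is to stratify on Z and pool per-stratum χ² statistics. This sketch is a textbook-style illustration under that assumption, not the paper's smc-χ² or mc-χ² implementations.

```python
import numpy as np
from scipy.stats import chi2, chi2_contingency

def ci_chi2_test(x, y, z):
    """p-value for H0: X independent of Y given Z (discrete 1-D arrays)."""
    stat, dof = 0.0, 0
    xs, ys = np.unique(x), np.unique(y)
    for val in np.unique(z):
        m = z == val
        # Contingency table of X vs Y inside this stratum of Z.
        table = np.array([[np.sum((x[m] == a) & (y[m] == b)) for b in ys]
                          for a in xs])
        # Drop empty rows/columns; a degenerate stratum carries no signal.
        table = table[table.sum(axis=1) > 0][:, table.sum(axis=0) > 0]
        if table.shape[0] < 2 or table.shape[1] < 2:
            continue
        s, _, d, _ = chi2_contingency(table)
        stat, dof = stat + s, dof + d   # pool statistics and dof across strata
    return chi2.sf(stat, dof) if dof else 1.0

# Perfect dependence within each stratum -> small p-value.
z = np.array([0] * 8 + [1] * 8)
x = np.tile([0, 0, 1, 1], 4)
p_dep = ci_chi2_test(x, x.copy(), z)
```

In a PC-style loop, the edge X–Y would be removed whenever this p-value exceeds the significance threshold for some conditioning set Z; the paper's point is that the choice of test changes which edges survive, and thus the fairness of the resulting model.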

2025

Strategic Alliances in NetLogo: A Flocking Algorithm with Reinforcement Learning

Authors
Sónia Teixeira; Pedro Campos;

Publication
Machine Learning Perspectives of Agent-Based Models

Abstract
The evolution of markets changes the way organisations act. To improve their competitive performance and stay on the market, organisations often adopt a strategy of establishing agreements with other organisations, known as strategic alliances. Several tools, algorithms, and computational systems call upon other sciences as a source of inspiration. In this work we explore flocking behaviour, a paradigm from biology, to analyse the collective intelligence behaviour that emerges from a group of individuals or firms. Inspired by the Cucker and Smale algorithm (C-S), we propose a new version of the flocking algorithm, AllFlock, applied to strategic alliances, considering a learning mechanism. For this new approach, metrics were obtained for the parameters of the C-S algorithm: position, velocity, and influence. The latter uses cooperative games, adapted mechanisms, and methods currently explored in reinforcement learning. We have used NetLogo as the modelling environment. Five parameter configurations were analysed. For each of those configurations, the average number of iterations, the permanence rate of organisations in the alliance, and the average growth of the organisations were computed. The behaviour of the organisations reveals a tendency for convergence, confirming the existence of flocking behaviour. © 2025 The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Switzerland AG.
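The Cucker-Smale (C-S) dynamics the abstract names as the inspiration for AllFlock can be sketched in a few lines: each agent nudges its velocity toward its neighbours', weighted by a distance-decaying communication function. The weight function and all parameter values below are standard textbook choices, not the AllFlock parameters themselves.

```python
import numpy as np

def cs_step(pos, vel, dt=0.1, K=1.0, sigma=1.0, beta=0.5):
    """One Euler step of Cucker-Smale flocking for N agents in 2-D."""
    n = len(pos)
    diff = pos[:, None, :] - pos[None, :, :]       # pairwise x_i - x_j
    dist2 = (diff ** 2).sum(-1)                    # squared distances
    psi = K / (sigma ** 2 + dist2) ** beta         # communication weights
    dv = (psi[:, :, None] * (vel[None, :, :] - vel[:, None, :])).sum(1) / n
    return pos + dt * vel, vel + dt * dv

rng = np.random.default_rng(0)
pos = rng.normal(size=(20, 2))
vel = rng.normal(size=(20, 2))
mean_v0 = vel.mean(0)                              # conserved by symmetry
init_spread = np.linalg.norm(vel - vel.mean(0), axis=1).max()
for _ in range(500):
    pos, vel = cs_step(pos, vel)
final_spread = np.linalg.norm(vel - vel.mean(0), axis=1).max()
# Velocities contract toward the shared mean: the flocking behaviour
# whose emergence the chapter studies for strategic alliances.
```

Because the weights are symmetric, the mean velocity is conserved while the velocity spread shrinks; AllFlock replaces the fixed influence term with one learned via cooperative games and reinforcement learning.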