Details

  • Name

    Sónia Carvalho Teixeira
  • Position

    External Student
  • Since

    01 April 2015
Publications

2025

Unveiling Fairness and Performance of Causal Discovery

Authors
Teixeira, S; Nogueira, AR; Gama, J;

Publication
DSAA

Abstract

2025

A Multidimensional Approach to Ethical AI Auditing

Authors
Teixeira, S; Cortés, A; Thilakarathne, D; Gori, G; Minici, M; Bhuyan, M; Khairova, N; Adewumi, T; Bhuyan, D; O'Keefe, J; Comito, C; Gama, J; Dignum, V;

Publication
Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society

Abstract
The increasing integration of Artificial Intelligence (AI) across various sectors of society raises complex ethical challenges requiring systematic and scalable oversight mechanisms. While tools such as AIF360 and Aequitas address specific dimensions, namely fairness, there remains a lack of comprehensive frameworks capable of auditing multiple ethical principles simultaneously. This paper introduces a multidimensional AI auditing tool designed to evaluate systems across key dimensions: fairness, explainability, robustness, transparency, bias, sustainability, and legal compliance. Unlike existing tools, our framework enables simultaneous assessment of these dimensions, supporting more holistic and accountable AI deployment. We demonstrate the tool’s applicability through use cases and discuss its implications for building trust and aligning AI development with fundamental ethical standards.

2025

Fairness Analysis in Causal Models: An Application to Public Procurement

Authors
Teixeira, S; Nogueira, AR; Gama, J;

Publication
Machine Learning and Principles and Practice of Knowledge Discovery in Databases, ECML PKDD 2023, Part II

Abstract
Data-driven decision models based on Artificial Intelligence (AI) have been widely used in the public and private sectors. These models present challenges and are intended to be fair, effective and transparent in public interest areas. Bias, fairness and government transparency are aspects that significantly impact the functioning of a democratic society. They shape the relationship between the government and its citizens, influencing trust, accountability, and the equitable treatment of individuals and groups. Data-driven decision models can be biased at several process stages, contributing to injustices. Our research purpose is to understand fairness in the use of causal discovery for public procurement. By analysing Portuguese public contracts data, we aim i) to predict the place of execution of public contracts using the PC algorithm with sp-mi, smc-chi(2) and mc-chi(2) conditional independence tests; ii) to analyse and compare the fairness in those scenarios using the Predictive Parity Rate, Proportional Parity, Demographic Parity and Accuracy Parity metrics. By addressing fairness concerns, we aim to enhance responsible data-driven decision models. We conclude that, in our case, fairness metrics make an assessment more local than global due to causality pathways. We also observe that the Proportional Parity metric has the lowest variance among all metrics and the highest precision, which reinforces the observation that the Agency category is the furthest apart in terms of the proportion of the groups.
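The group-parity metrics named in the abstract reduce to comparing per-group rates of a model's predictions. A minimal sketch of Demographic Parity, i.e. the positive-prediction rate per group (the data and group labels here are hypothetical, not the paper's):

```python
import numpy as np

def demographic_parity(y_pred, group):
    """Positive-prediction rate per group; parity holds when the rates match."""
    return {g: float(y_pred[group == g].mean()) for g in np.unique(group)}

# Hypothetical predictions for two groups, A and B.
y_pred = np.array([1, 0, 1, 1, 0, 0, 1, 0])
group = np.array(["A", "A", "A", "A", "B", "B", "B", "B"])
rates = demographic_parity(y_pred, group)
# rates["A"] = 0.75, rates["B"] = 0.25 -> a large disparity
```

The other metrics in the paper (Predictive Parity Rate, Proportional Parity, Accuracy Parity) follow the same pattern, conditioning the per-group rate on the true labels or on group proportions.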

2025

Strategic Alliances in NetLogo: A Flocking Algorithm with Reinforcement Learning

Authors
Sónia Teixeira; Pedro Campos;

Publication
Machine Learning Perspectives of Agent-Based Models

Abstract
The evolution of markets provides a change in the way organisations act. To improve their competitive performance and stay on the market, organisations often adopt a strategy of establishing agreements with other organisations, known as strategic alliances. Several tools, algorithms, and computational systems call upon other sciences as a source of inspiration. In this work we explore flocking behaviour, a paradigm from biology, to analyse the collective intelligence that emerges from a group of individuals or firms. Inspired by the Cucker and Smale algorithm (C-S), we propose a new version of the flocking algorithm, AllFlock, applied to strategic alliances and incorporating a learning mechanism. For this new approach, metrics were obtained for the parameters of the C-S algorithm: position, velocity, and influence. The latter uses cooperative games, adapted mechanisms, and methods currently explored in reinforcement learning. We used NetLogo as the modelling environment. Five parameter configurations were analysed. For each configuration, the average number of iterations, the permanence rate of organisations in the alliance, and the average growth of the organisations were computed. The behaviour of the organisations reveals a tendency to converge, confirming the existence of flocking behaviour.
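The Cucker-Smale dynamics the abstract builds on can be sketched in a few lines: each agent nudges its velocity toward its neighbours', weighted by a distance-decaying influence kernel. This is a generic C-S step, not AllFlock (which adds the reinforcement-learning influence mechanism described above); the kernel parameters K and beta are illustrative choices:

```python
import numpy as np

def cucker_smale_step(x, v, dt=0.1, K=1.0, beta=0.5):
    """One Cucker-Smale flocking update for n agents with positions x
    and velocities v (both shape (n, d))."""
    n = len(x)
    diffs = x[None, :, :] - x[:, None, :]          # pairwise position differences
    dists = np.linalg.norm(diffs, axis=-1)         # pairwise distances
    a = K / (1.0 + dists**2) ** beta               # influence kernel a(|x_j - x_i|)
    # dv_i = (1/n) * sum_j a_ij * (v_j - v_i): pull toward neighbours' velocities
    dv = (a[:, :, None] * (v[None, :, :] - v[:, None, :])).sum(axis=1) / n
    v_new = v + dt * dv
    x_new = x + dt * v_new
    return x_new, v_new
```

Iterating this step shrinks the spread of velocities while conserving their mean, which is the convergence ("flocking") behaviour the abstract reports for organisations in an alliance.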

2023

Ethical and Technological AI Risks Classification: A Human Vs Machine Approach

Authors
Teixeira, S; Veloso, B; Rodrigues, JC; Gama, J;

Publication
Machine Learning and Principles and Practice of Knowledge Discovery in Databases, ECML PKDD 2022, Part I

Abstract
The growing use of data-driven decision systems based on Artificial Intelligence (AI) by governments, companies and social organizations has drawn more attention to the challenges they pose to society. Over the last few years, news on social media about discrimination, privacy and other issues has highlighted their vulnerabilities. Despite all the research around these issues, the definition of concepts inherent to the risks and/or vulnerabilities of data-driven decision systems is not consensual. Categorizing the dangers and vulnerabilities of data-driven decision systems will facilitate ethics by design, ethics in design and ethics for designers to contribute to responsible AI. The main goal of this work is to understand which types of AI risks/vulnerabilities are Ethical and/or Technological and the differences between human vs machine classification. We analyze two types of problems: (i) the risks/vulnerabilities classification task by humans; and (ii) the risks/vulnerabilities classification task by machines. To carry out the analysis, we applied a survey to perform human classification and the BERT algorithm in machine classification. The results show that even with different levels of detail, the classification of vulnerabilities is in agreement in most cases.