2026
Authors
Salazar, T; Gama, J; Araújo, H; Abreu, PH;
Publication
IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS
Abstract
In the evolving field of machine learning, ensuring group fairness has become a critical concern, prompting the development of algorithms designed to mitigate bias in decision-making processes. Group fairness refers to the principle that a model's decisions should be equitable across groups defined by sensitive attributes such as gender or race, ensuring that individuals from privileged and unprivileged groups are treated fairly and receive similar outcomes. However, achieving fairness in the presence of group-specific concept drift remains an unexplored frontier, and our research represents pioneering efforts in this regard. Group-specific concept drift refers to situations where one group experiences concept drift over time while another does not, leading to a decrease in fairness even if accuracy (ACC) remains fairly stable. Within the framework of federated learning (FL), where clients collaboratively train models, the distributed nature of the setting further amplifies these challenges, since each client can experience group-specific concept drift independently while still sharing the same underlying concept, creating a complex and dynamic environment for maintaining fairness. The most significant contribution of our research is the formalization and introduction of the problem of group-specific concept drift and its distributed counterpart, shedding light on its critical importance in the field of fairness. In addition, leveraging insights from prior research, we adapt an existing distributed concept drift adaptation algorithm, combining a multimodel approach, a local group-specific drift detection mechanism, and continuous clustering of models over time, to tackle group-specific distributed concept drift. The findings from our experiments highlight the importance of addressing group-specific concept drift and its distributed counterpart to advance fairness in machine learning.
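The local group-specific drift detection mechanism mentioned in the abstract can be pictured as monitoring error rates separately per sensitive group. Below is a minimal Python sketch of such a monitor; the class name, window size, and threshold are illustrative assumptions, not the paper's actual mechanism:

```python
# Hedged sketch: flag drift for one sensitive group while the other stays
# stable, which is the signature of group-specific concept drift.
from collections import deque

class GroupDriftMonitor:
    """Tracks a sliding-window error rate per group (hypothetical helper)."""

    def __init__(self, window=500, threshold=0.1):
        self.window = window        # samples per group before testing
        self.threshold = threshold  # tolerated rise over the baseline error
        self.errors = {}            # group -> recent 0/1 error flags
        self.baseline = {}          # group -> reference error rate

    def update(self, group, error):
        """Record one outcome (error=1 if misclassified); return True on drift."""
        buf = self.errors.setdefault(group, deque(maxlen=self.window))
        buf.append(error)
        if len(buf) < self.window:
            return False            # window not yet full
        rate = sum(buf) / len(buf)
        base = self.baseline.setdefault(group, rate)
        if rate - base > self.threshold:
            self.baseline[group] = rate  # re-anchor after signaling drift
            return True
        return False
```

In a federated setting, each client would run one such monitor locally; per-group drift signals could then feed the multimodel and clustering machinery to decide which model a client should train on.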
2023
Authors
Amorim, JP; Abreu, PH; Santos, JAM; Müller, H;
Publication
CoRR
Abstract
2023
Authors
Santos, JC; Abreu, MH; Santos, MS; Duarte, H; Alpoim, T; Próspero, I; Sousa, S; Abreu, PH;
Publication
ONCOLOGIST
Abstract
This article compares the effectiveness of PET/CT and bone scintigraphy for the detection of bone metastases in patients with breast cancer. Background: Positron emission tomography/computed tomography (PET/CT) has in recent years become a tool for breast cancer (BC) staging. However, its accuracy in detecting bone metastases is classically considered inferior to that of bone scintigraphy (BS). The purpose of this work is to compare the effectiveness of bone metastasis detection between PET/CT and BS. Materials and Methods: Prospective study of 410 female patients treated in a Comprehensive Cancer Center between 2014 and 2020 who underwent both PET/CT and BS for staging purposes. The images were analyzed by 2 senior nuclear medicine physicians. The comparison was based on accuracy, sensitivity, and specificity at the patient and anatomical-region levels and was assessed using McNemar's test. An average ROC was calculated for the anatomical-region analysis. Results: PET/CT presented higher accuracy and sensitivity (98.0% and 93.83%), surpassing BS (95.61% and 81.48%) in detecting bone disease. There was a significant difference in favor of PET/CT in sensitivity (93.83% vs. 81.48%), but no significant difference in eliminating false positives (specificity 99.09% vs. 99.09%). PET/CT presented the highest accuracy and sensitivity values for most bone segments, surpassed by BS only for the cranium. There was a significant difference in favor of PET/CT in the upper limb, spine, thorax (sternum), and lower limb (pelvis and sacrum), and in favor of BS in the cranium. The ROC showed that PET/CT has higher sensitivity and consistency across the bone segments. Conclusion: With the correct imaging protocol, PET/CT makes BS unnecessary for bone metastasis detection in BC staging.
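The statistical comparison rests on McNemar's test applied to paired per-patient outcomes from the two modalities. A minimal Python sketch of that computation follows; the discordant counts are hypothetical placeholders, not the study's data:

```python
# McNemar's test (with continuity correction) on paired detection outcomes.
from math import erfc, sqrt

def mcnemar_chi2(b, c):
    """b = patients positive on PET/CT only, c = positive on BS only.
    Returns the chi-square statistic and its p-value (1 degree of freedom)."""
    chi2 = (abs(b - c) - 1) ** 2 / (b + c)
    p = erfc(sqrt(chi2 / 2))  # survival function of chi-square with 1 df
    return chi2, p

# Illustrative (made-up) discordant pairs:
chi2, p = mcnemar_chi2(b=12, c=2)
print(f"chi2 = {chi2:.2f}, p = {p:.4f}")
```

Only the discordant pairs enter the statistic; patients detected (or missed) by both scans carry no information about which modality is more sensitive.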
2023
Authors
Graziani, M; Dutkiewicz, L; Calvaresi, D; Amorim, JP; Yordanova, K; Vered, M; Nair, R; Abreu, PH; Blanke, T; Pulignano, V; Prior, JO; Lauwaert, L; Reijers, W; Depeursinge, A; Andrearczyk, V; Müller, H;
Publication
ARTIFICIAL INTELLIGENCE REVIEW
Abstract
Since its emergence in the 1960s, Artificial Intelligence (AI) has grown to conquer many technology products and their fields of application. Machine learning, a major part of current AI solutions, can learn from data and through experience to reach high performance on various tasks. This growing success of AI algorithms has led to a need for interpretability to understand opaque models such as deep neural networks. Various requirements have been raised from different domains, together with numerous tools to debug models, justify their outcomes, and establish their safety, fairness, and reliability. This variety of tasks has led to inconsistencies in terminology, with, for instance, terms such as interpretable, explainable, and transparent often being used interchangeably in methodology papers. These words, however, convey different meanings and are weighted differently across domains, for example in the technical and social sciences. In this paper, we propose an overarching terminology of interpretability of AI systems that can be referred to by technical developers as much as by the social sciences community, in pursuit of clarity and efficiency in the definition of regulations for ethical and reliable AI development. We show how our taxonomy and definition of interpretable AI differ from those in previous research and how they apply with high versatility to several domains and use cases, proposing a highly needed standard for communication among interdisciplinary areas of AI.
2023
Authors
Salazar, T; Fernandes, M; Araújo, H; Abreu, PH;
Publication
Computational Science - ICCS 2023 - 23rd International Conference, Prague, Czech Republic, July 3-5, 2023, Proceedings, Part I
Abstract
2019
Authors
Marques, F; Duarte, H; Santos, J; Domingues, I; Amorim, JP; Abreu, PH;
Publication
SAC '19: PROCEEDINGS OF THE 34TH ACM/SIGAPP SYMPOSIUM ON APPLIED COMPUTING
Abstract
The machine learning field has grown considerably in recent years. There are, however, some problems still to be solved. The characteristics of the training sets, for instance, are known to affect classifier performance. Here, inspired by medical applications, we are interested in studying datasets that are both ordinal and imbalanced. Ordinal datasets present labels where only the relative ordering between values is significant. Imbalanced datasets have very different numbers of examples per class. Building upon our previous work, we make three new contributions: (1) we extend the number of classifiers, (2) we evaluate two techniques to balance the intermediate train sets in binary decomposition methods (often used in multi-class contexts, and ordinal ones in particular), and (3) we propose a new, iterative, classifier-based oversampling algorithm that we name InCuBAtE. Experiments were conducted on 6 private datasets concerning the assessment of response to treatment in oncologic diseases, and on 15 public datasets widely used in the literature. Compared with our previous work, results improved (or remained the same) for 4 of the 6 private datasets and for 11 of the 15 public datasets.
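Contribution (2) concerns the binary decomposition of ordinal problems. As a hedged illustration (a Frank-and-Hall-style sketch under assumed scikit-learn tooling, not the paper's exact pipeline), a problem with ordered classes 0 < 1 < ... < K-1 becomes K-1 binary subproblems of the form "is y > k?", and each intermediate train set is where a balancing step such as InCuBAtE would plug in:

```python
# Sketch: ordinal classification via K-1 "is y > k?" binary subproblems.
import numpy as np
from sklearn.base import clone
from sklearn.linear_model import LogisticRegression

class OrdinalDecomposer:
    def __init__(self, base=None):
        self.base = base or LogisticRegression(max_iter=1000)

    def fit(self, X, y):
        y = np.asarray(y)
        self.classes_ = np.unique(y)          # sorted ordinal labels
        self.models_ = []
        for k in self.classes_[:-1]:
            yk = (y > k).astype(int)          # intermediate binary target
            # <- an oversampling step (e.g., InCuBAtE) could balance (X, yk) here
            self.models_.append(clone(self.base).fit(X, yk))
        return self

    def predict(self, X):
        # P(y > k) per threshold; class probabilities via adjacent differences
        pgt = np.column_stack([m.predict_proba(X)[:, 1] for m in self.models_])
        probs = np.column_stack(
            [1 - pgt[:, 0]]
            + [pgt[:, i] - pgt[:, i + 1] for i in range(pgt.shape[1] - 1)]
            + [pgt[:, -1]]
        )
        return self.classes_[np.argmax(probs, axis=1)]
```

Because the K-1 models are fit independently, the differenced probabilities can occasionally be slightly negative; taking the argmax over them is the usual pragmatic fix in this decomposition.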