Publications

Publications by CTM

2025

Optimizing Credit Risk Prediction for Peer-to-Peer Lending Using Machine Learning

Authors
Souadda, LI; Halitim, AR; Benilles, B; Oliveira, JM; Ramos, P;

Publication

Abstract
This study investigates the effectiveness of different hyperparameter tuning strategies for peer-to-peer risk management. Ensemble learning techniques have shown superior performance in this field compared to individual classifiers and traditional statistical methods. However, model performance is influenced not only by the choice of algorithm but also by hyperparameter tuning, which impacts both predictive accuracy and computational efficiency. This research compares the performance and efficiency of three widely used hyperparameter tuning methods, Grid Search, Random Search, and Optuna, across XGBoost, LightGBM, and Logistic Regression models. The analysis uses the Lending Club dataset, spanning from 2007 Q1 to 2020 Q3, with comprehensive data preprocessing to address missing values, class imbalance, and feature engineering. Model explainability is assessed through feature importance analysis to identify key drivers of default probability. The findings reveal comparable predictive performance among the tuning methods, evaluated using metrics such as G-mean, sensitivity, and specificity. However, Optuna significantly outperforms the others in computational efficiency; for instance, it is 10.7 times faster than Grid Search for XGBoost and 40.5 times faster for LightGBM. Additionally, variations in feature importance rankings across tuning methods influence model interpretability and the prioritization of risk factors. These insights underscore the importance of selecting appropriate hyperparameter tuning strategies to optimize both performance and explainability in peer-to-peer risk management models.
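The trade-off between exhaustive and sampled hyperparameter search discussed above can be illustrated with a toy sketch (pure Python; the `objective` function here is hypothetical and stands in for a model's validation G-mean, whereas the study itself tunes XGBoost, LightGBM, and Logistic Regression on the Lending Club dataset): grid search evaluates every combination, while random search spends only a fixed budget of trials.

```python
import itertools
import random

def objective(lr, depth):
    # Hypothetical stand-in for a tuned model's validation score;
    # peaks at lr=0.1, depth=6.
    return -(lr - 0.1) ** 2 - (depth - 6) ** 2 / 100

def grid_search(lrs, depths):
    # Exhaustively evaluates every (learning rate, depth) combination.
    return max(itertools.product(lrs, depths), key=lambda p: objective(*p))

def random_search(lrs, depths, n_trials, seed=0):
    # Samples a fixed budget of combinations instead of trying them all.
    rng = random.Random(seed)
    trials = [(rng.choice(lrs), rng.choice(depths)) for _ in range(n_trials)]
    return max(trials, key=lambda p: objective(*p))

lrs = [0.01, 0.05, 0.1, 0.2]
depths = [3, 6, 9]
best_grid = grid_search(lrs, depths)              # 4 * 3 = 12 evaluations
best_rand = random_search(lrs, depths, n_trials=5)  # only 5 evaluations
```

Bayesian optimizers such as Optuna go one step further than random search by using past trials to propose promising candidates, which is where the reported wall-clock savings come from.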

2025

A survey on cell nuclei instance segmentation and classification: Leveraging context and attention

Authors
Nunes, JD; Montezuma, D; Oliveira, D; Pereira, T; Cardoso, JS;

Publication
MEDICAL IMAGE ANALYSIS

Abstract
Nuclear-derived morphological features and biomarkers provide relevant insights regarding the tumour microenvironment, while also allowing diagnosis and prognosis in specific cancer types. However, manually annotating nuclei from the gigapixel Haematoxylin and Eosin (H&E)-stained Whole Slide Images (WSIs) is a laborious and costly task, meaning automated algorithms for cell nuclei instance segmentation and classification could alleviate the workload of pathologists and clinical researchers and at the same time facilitate the automatic extraction of clinically interpretable features for artificial intelligence (AI) tools. But due to high intra- and inter-class variability of nuclei morphological and chromatic features, as well as the susceptibility of H&E stains to artefacts, state-of-the-art algorithms cannot correctly detect and classify instances with the necessary performance. In this work, we hypothesize that context and attention inductive biases in artificial neural networks (ANNs) could increase the performance and generalization of algorithms for cell nuclei instance segmentation and classification. To understand the advantages, use-cases, and limitations of context- and attention-based mechanisms in instance segmentation and classification, we start by reviewing works in computer vision and medical imaging. We then conduct a thorough survey on context and attention methods for cell nuclei instance segmentation and classification from H&E-stained microscopy imaging, while providing a comprehensive discussion of the challenges being tackled with context and attention. Besides, we illustrate some limitations of current approaches and present ideas for future research. As a case study, we extend both a general (Mask-RCNN) and a customized (HoVer-Net) instance segmentation and classification method with context- and attention-based mechanisms and perform a comparative analysis on a multicentre dataset for colon nuclei identification and counting.
Although pathologists rely on context at multiple levels while paying attention to specific Regions of Interest (RoIs) when analysing and annotating WSIs, our findings suggest translating that domain knowledge into algorithm design is no trivial task, but to fully exploit these mechanisms in ANNs, the scientific understanding of these methods should first be addressed.

2025

CNN explanation methods for ordinal regression tasks

Authors
Barbero-Gómez, J; Cruz, RPM; Cardoso, JS; Gutiérrez, PA; Hervás-Martínez, C;

Publication
NEUROCOMPUTING

Abstract
The use of Convolutional Neural Network (CNN) models for image classification tasks has gained significant popularity. However, the lack of interpretability in CNN models poses challenges for debugging and validation. To address this issue, various explanation methods have been developed to provide insights into CNN models. This paper focuses on the validity of these explanation methods for ordinal regression tasks, where the classes have a predefined order relationship. Different modifications are proposed for two explanation methods to exploit the ordinal relationships between classes: Grad-CAM based on Ordinal Binary Decomposition (GradOBD-CAM) and Ordinal Information Bottleneck Analysis (OIBA). The performance of these modified methods is compared to existing popular alternatives. Experimental results demonstrate that GradOBD-CAM outperforms other methods in terms of interpretability for three out of four datasets, while OIBA achieves superior performance compared to the original Information Bottleneck Analysis (IBA).
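The Ordinal Binary Decomposition underlying GradOBD-CAM can be sketched in a few lines (a minimal illustration of the standard OBD encoding, not the paper's implementation): a label among K ordered classes is decomposed into K-1 binary indicators of the form "class greater than t", so each binary sub-problem respects the class ordering.

```python
def ordinal_binary_targets(label, num_classes):
    # Ordinal Binary Decomposition: class k among K ordered classes
    # becomes K-1 binary indicators [k > 0, k > 1, ..., k > K-2].
    return [1 if label > t else 0 for t in range(num_classes - 1)]

# Example with K = 4 ordered classes:
# class 0 -> [0, 0, 0], class 2 -> [1, 1, 0], class 3 -> [1, 1, 1]
targets = [ordinal_binary_targets(k, 4) for k in range(4)]
```

Each of the K-1 indicators can then be predicted by its own output head, and explanation maps can be computed per head and aggregated, which is (in broad strokes) the idea the paper adapts Grad-CAM to.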

2025

Learning Ordinality in Semantic Segmentation

Authors
Cruz, RPM; Cristino, R; Cardoso, JS;

Publication
IEEE ACCESS

Abstract
Semantic segmentation consists of predicting a semantic label for each image pixel. While existing deep learning approaches achieve high accuracy, they often overlook the ordinal relationships between classes, which can provide critical domain knowledge (e.g., the pupil lies within the iris, and lane markings are part of the road). This paper introduces novel methods for spatial ordinal segmentation that explicitly incorporate these inter-class dependencies. By treating each pixel as part of a structured image space rather than as an independent observation, we propose two loss regularization terms and a new metric that enforce ordinal consistency between neighboring pixels by penalizing predictions of non-ordinal adjacent classes. Five biomedical datasets and multiple configurations of autonomous driving datasets demonstrate the efficacy of the proposed methods. Our approach achieves improvements in ordinal metrics and enhances generalization, with up to a 15.7% relative increase in the Dice coefficient. Importantly, these benefits come without additional inference time costs. This work highlights the significance of spatial ordinal relationships in semantic segmentation and provides a foundation for further exploration in structured image representations.
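The idea of penalizing non-ordinal adjacent classes can be sketched as a simple violation count (a hypothetical illustration, not the paper's metric): assuming integer ordinal labels, count the horizontally and vertically adjacent pixel pairs whose ranks differ by more than one step.

```python
def ordinal_violations(seg):
    # Count adjacent pixel pairs (horizontal and vertical) whose ordinal
    # labels jump by more than one rank, e.g. iris touching sclera with
    # no pupil ordering respected. `seg` is a 2-D list of integer ranks.
    h, w = len(seg), len(seg[0])
    count = 0
    for i in range(h):
        for j in range(w):
            if j + 1 < w and abs(seg[i][j] - seg[i][j + 1]) > 1:
                count += 1
            if i + 1 < h and abs(seg[i][j] - seg[i + 1][j]) > 1:
                count += 1
    return count

# A 0 -> 2 jump between neighbors counts as one ordinal violation:
n = ordinal_violations([[0, 2], [0, 1]])
```

A differentiable analogue of such a count over predicted class probabilities is the kind of term that can serve as a loss regularizer during training.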

2025

Second FRCSyn-onGoing: Winning solutions and post-challenge analysis to improve face recognition with synthetic data

Authors
Tame, ID; Tolosana, R; Melzi, P; Rodríguez, RV; Kim, M; Rathgeb, C; Liu, X; Gomez, LF; Morales, A; Fierrez, J; Garcia, JO; Zhong, Z; Huang, Y; Mi, Y; Ding, S; Zhou, S; He, S; Fu, L; Cong, H; Zhang, R; Xiao, Z; Smirnov, E; Pimenov, A; Grigorev, A; Timoshenko, D; Asfaw, KM; Low, CY; Liu, H; Wang, C; Zuo, Q; He, Z; Shahreza, HO; George, A; Unnervik, A; Rahimi, P; Marcel, S; Neto, PC; Huber, M; Kolf, JN; Damer, N; Boutros, F; Cardoso, JS; Sequeira, AF; Atzori, A; Fenu, G; Marras, M; Struc, V; Yu, J; Li, Z; Li, J; Zhao, W; Lei, Z; Zhu, X; Zhang, X; Biesseck, B; Vidal, P; Coelho, L; Granada, R; Menotti, D;

Publication
Information Fusion

Abstract
Synthetic data is gaining increasing popularity for face recognition technologies, mainly due to the privacy concerns and challenges associated with obtaining real data, including diverse scenarios, quality, and demographic groups, among others. It also offers some advantages over real data, such as the large amount of data that can be generated or the ability to customize it to adapt to specific problem-solving needs. To effectively use such data, face recognition models should also be specifically designed to exploit synthetic data to its fullest potential. In order to promote the proposal of novel Generative AI methods and synthetic data, and investigate the application of synthetic data to better train face recognition systems, we introduce the 2nd FRCSyn-onGoing challenge, based on the 2nd Face Recognition Challenge in the Era of Synthetic Data (FRCSyn), originally launched at CVPR 2024. This is an ongoing challenge that provides researchers with an accessible platform to benchmark (i) the proposal of novel Generative AI methods and synthetic data, and (ii) novel face recognition systems that are specifically proposed to take advantage of synthetic data. We focus on exploring the use of synthetic data both individually and in combination with real data to solve current challenges in face recognition such as demographic bias, domain adaptation, and performance constraints in demanding situations, such as age disparities between training and testing, changes in the pose, or occlusions. Very interesting findings are obtained in this second edition, including a direct comparison with the first one, in which synthetic databases were restricted to DCFace and GANDiffFace.
