2022
Authors
Silva, W; Goncalves, T; Harma, K; Schroder, E; Obmann, VC; Barroso, MC; Poellinger, A; Reyes, M; Cardoso, JS;
Publication
SCIENTIFIC REPORTS
Abstract
Currently, radiologists face an excessive workload, which leads to high levels of fatigue and, consequently, to undesired diagnosis mistakes. Decision support systems can be used to prioritize cases and help radiologists make quicker decisions. In this sense, medical content-based image retrieval systems can be extremely useful by providing well-curated similar examples. Nonetheless, most medical content-based image retrieval systems work by finding the most similar image, which is not equivalent to finding the most similar image in terms of disease and its severity. Here, we propose an interpretability-driven and an attention-driven medical image retrieval system. We conducted experiments on a large and publicly available dataset of chest radiographs with structured labels derived from free-text radiology reports (MIMIC-CXR-JPG). We evaluated the methods on two common conditions: pleural effusion and (potential) pneumonia. As ground truth for the evaluation, query/test and catalogue images were classified and ordered by an experienced board-certified radiologist. For a thorough and complete evaluation, additional radiologists also provided their rankings, which allowed us to infer inter-rater variability and establish qualitative performance levels. Based on our ground-truth ranking, we also quantitatively evaluated the proposed approaches by computing the normalized Discounted Cumulative Gain (nDCG). We found that the interpretability-guided approach outperforms the other state-of-the-art approaches and shows the best agreement with the most experienced radiologist. Furthermore, its performance lies within the observed inter-rater variability.
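The nDCG metric used for the quantitative evaluation above can be sketched in a few lines. This is a minimal, illustrative implementation; the function names and toy relevance grades are not taken from the paper:

```python
import math

def dcg(relevances):
    # Discounted Cumulative Gain: grade at rank i (1-indexed) is
    # discounted by log2(i + 1), so errors near the top cost more.
    return sum(rel / math.log2(i + 2) for i, rel in enumerate(relevances))

def ndcg(retrieved_relevances, all_relevances):
    # Normalize by the DCG of the ideal (descending-grade) ordering,
    # so a perfect ranking scores exactly 1.0.
    ideal = dcg(sorted(all_relevances, reverse=True))
    return dcg(retrieved_relevances) / ideal if ideal > 0 else 0.0

# Toy example: relevance grades of catalogue images in retrieved order.
score = ndcg([3, 2, 3, 0, 1], [3, 2, 3, 0, 1])
```

A system that ranks the grade-3 images first would push the score toward 1.0; swapping a grade-3 result with a grade-0 one near the top lowers it sharply because of the logarithmic discount.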
2023
Authors
Cruz, R; Silva, DTE; Goncalves, T; Carneiro, D; Cardoso, JS;
Publication
SENSORS
Abstract
Semantic segmentation consists of classifying each pixel according to a set of classes. Conventional models spend as much effort classifying easy-to-segment pixels as they do classifying hard-to-segment pixels. This is inefficient, especially when deploying to situations with computational constraints. In this work, we propose a framework wherein the model first produces a rough segmentation of the image, and then patches of the image estimated as hard to segment are refined. The framework is evaluated on four datasets (autonomous driving and biomedical), across four state-of-the-art architectures. Our method accelerates inference by a factor of four, with additional gains in training time, at the cost of some output quality.
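The patch-selection step of such a coarse-then-refine framework can be sketched as follows. This is a hypothetical illustration: the patch size and threshold are made up, and per-pixel confidence is used as the hardness proxy, which the abstract does not specify:

```python
def select_hard_patches(confidence, patch=2, threshold=0.8):
    # confidence: 2D list of per-pixel confidence scores (e.g. max softmax)
    # from the rough segmentation pass.
    # Returns (row, col) origins of patches whose mean confidence falls
    # below the threshold; only those patches are re-segmented at full cost.
    h, w = len(confidence), len(confidence[0])
    hard = []
    for r in range(0, h, patch):
        for c in range(0, w, patch):
            cells = [confidence[i][j]
                     for i in range(r, min(r + patch, h))
                     for j in range(c, min(c + patch, w))]
            if sum(cells) / len(cells) < threshold:
                hard.append((r, c))
    return hard
```

Since most pixels in typical scenes are easy (large uniform regions), few patches pass the threshold, which is where the inference speed-up comes from.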
2022
Authors
Neto, PC; Gonçalves, T; Pinto, JR; Silva, W; Sequeira, AF; Ross, A; Cardoso, JS;
Publication
CoRR
Abstract
2025
Authors
Ferreira, Leonardo; Gonçalves, Tiago; Neto, Pedro C.; Sequeira, Ana; Mamede, Rafael; Oliveira, Mafalda;
Publication
Abstract
This study investigates the use of SHAP (SHapley Additive exPlanations) values as an explainable artificial intelligence (xAI) technique applied to a facial attribute classification task. We analyse the consistency of SHAP value distributions across diverse classifier architectures that share the same feature extractor, revealing that the key features driving attribute classification remain stable regardless of classifier architecture. Our findings highlight the challenges in interpreting SHAP values at the individual sample level, as their reliability depends on the model’s ability to learn distinct class-specific features; models exploiting inter-class correlations yield less representative SHAP explanations. Furthermore, pixel-level SHAP analysis reveals that superior classification accuracy does not necessarily equate to meaningful semantic understanding; notably, despite FaceNet exhibiting lower performance than CLIP, it demonstrated a more nuanced grasp of the underlying class attributes. Finally, we address the computational scalability of SHAP, demonstrating that KernelExplainer becomes infeasible for high-dimensional pixel data, whereas DeepExplainer and GradientExplainer offer more practical alternatives with their own trade-offs. Our results suggest that SHAP is most effective for small to medium feature sets or tabular data, providing interpretable and computationally manageable explanations.
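The scalability issue noted above is rooted in the definition of Shapley values themselves: the exact computation enumerates every feature coalition, so its cost grows exponentially with the number of features. A stdlib-only sketch (the value function and feature count below are hypothetical, not from the study):

```python
from itertools import combinations
from math import factorial

def exact_shapley(value_fn, n_features):
    # Exact Shapley values: for each feature i, average its marginal
    # contribution value_fn(S | {i}) - value_fn(S) over all subsets S of
    # the other features, with the standard combinatorial weights.
    # The subset enumeration is O(2^n), which is why sampling-based
    # estimators like KernelExplainer break down on pixel-scale inputs.
    players = list(range(n_features))
    phi = [0.0] * n_features
    for i in players:
        others = [p for p in players if p != i]
        for k in range(len(others) + 1):
            for subset in combinations(others, k):
                weight = (factorial(k) * factorial(n_features - k - 1)
                          / factorial(n_features))
                phi[i] += weight * (value_fn(set(subset) | {i})
                                    - value_fn(set(subset)))
    return phi
```

For an additive value function the Shapley values recover each feature's weight exactly; for a 224x224 image, however, n_features is 50,176 and the enumeration is hopeless, motivating the gradient-based DeepExplainer and GradientExplainer alternatives.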