Publications by Jaime Cardoso

2025

ECG Biometrics

Authors
Pinto, JR; Cardoso, S;

Publication
Encyclopedia of Cryptography, Security and Privacy, Third Edition

Abstract
[No abstract available]

2025

Information bottleneck with input sampling for attribution

Authors
Coelho, B; Cardoso, JS;

Publication
NEUROCOMPUTING

Abstract
In order to facilitate the adoption of deep learning in areas where decisions are of critical importance, understanding the model's internal workings is paramount. Nevertheless, since most models are considered black boxes, this task is usually not trivial, especially when the user does not have access to the network's intermediate outputs. In this paper, we propose IBISA, a model-agnostic attribution method that reaches state-of-the-art performance by optimizing sampling masks using the Information Bottleneck Principle. Our method improves on the previously known RISE and IBA techniques by placing the bottleneck right after the image input, without complex formulations to estimate the mutual information. The method also requires only twenty forward passes and ten backward passes through the network, which is significantly faster than RISE, which needs at least 4000 forward passes. We evaluated IBISA using a VGG-16 and a ResNet-50 model, showing that our method produces explanations comparable or superior to IBA, RISE, and Grad-CAM, but much more efficiently.
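The mask-sampling idea that IBISA builds on (introduced by RISE) can be illustrated with a minimal sketch: random binary masks are applied to the input, each masked input is scored by the model, and the attribution map is the score-weighted average of the masks. This is not the authors' IBISA implementation, which additionally optimizes the masks via the Information Bottleneck Principle; the function name and parameters below are illustrative.

```python
import numpy as np

def mask_sampling_attribution(model, image, n_masks=20, p_keep=0.5, seed=0):
    """RISE-style attribution sketch: weight random masks by the model's score.

    `model` maps an (H, W) array to a scalar score; masks that keep the
    informative pixels get higher scores and dominate the weighted average.
    """
    rng = np.random.default_rng(seed)
    h, w = image.shape
    saliency = np.zeros((h, w))
    total = 0.0
    for _ in range(n_masks):
        mask = (rng.random((h, w)) < p_keep).astype(float)  # keep ~p_keep pixels
        score = model(image * mask)                         # one forward pass per mask
        saliency += score * mask
        total += score
    return saliency / max(total, 1e-8)                      # normalize by accumulated score
```

With a toy model that only looks at one pixel, the resulting map peaks at that pixel, which is the behavior the sampling scheme is designed to recover.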

2025

An inpainting approach to manipulate asymmetry in pre-operative breast images

Authors
Montenegro, H; Cardoso, MJ; Cardoso, JS;

Publication
CoRR

Abstract
[No abstract available]

2025

CountPath: Automating Fragment Counting in Digital Pathology

Authors
Vieira, AB; Valente, M; Montezuma, D; Albuquerque, T; Ribeiro, L; Oliveira, D; Monteiro, JC; Gonçalves, S; Pinto, IM; Cardoso, JS; Oliveira, AL;

Publication
CoRR

Abstract
[No abstract available]

2024

Parameter-Efficient Generation of Natural Language Explanations for Chest X-ray Classification

Authors
Rio-Torto, I; Cardoso, JS; Teixeira, LF;

Publication
MEDICAL IMAGING WITH DEEP LEARNING

Abstract
The increased interest and importance of explaining neural networks' predictions, especially in the medical community, associated with the known unreliability of saliency maps, the most common explainability method, has sparked research into other types of explanations. Natural Language Explanations (NLEs) emerge as an alternative, with the advantage of being inherently understandable by humans and the standard way that radiologists explain their diagnoses. We extend previous work on NLE generation for multi-label chest X-ray diagnosis by replacing the traditional decoder-only NLE generator with an encoder-decoder architecture. This constitutes a first step towards Reinforcement Learning-free adversarial generation of NLEs when no (or few) ground-truth NLEs are available for training, since the generation is done in the continuous encoder latent space, instead of in the discrete decoder output space. However, in the current scenario, large amounts of annotated examples are still required, which are especially costly to obtain in the medical domain, given that they need to be provided by clinicians. Thus, we explore how the recent developments in Parameter-Efficient Fine-Tuning (PEFT) can be leveraged for this use case. We compare different PEFT methods and find that integrating the visual information into the NLE generator layers instead of only at the input achieves the best results, even outperforming the fully fine-tuned encoder-decoder-based model, while only training 12% of the model parameters. Additionally, we empirically demonstrate the viability of supervising the NLE generation process on the encoder latent space, thus laying the foundation for RL-free adversarial training in low ground-truth NLE availability regimes. The code is publicly available at https://github.com/icrto/peft-nles.

2025

CBVLM: Training-free explainable concept-based Large Vision Language Models for medical image classification

Authors
Patrício, C; Torto, IR; Cardoso, JS; Teixeira, LF; Neves, J;

Publication
Comput. Biol. Medicine

Abstract
The main challenges limiting the adoption of deep learning-based solutions in medical workflows are the availability of annotated data and the lack of interpretability of such systems. Concept Bottleneck Models (CBMs) tackle the latter by constraining the model output on a set of predefined and human-interpretable concepts. However, the increased interpretability achieved through these concept-based explanations implies a higher annotation burden. Moreover, if a new concept needs to be added, the whole system needs to be retrained. Inspired by the remarkable performance shown by Large Vision-Language Models (LVLMs) in few-shot settings, we propose a simple, yet effective, methodology, CBVLM, which tackles both of the aforementioned challenges. First, for each concept, we prompt the LVLM to answer if the concept is present in the input image. Then, we ask the LVLM to classify the image based on the previous concept predictions. Moreover, in both stages, we incorporate a retrieval module responsible for selecting the best examples for in-context learning. By grounding the final diagnosis on the predicted concepts, we ensure explainability, and by leveraging the few-shot capabilities of LVLMs, we drastically lower the annotation cost. We validate our approach with extensive experiments across four medical datasets and twelve LVLMs (both generic and medical) and show that CBVLM consistently outperforms CBMs and task-specific supervised methods without requiring any training and using just a few annotated examples. More information on our project page: https://cristianopatricio.github.io/CBVLM/.
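The two-stage prompting recipe described in the abstract can be sketched as follows. The `lvlm(prompt, image, examples)` and `retrieve_examples(query)` interfaces are hypothetical stand-ins for the real model and retrieval module, not the authors' API; the sketch only shows the control flow: concept queries first, then a final classification grounded on the predicted concepts.

```python
def cbvlm_diagnose(lvlm, image, concepts, retrieve_examples):
    """Two-stage concept-based classification with an LVLM (CBVLM-style sketch).

    Stage 1: ask the LVLM whether each concept is present in the image.
    Stage 2: ask for the final label, grounded on the predicted concepts.
    """
    concept_preds = {}
    for concept in concepts:
        examples = retrieve_examples(concept)  # in-context demonstrations
        prompt = f"Is the concept '{concept}' present in this image? Answer yes or no."
        concept_preds[concept] = lvlm(prompt, image, examples)
    findings = ", ".join(f"{c}: {a}" for c, a in concept_preds.items())
    examples = retrieve_examples("diagnosis")
    prompt = f"Given these findings ({findings}), classify the image."
    return concept_preds, lvlm(prompt, image, examples)
```

Because the final label is produced from the stated concept predictions, the diagnosis remains explainable: each decision can be traced back to the yes/no concept answers.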
