
Publications by CTM

2025

Enhancing Weakly-Supervised Video Anomaly Detection With Temporal Constraints

Authors
Caetano, F; Carvalho, P; Mastralexi, C; Cardoso, JS;

Publication
IEEE ACCESS

Abstract
Anomaly Detection has been a significant field in Machine Learning since it began gaining traction. In Computer Vision, interest has grown markedly, as it enables the development of video processing models for different tasks without the cumbersome effort of annotating possible events, which may be underrepresented. Of the predominant strategies, weakly and semi-supervised, the former has demonstrated the potential to achieve higher scores, in addition to its flexibility. This work shows that using temporal ranking constraints for Multiple Instance Learning can increase the performance of these models, allowing them to focus on the most informative instances. Moreover, the results suggest that altering the ranking process to include information about adjacent instances yields better-performing models.
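For orientation, the kind of objective the abstract describes can be sketched as a Multiple Instance Learning ranking loss in the style of Sultani et al., where a temporal term over adjacent instances implements the "information about adjacent instances" idea. This is a minimal illustrative sketch, not the paper's exact loss; the weights `lam_smooth` and `lam_sparse` are assumed values.

```python
def mil_ranking_loss(anom_scores, norm_scores, lam_smooth=8e-5, lam_sparse=8e-5):
    """Hinge ranking loss between the top-scoring instances of an anomalous
    and a normal bag, plus temporal-smoothness and sparsity terms over the
    anomalous bag (illustrative sketch, not the paper's exact formulation)."""
    # Rank the strongest anomalous instance above the strongest normal one.
    ranking = max(0.0, 1.0 - max(anom_scores) + max(norm_scores))
    # Temporal constraint: penalise abrupt score changes between neighbours.
    smooth = sum((a - b) ** 2 for a, b in zip(anom_scores[1:], anom_scores))
    # Sparsity: anomalies should occupy few instances in the bag.
    sparse = sum(anom_scores)
    return ranking + lam_smooth * smooth + lam_sparse * sparse
```

During training, each video is a bag of segment-level scores; the loss pushes anomalous bags to contain at least one high-scoring, temporally coherent instance.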

2025

ECG Biometrics

Authors
Pinto, JR; Cardoso, S;

Publication
Encyclopedia of Cryptography, Security and Privacy, Third Edition

Abstract
[No abstract available]

2025

Information bottleneck with input sampling for attribution

Authors
Coelho, B; Cardoso, JS;

Publication
NEUROCOMPUTING

Abstract
In order to facilitate the adoption of deep learning in areas where decisions are of critical importance, understanding the model's internal workings is paramount. Nevertheless, since most models are considered black boxes, this task is usually not trivial, especially when the user does not have access to the network's intermediate outputs. In this paper, we propose IBISA, a model-agnostic attribution method that reaches state-of-the-art performance by optimizing sampling masks using the Information Bottleneck Principle. Our method improves on the previously known RISE and IBA techniques by placing the bottleneck right after the image input, without complex formulations to estimate the mutual information. The method also requires only twenty forward passes and ten backward passes through the network, which is significantly faster than RISE, which needs at least 4000 forward passes. We evaluated IBISA using a VGG-16 and a ResNet-50 model, showing that our method produces explanations comparable or superior to those of IBA, RISE, and Grad-CAM, but much more efficiently.
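The abstract contrasts IBISA with RISE's random mask sampling. As a point of reference, the RISE idea can be sketched as averaging random binary masks weighted by the model's score on each masked input; IBISA, per the abstract, instead optimizes the mask under an information-bottleneck objective, which is why it needs far fewer passes. The `model` callable and parameters below are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def rise_saliency(image, model, n_masks=20, p_keep=0.5, seed=0):
    """RISE-style attribution sketch: accumulate random binary masks,
    each weighted by the model's score on the correspondingly masked
    input, then average over all sampled masks."""
    rng = np.random.default_rng(seed)
    h, w = image.shape
    saliency = np.zeros((h, w))
    for _ in range(n_masks):
        # Sample a binary mask keeping each pixel with probability p_keep.
        mask = (rng.random((h, w)) < p_keep).astype(float)
        # Score the masked image with the (black-box) model.
        score = model(image * mask)
        saliency += score * mask
    return saliency / n_masks
```

In the real method, thousands of such masks are sampled (hence the "at least 4000 forward passes" the abstract mentions); optimizing a single mask instead is what makes IBISA cheaper.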

2025

An inpainting approach to manipulate asymmetry in pre-operative breast images

Authors
Montenegro, H; Cardoso, MJ; Cardoso, JS;

Publication
CoRR

Abstract
[No abstract available]
2025

CountPath: Automating Fragment Counting in Digital Pathology

Authors
Vieira, AB; Valente, M; Albuquerque, T; Montezuma, D; Ribeiro, L; Oliveira, D; Monteiro, J; Goncalves, S; Pinto, IM; Cardoso, JS; Oliveira, AL;

Publication
2025 IEEE EMBS INTERNATIONAL CONFERENCE ON BIOMEDICAL AND HEALTH INFORMATICS, BHI

Abstract
Quality control of medical images is a critical component of digital pathology, ensuring that diagnostic images meet required standards. A pre-analytical task within this process is the verification of the number of specimen fragments, a process that ensures that the number of fragments on a slide matches the number documented in the macroscopic report. This step is important to ensure that the slides contain the appropriate diagnostic material from the grossing process, thereby guaranteeing the accuracy of subsequent microscopic examination and diagnosis. Traditionally, this assessment is performed manually, requiring significant time and effort while being subject to significant variability due to its subjective nature. To address these challenges, this study explores an automated approach to fragment counting using the YOLOv11 and Vision Transformer models. Our results demonstrate that the automated system achieves a level of performance comparable or even superior to that of experts, offering a reliable and efficient alternative to manual counting. Additionally, we present findings on interobserver variability, showing that the automated approach achieves an accuracy of 90.1%, surpassing the range observed among experts (82-88%). This result further supports its suitability for integration into routine pathology workflows.

2025

CBVLM: Training-free explainable concept-based Large Vision Language Models for medical image classification

Authors
Patrício, C; Torto, IR; Cardoso, JS; Teixeira, LF; Neves, J;

Publication
Comput. Biol. Medicine

Abstract
The main challenges limiting the adoption of deep learning-based solutions in medical workflows are the availability of annotated data and the lack of interpretability of such systems. Concept Bottleneck Models (CBMs) tackle the latter by constraining the model output on a set of predefined and human-interpretable concepts. However, the increased interpretability achieved through these concept-based explanations implies a higher annotation burden. Moreover, if a new concept needs to be added, the whole system needs to be retrained. Inspired by the remarkable performance shown by Large Vision-Language Models (LVLMs) in few-shot settings, we propose a simple, yet effective, methodology, CBVLM, which tackles both of the aforementioned challenges. First, for each concept, we prompt the LVLM to answer if the concept is present in the input image. Then, we ask the LVLM to classify the image based on the previous concept predictions. Moreover, in both stages, we incorporate a retrieval module responsible for selecting the best examples for in-context learning. By grounding the final diagnosis on the predicted concepts, we ensure explainability, and by leveraging the few-shot capabilities of LVLMs, we drastically lower the annotation cost. We validate our approach with extensive experiments across four medical datasets and twelve LVLMs (both generic and medical) and show that CBVLM consistently outperforms CBMs and task-specific supervised methods without requiring any training and using just a few annotated examples. More information on our project page: https://cristianopatricio.github.io/CBVLM/.
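The two-stage flow the abstract describes can be outlined schematically as follows. The `lvlm(prompt, image)` callable is a hypothetical stand-in for a real LVLM query, and the retrieval module that selects in-context examples is omitted for brevity; this is a sketch of the control flow, not the authors' implementation.

```python
def cbvlm_classify(image, concepts, lvlm, classes):
    """Training-free, concept-grounded classification in two stages.
    `lvlm(prompt, image)` is a hypothetical stand-in for an LVLM call."""
    # Stage 1: ask the LVLM whether each concept is present in the image.
    answers = {
        c: lvlm(f"Is the concept '{c}' present in this image? Answer yes or no.", image)
        for c in concepts
    }
    # Stage 2: ground the final diagnosis on the predicted concepts.
    summary = "; ".join(f"{c}: {a}" for c, a in answers.items())
    label = lvlm(
        f"Given these concept findings ({summary}), classify the image as one of {classes}.",
        image,
    )
    return answers, label
```

Because the final label is conditioned only on the concept answers, the explanation (the concept predictions) is faithful by construction, and adding a new concept requires no retraining, only a new prompt.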
