
Publications by CTM

2025

Neonatal EEG classification using a compact support separable kernel time-frequency distribution and attention-based CNN

Authors
Larbi, A; Abed, M; Cardoso, JS; Ouahabi, A;

Publication
BIOMEDICAL SIGNAL PROCESSING AND CONTROL

Abstract
Neonatal seizures represent a critical medical issue that requires prompt diagnosis and treatment. Typically, at-risk newborns undergo a Magnetic Resonance Imaging (MRI) brain assessment followed by continuous seizure monitoring using multichannel EEG. Visual analysis of multichannel electroencephalogram (EEG) recordings remains the standard modality for seizure detection; however, it is limited by fatigue and delayed seizure identification. Advances in machine and deep learning have led to the development of powerful neonatal seizure detection algorithms that may help address these limitations. Nevertheless, their performance remains relatively low and often disregards the non-stationary attributes of EEG signals, especially when learned from weakly labeled EEG data. In this context, the present paper proposes a novel deep-learning approach for neonatal seizure detection. The method employs rigorous preprocessing to reduce noise and artifacts, along with a recently developed time-frequency distribution (TFD) derived from a separable compact support kernel to capture the fast spectral changes associated with neonatal seizures. The high-resolution TFD diagrams are then converted into RGB images and used as inputs to a pre-trained ResNet-18 model. This is followed by the training of an attention-based multiple-instance learning (MIL) mechanism. The purpose is to perform a spatial time-frequency analysis that can highlight which channels exhibit seizure activity, thereby reducing the time required for secondary evaluation by a doctor. Additionally, per-instance learning (PIL) is performed to further validate the robustness of our TFD and methodology. Tested on the Helsinki public dataset, the PIL model achieved an area under the curve (AUC) of 96.8%, while the MIL model attained an average AUC of 94.1%, surpassing similar attention-based methods.
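The channel-level attention described above follows the standard attention-based MIL pooling recipe: each EEG channel's TFD image yields one instance embedding, and a learned attention weight per instance both forms the bag representation and indicates which channels exhibit seizure activity. A minimal numpy sketch of that pooling step, with made-up dimensions (18 channels, 512-dim embeddings, matching ResNet-18's penultimate layer) and random weights standing in for trained parameters:

```python
import numpy as np

rng = np.random.default_rng(0)

def attention_mil_pool(H, V, w):
    """Attention-based MIL pooling over per-channel instances.

    H: (n_instances, d) embeddings, one per EEG channel's TFD image.
    V: (d, h) projection and w: (h,) attention vector (learned in practice).
    Returns the bag embedding and per-instance attention weights; high
    weights flag the channels driving the seizure prediction.
    """
    scores = np.tanh(H @ V) @ w        # (n_instances,) unnormalized scores
    a = np.exp(scores - scores.max())
    a /= a.sum()                       # softmax over instances
    return a @ H, a                    # attention-weighted sum of embeddings

# Toy bag: 18 EEG channels, 512-dim embeddings.
H = rng.normal(size=(18, 512))
V = rng.normal(size=(512, 64))
w = rng.normal(size=64)
bag, weights = attention_mil_pool(H, V, w)
```

The bag embedding then feeds a seizure/non-seizure classifier trained with only recording-level (weak) labels, while `weights` provides the per-channel localization the abstract mentions.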

2025

H&E to IHC virtual staining methods in breast cancer: an overview and benchmarking

Authors
Klöckner, P; Teixeira, J; Montezuma, D; Fraga, J; Horlings, HM; Cardoso, JS; de Oliveira, SP;

Publication
npj Digital Medicine

Abstract

2025

GANs vs. Diffusion Models for Virtual Staining with the HER2match Dataset

Authors
Klöckner, P; Teixeira, J; Montezuma, D; Cardoso, JS; Horlings, HM; de Oliveira, SP;

Publication
Deep Generative Models - 5th MICCAI Workshop, DGM4MICCAI 2025, Held in Conjunction with MICCAI 2025, Daejeon, South Korea, September 23, 2025, Proceedings

Abstract
Virtual staining is a promising technique that uses deep generative models to recreate histological stains, providing a faster and more cost-effective alternative to traditional tissue chemical staining. Specifically for H&E-HER2 staining transfer, despite a rising trend in publications, the lack of sufficient public datasets has hindered progress on the topic. Additionally, it is currently unclear which model frameworks perform best for this particular task. In this paper, we introduce the HER2match dataset, the first publicly available dataset with the same breast cancer tissue sections stained with both H&E and HER2. Furthermore, we compare the performance of several Generative Adversarial Networks (GANs) and Diffusion Models (DMs), and implement a novel Brownian Bridge Diffusion Model for H&E-HER2 translation. Our findings indicate that, overall, GANs perform better than DMs, with only the BBDM achieving comparable results. Moreover, we emphasize the importance of data alignment, as all models trained on HER2match produced vastly improved visuals compared to the widely used consecutive-slide BCI dataset. This research provides a new high-quality dataset, improving both model training and evaluation. In addition, our comparison of frameworks offers valuable guidance for researchers working on the topic.
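A Brownian Bridge diffusion process differs from standard diffusion in that it connects two fixed endpoints: it starts at the source image and ends at the paired target, with noise that vanishes at both ends. A minimal numpy sketch of the forward (bridge) process, using toy patches in place of real H&E/HER2 tiles and a generic variance schedule, not the paper's exact parameterization:

```python
import numpy as np

rng = np.random.default_rng(0)
T = 1000  # number of bridge steps (illustrative)

def bb_forward(x0, y, t, s=1.0):
    """Brownian Bridge forward sample at step t.

    Interpolates from the source image x0 (e.g. an H&E patch) to the
    paired target y (e.g. the HER2 patch of the same tissue section),
    adding Gaussian noise whose variance is zero at both endpoints.
    """
    m = t / T                            # interpolation coefficient in [0, 1]
    var = 2.0 * s * (m - m * m)          # vanishes at t = 0 and t = T
    eps = rng.normal(size=np.shape(x0))
    return (1 - m) * x0 + m * y + np.sqrt(var) * eps

x0 = np.zeros((8, 8))   # stand-in H&E patch
y = np.ones((8, 8))     # stand-in paired HER2 patch
assert np.allclose(bb_forward(x0, y, 0), x0)  # bridge starts at the source
assert np.allclose(bb_forward(x0, y, T), y)   # bridge ends at the target
```

Because the endpoint is the paired target rather than pure noise, this formulation suits pixel-aligned translation, which is why the same-section pairing in HER2match matters for training it.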

2025

Leveraging Cold Diffusion for the Decomposition of Identically Distributed Superimposed Images

Authors
Montenegro, H; Cardoso, JS;

Publication
IEEE OPEN JOURNAL OF SIGNAL PROCESSING

Abstract
With the growing adoption of Deep Learning for imaging tasks in biometrics and healthcare, it becomes increasingly important to ensure privacy when using and sharing images of people. Several works enable privacy-preserving image sharing by anonymizing the images so that the corresponding individuals are no longer recognizable. Most works average images or their embeddings as an anonymization technique, relying on the assumption that the average operation is irreversible. Recently, cold diffusion models, based on the popular denoising diffusion probabilistic models, have succeeded in reversing deterministic transformations on images. In this work, we leverage cold diffusion to decompose superimposed images, empirically demonstrating that it is possible to obtain two or more identically-distributed images given their average. We propose novel sampling strategies for this task and show their efficacy on three datasets. Our findings highlight the risks of averaging images as an anonymization technique and argue for the use of alternative anonymization strategies.
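Cold diffusion replaces Gaussian noising with an arbitrary deterministic degradation and reverses it by alternating a learned restoration step with re-degradation to the previous level. A minimal sketch of that sampling loop for the averaging degradation discussed above, with an oracle restorer standing in for the trained network (the real method learns this from data; the oracle only illustrates why the loop walks back from the average to one of the components):

```python
import numpy as np

rng = np.random.default_rng(1)
T = 10  # number of degradation levels (illustrative)

x0 = rng.normal(size=(4, 4))       # one of the original images
x_other = rng.normal(size=(4, 4))  # the image it was averaged with
avg = 0.5 * (x0 + x_other)         # the "anonymized" average

def degrade(x, t):
    """Deterministic degradation: interpolate toward the average."""
    return (1 - t / T) * x + (t / T) * avg

def restore(xt, t):
    """Stand-in for the learned restoration network (oracle here)."""
    return x0

x = degrade(x0, T)  # start from the fully averaged image
for t in range(T, 0, -1):
    x0_hat = restore(x, t)
    # Improved cold-diffusion sampling step: remove the current
    # degradation estimate, re-apply it at the previous level.
    x = x - degrade(x0_hat, t) + degrade(x0_hat, t - 1)

print(np.allclose(x, x0))  # True: the component is recovered from the average
```

With a restorer trained on the relevant image distribution in place of the oracle, this is the mechanism by which an averaged face or medical image can be decomposed, which is precisely the privacy risk the paper demonstrates.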

2025

Balancing Beyond Discrete Categories: Continuous Demographic Labels for Fair Face Recognition

Authors
Neto, PC; Damer, N; Cardoso, JS; Sequeira, AF;

Publication
CoRR

Abstract

2025

End-to-End Occluded Person Re-Identification With Artificial Occlusion Generation

Authors
Capozzi, L; Cardoso, JS; Rebelo, A;

Publication
IEEE ACCESS

Abstract
In recent years, the task of person re-identification (Re-ID) has improved considerably with advances in deep learning methodologies. However, occluded person Re-ID remains a challenging task, as parts of the individual's body are frequently hidden by objects, obstacles, or other people, making identification more difficult. To address these issues, we introduce a novel data augmentation strategy using artificial occlusions, consisting of random shapes and of objects drawn from a small image dataset created for this purpose. We also propose an end-to-end methodology for occluded person Re-ID, which consists of three branches: a global branch, a feature dropping branch, and an occlusion detection branch. Experimental results show that the use of random shape occlusions is superior to random erasing with our architecture. Results on six datasets covering three tasks (holistic, partial, and occluded person Re-ID) demonstrate that our method performs favourably against state-of-the-art methodologies.
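The random-shape occlusion augmentation can be illustrated with a minimal numpy sketch: paste a randomly sized, randomly colored rectangle onto a person crop, simulating an occluding object. This is a simplified stand-in for the paper's strategy, which also samples shapes other than rectangles and real object crops; all sizes and parameters below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def add_random_shape_occlusion(img, max_frac=0.4):
    """Return a copy of an (H, W, 3) uint8 image with a random
    rectangular occluder pasted at a random position.

    max_frac bounds the occluder's height/width as a fraction of the
    image, so the person is partially, not fully, hidden.
    """
    h, w, _ = img.shape
    oh = int(rng.integers(1, max(2, int(h * max_frac))))
    ow = int(rng.integers(1, max(2, int(w * max_frac))))
    top = int(rng.integers(0, h - oh + 1))
    left = int(rng.integers(0, w - ow + 1))
    out = img.copy()
    out[top:top + oh, left:left + ow] = rng.integers(0, 256, size=3)
    return out

img = np.full((64, 32, 3), 127, dtype=np.uint8)  # toy person crop
aug = add_random_shape_occlusion(img)
```

Applied on the fly during training, such occlusions expose the network to partially hidden bodies, which is what allows it to outperform random erasing in the reported experiments.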
