Publications

Publications by CTM

2025

Efficient-Proto-Caps: A Parameter-Efficient and Interpretable Capsule Network for Lung Nodule Characterization

Authors
Rodrigues, EM; Gouveia, M; Oliveira, HP; Pereira, T;

Publication
IEEE Access

Abstract
Deep learning techniques have demonstrated significant potential in computer-assisted diagnosis based on medical imaging. However, their integration into clinical workflows remains limited, largely due to concerns about interpretability. To address this challenge, we propose Efficient-Proto-Caps, a lightweight and inherently interpretable model that combines capsule networks with prototype learning for lung nodule characterization. Additionally, an innovative Davies-Bouldin Index with multiple centroids per cluster is employed as a loss function to promote clustering of lung nodule visual attribute representations. When evaluated on the LIDC-IDRI dataset, the most widely recognized benchmark for lung cancer prediction, our model achieved an overall accuracy of 89.7% in predicting lung nodule malignancy and associated visual attributes. This performance is statistically comparable to that of the baseline, while our backbone uses only approximately 2% of the parameters of the baseline's backbone. State-of-the-art models achieve better performance in lung nodule malignancy prediction; however, our approach relies on multiclass malignancy predictions and provides a decision rationale aligned with globally accepted clinical guidelines. These results underscore the potential of our approach, as the integration of lightweight, less complex designs into accurate and inherently interpretable models represents a significant step toward more transparent and clinically viable computer-assisted diagnostic systems. Furthermore, these findings highlight the model's potential for broader applicability, extending beyond medicine to other domains where final classifications are grounded in concept-based or example-based attributes.
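To make the clustering objective concrete: the Davies-Bouldin Index penalizes clusters that are internally spread out relative to their separation. The sketch below is a minimal single-centroid PyTorch version (the paper's variant uses multiple centroids per cluster); all names, shapes, and the masking trick are illustrative assumptions, not the authors' code.

```python
import torch

def davies_bouldin_loss(embeddings, labels, centroids):
    # Single-centroid Davies-Bouldin Index as a differentiable loss.
    # embeddings: (N, D) attribute representations; labels: (N,) cluster ids;
    # centroids: (K, D). Assumes every cluster has at least one member.
    K = centroids.shape[0]
    # s_i: mean distance of each cluster's members to its centroid
    s = torch.stack([
        (embeddings[labels == i] - centroids[i]).norm(dim=1).mean()
        for i in range(K)
    ])
    # d_ij: pairwise centroid distances; inflate the diagonal so a cluster
    # is never compared with itself when taking the max below
    d = torch.cdist(centroids, centroids) + 1e9 * torch.eye(K)
    ratio = (s.unsqueeze(1) + s.unsqueeze(0)) / d   # (s_i + s_j) / d_ij
    return ratio.max(dim=1).values.mean()           # mean worst-case ratio
```

Minimizing this quantity pushes same-attribute representations together and distinct attributes apart, which is what makes the learned prototypes inspectable.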

2025

Generative adversarial networks with fully connected layers to denoise PPG signals

Authors
Castro, IAA; Oliveira, HP; Correia, R; Hayes-Gill, B; Morgan, SP; Korposh, S; Gomez, D; Pereira, T;

Publication
PHYSIOLOGICAL MEASUREMENT

Abstract
Objective. The detection of arterial pulsating signals at the skin periphery with photoplethysmography (PPG) is easily distorted by motion artifacts. This work explores alternatives to PPG reconstruction aided by movement sensors (accelerometer and/or gyroscope), which to date has demonstrated the best pulsating-signal reconstruction. Approach. A generative adversarial network with fully connected layers is proposed for the reconstruction of distorted PPG signals. Artificial corruption was applied to clean signals selected from the BIDMC Heart Rate dataset, processed from the larger MIMIC II waveform database, to create the training, validation, and testing sets. Main results. The heart rate (HR) of this dataset was extracted to evaluate the performance of the model, obtaining a mean absolute error of 1.31 bpm between the HR of the target and reconstructed PPG signals, for HR between 70 and 115 bpm. Significance. The model architecture is effective at reconstructing noisy PPG signals regardless of the length and amplitude of the corruption introduced. The performance over a range of HR (70-115 bpm) indicates a promising approach for real-time PPG signal reconstruction without the aid of acceleration or angular velocity inputs.
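A "fully connected" GAN here simply means both networks are MLPs over fixed-length PPG windows. The sketch below illustrates that shape under stated assumptions: the window length, layer widths, and activations are invented for the example and are not taken from the paper.

```python
import torch.nn as nn

SEG_LEN = 256  # assumed samples per PPG window; the paper's value may differ

# Generator: maps a corrupted PPG segment to a reconstructed clean segment.
generator = nn.Sequential(
    nn.Linear(SEG_LEN, 512), nn.ReLU(),
    nn.Linear(512, 512), nn.ReLU(),
    nn.Linear(512, SEG_LEN), nn.Tanh(),  # assumes signals scaled to [-1, 1]
)

# Discriminator: scores a segment as real (clean) or fake (reconstructed).
discriminator = nn.Sequential(
    nn.Linear(SEG_LEN, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 64), nn.LeakyReLU(0.2),
    nn.Linear(64, 1), nn.Sigmoid(),
)
```

Training then alternates the usual adversarial updates, typically combined with a reconstruction term so the generator tracks the target waveform rather than merely fooling the discriminator.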

2025

CINDERELLA Clinical Trial (NCT05196269): Patient Engagement with an AI-based Healthcare Application for Enhancing Breast Cancer Locoregional Treatment Decisions - Preliminary Insights

Authors
Bonci, EA; Antunes, M; Bobowicz, M; Borsoi, L; Ciani, O; Cruz, HV; Di Micco, R; Ekman, M; Gentilini, O; Romariz, M; Gonçalves, T; Gouveia, P; Heil, J; Kabata, P; Kaidar Person, O; Martins, H; Mavioso, C; Mika, M; Oliveira, HP; Oprea, N; Pfob, A; Haik, J; Menes, T; Schinköthe, T; Silva, G; Cardoso, JS; Cardoso, MJ;

Publication
BREAST

Abstract

2025

A Two-Stage U-Net Framework for Interactive Segmentation of Lung Nodules in CT Scans

Authors
Fernandes, L; Pereira, T; Oliveira, HP;

Publication
IEEE ACCESS

Abstract
Segmentation of lung nodules in CT images is an important step in the clinical evaluation of patients with lung cancer. Early assessment is crucial to increasing the overall survival chances of patients with the disease, and the segmentation of lung nodules can help detect the cancer in its early stages. Consequently, many works in the literature explore the use of neural networks for the segmentation of lung nodules. However, these frameworks tend to rely on an accurately labelled nodule centre to crop the input image. Although such works achieve remarkable results, they do not account for the possibility that the healthcare professional mislabels the centre of the nodule. Therefore, in this work, we propose a new framework based on the U-Net model that allows such inaccuracies to be corrected in an interactive fashion. It is composed of two U-Net models in cascade, where the first model predicts a rough estimate of the lung nodule location and the second model refines the generated segmentation mask. Our results show that the proposed framework is more robust than the studied baselines. Furthermore, it achieves state-of-the-art performance, reaching a Dice score of 91.12% when trained and tested on the LIDC-IDRI public dataset.
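The cascade logic can be pictured as follows. This is a hedged sketch, not the authors' implementation: the names (coarse_unet, fine_unet, crop_size), the 2-D patch layout, and the centroid-based re-cropping rule are assumptions for illustration.

```python
import torch

def two_stage_segment(ct_patch, coarse_unet, fine_unet, crop_size=64):
    # ct_patch: (1, 1, H, W) patch cropped around a possibly inaccurate
    # click; coarse_unet / fine_unet are any U-Net-like modules that
    # output per-pixel logits. Assumes stage 1 predicts at least one
    # foreground pixel.
    rough = coarse_unet(ct_patch).sigmoid() > 0.5       # stage 1: rough mask
    ys, xs = torch.nonzero(rough[0, 0], as_tuple=True)  # foreground pixels
    half = crop_size // 2
    # Re-centre on the rough mask's centroid, clamped to stay in bounds
    cy = int(ys.float().mean().clamp(half, ct_patch.shape[-2] - half))
    cx = int(xs.float().mean().clamp(half, ct_patch.shape[-1] - half))
    crop = ct_patch[..., cy - half:cy + half, cx - half:cx + half]
    return fine_unet(crop)                              # stage 2: refined mask
```

The point of the design is that an off-centre click only has to land the nodule somewhere inside the first patch; the second pass then sees a properly centred view.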

2025

Clinical Annotation and Medical Image Anonymization for AI Model Training in Lung Cancer Detection

Authors
Freire, AM; Rodrigues, EM; Sousa, JV; Gouveia, M; Ferreira-Santos, D; Pereira, T; Oliveira, HP; Sousa, P; Silva, AC; Fernandes, MS; Hespanhol, V; Araújo, J;

Publication
UNIVERSAL ACCESS IN HUMAN-COMPUTER INTERACTION, UAHCI 2025, PT I

Abstract
Lung cancer remains one of the most common and lethal forms of cancer, with approximately 1.8 million deaths annually, and is often diagnosed at advanced stages. Early detection is crucial, but it depends on physicians' accurate interpretation of computed tomography (CT) scans, a process susceptible to human limitations and variability. ByMe has developed a medical image annotation and anonymization tool designed to address these challenges through a human-centered approach. The tool enables physicians to seamlessly add structured attribute-based annotations (e.g., size, location, morphology) directly within their established workflows, ensuring intuitive interaction. Integrated with Picture Archiving and Communication Systems (PACS), the tool streamlines the annotation process and enhances usability by offering a dedicated worklist for retrospective and prospective case analysis. Robust anonymization features ensure compliance with privacy regulations such as the General Data Protection Regulation (GDPR), enabling secure dataset sharing for research and for developing artificial intelligence (AI) models. Designed to empower AI integration, the tool not only facilitates the creation of high-quality datasets but also lays the foundation for incorporating AI-driven insights directly into clinical workflows. By focusing on usability, workflow integration, and privacy, this innovation bridges the gap between precision medicine and advanced technology. By providing the means to develop and train AI models for lung cancer detection, it holds the potential to significantly accelerate diagnosis and to enhance its accuracy and consistency.
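For readers unfamiliar with DICOM de-identification, the snippet below shows the general shape of the operation using the open-source pydicom library. It is not the tool's actual code, and the tag list is a small illustrative subset: a GDPR-grade profile blanks many more fields (see DICOM PS3.15 Annex E).

```python
import pydicom

# Illustrative subset of direct identifiers; real profiles cover far more.
TAGS_TO_BLANK = ["PatientName", "PatientID", "PatientBirthDate",
                 "InstitutionName", "ReferringPhysicianName"]

def anonymize(src_path: str, dst_path: str) -> None:
    ds = pydicom.dcmread(src_path)
    for keyword in TAGS_TO_BLANK:
        if keyword in ds:             # only touch elements that are present
            setattr(ds, keyword, "")  # blank the identifier in place
    ds.remove_private_tags()          # vendor-private elements often leak PHI
    ds.save_as(dst_path)
```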

2025

Integrating Automated Perforator Analysis for Breast Reconstruction in Medical Imaging Workflow

Authors
Frias, J; Romariz, M; Ferreira, R; Pereira, T; Oliveira, HP; Santinha, J; Pinto, D; Gouveia, P; Silva, LB; Costa, C;

Publication
UNIVERSAL ACCESS IN HUMAN-COMPUTER INTERACTION, UAHCI 2025, PT I

Abstract
Deep Inferior Epigastric Perforator (DIEP) flap breast reconstruction relies on the precise identification of perforator vessels supplying blood to transferred tissue. Traditional manual mapping from preoperative imaging is time-consuming and subjective. To address this, AVA, a semi-automated perforator detection algorithm, was developed to analyze angiography images. AVA follows a three-step process: automated anatomical segmentation, manual annotation of perforators, and segmentation of perforator courses. This approach enhances accuracy, reduces subjectivity, and accelerates the mapping process while generating quantitative reports for surgical planning. To streamline integration into clinical workflows, AVA has been embedded into PACScenter, a medical imaging platform, leveraging DICOM encapsulation for seamless data exchange within a Vendor Neutral Archive (VNA). This integration allows surgeons to interactively annotate perforators, adjust parameters iteratively, and visualize detailed anatomical structures. AVA-PACScenter integration eliminates workflow disruptions by providing real-time perforator analysis within the surgical environment, ultimately improving preoperative planning and intraoperative guidance. Currently undergoing clinical feasibility testing, this integration aims to enhance DIEP flap reconstruction efficiency by reducing manual inputs, improving mapping precision, and facilitating long-term report storage within Dicoogle. By automating perforator analysis, AVA represents a significant advancement toward data-driven, patient-centered surgical planning.
