2025
Authors
Guimarães, V; Sousa, I; Correia, MV;
Publication
BMC Medical Informatics and Decision Making
Abstract
2025
Authors
Zolfagharnasab, MH; Freitas, N; Gonçalves, T; Bonci, E; Mavioso, C; Cardoso, MJ; Oliveira, HP; Cardoso, JS;
Publication
ARTIFICIAL INTELLIGENCE AND IMAGING FOR DIAGNOSTIC AND TREATMENT CHALLENGES IN BREAST CARE, DEEP-BREATH 2024
Abstract
Breast cancer treatments often affect patients' body image, making aesthetic outcome prediction vital. This study introduces a Deep Learning (DL) multimodal retrieval pipeline using a dataset of 2,193 instances combining clinical attributes and RGB images of patients' upper torsos. We evaluate four retrieval techniques: Weighted Euclidean Distance (WED) with various configurations and a shallow Artificial Neural Network (ANN) for tabular data, pre-trained and fine-tuned Convolutional Neural Networks (CNNs) and Vision Transformers (ViTs) for images, and a multimodal approach combining both data types. The dataset, categorised into Excellent/Good and Fair/Poor outcomes, is organised into over 20K triplets for training and testing. Results show that fine-tuned multimodal ViTs notably enhance performance, achieving up to 73.85% accuracy and 80.62% Adjusted Discounted Cumulative Gain (ADCG). This framework not only aids in managing patient expectations by retrieving the most relevant post-surgical images but also promises broad applications in medical image analysis and retrieval. The main contributions of this paper are the development of a multimodal retrieval system for breast cancer patients based on post-surgery aesthetic outcome and the evaluation of different models on a new, clinician-annotated dataset for image retrieval.
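The Weighted Euclidean Distance retrieval mentioned in the abstract can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: the feature names, values, and weights below are invented for the example, and the paper's actual configurations are not specified here.

```python
import numpy as np

def weighted_euclidean_retrieval(query, gallery, weights, k=3):
    """Rank gallery items by weighted Euclidean distance to a query
    feature vector and return the indices of the k nearest items."""
    diffs = gallery - query                        # (n, d) differences
    dists = np.sqrt(((diffs ** 2) * weights).sum(axis=1))
    return np.argsort(dists)[:k]

# Toy tabular features (hypothetical): [age, BMI, tumour size],
# with age deliberately down-weighted.
gallery = np.array([[55, 24.0, 2.1],
                    [61, 27.5, 1.4],
                    [48, 22.8, 3.0]])
query = np.array([60, 27.0, 1.5])
weights = np.array([0.2, 1.0, 1.0])
print(weighted_euclidean_retrieval(query, gallery, weights, k=2))
```

In the paper's pipeline, such a distance would rank previously annotated post-surgical cases so the closest matches can be shown to a new patient.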
2025
Authors
Almeida, FL;
Publication
Information Security Journal: A Global Perspective
Abstract
2025
Authors
Rabaev, I; Litvak, M; Bass, R; Campos, R; Jorge, AM; Jatowt, A;
Publication
ICDAR (5)
Abstract
This report describes the ICDAR 2025 Competition on Automatic Classification of Literary Epochs (ICDAR 2025 CoLiE), which consisted of two tasks focused on automatically predicting the time in which a book was written (its date of first publication). Each task comprised two sub-tasks addressing a related fine-grained classification. Task 1 consisted of identifying literary epochs, such as Romanticism or Modernism (sub-task 1.1), and of classifying the period within the epoch more precisely (sub-task 1.2). Task 2 addressed the chronological identification of the century (sub-task 2.1) or decade (sub-task 2.2). The compiled dataset and the reported findings are valuable to the scientific community and contribute to advancing research in the automatic dating of texts and its applications in digital humanities and temporal text analysis.
2025
Authors
Ginja, GA; Neto, MC; Moreira, MMAC; Amorim, MLM; Tita, V; Altafim, RAP; Altafim, RAC; Correia, MV; Queiroz, AAA; Siqueira, AAG; Do Carmo, JPP;
Publication
IEEE SENSORS JOURNAL
Abstract
This study explores the design, fabrication, and electromechanical characterization of thermoformed tubular Teflon piezoelectrets for force measurement applications. Piezoelectrets, a subclass of electrets, leverage engineered dipole configurations within charged internal cavities to exhibit piezoelectric properties. Using fluorinated ethylene propylene (FEP) films, tubular structures were fabricated through thermal lamination and subsequently polarized to form highly sensitive and flexible piezoelectrets. The electrical response was characterized through controlled impact tests, sinusoidal stationary force inputs applied with a shaker system, and an instrumented insole that evaluated the piezoelectret in a real dynamic environment. The impact test revealed that the piezoelectret exhibits a rapid response time of 20 ms with a maximum voltage amplitude of +/- 3 V. The frequency-domain analysis identified primary and secondary bandpass ranges, with peak sensitivity observed at 400 Hz. The stationary shaker test showed a steady sensitivity of 53.87 mV/N for signals between 200 and 700 Hz.
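The reported sensitivity figure is simply the ratio of output voltage amplitude to applied force amplitude under sinusoidal excitation. A minimal sketch, where the 107.74 mV peak voltage and 2 N force are invented numbers chosen only to reproduce the reported 53.87 mV/N:

```python
def sensitivity_mv_per_n(peak_voltage_mv, peak_force_n):
    """Steady-state sensitivity of a piezoelectret: output voltage
    amplitude divided by applied force amplitude, in mV/N."""
    return peak_voltage_mv / peak_force_n

# Illustrative (hypothetical) measurement: a 107.74 mV peak for a
# 2 N sinusoidal input yields the reported ~53.87 mV/N sensitivity.
print(round(sensitivity_mv_per_n(107.74, 2.0), 2))  # 53.87
```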
2025
Authors
Macedo, E; Araujo, H; Abreu, PH;
Publication
PATTERN RECOGNITION: ICPR 2024 INTERNATIONAL WORKSHOPS AND CHALLENGES, PT V
Abstract
Capsule endoscopy has emerged as a non-invasive alternative to traditional gastrointestinal inspection procedures, such as endoscopy and colonoscopy. By removing sedation risks, it offers a patient-friendly, hospital-free procedure that allows assessment of the small bowel, a region not easily accessible by traditional methods. Recently, deep learning techniques have been employed to analyse capsule endoscopy images, with a focus on lesion classification and/or capsule location along the gastrointestinal tract. This work presents a novel approach for testing the generalization capacity of deep learning techniques in identifying lesion location from capsule endoscopy images. To that end, AlexNet, InceptionV3 and ResNet-152 architectures were trained exclusively on normal frames and later tested on lesion frames. Frames were sourced from the KID and Kvasir-Capsule open-source datasets. Both RGB and grayscale representations were evaluated, and experiments were conducted with both complete images and patches. Results show that the models' generalization capacity for lesion location is not as strong as their capacity for normal frame location, with the colon being the most difficult organ to identify.
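The two preprocessing variants the abstract mentions, grayscale conversion and patch extraction, can be sketched as below. This is an illustrative assumption of how such preprocessing might look (standard luminance weights, non-overlapping patches), not the paper's actual pipeline; patch size and image size are invented.

```python
import numpy as np

def to_grayscale(rgb):
    """Luminance grayscale conversion using ITU-R BT.601 weights."""
    return rgb @ np.array([0.299, 0.587, 0.114])

def extract_patches(img, size):
    """Split a 2-D image into non-overlapping square patches;
    edge remainders are dropped for simplicity."""
    rows, cols = img.shape[0] // size, img.shape[1] // size
    return [img[i * size:(i + 1) * size, j * size:(j + 1) * size]
            for i in range(rows) for j in range(cols)]

rgb = np.random.rand(64, 64, 3)   # stand-in for a capsule endoscopy frame
gray = to_grayscale(rgb)
patches = extract_patches(gray, 32)
print(gray.shape, len(patches))   # (64, 64) 4
```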