
Publications by CTM

2023

A CAD System for Colorectal Cancer from WSI: A Clinically Validated Interpretable ML-based Prototype

Authors
Neto, PC; Montezuma, D; de Oliveira, SP; Oliveira, D; Fraga, J; Monteiro, A; Monteiro, JC; Ribeiro, L; Gonçalves, S; Reinhard, S; Zlobec, I; Pinto, IM; Cardoso, JS;

Publication
CoRR

Abstract

2023

PIC-Score: Probabilistic Interpretable Comparison Score for Optimal Matching Confidence in Single- and Multi-Biometric Face Recognition

Authors
Neto, PC; Sequeira, AF; Cardoso, JS; Terhörst, P;

Publication
IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2023 - Workshops, Vancouver, BC, Canada, June 17-24, 2023

Abstract
In the context of biometrics, matching confidence refers to the confidence that a given matching decision is correct. Since many biometric systems operate in critical decision-making processes, such as in forensic investigations, accurately and reliably stating the matching confidence becomes of high importance. Previous works on biometric confidence estimation can well differentiate between high and low confidence, but lack interpretability. Therefore, they do not provide accurate probabilistic estimates of the correctness of a decision. In this work, we propose a probabilistic interpretable comparison (PIC) score that accurately reflects the probability that the score originates from samples of the same identity. We prove that the proposed approach provides optimal matching confidence. Contrary to other approaches, it can also optimally combine multiple samples in a joint PIC score, which further increases the recognition and confidence estimation performance. In the experiments, the proposed PIC approach is compared against all available biometric confidence estimation methods on four publicly available databases and five state-of-the-art face recognition systems. The results demonstrate that PIC has a significantly more accurate probabilistic interpretation than similar approaches and is highly effective for multi-biometric recognition. The code is publicly available. © 2023 IEEE.
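The core idea of a probabilistic comparison score can be illustrated with a minimal sketch. This is not the authors' implementation: it assumes Gaussian genuine/impostor score distributions (the parameter values below are purely illustrative) and applies Bayes' rule, combining multiple comparison scores by multiplying per-score likelihoods under an independence assumption.

```python
import math

# Minimal illustrative sketch: turn raw comparison scores into a posterior
# probability that the compared samples share an identity, assuming Gaussian
# genuine/impostor score distributions (toy parameters, not fitted values).

def gauss_pdf(x, mu, sigma):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def pic_score(scores, genuine=(0.6, 0.2), impostor=(0.3, 0.2), prior=0.5):
    """Posterior P(genuine | scores); multiple samples are combined by
    multiplying per-score likelihoods (independence assumption)."""
    lg = li = 1.0
    for s in scores:
        lg *= gauss_pdf(s, *genuine)
        li *= gauss_pdf(s, *impostor)
    return (lg * prior) / (lg * prior + li * (1 - prior))

print(round(pic_score([0.5]), 3))        # single comparison → 0.593
print(round(pic_score([0.5, 0.5]), 3))   # joint score over two samples → 0.679
```

Note how two consistent comparisons yield a higher confidence than either alone, which is the multi-biometric effect the abstract describes.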

2023

Evaluation of Vectra® XT 3D Surface Imaging Technology in Measuring Breast Symmetry and Breast Volume

Authors
Pham, M; Alzul, R; Elder, E; French, J; Cardoso, J; Kaviani, A; Meybodi, F;

Publication
AESTHETIC PLASTIC SURGERY

Abstract
Background Breast symmetry is an essential component of breast cosmesis. The Harvard Cosmesis scale is the most widely adopted method of breast symmetry assessment. However, this scale lacks reproducibility and reliability, limiting its application in clinical practice. The VECTRA® XT 3D (VECTRA®) is a novel breast surface imaging system that, when combined with breast contour measuring software (Mirror®), aims to produce a more accurate and reproducible measurement of breast contour to aid operative planning in breast surgery. Objectives This study aims to compare the reliability and reproducibility of subjective (Harvard Cosmesis scale) and objective (VECTRA®) symmetry assessment on the same cohort of patients. Methods Patients at a tertiary institution had 2D and 3D photographs taken of their breasts. Seven assessors scored the 2D photographs using the Harvard Cosmesis scale. Two independent assessors used Mirror® software to objectively calculate breast symmetry by analysing 3D images of the breasts. Results Intra-observer agreement ranged from none to moderate (kappa −0.005 to 0.7) amongst the assessors using the Harvard Cosmesis scale. Inter-observer agreement was weak (kappa 0.078-0.454) amongst Harvard scores compared to VECTRA® measurements. Kappa values ranged from 0.537 to 0.674 for intra-observer agreement (p < 0.001) with Root Mean Square (RMS) scores. RMS had a moderate correlation with the Harvard Cosmesis scale (rₛ = 0.613). Furthermore, absolute volume difference between breasts had poor correlation with RMS (R² = 0.133). Conclusion VECTRA® and Mirror® software have potential in clinical practice for objectively assessing breast symmetry, but in their current form they are not an ideal test.
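The agreement statistics reported above are Cohen's kappa values. A minimal sketch of how kappa is computed from two assessors' categorical scores (the grade labels below are hypothetical, not the study's data):

```python
from collections import Counter

# Illustrative sketch: Cohen's kappa for agreement between two assessors
# scoring the same photographs on a categorical scale.

def cohens_kappa(rater_a, rater_b):
    n = len(rater_a)
    # Observed proportion of agreement.
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Agreement expected by chance, from each rater's label frequencies.
    ca, cb = Counter(rater_a), Counter(rater_b)
    labels = set(ca) | set(cb)
    expected = sum((ca[l] / n) * (cb[l] / n) for l in labels)
    return (observed - expected) / (1 - expected)

a = ["good", "good", "fair", "poor", "good", "fair"]
b = ["good", "fair", "fair", "poor", "good", "good"]
print(round(cohens_kappa(a, b), 3))  # → 0.455, "weak" agreement
```

Kappa discounts chance agreement, which is why it can be near zero (or negative) even when two raters agree on many cases.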

2023

Towards Concept-based Interpretability of Skin Lesion Diagnosis using Vision-Language Models

Authors
Patrício, C; Teixeira, LF; Neves, JC;

Publication
CoRR

Abstract

2023

Attention-Based Regularisation for Improved Generalisability in Medical Multi-Centre Data

Authors
Silva, D; Agrotis, G; Tan, RB; Teixeira, LF; Silva, W;

Publication
International Conference on Machine Learning and Applications, ICMLA 2023, Jacksonville, FL, USA, December 15-17, 2023

Abstract
Deep Learning models are tremendously valuable in several prediction tasks, and their use in the medical field is spreading rapidly, especially in computer vision tasks such as evaluating the content of X-rays, CTs or MRIs. These methods can save doctors a significant amount of time in patient diagnostics and help in treatment planning. However, these models are significantly sensitive to confounders in the training data and generally suffer a performance drop when dealing with out-of-distribution data, affecting their reliability and scalability across different medical institutions. Deep Learning research on medical datasets may overlook essential details regarding the image acquisition procedure and the preprocessing steps. This work proposes a data-centric approach, exploring the potential of attention maps as a regularisation technique to improve robustness and generalisation. We use image metadata and explore self-attention maps and contrastive learning to promote feature space invariance to image disturbance. Experiments were conducted using publicly available Chest X-ray datasets. Some datasets contained information about the windowing settings applied by the radiologist, acting as a source of variability. The proposed model was tested and outperformed the baseline on out-of-distribution data, serving as a proof of concept. © 2023 IEEE.
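The invariance idea in this abstract can be sketched as follows. This is not the paper's model: it uses a toy linear encoder in place of a deep network and a simple cosine objective that pulls embeddings of two differently windowed views of the same image together, so the features become insensitive to the acquisition setting.

```python
import numpy as np

# Illustrative sketch (not the paper's implementation): a contrastive-style
# objective promoting feature invariance to radiology windowing settings.

rng = np.random.default_rng(0)

def apply_window(img, center, width):
    """Radiology-style intensity windowing: clip to the window, rescale to [0, 1]."""
    lo, hi = center - width / 2, center + width / 2
    return (np.clip(img, lo, hi) - lo) / (hi - lo)

def embed(img, w):
    """Toy linear encoder standing in for a deep network; L2-normalised output."""
    z = w @ img.ravel()
    return z / np.linalg.norm(z)

def invariance_loss(img, w):
    # Two views of the same image under different (hypothetical) window settings.
    v1 = apply_window(img, center=0.5, width=0.8)
    v2 = apply_window(img, center=0.6, width=1.0)
    return 1.0 - float(embed(v1, w) @ embed(v2, w))  # 0 when the views align

img = rng.random((8, 8))               # stand-in for an X-ray
w = rng.standard_normal((16, 64))      # encoder weights
print(round(invariance_loss(img, w), 3))
```

Minimising this loss over many images would push the encoder to ignore the windowing variation, which is the confounder the experiments exploit.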

2023

Evaluating Privacy on Synthetic Images Generated using GANs: Contributions of the VCMI Team to ImageCLEFmedical GANs 2023

Authors
Montenegro, H; Neto, PC; Patrício, C; Torto, IR; Gonçalves, T; Teixeira, LF;

Publication
Working Notes of the Conference and Labs of the Evaluation Forum (CLEF 2023), Thessaloniki, Greece, September 18th to 21st, 2023.

Abstract
This paper presents the main contributions of the VCMI Team to the ImageCLEFmedical GANs 2023 task. This task aims to evaluate whether synthetic medical images generated using Generative Adversarial Networks (GANs) contain identifiable characteristics of the training data. We propose various approaches to classify a set of real images as having been used or not used in the training of the model that generated a set of synthetic images. We use similarity-based approaches to classify the real images based on their similarity to the generated ones. We develop autoencoders to classify the images through outlier detection techniques. Finally, we develop patch-based methods that operate on patches extracted from real and generated images to measure their similarity. On the development dataset, we attained an F1-score of 0.846 and an accuracy of 0.850 using an autoencoder-based method. On the test dataset, a similarity-based approach achieved the best results, with an F1-score of 0.801 and an accuracy of 0.810. The empirical results support the hypothesis that medical data generated using deep generative models trained without privacy constraints threatens the privacy of patients in the training data. © 2023 Copyright for this paper by its authors.
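The similarity-based idea can be sketched in a few lines. This is a toy stand-in for the team's pipeline: a real image is flagged as "used in training" when its nearest generated image is unusually close in some feature space; the features, threshold, and data below are all synthetic and purely illustrative.

```python
import numpy as np

# Illustrative sketch of the similarity-based membership idea: flag a real
# image as "used in training" when its nearest generated image is very close.

def nearest_distance(real_feats, gen_feats):
    # Distance from each real image to its closest generated image.
    d = np.linalg.norm(real_feats[:, None, :] - gen_feats[None, :, :], axis=-1)
    return d.min(axis=1)

def predict_membership(real_feats, gen_feats, threshold):
    return nearest_distance(real_feats, gen_feats) < threshold

def f1_score(y_true, y_pred):
    tp = np.sum(y_true & y_pred)
    fp = np.sum(~y_true & y_pred)
    fn = np.sum(y_true & ~y_pred)
    return 2 * tp / (2 * tp + fp + fn)

rng = np.random.default_rng(1)
gen = rng.normal(0.0, 1.0, size=(50, 8))       # generated-image features
used = gen[:10] + rng.normal(0, 0.1, (10, 8))  # members: close to generated
unused = rng.normal(3.0, 1.0, size=(10, 8))    # non-members: far away
real = np.vstack([used, unused])
y_true = np.array([True] * 10 + [False] * 10)
y_pred = predict_membership(real, gen, threshold=1.0)
print(f1_score(y_true, y_pred))  # → 1.0 on this easy synthetic split
```

On real medical data the member/non-member distributions overlap far more, which is why the reported F1-scores sit around 0.8 rather than 1.0.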
