2023
Authors
Neto, PC; Sequeira, AF; Cardoso, JS; Terhörst, P;
Publication
IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2023 - Workshops, Vancouver, BC, Canada, June 17-24, 2023
Abstract
In the context of biometrics, matching confidence refers to the confidence that a given matching decision is correct. Since many biometric systems operate in critical decision-making processes, such as forensic investigations, accurately and reliably stating the matching confidence is of high importance. Previous works on biometric confidence estimation can differentiate well between high and low confidence, but lack interpretability. Therefore, they do not provide accurate probabilistic estimates of the correctness of a decision. In this work, we propose a probabilistic interpretable comparison (PIC) score that accurately reflects the probability that the score originates from samples of the same identity. We prove that the proposed approach provides optimal matching confidence. Contrary to other approaches, it can also optimally combine multiple samples into a joint PIC score, which further increases the recognition and confidence estimation performance. In the experiments, the proposed PIC approach is compared against all available biometric confidence estimation methods on four publicly available databases and five state-of-the-art face recognition systems. The results demonstrate that PIC has a significantly more accurate probabilistic interpretation than similar approaches and is highly effective for multi-biometric recognition. The code is publicly available. © 2023 IEEE.
2023
Authors
Pham, M; Alzul, R; Elder, E; French, J; Cardoso, J; Kaviani, A; Meybodi, F;
Publication
AESTHETIC PLASTIC SURGERY
Abstract
Background Breast symmetry is an essential component of breast cosmesis. The Harvard Cosmesis scale is the most widely adopted method of breast symmetry assessment. However, this scale lacks reproducibility and reliability, limiting its application in clinical practice. The VECTRA® XT 3D (VECTRA®) is a novel breast surface imaging system that, when combined with breast contour measuring software (Mirror®), aims to produce a more accurate and reproducible measurement of breast contour to aid operative planning in breast surgery. Objectives This study aims to compare the reliability and reproducibility of subjective (Harvard Cosmesis scale) and objective (VECTRA®) symmetry assessment in the same cohort of patients. Methods Patients at a tertiary institution had 2D and 3D photographs taken of their breasts. Seven assessors scored the 2D photographs using the Harvard Cosmesis scale. Two independent assessors used Mirror® software to objectively calculate breast symmetry by analysing 3D images of the breasts. Results Intra-observer agreement ranged from none to moderate (kappa −0.005 to 0.7) amongst the assessors using the Harvard Cosmesis scale. Inter-observer agreement was weak (kappa 0.078–0.454) amongst Harvard scores compared to VECTRA® measurements. Kappa values ranged from 0.537 to 0.674 for intra-observer agreement (p < 0.001) with Root Mean Square (RMS) scores. RMS had a moderate correlation with the Harvard Cosmesis scale (r_s = 0.613). Furthermore, the absolute volume difference between breasts had poor correlation with RMS (R² = 0.133). Conclusion VECTRA® and Mirror® software have potential in clinical practice for objectively assessing breast symmetry, but in their current form they are not an ideal test.
2023
Authors
Matos, J; Struja, T; Gallifant, J; Nakayama, LF; Charpignon, M; Liu, X; Economou-Zavlanos, N; Cardoso, JS; Johnson, KS; Bhavsar, N; Gichoya, JW; Celi, LA; Wong, AI;
Publication
Abstract
2023
Authors
Barbero-Gómez, J; Cruz, R; Cardoso, JS; Gutiérrez, PA; Hervás-Martínez, C;
Publication
ADVANCES IN COMPUTATIONAL INTELLIGENCE, IWANN 2023, PT II
Abstract
This paper introduces an evaluation procedure to validate the efficacy of explanation methods for Convolutional Neural Network (CNN) models in ordinal regression tasks. Two ordinal methods are contrasted against a cross-entropy baseline across four datasets. A statistical analysis demonstrates that attribution methods, such as Grad-CAM and IBA, perform significantly better when used with ordinal regression CNN models than with the baseline approach in most ordinal and nominal metrics. The study suggests that incorporating ordinal information into the attribution map construction process may further improve the explanations.
2023
Authors
Torto, IR; Patrício, C; Montenegro, H; Gonçalves, T; Cardoso, JS;
Publication
Working Notes of the Conference and Labs of the Evaluation Forum (CLEF 2023), Thessaloniki, Greece, September 18th to 21st, 2023.
Abstract
This paper presents the main contributions of the VCMI Team to the ImageCLEFmedical Caption 2023 task. We addressed both the concept detection and caption prediction tasks. Regarding concept detection, our team employed different approaches to assign concepts to medical images: multi-label classification, adversarial training, autoregressive modelling, image retrieval, and concept retrieval. We also developed three model ensembles merging the results of some of the proposed methods. Our best submission obtained an F1-score of 0.4998, ranking 3rd among nine teams. Regarding the caption prediction task, our team explored two main approaches based on image retrieval and language generation. The language generation approaches, based on a vision model as the encoder and a language model as the decoder, yielded the best results, allowing us to rank 5th among thirteen teams, with a BERTScore of 0.6147. © 2023 Copyright for this paper by its authors.
2023
Authors
Vidal, PL; Moura, Jd; Novo, J; Ortega, M; Cardoso, JS;
Publication
IEEE International Conference on Acoustics, Speech and Signal Processing ICASSP 2023, Rhodes Island, Greece, June 4-10, 2023
Abstract
Optical Coherence Tomography (OCT) is the major diagnostic tool for the leading cause of blindness in developed countries: Diabetic Macular Edema (DME). Depending on the type of fluid accumulation, different treatments are needed. In particular, Cystoid Macular Edemas (CMEs) represent the most severe scenario, while Diffuse Retinal Thickening (DRT) is an early indicator of the disease but a challenging scenario to detect. While methodologies exist, their explanatory power is limited to the input sample itself. However, due to the complexity of these accumulations, this may not be enough for a clinician to assess the validity of the classification. Thus, in this work, we propose a novel approach based on multi-prototype networks with vision transformers to obtain an example-based explainable classification. Our proposal achieved robust results on two representative OCT devices, with a mean accuracy of 0.9099 ± 0.0083 and 0.8582 ± 0.0126 for CME- and DRT-type fluid accumulations, respectively. © 2023 IEEE.