Publications

Publications by Jaime Cardoso

2022

Explainable Biometrics in the Age of Deep Learning

Authors
Neto, PC; Gonçalves, T; Pinto, JR; Silva, W; Sequeira, AF; Ross, A; Cardoso, JS;

Publication
CoRR

Abstract

2022

OCFR 2022: Competition on Occluded Face Recognition From Synthetically Generated Structure-Aware Occlusions

Authors
Neto, PC; Boutros, F; Pinto, JR; Damer, N; Sequeira, AF; Cardoso, JS; Bengherabi, M; Bousnat, A; Boucheta, S; Hebbadj, N; Erakin, ME; Demir, U; Ekenel, HK; Vidal, PBD; Menotti, D;

Publication
2022 IEEE INTERNATIONAL JOINT CONFERENCE ON BIOMETRICS (IJCB)

Abstract
This work summarizes the IJCB Occluded Face Recognition Competition 2022 (IJCB-OCFR-2022), held as part of the 2022 International Joint Conference on Biometrics (IJCB 2022). OCFR-2022 attracted a total of three participating teams, all from academia. Eventually, six valid submissions were received and evaluated by the organizers. The competition was held to address the challenge of face recognition in the presence of severe face occlusions. The participants were free to use any training data, and the testing data was built by the organizers by synthetically occluding parts of the face images from a well-known dataset. The submitted solutions presented innovations and performed very competitively against the considered baseline. A major output of this competition is a challenging, realistic, diverse, and publicly available occluded face recognition benchmark with well-defined evaluation protocols.

2023

PIC-Score: Probabilistic Interpretable Comparison Score for Optimal Matching Confidence in Single- and Multi-Biometric Face Recognition

Authors
Neto, PC; Sequeira, AF; Cardoso, JS; Terhörst, P;

Publication
IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2023 - Workshops, Vancouver, BC, Canada, June 17-24, 2023

Abstract
In the context of biometrics, matching confidence refers to the confidence that a given matching decision is correct. Since many biometric systems operate in critical decision-making processes, such as forensic investigations, accurately and reliably stating the matching confidence is of high importance. Previous works on biometric confidence estimation can differentiate well between high and low confidence, but lack interpretability; therefore, they do not provide accurate probabilistic estimates of the correctness of a decision. In this work, we propose a probabilistic interpretable comparison (PIC) score that accurately reflects the probability that the score originates from samples of the same identity. We prove that the proposed approach provides optimal matching confidence. Contrary to other approaches, it can also optimally combine multiple samples in a joint PIC score, which further increases the recognition and confidence estimation performance. In the experiments, the proposed PIC approach is compared against all available biometric confidence estimation methods on four publicly available databases and five state-of-the-art face recognition systems. The results demonstrate that PIC has a significantly more accurate probabilistic interpretation than similar approaches and is highly effective for multi-biometric recognition. The code is publicly available. © 2023 IEEE.
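The core idea the abstract describes — turning a raw comparison score into the probability that it came from a same-identity pair — can be sketched with Bayes' rule over estimated genuine and impostor score distributions. This is a simplified illustration of the general concept, not the paper's exact PIC formulation; the kernel bandwidth, prior, and score distributions below are illustrative assumptions.

```python
import numpy as np

def pic_score(s, genuine_scores, impostor_scores, bandwidth=0.05, prior_genuine=0.5):
    """Probability that similarity score s comes from a genuine (same-identity)
    comparison, via Bayes' rule over kernel density estimates of the two
    score distributions. A sketch of the idea, not the paper's formulation."""
    def kde(x, samples):
        # Gaussian kernel density estimate evaluated at point x
        z = (x - samples) / bandwidth
        return np.mean(np.exp(-0.5 * z ** 2)) / (bandwidth * np.sqrt(2 * np.pi))
    p_genuine = kde(s, np.asarray(genuine_scores))
    p_impostor = kde(s, np.asarray(impostor_scores))
    num = p_genuine * prior_genuine
    den = num + p_impostor * (1.0 - prior_genuine)
    return num / den if den > 0 else prior_genuine

# Toy reference data: genuine scores cluster high, impostor scores low
rng = np.random.default_rng(0)
gen = rng.normal(0.8, 0.05, 1000)
imp = rng.normal(0.3, 0.05, 1000)
print(pic_score(0.75, gen, imp))  # near 1: likely a genuine match
print(pic_score(0.35, gen, imp))  # near 0: likely an impostor
```

Because the output is a calibrated probability rather than an arbitrary similarity value, scores from multiple samples or modalities can be combined in a principled probabilistic way, which is the multi-biometric benefit the abstract highlights.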

2023

Evaluation of Vectra® XT 3D Surface Imaging Technology in Measuring Breast Symmetry and Breast Volume

Authors
Pham, M; Alzul, R; Elder, E; French, J; Cardoso, J; Kaviani, A; Meybodi, F;

Publication
AESTHETIC PLASTIC SURGERY

Abstract
Background: Breast symmetry is an essential component of breast cosmesis. The Harvard Cosmesis scale is the most widely adopted method of breast symmetry assessment; however, it lacks reproducibility and reliability, limiting its application in clinical practice. The VECTRA® XT 3D (VECTRA®) is a novel breast surface imaging system that, combined with breast contour measuring software (Mirror®), aims to produce a more accurate and reproducible measurement of breast contour to aid operative planning in breast surgery. Objectives: This study compares the reliability and reproducibility of subjective (Harvard Cosmesis scale) and objective (VECTRA®) symmetry assessment on the same cohort of patients. Methods: Patients at a tertiary institution had 2D and 3D photographs taken of their breasts. Seven assessors scored the 2D photographs using the Harvard Cosmesis scale. Two independent assessors used Mirror® software to objectively calculate breast symmetry by analysing 3D images of the breasts. Results: Intra-observer agreement ranged from none to moderate (kappa −0.005 to 0.7) amongst assessors using the Harvard Cosmesis scale. Inter-observer agreement was weak (kappa 0.078–0.454) for Harvard scores compared to VECTRA® measurements. Kappa values for intra-observer agreement with Root Mean Square (RMS) scores ranged from 0.537 to 0.674 (p < 0.001). RMS had a moderate correlation with the Harvard Cosmesis scale (r_s = 0.613). Furthermore, the absolute volume difference between breasts correlated poorly with RMS (R² = 0.133). Conclusion: VECTRA® and Mirror® software have potential in clinical practice for objectively assessing breast symmetry, but in their current form they are not an ideal test.

2021

Topological Similarity Index and Loss Function for Blood Vessel Segmentation

Authors
Araújo, RJ; Cardoso, JS; Oliveira, HP;

Publication
CoRR

Abstract

2022

Deep learning-based system for real-time behavior recognition and closed-loop control of behavioral mazes using depth sensing

Authors
Geros, AF; Cruz, R; de Chaumont, F; Cardoso, JS; Aguiar, P;

Publication

Abstract
Robust quantification of animal behavior is fundamental in experimental neuroscience research. Systems providing automated behavioral assessment are an important alternative to manual measurement, avoiding problems such as human bias, low reproducibility, and high cost. Integrating these tools with closed-loop control systems creates conditions to correlate environment and behavioral expression effectively, and ultimately to explain the neural foundations of behavior. We present an integrated solution for automated behavioral analysis of rodents using deep learning networks on video streams acquired from a depth-sensing camera. The use of depth sensors has notable advantages: tracking/classification performance is improved and independent of the animals' coat color, and videos can be recorded in dark conditions without affecting the animals' natural behavior. Convolutional and recurrent layers were combined in deep network architectures, and both spatial and temporal representations were successfully learned for a four-class behavior classification task (standstill, walking, rearing, and grooming). Integration with Arduino microcontrollers creates an easy-to-use control platform providing low-latency feedback signals based on the deep learning automatic classification of animal behavior. The complete system, combining depth-sensing camera, computer, and Arduino microcontroller, allows simple mapping of input-output control signals using the animal's current behavior and position. For example, a feeder can be controlled not by pressing a lever but by the animal's behavior itself. An integrated graphical user interface completes a user-friendly and cost-effective solution for animal tracking and behavior classification. This open-software/open-hardware platform can boost the development of customized protocols for automated behavioral research, and support ever more sophisticated, reliable, and reproducible behavioral neuroscience experiments.
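The closed-loop mapping the abstract describes — classifier output (behavior plus position) driving a microcontroller command, such as a feeder triggered by rearing instead of a lever press — can be sketched as a simple rule. The behavior labels come from the abstract; the zone coordinates, command strings, and function name are illustrative assumptions, not the paper's actual interface.

```python
# Hypothetical sketch of the closed-loop rule: map the classifier's current
# output (behavior label + arena position) to a command for the Arduino.
BEHAVIORS = ("standstill", "walking", "rearing", "grooming")

def control_signal(behavior, position, feeder_zone=((0.0, 0.0), (0.2, 0.2))):
    """Return a feeder command based on current behavior and position.
    Example rule: activate the feeder when the animal rears inside the
    feeder zone (an axis-aligned rectangle in normalized arena coordinates)."""
    if behavior not in BEHAVIORS:
        raise ValueError(f"unknown behavior: {behavior}")
    (x0, y0), (x1, y1) = feeder_zone
    in_zone = x0 <= position[0] <= x1 and y0 <= position[1] <= y1
    return "FEEDER_ON" if behavior == "rearing" and in_zone else "FEEDER_OFF"

print(control_signal("rearing", (0.1, 0.1)))   # animal rears in the zone
print(control_signal("walking", (0.1, 0.1)))   # wrong behavior, no reward
print(control_signal("rearing", (0.5, 0.5)))   # right behavior, outside zone
```

In the real system this decision would run on every classified frame, with the resulting command written to the microcontroller over a serial link; keeping the rule as a pure function of (behavior, position) makes such protocols easy to customize.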
