2025
Authors
DeAndres-Tame, I; Tolosana, R; Melzi, P; Vera-Rodriguez, R; Kim, M; Rathgeb, C; Liu, XM; Gomez, LF; Morales, A; Fierrez, J; Ortega-Garcia, J; Zhong, ZZ; Huang, YG; Mi, YX; Ding, SH; Zhou, SG; He, S; Fu, LZ; Cong, H; Zhang, RY; Xiao, ZH; Smirnov, E; Pimenov, A; Grigorev, A; Timoshenko, D; Asfaw, KM; Low, CY; Liu, H; Wang, CY; Zuo, Q; He, ZX; Shahreza, HO; George, A; Unnervik, A; Rahimi, P; Marcel, S; Neto, PC; Huber, M; Kolf, JN; Damer, N; Boutros, F; Cardoso, JS; Sequeira, AF; Atzori, A; Fenu, G; Marras, M; Struc, V; Yu, J; Li, ZJ; Li, JC; Zhao, WS; Lei, Z; Zhu, XY; Zhang, XY; Biesseck, B; Vidal, P; Coelho, L; Granada, R; Menotti, D;
Publication
INFORMATION FUSION
Abstract
Synthetic data is gaining popularity in face recognition, mainly due to privacy concerns and the challenges of obtaining real data that covers diverse scenarios, quality levels, and demographic groups. It also offers advantages over real data, such as the large amounts that can be generated and the ability to customize it to specific problem-solving needs. To use such data effectively, face recognition models should be specifically designed to exploit synthetic data to its fullest potential. To promote novel Generative AI methods and synthetic data, and to investigate how synthetic data can better train face recognition systems, we introduce the 2nd FRCSyn-onGoing challenge, based on the 2nd Face Recognition Challenge in the Era of Synthetic Data (FRCSyn), originally launched at CVPR 2024. This ongoing challenge provides researchers with an accessible platform to benchmark (i) novel Generative AI methods and synthetic data, and (ii) novel face recognition systems specifically designed to take advantage of synthetic data. We focus on the use of synthetic data, both alone and in combination with real data, to address current challenges in face recognition such as demographic bias, domain adaptation, and performance constraints in demanding situations, including age disparities between training and testing, pose changes, and occlusions. This second edition yields notable findings, including a direct comparison with the first edition, in which synthetic databases were restricted to DCFace and GANDiffFace.
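For readers unfamiliar with the benchmarking setup mentioned above, the following is an illustrative sketch of a standard face verification protocol (cosine similarity between embeddings compared against a decision threshold). This is not the challenge's official evaluation code, and the embeddings here are random placeholders.

```python
# Illustrative sketch of face verification benchmarking (not the official
# FRCSyn protocol): embed two face images and accept the pair as the same
# identity if the cosine similarity of the embeddings exceeds a threshold.
import numpy as np

def verify(emb_a: np.ndarray, emb_b: np.ndarray, threshold: float = 0.5) -> bool:
    """Accept the pair as the same identity if cosine similarity >= threshold."""
    sim = emb_a @ emb_b / (np.linalg.norm(emb_a) * np.linalg.norm(emb_b))
    return bool(sim >= threshold)

rng = np.random.default_rng(42)
emb_a, emb_b = rng.normal(size=512), rng.normal(size=512)  # placeholder embeddings
print(verify(emb_a, emb_b))
```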
2024
Authors
Caldeira, E; Neto, PC; Gonçalves, T; Damer, N; Sequeira, AF; Cardoso, JS;
Publication
Science Talks
Abstract
2024
Authors
Beirão, MM; Matos, J; Gonçalves, T; Kase, C; Nakayama, LF; Freitas, Dd; Cardoso, JS;
Publication
IEEE International Conference on Bioinformatics and Biomedicine, BIBM 2024, Lisbon, Portugal, December 3-6, 2024
Abstract
Keratitis is an inflammatory corneal condition responsible for 10% of visual impairment in low- and middle-income countries (LMICs), with bacteria, fungi, or amoeba as the most common infection etiologies. While an accurate and timely diagnosis is crucial for treatment selection and patients' sight outcomes, the high cost and limited availability of laboratory diagnostics in LMICs mean that diagnosis is often made by clinical observation alone, despite its lower accuracy. In this study, we investigate and compare different deep learning approaches to diagnose the source of infection: 1) three separate binary models for infection type predictions; 2) a multitask model with a shared backbone and three parallel classification layers (Multitask V1); and 3) a multitask model with a shared backbone and a multi-head classification layer (Multitask V2). We used a private Brazilian cornea dataset to conduct the empirical evaluation. We achieved the best results with Multitask V2, with area under the receiver operating characteristic curve (AUROC) confidence intervals of 0.7413-0.7740 (bacteria), 0.8395-0.8725 (fungi), and 0.9448-0.9616 (amoeba). A statistical analysis of the impact of patient features on the models' performance revealed that sex significantly affects amoeba infection prediction, and age seems to affect fungi and bacteria predictions. © 2024 IEEE.
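As a companion to the abstract above, here is a minimal PyTorch sketch of the "Multitask V2" idea: a shared backbone feeding a single multi-head classification layer with one binary output per etiology. The ResNet-18 backbone, layer sizes, and loss are assumptions for illustration, not the authors' exact configuration.

```python
# Minimal sketch of a shared-backbone multitask classifier in the spirit of
# "Multitask V2" (shared backbone + one multi-head classification layer).
# Backbone choice and hyperparameters are illustrative assumptions.
import torch
import torch.nn as nn
import torchvision.models as models

class MultitaskV2(nn.Module):
    def __init__(self, num_tasks: int = 3):
        super().__init__()
        backbone = models.resnet18(weights=None)
        feat_dim = backbone.fc.in_features
        backbone.fc = nn.Identity()          # keep only the shared feature extractor
        self.backbone = backbone
        # One logit per task: bacteria, fungi, amoeba (each a binary prediction).
        self.head = nn.Linear(feat_dim, num_tasks)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.head(self.backbone(x))   # (batch, num_tasks) logits

model = MultitaskV2()
logits = model(torch.randn(4, 3, 224, 224))  # four RGB cornea images
loss = nn.BCEWithLogitsLoss()(logits, torch.randint(0, 2, (4, 3)).float())
```

Sharing the backbone lets the three etiology predictions reuse one feature extractor, which is the main contrast with the three-separate-binary-models baseline described in the abstract.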
2024
Authors
Beirão, MM; Matos, J; Gonçalves, T; Kase, C; Nakayama, LF; Freitas, Dd; Cardoso, JS;
Publication
CoRR
Abstract
2024
Authors
Eduard-Alexandru Bonci; Orit Kaidar-Person; Marília Antunes; Oriana Ciani; Helena Cruz; Rosa Di Micco; Oreste Davide Gentilini; Nicole Rotmensz; Pedro Gouveia; Jörg Heil; Pawel Kabata; Nuno Freitas; Tiago Gonçalves; Miguel Romariz; Helena Montenegro; Hélder P. Oliveira; Jaime S. Cardoso; Henrique Martins; Daniela Lopes; Marta Martinho; Ludovica Borsoi; Elisabetta Listorti; Carlos Mavioso; Martin Mika; André Pfob; Timo Schinköthe; Giovani Silva; Maria-Joao Cardoso;
Publication
Cancer Research
Abstract
2024
Authors
Zolfagharnasab, MH; Freitas, N; Gonçalves, T; Bonci, E; Mavioso, C; Cardoso, MJ; Oliveira, HP; Cardoso, JS;
Publication
Artificial Intelligence and Imaging for Diagnostic and Treatment Challenges in Breast Care - First Deep Breast Workshop, Deep-Breath 2024, Held in Conjunction with MICCAI 2024, Marrakesh, Morocco, October 10, 2024, Proceedings
Abstract
Breast cancer treatments often affect patients' body image, making aesthetic outcome predictions vital. This study introduces a Deep Learning (DL) multimodal retrieval pipeline using a dataset of 2,193 instances combining clinical attributes and RGB images of patients' upper torsos. We evaluate four retrieval techniques: Weighted Euclidean Distance (WED) with various configurations and a shallow Artificial Neural Network (ANN) for tabular data, pre-trained and fine-tuned Convolutional Neural Networks (CNNs) and Vision Transformers (ViTs), and a multimodal approach combining both data types. The dataset, categorised into Excellent/Good and Fair/Poor outcomes, is organised into over 20K triplets for training and testing. Results show fine-tuned multimodal ViTs notably enhance performance, achieving up to 73.85% accuracy and 80.62% Adjusted Discounted Cumulative Gain (ADCG). This framework not only aids in managing patient expectations by retrieving the most relevant post-surgical images but also promises broad applications in medical image analysis and retrieval. The main contributions of this paper are the development of a multimodal retrieval system for breast cancer patients based on post-surgery aesthetic outcome and the evaluation of different models on a new dataset annotated by clinicians for image retrieval. © The Author(s), under exclusive license to Springer Nature Switzerland AG 2025.
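To make the tabular retrieval baseline concrete, below is a small sketch of ranking gallery cases by Weighted Euclidean Distance (WED) over clinical attributes. The feature count, weights, and data are illustrative placeholders, not the paper's configuration.

```python
# Hedged sketch of a WED retrieval baseline: rank stored cases by weighted
# Euclidean distance between their clinical attribute vectors and a query.
# Feature dimensionality and weights are illustrative assumptions.
import numpy as np

def wed_rank(query: np.ndarray, gallery: np.ndarray, weights: np.ndarray) -> np.ndarray:
    """Return gallery indices sorted by weighted Euclidean distance to the query."""
    diffs = gallery - query                        # (n_gallery, n_features)
    dists = np.sqrt((weights * diffs ** 2).sum(axis=1))
    return np.argsort(dists)                       # closest cases first

rng = np.random.default_rng(0)
gallery = rng.normal(size=(2193, 8))               # e.g. 2,193 cases, 8 attributes
query = rng.normal(size=8)                         # attributes of the new patient
weights = np.ones(8)                               # uniform weighting as a default
print(wed_rank(query, gallery, weights)[:5])       # five most similar past cases
```

In the multimodal variant described in the abstract, such tabular distances would be combined with image-embedding similarities (e.g. from a fine-tuned ViT) before ranking.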