2019
Authors
Pernes, D; Cardoso, JS;
Publication
International Joint Conference on Neural Networks, IJCNN 2019 Budapest, Hungary, July 14-19, 2019
Abstract
2019
Authors
Araújo, RJ; Fernandes, K; Cardoso, JS;
Publication
IEEE Trans. Image Process.
Abstract
2015
Authors
Micó, L; Sanches, JM; Cardoso, JS;
Publication
Neurocomputing
Abstract
2023
Authors
Montenegro, H; Silva, W; Cardoso, JS;
Publication
MEDICAL APPLICATIONS WITH DISENTANGLEMENTS, MAD 2022
Abstract
The lack of interpretability of Deep Learning models hinders their deployment in clinical contexts. Case-based explanations can be used to justify these models' decisions and improve their trustworthiness. However, providing medical cases as explanations may threaten the privacy of patients. We propose a generative adversarial network to disentangle identity and medical features from images. Using this network, we can alter the identity of an image to anonymize it while preserving relevant explanatory features. As a proof of concept, we apply the proposed model to biometric and medical datasets, demonstrating its capacity to anonymize medical images while preserving explanatory evidence and a reasonable level of intelligibility. Finally, we demonstrate that the model is inherently capable of generating counterfactual explanations.
2022
Authors
Huber, M; Boutros, F; Luu, AT; Raja, K; Ramachandra, R; Damer, N; Neto, PC; Goncalves, T; Sequeira, AF; Cardoso, JS; Tremoco, J; Lourenco, M; Serra, S; Cermeno, E; Ivanovska, M; Batagelj, B; Kronovsek, A; Peer, P; Struc, V;
Publication
2022 IEEE INTERNATIONAL JOINT CONFERENCE ON BIOMETRICS (IJCB)
Abstract
This paper presents a summary of the Competition on Face Morphing Attack Detection Based on Privacy-aware Synthetic Training Data (SYN-MAD), held at the 2022 International Joint Conference on Biometrics (IJCB 2022). The competition attracted a total of 12 participating teams, from both academia and industry, based in 11 different countries. In the end, seven valid submissions were received from the participating teams and evaluated by the organizers. The competition was held to present and attract solutions that detect face morphing attacks while protecting people's privacy for ethical and legal reasons. To ensure this, the training data was limited to synthetic data provided by the organizers. The submitted solutions presented innovations that outperformed the considered baseline in many experimental settings. The evaluation benchmark is now available at: https://github.com/marcohuber/SYN-MAD-2022.
2021
Authors
de Sousa, IM; de Oliveira, M; Lisboa Filho, PN; dos Santos Cardoso, J;
Publication
IEEE International Conference on Bioinformatics and Biomedicine, BIBM 2021, Houston, TX, USA, December 9-12, 2021
Abstract
Multiple Sclerosis (MS) is a chronic inflammatory disorder that causes degeneration of axons in brain white matter and the spinal cord. Magnetic Resonance Imaging (MRI) is extensively used to identify MS lesions and evaluate the progression of the disease, but the manual identification and quantification of lesions are time-consuming and error-prone tasks. Thus, automated Deep Learning methods, in particular Convolutional Neural Networks (CNNs), are becoming popular for segmenting medical images. It has been noticed that the performance of these methods tends to decrease when they are applied to MRI acquired under different protocols. The aim of this work is to statistically evaluate the possible influence of domain adaptation during the training of CNN models for segmenting MS lesions in MRI. The segmentation models were tested on MRIs (FLAIR and T1) of 20 patients diagnosed with Multiple Sclerosis. The sets of images segmented by the different models were compared statistically through the metrics Dice Similarity Coefficient (DSC), Positive Predictive Value (PPV) and Absolute Volume Difference (AVD). The results indicate that domain-adapted training can improve the performance of automatic CNN segmentation methods and has great potential to be used in medical clinics in the future.
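The three evaluation metrics named in the abstract above have standard definitions on binary segmentation masks; as a reference, here is a minimal pure-Python sketch (the function name and flat-list input format are illustrative choices, not from the paper):

```python
def segmentation_metrics(pred, ref):
    """Compute DSC, PPV and AVD for two flattened binary masks (lists of 0/1).

    DSC = 2*|pred ∩ ref| / (|pred| + |ref|)   (overlap between masks)
    PPV = TP / (TP + FP)                       (fraction of predicted voxels that are correct)
    AVD = ||pred| - |ref|| / |ref|             (relative absolute volume difference)
    """
    tp = sum(1 for p, r in zip(pred, ref) if p and r)  # true-positive voxels
    n_pred, n_ref = sum(pred), sum(ref)                # predicted / reference volumes
    dsc = 2 * tp / (n_pred + n_ref)
    ppv = tp / n_pred
    avd = abs(n_pred - n_ref) / n_ref
    return dsc, ppv, avd

# Example: one voxel overlaps, both masks have two voxels
# segmentation_metrics([1, 1, 0, 0], [1, 0, 1, 0]) -> (0.5, 0.5, 0.0)
```

In practice these metrics are computed per patient over 3D lesion masks and then compared across models with statistical tests, as described in the abstract.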