2023
Authors
Oliveira, SP; Montezuma, D; Moreira, A; Oliveira, D; Neto, PC; Monteiro, A; Monteiro, J; Ribeiro, L; Goncalves, S; Pinto, IM; Cardoso, JS;
Publication
SCIENTIFIC REPORTS
Abstract
Cervical cancer is the fourth most common female cancer worldwide and the fourth leading cause of cancer-related death in women. Nonetheless, it is also among the most successfully preventable and treatable types of cancer, provided it is identified early and properly managed. As such, the detection of pre-cancerous lesions is crucial. These lesions are detected in the squamous epithelium of the uterine cervix and are graded as low- or high-grade squamous intraepithelial lesions, known as LSIL and HSIL, respectively. Due to their complex nature, this classification can become very subjective. Therefore, machine learning models, particularly ones operating directly on whole-slide images (WSI), can assist pathologists in this task. In this work, we propose a weakly-supervised methodology for grading cervical dysplasia, using different levels of training supervision, in an effort to gather a larger dataset without requiring all samples to be fully annotated. The framework comprises an epithelium segmentation step followed by a dysplasia classifier (non-neoplastic, LSIL, HSIL), making the slide assessment completely automatic, without the need for manual identification of epithelial areas. The proposed classification approach achieved a balanced accuracy of 71.07% and sensitivity of 72.18% in slide-level testing on 600 independent samples, which are publicly available upon reasonable request.
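The two-stage design described in this abstract (epithelium segmentation feeding a three-class dysplasia classifier) can be sketched roughly as follows. This is a minimal illustration under assumed tile sizes, placeholder architectures, and a simple worst-tile aggregation rule; it is not the authors' implementation.

```python
# Minimal sketch of a two-stage WSI assessment pipeline, assuming the slide has
# already been cut into RGB tiles. Architectures, the epithelium threshold, and
# the aggregation rule are illustrative placeholders.
import torch
import torch.nn as nn

CLASSES = ["non-neoplastic", "LSIL", "HSIL"]  # ordered by severity

class TinySegmenter(nn.Module):
    """Stand-in epithelium segmenter: per-pixel foreground probability."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
            nn.Conv2d(8, 1, 1), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.net(x)

class TinyClassifier(nn.Module):
    """Stand-in tile classifier: non-neoplastic / LSIL / HSIL logits."""
    def __init__(self):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(16, len(CLASSES)),
        )

    def forward(self, x):
        return self.backbone(x)

@torch.no_grad()
def grade_slide(tiles, seg, clf, epithelium_thr=0.2):
    """Classify only tiles with enough epithelium, then report the most
    severe tile label as the slide-level grade."""
    worst = 0
    for tile in tiles:
        x = tile.unsqueeze(0)                    # (1, 3, H, W)
        mask = seg(x)                            # (1, 1, H, W) in [0, 1]
        if mask.mean().item() < epithelium_thr:  # skip non-epithelial tiles
            continue
        pred = clf(x).argmax(dim=1).item()
        worst = max(worst, pred)
    return CLASSES[worst]

if __name__ == "__main__":
    tiles = [torch.rand(3, 64, 64) for _ in range(4)]  # dummy tiles
    print(grade_slide(tiles, TinySegmenter(), TinyClassifier()))
```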
2023
Authors
Caldeira, E; Neto, PC; Gonçalves, T; Damer, N; Sequeira, AF; Cardoso, JS;
Publication
31st European Signal Processing Conference, EUSIPCO 2023, Helsinki, Finland, September 4-8, 2023
Abstract
Morphing attacks keep threatening biometric systems, especially face recognition systems. Over time, they have become simpler to perform and more realistic; as such, the use of deep learning systems to detect these attacks has grown. At the same time, there is a constant concern regarding the lack of interpretability of deep learning models. Balancing performance and interpretability has been a difficult task for scientists. However, by leveraging domain information and providing some constraints, we have been able to develop IDistill, an interpretable method with state-of-the-art performance that provides information on both the identity separation on morph samples and their contribution to the final prediction. The domain information is learnt by an autoencoder and distilled to a classifier system in order to teach it to separate identity information. When compared to other methods in the literature, it outperforms them on three out of five databases and is competitive on the remaining two.
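The distillation idea described in this abstract can be sketched roughly as follows: a frozen encoder (standing in for the autoencoder) supplies identity-related features, and the morph-detection classifier is trained to predict bona fide vs. morph while matching those features. The architectures, loss weights, and feature dimension below are illustrative assumptions, not the published IDistill configuration.

```python
# Hedged sketch of feature distillation for morph detection: a frozen "teacher"
# encoder provides identity-related features; the student is trained with a
# morph-classification loss plus a feature-matching term.
import torch
import torch.nn as nn
import torch.nn.functional as F

FEAT_DIM = 64  # assumed feature size

def make_encoder():
    return nn.Sequential(
        nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        nn.Linear(16, FEAT_DIM),
    )

class MorphDetector(nn.Module):
    """Student: predicts morph probability and an identity-feature estimate."""
    def __init__(self):
        super().__init__()
        self.encoder = make_encoder()
        self.head = nn.Linear(FEAT_DIM, 1)

    def forward(self, x):
        feat = self.encoder(x)
        return torch.sigmoid(self.head(feat)), feat

def distillation_step(student, teacher_encoder, images, labels, optimizer, alpha=0.5):
    """One training step: BCE on morph labels + MSE to the teacher's features."""
    teacher_encoder.eval()
    with torch.no_grad():
        target_feat = teacher_encoder(images)
    prob, feat = student(images)
    loss = F.binary_cross_entropy(prob.squeeze(1), labels) + alpha * F.mse_loss(feat, target_feat)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

if __name__ == "__main__":
    teacher = make_encoder()                  # stands in for the autoencoder's encoder
    student = MorphDetector()
    opt = torch.optim.Adam(student.parameters(), lr=1e-3)
    imgs = torch.rand(8, 3, 64, 64)
    lbls = torch.randint(0, 2, (8,)).float()  # 1 = morph, 0 = bona fide
    print(distillation_step(student, teacher, imgs, lbls, opt))
```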
2021
Authors
Neto, PC;
Publication
CoRR
Abstract
2023
Authors
Neto, PC; Montezuma, D; de Oliveira, SP; Oliveira, D; Fraga, J; Monteiro, A; Monteiro, JC; Ribeiro, L; Gonçalves, S; Reinhard, S; Zlobec, I; Pinto, IM; Cardoso, JS;
Publication
CoRR
Abstract
2022
Authors
Neto, PC; Gonçalves, T; Pinto, JR; Silva, W; Sequeira, AF; Ross, A; Cardoso, JS;
Publication
CoRR
Abstract
2022
Authors
Neto, PC; Boutros, F; Pinto, JR; Damer, N; Sequeira, AF; Cardoso, JS; Bengherabi, M; Bousnat, A; Boucheta, S; Hebbadj, N; Erakin, ME; Demir, U; Ekenel, HK; Vidal, PBD; Menotti, D;
Publication
2022 IEEE INTERNATIONAL JOINT CONFERENCE ON BIOMETRICS (IJCB)
Abstract
This work summarizes the IJCB Occluded Face Recognition Competition 2022 (IJCB-OCFR-2022), embraced by the 2022 International Joint Conference on Biometrics (IJCB 2022). OCFR-2022 attracted a total of three participating teams, all from academia. In total, six valid submissions were received and evaluated by the organizers. The competition was held to address the challenge of face recognition in the presence of severe face occlusions. The participants were free to use any training data, and the testing data was built by the organizers by synthetically occluding parts of the face images of a well-known dataset. The submitted solutions presented innovations and performed very competitively against the considered baseline. A major output of this competition is a challenging, realistic, diverse, and publicly available occluded face recognition benchmark with well-defined evaluation protocols.
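As an illustration of the kind of synthetic occlusion mentioned in this abstract, the sketch below covers part of an aligned face crop with a flat patch. The occluder shape, position, and color are arbitrary assumptions and do not reflect the competition's actual protocol.

```python
# Illustrative sketch of creating an occluded test image by synthetically
# covering part of a face crop. Not the organizers' generation protocol.
import numpy as np

def occlude_lower_face(face, fraction=0.4, value=128):
    """Cover the bottom `fraction` of an HxWx3 face crop with a flat patch
    (a crude stand-in for a mask or scarf occluder)."""
    out = face.copy()
    h = face.shape[0]
    out[int(h * (1 - fraction)):, :, :] = value
    return out

if __name__ == "__main__":
    face = np.random.randint(0, 256, (112, 112, 3), dtype=np.uint8)  # dummy aligned crop
    occluded = occlude_lower_face(face)
    print(occluded.shape, occluded[-1, 0])  # bottom rows are now the flat patch
```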