2023
Authors
Oliveira, SP; Montezuma, D; Moreira, A; Oliveira, D; Neto, PC; Monteiro, A; Monteiro, J; Ribeiro, L; Gonçalves, S; Pinto, IM; Cardoso, JS;
Publication
Scientific Reports
Abstract
Cervical cancer is the fourth most common female cancer worldwide and the fourth leading cause of cancer-related death in women. Nonetheless, it is also among the most successfully preventable and treatable types of cancer, provided it is identified early and properly managed. As such, the detection of pre-cancerous lesions is crucial. These lesions are detected in the squamous epithelium of the uterine cervix and are graded as low- or high-grade intraepithelial squamous lesions, known as LSIL and HSIL, respectively. Due to their complex nature, this classification can become very subjective. Therefore, the development of machine learning models, particularly directly on whole-slide images (WSI), can assist pathologists in this task. In this work, we propose a weakly-supervised methodology for grading cervical dysplasia, using different levels of training supervision, in an effort to gather a bigger dataset without the need to have all samples fully annotated. The framework comprises an epithelium segmentation step followed by a dysplasia classifier (non-neoplastic, LSIL, HSIL), making the slide assessment completely automatic, without the need for manual identification of epithelial areas. The proposed classification approach achieved a balanced accuracy of 71.07% and sensitivity of 72.18% when tested at the slide level on 600 independent samples, which are publicly available upon reasonable request. © 2023, The Author(s).
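The two-stage pipeline described in this abstract (segment epithelium, then grade dysplasia and aggregate to a slide label) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the tile fields, the 0.5 epithelium threshold, and the worst-grade aggregation rule are all assumptions chosen for the example.

```python
import numpy as np

# Slide-level classes, ordered from least to most severe.
CLASSES = ["non-neoplastic", "LSIL", "HSIL"]

def segment_epithelium(tiles, threshold=0.5):
    """Keep only tiles whose (placeholder) epithelium score passes a threshold."""
    return [t for t in tiles if t["epithelium_score"] >= threshold]

def classify_slide(tiles):
    """Grade each epithelial tile, then aggregate to a slide label by taking
    the worst (highest) grade — a common weak-supervision aggregation rule."""
    epithelial = segment_epithelium(tiles)
    if not epithelial:
        return "non-neoplastic"
    grades = [int(np.argmax(t["dysplasia_logits"])) for t in epithelial]
    return CLASSES[max(grades)]

# Toy slide: three tiles with placeholder scores and per-class logits.
slide = [
    {"epithelium_score": 0.9, "dysplasia_logits": [0.7, 0.2, 0.1]},
    {"epithelium_score": 0.8, "dysplasia_logits": [0.1, 0.8, 0.1]},
    {"epithelium_score": 0.2, "dysplasia_logits": [0.0, 0.1, 0.9]},  # non-epithelial, filtered out
]
print(classify_slide(slide))  # worst epithelial grade -> "LSIL"
```

Filtering before classification is what makes the assessment fully automatic: no manual delineation of epithelial areas is needed before grading.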
2023
Authors
Ribeiro, M; Nunes, I; Castro, L; Costa-Santos, C; Henriques, TS;
Publication
FRONTIERS IN PUBLIC HEALTH
Abstract
Introduction: Perinatal asphyxia is one of the most frequent causes of neonatal mortality, affecting approximately four million newborns worldwide each year and causing one million deaths. One of the main reasons for this high incidence is the lack of consensual methods for early diagnosis of this pathology. Estimating risk-appropriate health care for mother and baby is essential for increasing the quality of the health care system. Thus, it is necessary to investigate models that improve the prediction of perinatal asphyxia. Access to cardiotocographic signals (CTGs) in conjunction with various clinical parameters can be crucial for the development of a successful model.
Objectives: This exploratory work aims to develop predictive models of perinatal asphyxia based on clinical parameters and fetal heart rate (fHR) indices.
Methods: Data from single gestations in a retrospective single-center study at Centro Hospitalar e Universitário do Porto de São João (CHUSJ) between 2010 and 2018 were analyzed. The CTGs were acquired and analyzed by Omniview-SisPorto, estimating several fHR features. The clinical variables were obtained from the electronic clinical records stored by ObsCare. Entropy and compression characterized the complexity of the fHR time series. The contribution of these variables to the prediction of perinatal asphyxia was assessed with binary logistic regression (BLR) and Naive Bayes (NB) models.
Results: The data consisted of 517 cases, 15 of them pathological. The asphyxia prediction models showed promising results, with an area under the receiver operating characteristic curve (AUC) >70%. In the NB approaches, the best models combined clinical and SisPorto features. The best model overall was the univariate BLR with the variable compression ratio at scale 2 (CR2), with an AUC of 94.93% [94.55; 95.31%].
Conclusion: Both BLR and Bayesian models have advantages and disadvantages. The model with the best performance predicting perinatal asphyxia was the univariate BLR with the CR2 variable, demonstrating the importance of non-linear indices in perinatal asphyxia detection. Future studies should explore decision support systems to detect sepsis, including clinical and CTG features (linear and non-linear).
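The best-performing model in this abstract is a univariate binary logistic regression on a single compression-based feature (CR2). A minimal sketch of that setup is shown below; the data here are synthetic placeholders (the distributions, class sizes, and resulting AUC are illustrative assumptions, not the study's results).

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Synthetic stand-in for the compression ratio at scale 2 (CR2):
# pathological cases drawn from a shifted distribution.
n_normal, n_path = 500, 15
cr2 = np.concatenate([rng.normal(0.50, 0.05, n_normal),
                      rng.normal(0.65, 0.05, n_path)])
y = np.concatenate([np.zeros(n_normal), np.ones(n_path)])

# Univariate binary logistic regression on the single CR2 feature.
model = LogisticRegression().fit(cr2.reshape(-1, 1), y)
scores = model.predict_proba(cr2.reshape(-1, 1))[:, 1]
auc = roc_auc_score(y, scores)
print(f"AUC = {auc:.3f}")
```

With only one predictor, the AUC depends solely on how well that feature's values rank pathological above normal cases, which is why a single well-chosen non-linear index can carry the model.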
2023
Authors
Coelho, A; Campos, R; Ricardo, M;
Publication
AD HOC NETWORKS
Abstract
2023
Authors
Cruz, R; Silva, DTE; Goncalves, T; Carneiro, D; Cardoso, JS;
Publication
SENSORS
Abstract
2023
Authors
Gouveia, M; Castro, E; Rebelo, A; Cardoso, JS; Patrão, B;
Publication
Proceedings of the 16th International Joint Conference on Biomedical Engineering Systems and Technologies, BIOSTEC 2023, Volume 4: BIOSIGNALS, Lisbon, Portugal, February 16-18, 2023.
Abstract
2023
Authors
Patricio, C; Neves, JC;
Publication
EXPERT SYSTEMS WITH APPLICATIONS
Abstract
Zero-shot learning enables the recognition of classes not seen during training through the use of semantic information comprising a visual description of the class, either in textual or attribute form. Despite the advances in the performance of zero-shot learning methods, most works do not explicitly exploit the correlation between the visual attributes of the image and their corresponding semantic attributes for learning discriminative visual features. In this paper, we introduce an attention-based strategy for deriving features from the image regions regarding the most prominent attributes of the image class. In particular, we train a Convolutional Neural Network (CNN) for image attribute prediction and use a gradient-weighted method for deriving the attention activation maps of the most salient image attributes. These maps are then incorporated into the feature extraction process of Zero-Shot Learning (ZSL) approaches to improve the discriminability of the features produced through the implicit inclusion of semantic information. For experimental validation, the performance of state-of-the-art ZSL methods was determined using features with and without the proposed attention model. Surprisingly, we discover that the proposed strategy degrades the performance of ZSL methods on classical ZSL datasets (AWA2), but it can significantly improve performance when using face datasets. Our experiments show that these results are a consequence of the interpretability of the dataset attributes, suggesting that existing ZSL dataset attributes are, in most cases, difficult to identify in the image. Source code is available at https://github.com/CristianoPatricio/SGAM.
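The gradient-weighted attention maps mentioned in the abstract can be sketched in the Grad-CAM style: average the gradients of an attribute score over each feature channel, use those averages as channel weights, and apply a ReLU. This is a generic illustration under that assumption, not the SGAM implementation; the function name and tensor shapes are placeholders.

```python
import numpy as np

def gradcam_map(feature_maps, gradients):
    """Gradient-weighted attention map (Grad-CAM-style sketch).

    feature_maps: (C, H, W) activations from a CNN layer.
    gradients:    (C, H, W) gradients of an attribute score
                  w.r.t. those activations.
    Returns an (H, W) map normalized to [0, 1].
    """
    # Global-average-pool the gradients to one weight per channel.
    weights = gradients.mean(axis=(1, 2))                              # (C,)
    # Weighted sum of the feature maps across channels, then ReLU.
    cam = np.maximum((weights[:, None, None] * feature_maps).sum(axis=0), 0.0)
    # Normalize so the map can serve as an attention mask over the image.
    if cam.max() > 0:
        cam /= cam.max()
    return cam

# Toy usage with random activations and gradients.
rng = np.random.default_rng(0)
fmap = rng.normal(size=(8, 7, 7))
grad = rng.normal(size=(8, 7, 7))
att = gradcam_map(fmap, grad)
print(att.shape)
```

Such a map highlights the image regions most responsible for a predicted attribute, which is what allows the feature extractor to be steered toward attribute-relevant regions.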