2022
Authors
Vilas-Boas, MD; Rocha, AP; Cardoso, MN; Fernandes, JM; Coelho, T; Cunha, JPS;
Publication
FRONTIERS IN NEUROLOGY
Abstract
In the published article, there was an error in Table 2 as published. The units of the Total body center of mass sway in x-axis (TBCMx) and y-axis (TBCMy) were shown in mm when they should be in cm. The corrected Table 2 and its caption appear below. In the published article, there was an error in Table 3 as published. The units of the Total body center of mass sway in x-axis (TBCMx) and y-axis (TBCMy) were shown in mm. The correct unit is cm. The corrected Table 3 and its caption appear below. In the published article, there was an error in Figure 3 as published. The units of the Total body center of mass sway in x-axis were shown in mm in the vertical axis of the plot. The correct unit is cm. The corrected Figure 3 and its caption appear below. In the published article, there was an error in Supplementary Table S.I. The units of the Total body center of mass sway in x-axis (TBCMx) and y-axis (TBCMy) were shown in mm. The correct unit is cm. The correct material statement appears below. In the published article, there was a mistake in the description of the computation of one of the assessed parameters (total body center of mass). A correction has been made to “Data Processing,” Paragraph 3: “For each gait cycle, we computed the 24 spatiotemporal and kinematic gait parameters listed in Table 2 and defined in (15). The total body center of mass (TBCM) sway was computed as the standard deviation of the distance (in the x/y-axis, i.e., medial-lateral and vertical directions) of the total body center of mass (TBCM), in relation to the RGBD sensor’s coordinate system, for all gait cycle frames. For each frame, TBCM’s position is the mean position of all body segments’ CM, which was obtained according to (21).” The authors apologize for these errors and state that this does not change the scientific conclusions of the article in any way. The original article has been updated. © 2022 Vilas-Boas, Rocha, Cardoso, Fernandes, Coelho and Cunha.
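The corrected definition maps directly to a short computation. Below is a minimal sketch in Python, assuming per-frame segment centers of mass have already been extracted from the RGBD data; the array layout and function name are illustrative assumptions, not the authors' code.

```python
# Minimal sketch of the corrected TBCM sway computation (assumed layout,
# not the authors' implementation).
import numpy as np

def tbcm_sway(segment_cm: np.ndarray) -> tuple[float, float]:
    """Compute TBCM sway (cm) along x and y for one gait cycle.

    segment_cm: array of shape (n_frames, n_segments, 3) holding each body
    segment's center of mass, in cm, in the RGBD sensor's coordinate system.
    """
    # Per-frame TBCM: mean position of all body segments' centers of mass.
    tbcm = segment_cm.mean(axis=1)  # shape (n_frames, 3)
    # Sway: standard deviation of the TBCM position along the
    # medial-lateral (x) and vertical (y) directions across all frames.
    tbcm_x_sway = float(tbcm[:, 0].std())
    tbcm_y_sway = float(tbcm[:, 1].std())
    return tbcm_x_sway, tbcm_y_sway
```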
2022
Authors
Neto, PC; Sequeira, AF; Cardoso, JS;
Publication
2022 IEEE/CVF WINTER CONFERENCE ON APPLICATIONS OF COMPUTER VISION WORKSHOPS (WACVW 2022)
Abstract
Presentation attacks are recurrent threats to biometric systems, where impostors attempt to bypass these systems. Humans often use background information as contextual cues for their visual system. Yet, regarding face-based systems, the background is often discarded, since face presentation attack detection (PAD) models are mostly trained with face crops. This work presents a comparative study of face PAD models (including multi-task learning, adversarial training and dynamic frame selection) in two settings: with and without crops. The results show that the performance is consistently better when the background is present in the images. The proposed multi-task methodology beats the state-of-the-art results on the ROSE-Youtu dataset by a large margin, with an equal error rate of 0.2%. Furthermore, we analyze the models' predictions with Grad-CAM++ with the aim of investigating to what extent the models focus on background elements that are known to be useful for human inspection. From this analysis we can conclude that the background cues are not relevant across all the attacks, which shows the capability of the model to leverage the background information only when necessary.
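For reference, the reported equal error rate (EER) is the operating point where the false acceptance rate equals the false rejection rate. Below is a minimal sketch of a generic EER computation in Python, assuming per-sample attack scores and binary labels; this is a standard metric implementation, not the authors' evaluation code.

```python
# Generic EER computation (standard metric sketch, not the paper's code).
import numpy as np
from sklearn.metrics import roc_curve

def equal_error_rate(labels: np.ndarray, scores: np.ndarray) -> float:
    """Return the EER given binary labels (1 = attack) and attack scores."""
    fpr, tpr, _ = roc_curve(labels, scores)
    frr = 1.0 - tpr                          # false rejection rate
    idx = np.nanargmin(np.abs(fpr - frr))    # threshold where FAR ~ FRR
    return float((fpr[idx] + frr[idx]) / 2.0)
```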
2022
Authors
Domingues, I; Sequeira, AF;
Publication
COMPUTATIONAL AND MATHEMATICAL ORGANIZATION THEORY
Abstract
2022
Authors
Gouveia, PF; Oliveira, HP; Monteiro, JP; Teixeira, JF; Silva, NL; Pinto, D; Mavioso, C; Anacleto, J; Martinho, M; Duarte, I; Cardoso, JS; Cardoso, F; Cardoso, MJ;
Publication
EUROPEAN SURGICAL RESEARCH
Abstract
Introduction: Breast volume estimation is considered crucial for breast cancer surgery planning. A single, easy, and reproducible method to estimate breast volume is not available. This study aims to evaluate, in patients proposed for mastectomy, the accuracy of breast volume calculation from a low-cost 3D surface scan (Microsoft Kinect) compared to breast MRI and the water displacement technique. Material and Methods: Patients with a Tis/T1-T3 breast cancer proposed for mastectomy between July 2015 and March 2017 were assessed for inclusion in the study. Breast volume calculations were performed using the 3D surface scan, breast MRI, and the water displacement technique. Agreement between the volumes obtained with these methods was assessed with the Spearman and Pearson correlation coefficients. Results: Eighteen patients with invasive breast cancer were included in the study and submitted to mastectomy. The level of agreement of the 3D breast volume compared to surgical specimen and breast MRI volumes was evaluated. For mastectomy specimen volume, an average (standard deviation) of 0.823 (0.027) and 0.875 (0.026) was obtained for the Pearson and Spearman correlations, respectively. With respect to MRI annotation, we obtained 0.828 (0.038) and 0.715 (0.018). Discussion: Although values obtained by both methodologies still differ, the strong linear correlation coefficient suggests that 3D breast volume measurement using a low-cost surface scan device is feasible and can approximate both the MRI breast volume and the mastectomy specimen volume with sufficient accuracy. Conclusion: 3D breast volume measurement using a depth-sensor low-cost surface scan device is feasible and can parallel breast MRI and mastectomy specimen volumes with sufficient accuracy. Differences between methods need further development to reach clinical applicability. A possible approach could be the fusion of breast MRI and the 3D surface scan to harmonize anatomic limits and improve volume delimitation.
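Below is a minimal sketch of the agreement analysis in Python, assuming paired volume measurements are available as arrays; the variable names and example values are hypothetical illustrations, not data from the study.

```python
# Sketch of the correlation-based agreement analysis (hypothetical values,
# not study data).
import numpy as np
from scipy.stats import pearsonr, spearmanr

kinect_volumes = np.array([410.0, 523.0, 389.0, 612.0])    # hypothetical 3D-scan volumes (ml)
specimen_volumes = np.array([432.0, 551.0, 402.0, 640.0])  # hypothetical specimen volumes (ml)

pearson_r, _ = pearsonr(kinect_volumes, specimen_volumes)
spearman_r, _ = spearmanr(kinect_volumes, specimen_volumes)
print(f"Pearson r = {pearson_r:.3f}, Spearman rho = {spearman_r:.3f}")
```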
2022
Authors
Lopes, E; Caldeiras, C; Rito, M; Chamadoira, C; Santos, A; Cunha, JPS; Rego, R;
Publication
EPILEPSIA
Abstract
2022
Authors
Maximino, J; Coimbra, MT; Pedrosa, J;
Publication
44th Annual International Conference of the IEEE Engineering in Medicine & Biology Society, EMBC 2022, Glasgow, Scotland, United Kingdom, July 11-15, 2022
Abstract
The coronavirus disease 2019 (COVID-19) evolved into a global pandemic, responsible for a significant number of infections and deaths. In this scenario, point-of-care ultrasound (POCUS) has emerged as a viable and safe imaging modality. Computer vision (CV) solutions have been proposed to aid clinicians in POCUS image interpretation, namely detection/segmentation of structures and image/patient classification, but relevant challenges still remain. As such, the aim of this study is to develop CV algorithms, using deep learning techniques, to create tools that can aid doctors in the diagnosis of viral and bacterial pneumonia (VP and BP) through POCUS exams. To do so, convolutional neural networks were designed for classification tasks. The architectures chosen to build these models were VGG16, ResNet50, DenseNet169 and MobileNetV2. Patients' images were divided into three classes: healthy (HE), BP and VP (which includes COVID-19). Through a comparative study based on several performance metrics, the model based on the DenseNet169 architecture was designated as the best performing model, achieving an average accuracy of 78% over the five iterations of 5-fold cross-validation. Given that the currently available POCUS datasets for COVID-19 are still limited, the training of the models was negatively affected, and the models were not tested on an independent dataset. Furthermore, it was also not possible to perform lesion detection tasks. Nonetheless, in order to provide explainability and understanding of the models, Gradient-weighted Class Activation Mapping (GradCAM) was used as a tool to highlight the regions most relevant to classification. Clinical relevance - Reveals the potential of POCUS to support COVID-19 screening. The results are very promising, although the dataset is limited.
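Below is a minimal sketch in Python of a DenseNet169-based three-class classifier of the kind described, assuming ImageNet pretraining and 224x224 RGB inputs; the classification head and hyperparameters are illustrative assumptions, not the authors' configuration.

```python
# Sketch of a DenseNet169 three-class POCUS classifier (assumed head and
# hyperparameters, not the paper's exact setup).
import tensorflow as tf
from tensorflow.keras import layers, models

def build_densenet169_classifier(n_classes: int = 3) -> tf.keras.Model:
    # ImageNet-pretrained backbone without the original classification head.
    base = tf.keras.applications.DenseNet169(
        include_top=False, weights="imagenet", input_shape=(224, 224, 3))
    x = layers.GlobalAveragePooling2D()(base.output)
    x = layers.Dropout(0.3)(x)                           # assumed regularization
    out = layers.Dense(n_classes, activation="softmax")(x)  # HE / BP / VP
    model = models.Model(base.input, out)
    model.compile(optimizer="adam",
                  loss="categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```

In a 5-fold cross-validation setup such as the one reported, this builder would be called once per fold so each model is trained from the same initialization.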