Publications

Publications by CTM

2022

Lesion Volume Quantification Using Two Convolutional Neural Networks in MRIs of Multiple Sclerosis Patients

Authors
de Oliveira, M; Piacenti Silva, M; da Rocha, FCG; Santos, JM; Cardoso, JD; Lisboa, PN;

Publication
DIAGNOSTICS

Abstract
Background: Multiple sclerosis (MS) is a neurologic disease of the central nervous system which affects almost three million people worldwide. MS is characterized by a demyelination process that leads to brain lesions, allowing these affected areas to be visualized with magnetic resonance imaging (MRI). Deep learning techniques, especially convolutional neural networks (CNNs), have become frequently used methods that perform feature self-learning and enable segmentation of structures in the image, which is useful for quantitative analysis of MRIs, including quantitative analysis of MS. To obtain quantitative information about lesion volume, it is important to perform proper image preprocessing and accurate segmentation. Therefore, we propose a method for volumetric quantification of lesions on MRIs of MS patients using automatic segmentation of the brain and lesions by two CNNs. Methods: We used CNNs at two different moments: the first to perform brain extraction, and the second for lesion segmentation. This study includes four independent MRI datasets: one for training the brain segmentation models, two for training the lesion segmentation model, and one for testing. Results: The proposed brain detection architecture using binary cross-entropy as the loss function achieved a 0.9786 Dice coefficient, 0.9969 accuracy, 0.9851 precision, 0.9851 sensitivity, and 0.9985 specificity. In the second proposed framework for brain lesion segmentation, we obtained a 0.8893 Dice coefficient, 0.9996 accuracy, 0.9376 precision, 0.8609 sensitivity, and 0.9999 specificity. After quantifying the lesion volume of all patients from the test group using our proposed method, we obtained a mean value of 17,582 mm³. Conclusions: We concluded that the proposed algorithm achieved accurate lesion detection and segmentation with reproducibility corresponding to state-of-the-art software tools and manual segmentation. We believe that this quantification method can add value to treatment monitoring and routine clinical evaluation of MS patients.
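As an illustration of the evaluation described above, the following is a minimal sketch (not the authors' code) of how the reported segmentation metrics and a voxel-based lesion volume can be computed from binary masks with NumPy; mask and voxel-spacing inputs are hypothetical.

```python
# Illustrative sketch, not the paper's implementation: Dice, accuracy,
# precision, sensitivity, specificity and lesion volume from binary masks.
import numpy as np

def segmentation_metrics(pred: np.ndarray, truth: np.ndarray) -> dict:
    """Compare two binary masks of the same shape (assumes non-degenerate masks)."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    tp = np.logical_and(pred, truth).sum()
    tn = np.logical_and(~pred, ~truth).sum()
    fp = np.logical_and(pred, ~truth).sum()
    fn = np.logical_and(~pred, truth).sum()
    return {
        "dice": 2 * tp / (2 * tp + fp + fn),
        "accuracy": (tp + tn) / (tp + tn + fp + fn),
        "precision": tp / (tp + fp),
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
    }

def lesion_volume_mm3(mask: np.ndarray, voxel_spacing=(1.0, 1.0, 1.0)) -> float:
    """Lesion volume = number of positive voxels * volume of one voxel (in mm)."""
    return float(mask.astype(bool).sum() * np.prod(voxel_spacing))
```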

2022

Tackling unsupervised multi-source domain adaptation with optimism and consistency

Authors
Pernes, D; Cardoso, JS;

Publication
EXPERT SYSTEMS WITH APPLICATIONS

Abstract
It has been known for a while that the problem of multi-source domain adaptation can be regarded as a single source domain adaptation task where the source domain corresponds to a mixture of the original source domains. Nonetheless, how to adjust the mixture distribution weights remains an open question. Moreover, most existing work on this topic focuses only on minimizing the error on the source domains and achieving domain-invariant representations, which is insufficient to ensure low error on the target domain. In this work, we present a novel framework that addresses both problems and beats the current state of the art by using a mildly optimistic objective function and consistency regularization on the target samples.
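A hedged sketch of the two ingredients named in the abstract (a weighted mixture over source domains and consistency regularization on target samples); the function names and the choice of a KL-based consistency term are illustrative assumptions, not the paper's exact objective.

```python
# Illustrative sketch, not the paper's implementation.
import torch
import torch.nn.functional as F

def mixture_source_loss(logits_per_source, labels_per_source, mixture_weights):
    """Weighted sum of per-source cross-entropy losses. mixture_weights is a
    1-D tensor summing to 1; how to set these weights is the open question
    the paper addresses."""
    losses = torch.stack([
        F.cross_entropy(logits, labels)
        for logits, labels in zip(logits_per_source, labels_per_source)
    ])
    return (mixture_weights * losses).sum()

def consistency_loss(logits_clean, logits_augmented):
    """Penalise disagreement between predictions on a target sample and on a
    perturbed view of it (one common form of consistency regularization)."""
    p_clean = F.softmax(logits_clean, dim=-1).detach()
    log_p_aug = F.log_softmax(logits_augmented, dim=-1)
    return F.kl_div(log_p_aug, p_clean, reduction="batchmean")
```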

2022

Streamlining Action Recognition in Autonomous Shared Vehicles with an Audiovisual Cascade Strategy

Authors
Pinto, JR; Carvalho, P; Pinto, C; Sousa, A; Capozzi, L; Cardoso, JS;

Publication
PROCEEDINGS OF THE 17TH INTERNATIONAL JOINT CONFERENCE ON COMPUTER VISION, IMAGING AND COMPUTER GRAPHICS THEORY AND APPLICATIONS (VISAPP), VOL 5

Abstract
With the advent of self-driving cars, and big companies such as Waymo or Bosch pushing forward into fully driverless transportation services, the in-vehicle behaviour of passengers must be monitored to ensure safety and comfort. The use of audio-visual information is attractive for its spatio-temporal richness as well as its non-invasive nature, but it faces the likely constraints posed by available hardware and energy consumption. Hence, new strategies are required to improve the usage of these scarce resources. We propose the processing of audio and visual data in a cascade pipeline for in-vehicle action recognition. The data is processed by modality-specific sub-modules, with subsequent ones being used when a confident classification is not reached. Experiments show an interesting accuracy-acceleration trade-off when compared with a parallel pipeline with late fusion, presenting potential for industrial applications on embedded devices.
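A minimal sketch of the cascade idea described above; the module names, ordering, and threshold are hypothetical and only illustrate "use the next sub-module when the previous one is not confident", not the paper's configuration.

```python
# Illustrative sketch, not the paper's implementation: a cheap first-stage
# classifier answers when confident, otherwise a costlier stage is invoked.
import numpy as np

def cascade_predict(audio_features, video_frames, audio_model, video_model,
                    confidence_threshold=0.9):
    """Return (predicted_class, used_video) for one in-vehicle event.
    audio_model and video_model are callables returning class probabilities."""
    audio_probs = audio_model(audio_features)
    if np.max(audio_probs) >= confidence_threshold:
        return int(np.argmax(audio_probs)), False   # confident: stop early
    video_probs = video_model(video_frames)         # fall back to the visual module
    return int(np.argmax(video_probs)), True
```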

2022

Myope Models - Are face presentation attack detection models short-sighted?

Authors
Neto, PC; Sequeira, AF; Cardoso, JS;

Publication
2022 IEEE/CVF WINTER CONFERENCE ON APPLICATIONS OF COMPUTER VISION WORKSHOPS (WACVW 2022)

Abstract
Presentation attacks are recurrent threats to biometric systems, where impostors attempt to bypass these systems. Humans often use background information as contextual cues for their visual system. Yet, regarding face-based systems, the background is often discarded, since face presentation attack detection (PAD) models are mostly trained with face crops. This work presents a comparative study of face PAD models (including multi-task learning, adversarial training and dynamic frame selection) in two settings: with and without crops. The results show that the performance is consistently better when the background is present in the images. The proposed multi-task methodology beats the state-of-the-art results on the ROSE-Youtu dataset by a large margin, with an equal error rate of 0.2%. Furthermore, we analyze the models' predictions with Grad-CAM++ with the aim of investigating to what extent the models focus on background elements that are known to be useful for human inspection. From this analysis, we conclude that the background cues are not relevant across all the attacks, thus showing the capability of the model to leverage the background information only when necessary.
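For context on the reported metric, a generic sketch of how an equal error rate (EER), such as the 0.2% quoted above, is typically computed from per-sample scores; the score convention (higher means "attack") is an assumption, and this is standard evaluation code rather than the authors'.

```python
# Illustrative sketch: EER as the operating point where the attack-miss rate
# equals the bona-fide false-alarm rate.
import numpy as np

def equal_error_rate(scores: np.ndarray, labels: np.ndarray) -> float:
    """scores: higher means more likely an attack; labels: 1 attack, 0 bona fide."""
    best_gap, eer = np.inf, 1.0
    for t in np.unique(scores):
        flagged = scores >= t
        miss_rate = np.mean(~flagged[labels == 1])    # attacks not flagged
        false_alarm = np.mean(flagged[labels == 0])   # bona fide flagged as attack
        gap = abs(miss_rate - false_alarm)
        if gap < best_gap:
            best_gap, eer = gap, (miss_rate + false_alarm) / 2
    return float(eer)
```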

2022

3D Breast Volume Estimation

Authors
Gouveia, PF; Oliveira, HP; Monteiro, JP; Teixeira, JF; Silva, NL; Pinto, D; Mavioso, C; Anacleto, J; Martinho, M; Duarte, I; Cardoso, JS; Cardoso, F; Cardoso, MJ;

Publication
EUROPEAN SURGICAL RESEARCH

Abstract
Introduction: Breast volume estimation is considered crucial for breast cancer surgery planning. A single, easy, and reproducible method to estimate breast volume is not available. This study aims to evaluate, in patients proposed for mastectomy, the accuracy of the calculation of breast volume from a low-cost 3D surface scan (Microsoft Kinect) compared to the breast MRI and water displacement technique. Material and Methods: Patients with a Tis/T1-T3 breast cancer proposed for mastectomy between July 2015 and March 2017 were assessed for inclusion in the study. Breast volume calculations were performed using a 3D surface scan and the breast MRI and water displacement technique. Agreement between volumes obtained with both methods was assessed with the Spearman and Pearson correlation coefficients. Results: Eighteen patients with invasive breast cancer were included in the study and submitted to mastectomy. The level of agreement of the 3D breast volume compared to surgical specimens and breast MRI volumes was evaluated. For mastectomy specimen volume, an average (standard deviation) of 0.823 (0.027) and 0.875 (0.026) was obtained for the Pearson and Spearman correlations, respectively. With respect to MRI annotation, we obtained 0.828 (0.038) and 0.715 (0.018). Discussion: Although values obtained by both methodologies still differ, the strong linear correlation coefficient suggests that 3D breast volume measurement using a low-cost surface scan device is feasible and can approximate both the MRI breast volume and mastectomy specimen with sufficient accuracy. Conclusion: 3D breast volume measurement using a depth-sensor low-cost surface scan device is feasible and can parallel MRI breast and mastectomy specimen volumes with enough accuracy. Differences between methods need further development to reach clinical applicability. A possible approach could be the fusion of breast MRI and the 3D surface scan to harmonize anatomic limits and improve volume delimitation.
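A minimal sketch of the agreement analysis described above, using SciPy's standard Pearson and Spearman correlation functions; the volume values below are made up for illustration and are not the study's data.

```python
# Illustrative sketch, not the study's code or data.
from scipy.stats import pearsonr, spearmanr

kinect_volumes_ml = [410.0, 515.0, 620.0, 380.0, 700.0]    # 3D surface scan estimates
specimen_volumes_ml = [432.0, 505.0, 650.0, 401.0, 688.0]  # mastectomy specimens

pearson_r, _ = pearsonr(kinect_volumes_ml, specimen_volumes_ml)
spearman_rho, _ = spearmanr(kinect_volumes_ml, specimen_volumes_ml)
print(f"Pearson r = {pearson_r:.3f}, Spearman rho = {spearman_rho:.3f}")
```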

2022

Quasi-Unimodal Distributions for Ordinal Classification

Authors
Albuquerque, T; Cruz, R; Cardoso, JS;

Publication
MATHEMATICS

Abstract
Ordinal classification tasks are present in a large number of different domains. However, common losses for deep neural networks, such as cross-entropy, do not properly weight the relative ordering between classes. For that reason, many losses have been proposed in the literature, which model the output probabilities as following a unimodal distribution. This manuscript reviews many of these losses on three different datasets and suggests a potential improvement that focuses the unimodal constraint on the neighborhood around the true class, allowing for a more flexible distribution, aptly called quasi-unimodal loss. For this purpose, two constraints are proposed: a first constraint concerns the relative order of the top-three probabilities, and a second constraint ensures that the remaining output probabilities are not higher than the top three. Therefore, gradient descent focuses on improving the decision boundary around the true class, to the detriment of the more distant classes. The proposed loss is found to be competitive in several cases.
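A hedged sketch of the two constraints described in the abstract; this is one possible interpretation written as a soft penalty for illustration, not the paper's exact loss formulation.

```python
# Illustrative sketch, not the paper's implementation: around the true class k,
# (1) the neighbours' probabilities should not exceed p[k], and (2) classes
# outside {k-1, k, k+1} should not exceed the smallest of those three.
import torch
import torch.nn.functional as F

def quasi_unimodal_penalty(logits: torch.Tensor, targets: torch.Tensor) -> torch.Tensor:
    """logits: (batch, n_classes); targets: (batch,) integer ordinal labels."""
    probs = F.softmax(logits, dim=-1)
    batch, n_classes = probs.shape
    penalty = torch.zeros((), device=logits.device)
    for i in range(batch):
        k = int(targets[i])
        lo, hi = max(k - 1, 0), min(k + 1, n_classes - 1)
        top3 = probs[i, lo:hi + 1]
        # Constraint 1: neighbours must not exceed the true-class probability.
        penalty = penalty + F.relu(top3 - probs[i, k]).sum()
        # Constraint 2: classes outside the neighbourhood must not exceed
        # the smallest probability inside it.
        outside = torch.cat([probs[i, :lo], probs[i, hi + 1:]])
        penalty = penalty + F.relu(outside - top3.min()).sum()
    return penalty / batch
```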
