Publications

Publications by CTM

2022

Lung Segmentation in CT Images: A Residual U-Net Approach on a Cross-Cohort Dataset

Authors
Sousa, J; Pereira, T; Silva, F; Silva, MC; Vilares, AT; Cunha, A; Oliveira, HP;

Publication
Applied Sciences

Abstract
Lung cancer is one of the most common causes of cancer-related mortality, and since the majority of cases are diagnosed when the tumor is in an advanced stage, the 5-year survival rate is dismally low. Nevertheless, the chances of survival can increase if the tumor is identified early on, which can be achieved through screening with computed tomography (CT). The clinical evaluation of CT images is a very time-consuming task, and computer-aided diagnosis systems can help reduce this burden. The segmentation of the lungs is usually the first step in automatic image analysis models of the thorax. However, this task is very challenging since the lungs present high variability in shape and size. Moreover, the co-occurrence of other respiratory comorbidities alongside lung cancer is frequent, and each pathology can present its own scope of CT imaging appearances. This work investigated the development of a deep learning model whose architecture combines two structures, a U-Net and a ResNet34. The proposed model was designed on a cross-cohort dataset and achieved a mean dice similarity coefficient (DSC) higher than 0.93 for the four different cohorts tested. Despite the good overall performance obtained, the segmentation masks were qualitatively evaluated by two experienced radiologists to identify the main limitations of the developed model. The performance per pathology was assessed, and the results confirmed a small degradation for consolidation and pneumocystis pneumonia cases, with a DSC of 0.9015 ± 0.2140 and 0.8750 ± 0.1290, respectively. This work represents a relevant assessment of the lung segmentation model, taking into consideration the pathological cases that can be found in the clinical routine, since a global assessment could not detail the fragilities of the model.
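For context, the sketch below illustrates the two technical ingredients named in the abstract: the Dice similarity coefficient (DSC) used for evaluation, and a U-Net with a ResNet34 encoder. The use of the segmentation_models_pytorch library and the hyperparameters shown are assumptions for illustration, not the authors' implementation.

```python
# Minimal sketch (not the authors' code): DSC between two binary lung masks,
# and a U-Net with a ResNet34 encoder via segmentation_models_pytorch
# (library choice and pretrained weights are assumptions).
import numpy as np
import segmentation_models_pytorch as smp

def dice_coefficient(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7) -> float:
    """DSC = 2|A ∩ B| / (|A| + |B|) for binary masks."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return float((2.0 * intersection + eps) / (pred.sum() + target.sum() + eps))

# U-Net decoder on a ResNet34 encoder: single-channel CT slice in,
# one output class (lung vs. background).
model = smp.Unet(
    encoder_name="resnet34",
    encoder_weights="imagenet",  # assumption: ImageNet-pretrained encoder
    in_channels=1,
    classes=1,
)
```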

2022

Myope Models - Are face presentation attack detection models short-sighted?

Authors
Neto, PC; Sequeira, AF; Cardoso, JS;

Publication
CoRR

Abstract

2022

Proof of Concept of a Low-Cost Beam-Steering Hybrid Reflectarray that Mixes Microstrip and Lens Elements Using Passive Demonstrators

Authors
Luo, Q; Gao, S; Hu, W; Liu, W; Pessoa, LM; Sobhy, M; Sun, YC;

Publication
IEEE COMMUNICATIONS MAGAZINE

Abstract
In this article, a proof-of-concept study on the use of a hybrid design technique to reduce the number of phase shifters of a beam-scanning reflectarray (RA) is presented. An extended hemispherical lens antenna with feeds inspired by the retrodirective array is developed as a reflecting element, and the hybrid design technique mixes the lenses with microstrip patch elements to realize a reflecting surface. Compared to conventional designs that only use microstrip antennas to realize the reflecting surface, for a fixed aperture size the presented design uses 25 percent fewer array elements while showing comparable beam-steering performance. As a result of using fewer elements, the number of required phase shifters or other equivalent components such as RF switches and tunable materials is reduced by 25 percent, which reduces the overall antenna system's complexity, cost, and power consumption. To verify the design concept, two passive prototypes with a center frequency at 12.5 GHz were designed and fabricated. The reflecting surface was fabricated using standard PCB manufacturing and the lenses were fabricated using 3D printing. Good agreement between the simulation and measurement results is obtained. The presented design concept can be extended to the design of RAs operating at different frequency bands, including millimetre-wave frequencies, with similar radiation performance. The presented design method is not limited to microstrip patch reflecting elements and can also be applied to the design of hybrid RAs with different types of reflecting elements.

2022

Streamlining Action Recognition in Autonomous Shared Vehicles with an Audiovisual Cascade Strategy

Authors
Pinto, JR; Carvalho, P; Pinto, C; Sousa, A; Capozzi, L; Cardoso, JS;

Publication
Proceedings of the 17th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications

Abstract

2022

Photo2Video: Semantic-Aware Deep Learning-Based Video Generation from Still Content

Authors
Viana, P; Andrade, MT; Carvalho, P; Vilaca, L; Teixeira, IN; Costa, T; Jonker, P;

Publication
Journal of Imaging

Abstract
Applying machine learning (ML), and especially deep learning, to understand visual content is becoming common practice in many application areas. However, little attention has been given to its use within the multimedia creative domain. It is true that ML is already popular for content creation, but the progress achieved so far addresses essentially textual content or the identification and selection of specific types of content. A wealth of possibilities is yet to be explored by bringing the use of ML into the multimedia creative process, allowing the inferred knowledge to automatically influence how new multimedia content is created. The work presented in this article contributes to this goal in three distinct ways: firstly, it proposes a methodology to re-train popular neural network models to identify new thematic concepts in static visual content and attach meaningful annotations to the detected regions of interest; secondly, it presents varied visual digital effects and corresponding tools that can be automatically called upon to apply such effects to a previously analyzed photo; thirdly, it defines a complete automated creative workflow, from the acquisition of a photograph and corresponding contextual data, through the ML region-based annotation, to the automatic application of digital effects and generation of a semantically aware multimedia story driven by the previously derived situational and visual contextual data. Additionally, it presents a variant of this automated workflow that offers the user the possibility of manipulating the automatic annotations in an assisted manner. The final aim is to transform a static digital photo into a short video clip, taking into account the information acquired. The final result strongly contrasts with current standard approaches that create random movements, by producing an intelligent, content- and context-aware video.
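The article's semantic-aware pipeline is not reproduced here; purely to illustrate its final stage (rendering a short clip from a still photo), the hypothetical sketch below applies a fixed pan-and-zoom movement with OpenCV. In the paper, the camera movement is driven by ML-derived region annotations rather than fixed parameters; file names and parameters below are assumptions.

```python
# Hypothetical sketch of the last stage only: a short clip from a still photo
# using a fixed centred zoom. The paper instead centres the movement on
# semantically annotated regions; paths and parameters are placeholders.
import cv2

def photo_to_clip(photo_path: str, out_path: str, seconds: int = 5, fps: int = 25,
                  zoom_end: float = 1.3, size: tuple = (1280, 720)) -> None:
    image = cv2.imread(photo_path)
    if image is None:
        raise FileNotFoundError(photo_path)
    h, w = image.shape[:2]
    writer = cv2.VideoWriter(out_path, cv2.VideoWriter_fourcc(*"mp4v"), fps, size)
    n_frames = seconds * fps
    for i in range(n_frames):
        zoom = 1.0 + (zoom_end - 1.0) * i / (n_frames - 1)  # linear zoom-in
        crop_w, crop_h = int(w / zoom), int(h / zoom)
        x0 = (w - crop_w) // 2  # centred crop; a semantic pipeline would
        y0 = (h - crop_h) // 2  # instead centre on a detected region of interest
        frame = cv2.resize(image[y0:y0 + crop_h, x0:x0 + crop_w], size)
        writer.write(frame)
    writer.release()

# Example usage (placeholder paths):
# photo_to_clip("photo.jpg", "clip.mp4")
```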

2022

Lesion Volume Quantification Using Two Convolutional Neural Networks in MRIs of Multiple Sclerosis Patients

Authors
de Oliveira, M; Piacenti Silva, M; da Rocha, FCG; Santos, JM; Cardoso, JD; Lisboa, PN;

Publication
Diagnostics

Abstract
Background: Multiple sclerosis (MS) is a neurologic disease of the central nervous system which affects almost three million people worldwide. MS is characterized by a demyelination process that leads to brain lesions, allowing these affected areas to be visualized with magnetic resonance imaging (MRI). Deep learning techniques, especially algorithms based on convolutional neural networks (CNNs), have become frequently used tools that perform feature self-learning and enable segmentation of structures in the image, which is useful for quantitative analysis of MRIs, including quantitative analysis of MS. To obtain quantitative information about lesion volume, it is important to perform proper image preprocessing and accurate segmentation. Therefore, we propose a method for volumetric quantification of lesions on MRIs of MS patients using automatic segmentation of the brain and lesions by two CNNs. Methods: We used CNNs at two different moments: the first to perform brain extraction, and the second for lesion segmentation. This study includes four independent MRI datasets: one for training the brain segmentation models, two for training the lesion segmentation model, and one for testing. Results: The proposed brain detection architecture using binary cross-entropy as the loss function achieved a 0.9786 Dice coefficient, 0.9969 accuracy, 0.9851 precision, 0.9851 sensitivity, and 0.9985 specificity. In the second proposed framework for brain lesion segmentation, we obtained a 0.8893 Dice coefficient, 0.9996 accuracy, 0.9376 precision, 0.8609 sensitivity, and 0.9999 specificity. After quantifying the lesion volume of all patients from the test group using our proposed method, we obtained a mean value of 17,582 mm³. Conclusions: We concluded that the proposed algorithm achieved accurate lesion detection and segmentation with reproducibility corresponding to state-of-the-art software tools and manual segmentation. We believe that this quantification method can add value to treatment monitoring and routine clinical evaluation of MS patients.
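For context, lesion volume in mm³ is typically obtained by multiplying the number of voxels in the binary lesion mask by the physical voxel volume read from the image header. The sketch below shows this generic post-processing step; the use of nibabel, the threshold, and the file path are assumptions for illustration, not the authors' pipeline.

```python
# Generic post-processing sketch (not the authors' code): lesion volume in mm^3
# from a binary lesion mask stored as NIfTI. Library, threshold, and paths
# are assumptions for illustration.
import nibabel as nib

def lesion_volume_mm3(mask_path: str, threshold: float = 0.5) -> float:
    img = nib.load(mask_path)
    mask = img.get_fdata() > threshold           # binarise the CNN output
    dx, dy, dz = img.header.get_zooms()[:3]      # voxel spacing in mm
    voxel_volume = dx * dy * dz                  # mm^3 per voxel
    return float(mask.sum() * voxel_volume)

# Example usage (placeholder path):
# print(lesion_volume_mm3("patient01_lesion_mask.nii.gz"))
```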
