2020
Authors
Andrade, C; Teixeira, LF; Vasconcelos, MJM; Rosado, L;
Publication
ICIAR (2)
Abstract
With the ever-increasing occurrence of skin cancer, timely and accurate skin cancer detection has become clinically more imperative. A clinical mobile-based deep learning approach is a possible solution for this challenge. Nevertheless, there is a major impediment to the development of such a model: the scarce availability of labelled data acquired with mobile devices, namely macroscopic images. In this work, we present two experiments to assemble a robust deep learning model for macroscopic skin lesion segmentation and to capitalize on the sizable dermoscopic databases. In the first experiment, two groups of deep learning models, U-Net based and DeepLab based, were created and tested exclusively on the available macroscopic images. In the second experiment, the possibility of transferring knowledge between the domains was tested. To accomplish this, the selected model was retrained on the dermoscopic images and, subsequently, fine-tuned with the macroscopic images. The best model implemented in the first experiment was a DeepLab based model with a MobileNetV2 feature extractor with a width multiplier of 0.35, optimized with the soft Dice loss. This model comprised 0.4 million parameters and obtained a thresholded Jaccard coefficient of 72.97% and 78.51% on the Dermofit and SMARTSKINS databases, respectively. In the second experiment, with the use of transfer learning, the performance of this model was significantly improved on the first database to 75.46% and slightly decreased to 78.04% on the second.
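The soft Dice loss used to optimize the selected DeepLab model can be sketched as follows. This is a minimal NumPy version for binary masks; the function name and the epsilon smoothing term are our assumptions, not the paper's exact implementation:

```python
import numpy as np

def soft_dice_loss(pred, target, eps=1e-6):
    """Soft Dice loss for binary segmentation.

    pred:   predicted foreground probabilities in [0, 1]
    target: binary ground-truth mask, same shape as pred

    Returns a value in [0, 1]; 0 means a perfect overlap.
    """
    intersection = np.sum(pred * target)
    denom = np.sum(pred) + np.sum(target)
    # eps avoids division by zero when both masks are empty
    return 1.0 - (2.0 * intersection + eps) / (denom + eps)
```

Unlike the hard Dice coefficient, the soft variant accepts continuous probabilities, which keeps the loss differentiable for gradient-based training.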
2020
Authors
Pereira, A; Carvalho, P; Coelho, G; Corte Real, L;
Publication
IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY
Abstract
Color and color differences are critical aspects in many image processing and computer vision applications. A paradigmatic example is object segmentation, where color distances can greatly influence the performance of the algorithms. Metrics for color difference have been proposed in the literature, including the definition of standards such as CIEDE2000, which quantifies the change in visual perception between two given colors. This standard has been recommended for industrial computer vision applications, but the benefits of its adoption have been impaired by the complexity of the formula. This paper proposes a new strategy that improves the usability of the CIEDE2000 metric when a maximum acceptable distance can be imposed. We argue that, for applications where a maximum value can be established above which colors are considered different, it is possible to reduce the number of metric evaluations by preemptively analyzing the color features. This methodology retains the benefits of the metric while overcoming its computational limitations, thus broadening the range of applications of CIEDE2000 in computer vision while reducing computational resource requirements.
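The early-exit idea can be illustrated with a cheap necessary-condition test (our own sketch, not the paper's exact bounds): every term under the CIEDE2000 square root is non-negative, so ΔE00 is bounded below by the lightness term alone, and the lightness weighting S_L never exceeds roughly 1.75 for L* in [0, 100]. A color pair whose lightness difference alone guarantees ΔE00 above the threshold can therefore be classified as "different" without evaluating the full formula:

```python
# Upper bound of S_L = 1 + 0.015*(Lbar-50)**2 / sqrt(20 + (Lbar-50)**2),
# reached at Lbar = 0 or 100 (~1.747); rounded up to stay conservative.
MAX_SL = 1.75

def definitely_different(lab1, lab2, threshold, k_l=1.0):
    """Cheap pre-test for CIEDE2000 with an imposed maximum distance.

    lab1, lab2: (L*, a*, b*) tuples. Since dE00 >= |dL'| / (k_l * S_L)
    and S_L <= MAX_SL, returning True guarantees dE00 > threshold,
    skipping the full formula. False means the full metric must
    still be computed (the test is only a necessary condition).
    """
    d_lightness = abs(lab1[0] - lab2[0])
    return d_lightness / (k_l * MAX_SL) > threshold
```

In a segmentation loop, pairs flagged by this pre-test bypass the expensive formula entirely; only the ambiguous pairs fall through to the full CIEDE2000 computation.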
2020
Authors
Martins, I; Carvalho, P; Corte Real, L; Luis Alba Castro, JL;
Publication
COMPUTER VISION AND IMAGE UNDERSTANDING
Abstract
One of the most difficult scenarios for unsupervised segmentation of moving objects is found in nighttime videos, where the main challenges are the poor illumination conditions resulting in low visibility of objects, very strong lights, surface-reflected light, a great variance of light intensity, sudden illumination changes, hard shadows, camouflaged objects, and noise. This paper proposes a novel method, coined COLBMOG (COLlinearity Boosted MOG), devised specifically for foreground segmentation in nighttime videos, that shows the ability to overcome some of the limitations of state-of-the-art methods and still perform well in daytime scenarios. It is a texture-based classification method, using local texture modeling, complemented by a color-based classification method. The local texture at the pixel neighborhood is modeled as an n-dimensional vector. For a given pixel, the classification is based on the collinearity between this feature in the input frame and the reference background frame. For this purpose, a multimodal temporal model of the collinearity between texture vectors of background pixels is maintained. COLBMOG was objectively evaluated on the Night Videos category of the ChangeDetection.net (CDnet) 2014 benchmark, where it ranks first among all unsupervised methods. A detailed analysis of the results revealed the superior performance of the proposed method compared to the best performing state-of-the-art methods in this category, particularly evident in the most complex situations, where all the algorithms tend to fail.
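One simple way to measure collinearity between two texture vectors is the absolute cosine of the angle between them, as sketched below. This is our illustration of the underlying measure only; the paper builds a multimodal temporal model on top of such per-pixel comparisons:

```python
import numpy as np

def collinearity(v1, v2, eps=1e-12):
    """Collinearity of two texture vectors as |cos(angle)|.

    Returns a value in [0, 1]: 1.0 means the vectors are perfectly
    collinear (the same local texture up to a global illumination
    scaling, which is why the measure is robust to lighting changes),
    while values near 0 indicate genuinely different textures.
    """
    num = abs(np.dot(v1, v2))
    den = np.linalg.norm(v1) * np.linalg.norm(v2) + eps
    return num / den
```

Because uniform illumination changes scale a texture vector without rotating it, a background pixel under a sudden lighting change still scores near 1.0, whereas a true foreground object changes the texture direction and scores lower.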
2020
Authors
Gomes, R; Duarte, C; Pedro, JC;
Publication
IEEE TRANSACTIONS ON MICROWAVE THEORY AND TECHNIQUES
Abstract
Typical polar digital power amplifiers (DPAs) employ unit-cells operated in class-E or class-D-1, denoting a switched-resistance operation that degrades linearity. Besides placing higher demands on digital predistortion (DPD), it also requires extra quantization bits, impacting the overall efficiency and system complexity. To address this, the present work makes use of an optimized constant-current cascode unit-cell combined with a reduced conduction angle to achieve linear and efficient operation, while minimizing the effort on DPD and/or calibration. A design strategy is developed that focuses on the cascode bias voltage and relative transistor dimensions as design parameters, allowing cascode efficiency optimization without compromising linearity or reliability. A single-ended polar switched constant-current DPA is implemented in 180-nm standard CMOS. Continuous-wave measurements performed at 800 MHz demonstrate an output power of 24 dBm with a PAE of 47%. The DPA dynamic behavior was tested with a 64-QAM signal at 10 MS/s, achieving an average PAE of 20.9% with a peak-to-average power ratio (PAPR) of 8.7 dB and an adjacent-channel leakage ratio (ACLR) of 40.34 dB. These results demonstrate performance comparable with the prior art while using only 6 bits clocked at a 100-MHz baseband sampling frequency.
2020
Authors
Weber, S; Duarte, C;
Publication
IEEE Solid-State Circuits Magazine
Abstract
A high production yield,
2020
Authors
Pinheiro, G; Pereira, T; Dias, C; Freitas, C; Hespanhol, V; Costa, JL; Cunha, A; Oliveira, HP;
Publication
SCIENTIFIC REPORTS
Abstract
EGFR and KRAS are the most frequently mutated genes in lung cancer and active research topics in targeted therapy. Biopsy is the traditional method to genetically characterise a tumour. However, it is a risky procedure, painful for the patient, and, occasionally, the tumour might be inaccessible. This work aims to study and debate the nature of the relationships between imaging phenotypes and lung cancer-related mutation status. Until now, the literature has failed to point to new research directions, consisting mainly of results-oriented works in a field where there is still not enough available data to train clinically viable models. We intend to open a discussion about critical points and to present new possibilities for future radiogenomics studies. We conducted high-dimensional data visualisation and developed classifiers, which allowed us to analyse the results for the EGFR and KRAS biological markers according to different combinations of input features. We show that EGFR mutation status might be correlated with CT scan imaging phenotypes; however, the same does not seem to hold for KRAS mutation status. Also, the experiments suggest that the best way to approach this problem is by combining nodule-related features with features from other lung structures.