Publications

2021

Automatic classification of retinal blood vessels based on multilevel thresholding and graph propagation

Authors
Remeseiro, B; Mendonça, AM; Campilho, A;

Publication
VISUAL COMPUTER

Abstract
Several systemic diseases affect the retinal blood vessels, and thus their assessment allows an accurate clinical diagnosis. This assessment entails the estimation of the arteriolar-to-venular ratio (AVR), a predictive biomarker of cerebral atrophy and cardiovascular events in adults. In this context, different automatic and semiautomatic image-based approaches for artery/vein (A/V) classification and AVR estimation have been proposed in the literature, to the point that it has become a hot research topic in recent decades. Most of these approaches use a wide variety of image properties, often redundant and/or irrelevant, requiring a training process that limits their generalization ability when applied to other datasets. This paper presents a new automatic method for A/V classification that uses only the local contrast between blood vessels and their surrounding background, computes a graph that represents the vascular structure, and applies multilevel thresholding to obtain a preliminary classification. Next, a novel graph propagation approach is applied to obtain the final A/V classification and to compute the AVR. Our approach has been tested on two public datasets (INSPIRE and DRIVE), obtaining high classification accuracy rates, especially in the main vessels, and AVR ratios very similar to those provided by human experts. Therefore, our fully automatic method provides reliable results without any training step, which makes it suitable for use with different retinal image datasets and as part of any clinical routine.
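The preliminary step described above, labelling vessel segments from their local contrast using two thresholds, can be sketched as follows (an illustrative sketch, not the authors' implementation; the function name and threshold values are hypothetical):

```python
def classify_by_thresholds(contrasts, t_low, t_high):
    """Preliminary A/V labelling of vessel segments by local contrast.

    contrasts -- local contrast of each vessel segment against the background
    t_low, t_high -- two thresholds splitting the contrast range into 3 levels
    """
    labels = []
    for c in contrasts:
        if c < t_low:
            labels.append("V")   # low contrast: likely vein
        elif c > t_high:
            labels.append("A")   # high contrast: likely artery
        else:
            labels.append("U")   # ambiguous: resolved later by propagation
    return labels

print(classify_by_thresholds([0.1, 0.5, 0.9], 0.3, 0.7))  # ['V', 'U', 'A']
```

Segments labelled "U" would then be resolved by propagating the confident labels along the graph that represents the vascular structure.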

2021

Covid-19 Automatic Test through Human Breathing

Authors
Faria R.; Solteiro Pires E.J.; Leite A.; Saraiva T.;

Publication
2021 IEEE Latin American Conference on Computational Intelligence, LA-CCI 2021

Abstract
A classifier using a Long Short-Term Memory (LSTM) network to identify human beings infected with Covid-19 is proposed in this work. This classifier has significant advantages over current testing methods: it is fast, contactless, and requires few monetary resources. The data considered for this study were extracted from the Coswara dataset using 140 individuals (70 healthy and 70 infected with Covid-19). This dataset contains respiratory signals, such as people counting aloud, coughing, or breathing. The classifier uses non-linear time-sequence features extracted from the signals after a preprocessing stage. The classifier was able to discriminate whether a person is infected with Covid-19 with an accuracy of 92.1%, a specificity of 85.7%, and a sensitivity of 98.6% using 5-fold cross-validation. Based on the results obtained, the classifier can be used as an alternative to Reverse Transcription Polymerase Chain Reaction (RT-PCR) tests.
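The reported sensitivity and specificity follow their standard definitions; a minimal sketch (not the authors' code) of computing them from binary labels:

```python
def sensitivity_specificity(y_true, y_pred):
    """Sensitivity = TP / (TP + FN); specificity = TN / (TN + FP).
    Labels: 1 = infected, 0 = healthy."""
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    tn = sum(t == 0 and p == 0 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    return tp / (tp + fn), tn / (tn + fp)

print(sensitivity_specificity([1, 1, 0, 0], [1, 0, 0, 1]))  # (0.5, 0.5)
```

In 5-fold cross-validation these metrics would be computed per fold and then averaged.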

2021

Photovoltaic generation data, for 3 years, regarding the 2022-3 Competition on solar generation forecasting

Authors
Gomes, L; Vale, Z; Pinto, T;

Publication

Abstract

2021

Forecasting Energy Technology Diffusion in Space and Time: Model Design, Parameter Choice and Calibration

Authors
Heymann, F; vom Scheidt, F; Soares, FJ; Duenas, P; Miranda, V;

Publication
IEEE TRANSACTIONS ON SUSTAINABLE ENERGY

Abstract
New energy technologies such as Distributed Energy Resources (DER) will affect the spatial and temporal patterns of electricity consumption. Models that mimic technology diffusion processes over time are fundamental to support decisions in power system planning and policymaking. This paper shows that spatiotemporal technology diffusion forecasts typically consist of three main modules: 1) a global technology diffusion forecast, 2) a cellular module, i.e., a spatial data substrate with cell states and transition rules, and 3) a spatial mapping module, commonly based on Geographic Information Systems. This work reviews previous spatiotemporal DER diffusion models and details their common building blocks. Analyzing 16 variants of an exemplary spatial simulation model used to predict electric vehicle adoption patterns in Portugal, the analysis suggests that model performance is strongly affected by careful tuning of spatial and temporal granularities and the chosen inference techniques. In general, model validation remains challenging, as early diffusion stages typically have few observations for model calibration.
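As an illustration of the first module, the global diffusion forecast, the Bass model is a common choice in the diffusion literature (an assumption for illustration; the paper does not commit to one specific model). It gives the cumulative adoption fraction as an S-curve over time:

```python
import math

def bass_adoption(t, p=0.03, q=0.38):
    """Cumulative adoption fraction at time t under the Bass diffusion model.
    p: coefficient of innovation; q: coefficient of imitation
    (commonly cited typical values)."""
    e = math.exp(-(p + q) * t)
    return (1 - e) / (1 + (q / p) * e)
```

The resulting global S-curve would then be disaggregated over the cells of the spatial module according to the cell states and transition rules.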

2021

Object Detection Under Challenging Lighting Conditions Using High Dynamic Range Imagery

Authors
Mukherjee, R; Bessa, M; Melo Pinto, P; Chalmers, A;

Publication
IEEE ACCESS

Abstract
Most Convolutional Neural Network (CNN) based object detectors, to date, have been optimized for accuracy and/or detection performance on datasets typically comprised of well-exposed 8-bits/pixel/channel Standard Dynamic Range (SDR) images. A major existing challenge in this area is to accurately detect objects under extreme/difficult lighting conditions, as SDR-trained detectors fail under such conditions. In this paper, we address this issue for the first time by introducing High Dynamic Range (HDR) imaging to object detection. HDR imagery can capture and process approximately 13 orders of magnitude of scene dynamic range, similar to the human eye. HDR-trained models are therefore able to extract more salient features from extreme lighting conditions, leading to more accurate detections. However, introducing HDR also presents multiple new challenges, such as the complete absence of resources and previous literature on such an approach. Here, we introduce a methodology to generate a large-scale annotated HDR dataset from any existing SDR dataset and validate the quality of the generated dataset via a robust evaluation technique. We also discuss the challenges of training and validating HDR-trained models using existing detectors. Finally, we provide a methodology to create an out-of-distribution (OOD) HDR dataset to test and compare the performance of HDR- and SDR-trained detectors under difficult lighting conditions. Results suggest that, using the proposed methodology, HDR-trained models are able to achieve 10-12% higher accuracy than SDR-trained models on a real-world OOD dataset consisting of high-contrast images under extreme lighting conditions.
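A toy example of the kind of SDR-to-HDR expansion involved in generating HDR training data from SDR images (a hypothetical inverse-tone-mapping sketch; the paper's generation pipeline is more elaborate):

```python
def sdr_to_linear(pixel, gamma=2.2, peak_nits=1000.0):
    """Expand an 8-bit SDR code value to a linear radiance estimate (nits)
    via simple inverse gamma. Real inverse tone mapping is more involved."""
    normalized = pixel / 255.0
    return peak_nits * (normalized ** gamma)

# The expansion is non-linear: bright code values span far more radiance
# range than dark ones, which is what lets HDR data preserve detail in
# extreme lighting.
print(sdr_to_linear(0), sdr_to_linear(128), sdr_to_linear(255))
```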

2021

On Data Parallelism Code Restructuring for HLS Targeting FPGAs

Authors
Campos, R; Cardoso, JMP;

Publication
2021 IEEE INTERNATIONAL PARALLEL AND DISTRIBUTED PROCESSING SYMPOSIUM WORKSHOPS (IPDPSW)

Abstract
FPGAs have emerged as hardware accelerators, and in the last decade researchers have proposed new languages and frameworks to improve efficiency when mapping computations to FPGAs. One of the main tasks when considering the mapping of software code to FPGAs is code restructuring. Code restructuring is of paramount importance to achieve efficient FPGA-based accelerators, and its automation continues to be a challenge. This paper describes our recent work on techniques to automatically restructure and annotate C code with directives optimized for HLS targeting FPGAs. The input of our approach is an unfolded dataflow graph (DFG), currently obtained from a trace of the program's execution, and the output is restructured C code with HLS directives. Specifically, in this paper we propose algorithms to optimize the input DFGs and use isomorphic graph detection to expose data-level parallelism. The experimental results show that our approach is able to generate efficient FPGA implementations, with significant speedups over the unmodified input source codes, and very competitive with implementations obtained by manual optimizations and by previous approaches. Furthermore, the experiments show that, using our approach, it is possible to extract data parallelism in linear to quadratic time with respect to the number of nodes of the input DFG.
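The isomorphic-graph-detection idea can be illustrated by giving each DFG subgraph a structural signature: roots with equal signatures head isomorphic subgraphs and become candidates for data-parallel execution (an illustrative sketch; the node/edge representation is hypothetical, and this toy version assumes tree-shaped subgraphs rather than general DAGs):

```python
def dfg_signature(node, graph):
    """Structural signature of the subgraph rooted at `node`.
    `graph` maps node -> (operation, tuple of input nodes)."""
    op, inputs = graph[node]
    return (op, tuple(dfg_signature(i, graph) for i in inputs))

def parallel_groups(graph, roots):
    """Group roots whose subgraphs are isomorphic (equal signatures)."""
    groups = {}
    for r in roots:
        groups.setdefault(dfg_signature(r, graph), []).append(r)
    return [g for g in groups.values() if len(g) > 1]

# Two structurally identical multiply subtrees over distinct loads:
dfg = {
    "a": ("load", ()), "b": ("load", ()),
    "c": ("load", ()), "d": ("load", ()),
    "m1": ("mul", ("a", "b")), "m2": ("mul", ("c", "d")),
}
print(parallel_groups(dfg, ["m1", "m2"]))  # [['m1', 'm2']]
```

Each signature is computed in time proportional to the subgraph size, which is consistent with the linear-to-quadratic extraction cost reported above.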
