Publications

Publications by CTM

2021

A Systematic Survey of ML Datasets for Prime CV Research Areas—Media and Metadata

Authors
Castro, HF; Cardoso, JS; Andrade, MT;

Publication
Data

Abstract
The ever-growing capabilities of computers have enabled pursuing Computer Vision through Machine Learning (i.e., MLCV). ML tools require large amounts of information to learn from (ML datasets). These are costly to produce but have received little attention regarding standardization. This prevents the cooperative production and exploitation of these resources, impedes countless synergies, and hinders ML research. No global view of the MLCV dataset tissue exists, and acquiring one is fundamental to enable standardization. We provide an extensive survey of the evolution and current state of MLCV datasets (1994 to 2019) for a set of specific CV areas, as well as a quantitative and qualitative analysis of the results. Data were gathered from online scientific databases (e.g., Google Scholar, CiteSeerX). We reveal the heterogeneous plethora of datasets that comprises the MLCV dataset tissue; their continuous growth in volume and complexity; the specificities of the evolution of their media and metadata components regarding a range of aspects; and the fact that MLCV progress requires the construction of a global standardized (structuring, manipulating, and sharing) MLCV “library”. Accordingly, we formulate a novel interpretation of this dataset collective as a global tissue of synthetic cognitive visual memories and define the immediately necessary steps to advance its standardization and integration.

2021

Transparent Control Flow Transfer between CPU and Accelerators for HPC

Authors
Granhão, D; Ferreira, JC;

Publication
Electronics

Abstract
Heterogeneous platforms with FPGAs have started to be employed in the High-Performance Computing (HPC) field to improve performance and overall efficiency. These platforms allow the use of specialized hardware to accelerate software applications, but require the software to be adapted in what can be a prolonged and complex process. The main goal of this work is to describe and evaluate mechanisms that can transparently transfer the control flow between CPU and FPGA within the scope of HPC. Combining such a mechanism with transparent software profiling and accelerator configuration could lead to an automatic way of accelerating regular applications. In this work, a mechanism based on the ptrace system call is proposed, and its performance on the Intel Xeon+FPGA platform is evaluated. The feasibility of the proposed approach is demonstrated by a working prototype that performs the transparent control flow transfer of any function call to a matching hardware accelerator. This approach is more general than shared library interposition at the cost of a small time overhead in each accelerator use (about 1.3 ms in the prototype implementation).
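For readers unfamiliar with ptrace, the minimal sketch below (Python via ctypes, Linux x86-64 assumed; constant values from sys/ptrace.h) shows only the generic attach/inspect/detach cycle on which a ptrace-based interception mechanism can build. It is an illustrative assumption, not the paper's Xeon+FPGA implementation.

    # Minimal sketch: Linux x86-64, CPython with ctypes; constants from
    # <sys/ptrace.h>. Illustrative only; not the paper's implementation.
    import ctypes
    import os

    libc = ctypes.CDLL("libc.so.6", use_errno=True)
    libc.ptrace.restype = ctypes.c_long
    libc.ptrace.argtypes = [ctypes.c_long, ctypes.c_long,
                            ctypes.c_void_p, ctypes.c_void_p]

    PTRACE_PEEKDATA = 2   # read one word from the tracee's memory
    PTRACE_ATTACH = 16    # become the tracer of an existing process
    PTRACE_DETACH = 17    # release the tracee and let it continue

    def inspect_word(pid: int, addr: int) -> int:
        """Attach to `pid`, read one machine word at `addr`, then detach."""
        if libc.ptrace(PTRACE_ATTACH, pid, None, None) == -1:
            raise OSError(ctypes.get_errno(), "ptrace(PTRACE_ATTACH) failed")
        os.waitpid(pid, 0)  # wait until the tracee has actually stopped
        try:
            ctypes.set_errno(0)
            word = libc.ptrace(PTRACE_PEEKDATA, pid,
                               ctypes.c_void_p(addr), None)
            if word == -1 and ctypes.get_errno() != 0:
                raise OSError(ctypes.get_errno(),
                              "ptrace(PTRACE_PEEKDATA) failed")
            return word
        finally:
            # A control-flow transfer mechanism would, at this point, rewrite
            # the stopped process's registers and stack so execution resumes
            # in an accelerator stub; here the process simply runs on unchanged.
            libc.ptrace(PTRACE_DETACH, pid, None, None)

A full mechanism such as the one evaluated in the paper would replace the detach step with a rewrite of the tracee's state so that the intercepted function call continues in a matching hardware accelerator invocation.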

2021

MONITORIA: The start of a new era of ambulatory heart failure monitoring? Part I – Theoretical Rationale [MONITORIA: o início de uma nova era na monitoração da insuficiência cardíaca? Parte I – Fundamentação teórica]

Authors
Martins, C; Machado da Silva, J; Guimarães, D; Martins, L; Vaz da Silva, M;

Publication
Revista Portuguesa de Cardiologia

Abstract
Heart failure (HF) is a multifactorial chronic syndrome with progressively increasing incidence that imposes a huge financial burden worldwide. Remote monitoring should, in theory, improve HF management, but given increasing morbidity and mortality, a question remains: are we monitoring it properly? Device-based home monitoring enables objective and continuous measurement of vital variables, and non-invasive devices should be the first choice for elderly patients. There is no shortage of literature on the subject; however, most studies were designed to monitor a single variable or a class of variables that were not properly assembled and, to the best of our knowledge, there are no large randomized studies on their impact on HF patient management. To overcome this problem, we carefully selected the most critical possible HF decompensating factors to design MONITORIA, a non-invasive device for comprehensive HF home monitoring. MONITORIA stands for MOnitoring Non-Invasively To Overcome mortality Rates of heart Insufficiency on Ambulatory, and in this paper, which is part I of a series of three articles, we discuss the theoretical basis for its design. MONITORIA and its inherent follow-up strategy are a promising approach to optimizing HF patient care, as they essentially adapt innovation not to the disease but to the patients. © 2020 Sociedade Portuguesa de Cardiologia

2021

ECG Biometrics

Authors
Pinto, JR; Cardoso, JS;

Publication
Encyclopedia of Cryptography, Security and Privacy

Abstract

2021

Maximum Relevance Minimum Redundancy Dropout with Informative Kernel Determinantal Point Process

Authors
Saffari, M; Khodayar, M; Saadabadi, MSE; Sequeira, AF; Cardoso, JS;

Publication
Sensors

Abstract
In recent years, deep neural networks have shown significant progress in computer vision due to their large generalization capacity; however, the overfitting problem ubiquitously threatens the learning process of these highly nonlinear architectures. Dropout is a recent solution to mitigate overfitting that has seen significant success in various classification applications. Recently, many efforts have been made to improve standard dropout using an unsupervised, merit-based semantic selection of neurons in the latent space. However, these studies do not consider the quality and quantity of task-relevant information or the diversity of the latent kernels. To address the challenge of dropping less informative neurons in deep learning, we propose an efficient end-to-end dropout algorithm that selects the most informative neurons, i.e., those with the highest correlation with the target output, while accounting for sparsity in its selection procedure. First, to promote activation diversity, we devise an approach that selects the most diverse set of neurons by making use of determinantal point process (DPP) sampling. Furthermore, to incorporate task specificity into deep latent features, a mutual information (MI)-based merit function is developed. Leveraging the proposed MI with DPP sampling, we introduce the novel DPPMI dropout, which adaptively adjusts the retention rate of neurons based on their contribution to the neural network task. Empirical studies on real-world classification benchmarks, including MNIST, SVHN, CIFAR10, and CIFAR100, demonstrate the superiority of our proposed method over recent state-of-the-art dropout algorithms in the literature.
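To make the quality-diversity idea behind the abstract concrete, here is a minimal NumPy sketch; it is an assumption-laden illustration, not the authors' DPPMI code. It builds a kernel L = diag(q) S diag(q) from a generic per-neuron relevance score q (standing in for the paper's MI-based merit function) and an activation-similarity matrix S, then uses a greedy log-det (MAP) selection as a simple stand-in for exact DPP sampling.

    # Minimal sketch: NumPy only. Greedy log-det (MAP) selection stands in for
    # exact DPP sampling, and `relevance` stands in for an MI-based merit
    # function; both are simplifying assumptions.
    import numpy as np

    def dpp_dropout_mask(activations: np.ndarray,
                         relevance: np.ndarray,
                         keep: int) -> np.ndarray:
        """Return a 0/1 mask over neurons.

        activations: (batch, n_neurons) latent activations for a mini-batch.
        relevance:   (n_neurons,) per-neuron relevance scores (e.g. an MI estimate).
        keep:        number of neurons to retain.
        """
        # Similarity between neurons, computed from their activation patterns.
        feats = activations / (np.linalg.norm(activations, axis=0, keepdims=True) + 1e-8)
        similarity = feats.T @ feats              # (n, n) cosine similarity

        # Quality-diversity kernel: L = diag(q) @ S @ diag(q).
        q = relevance / (relevance.max() + 1e-8)
        L = np.outer(q, q) * similarity

        # Greedy MAP approximation: repeatedly add the neuron that yields the
        # largest log-det of the selected submatrix, so relevant and mutually
        # diverse neurons are preferred over redundant ones.
        n = L.shape[0]
        selected: list[int] = []
        for _ in range(min(keep, n)):
            best_j, best_logdet = -1, -np.inf
            for j in range(n):
                if j in selected:
                    continue
                idx = selected + [j]
                sub = L[np.ix_(idx, idx)] + 1e-6 * np.eye(len(idx))
                _, logdet = np.linalg.slogdet(sub)
                if logdet > best_logdet:
                    best_j, best_logdet = j, logdet
            selected.append(best_j)

        mask = np.zeros(n)
        mask[selected] = 1.0
        return mask

A hypothetical layer would then keep only the masked neurons, e.g. scale the activations as h * mask / mask.mean(), so that diverse, task-relevant units survive while redundant ones are dropped.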

2021

Mixture-Based Open World Face Recognition

Authors
Matta, A; Pinto, JR; Cardoso, JS;

Publication
Advances in Intelligent Systems and Computing - Trends and Applications in Information Systems and Technologies

Abstract
