2019
Authors
Silveira, CS; Cardoso, JS; Lourenco, AL; Ahlstrom, C;
Publication
IET INTELLIGENT TRANSPORT SYSTEMS
Abstract
The first in-depth study on the use of the electrocardiogram and electrooculogram for subject-dependent classification of driver sleepiness/fatigue under realistic driving conditions is presented in this work. Since acquisitions in simulated environments may be misleading for sleepiness assessment, studies on the road are required. For that purpose, the authors present a database resulting from a field driving study performed in the SleepEye project. Based on previous research, supervised machine learning methods are implemented and applied to 16 heart- and 25 eye-based extracted features, mostly related to heart rate variability and blink events, respectively, in order to study the influence of subject dependency on sleepiness classification, using different classifiers and dealing with imbalanced class distributions. Results showed a significantly worse performance in subject-independent classification: a decrease of roughly 40% and 20% in the detection rate of the 'sleepy' class for two and three classes, respectively. Since physiological signals are among those that present the most individual characteristics, subject-independent classification can be even harder to perform. Transfer learning techniques and methods for imbalanced distributions are promising approaches and need further investigation.
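The abstract does not detail how the imbalanced class distributions were handled; a common approach (and a reasonable assumption here) is inverse-frequency class weighting, which makes misclassifying the rare 'sleepy' class costlier during training. A minimal sketch, with toy labels:

```python
from collections import Counter

def class_weights(labels):
    """Inverse-frequency weights: rare classes (e.g. 'sleepy') get
    proportionally larger weights, so their errors count more in training."""
    counts = Counter(labels)
    n, k = len(labels), len(counts)
    return {c: n / (k * counts[c]) for c in counts}

# Toy imbalanced label set: 8 alert epochs vs 2 sleepy epochs.
y = ["alert"] * 8 + ["sleepy"] * 2
w = class_weights(y)   # sleepy receives a 4x larger weight than alert
```

This is the same heuristic used by scikit-learn's `class_weight="balanced"` option; the classifiers and feature sets named in the paper are not reproduced here.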
2019
Authors
Pernes, D; Fernande, K; Cardoso, JS;
Publication
APPLIED SCIENCES-BASEL
Abstract
Several phenomena are represented by directional (angular or periodic) data, from calendar time references to geographical coordinates. These values are usually represented as real values restricted to a given range (e.g., [0, 2π)), hiding the real nature of this information. In order to handle these variables properly in supervised classification tasks, alternatives to the naive Bayes classifier and logistic regression were proposed in the past. In this work, we propose directional-aware support vector machines. We address several realizations of the proposed models, studying their kernelized counterparts and their expressiveness. Finally, we validate the performance of the proposed Support Vector Machines (SVMs) against the directional naive Bayes and directional logistic regression on real data, obtaining competitive results.
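To see why treating an angle as a plain real value is misleading, consider that 0.1 and 2π − 0.1 are nearly identical directions yet far apart on the real line. A standard remedy (not the paper's directional-aware SVM formulation, which is more general) is to embed each angle on the unit circle before applying a conventional kernel:

```python
import math

def embed_angle(theta):
    """Map an angle to the unit circle, so 0 and 2*pi become the same point."""
    return (math.cos(theta), math.sin(theta))

# Naive real-valued distance treats these near-identical directions as distant...
naive = abs(0.1 - (2 * math.pi - 0.1))
# ...while the circle embedding exposes their true proximity.
circular = math.dist(embed_angle(0.1), embed_angle(2 * math.pi - 0.1))
```

Any off-the-shelf SVM can then be trained on the embedded `(cos θ, sin θ)` pairs; the paper's contribution goes beyond this by building directionality into the models themselves.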
2019
Authors
Araujo, RJ; Fernandes, K; Cardoso, JS;
Publication
IEEE TRANSACTIONS ON IMAGE PROCESSING
Abstract
Active contour models are among the most emblematic algorithms of computer vision. Their strong theoretical foundations and high user interactivity turned them into a reference approach for object segmentation and tracking tasks. Many modifications have already been proposed in order to overcome the known problems of traditional snakes, such as initialization dependence and poor convergence to concavities. In this paper, we address the scenario where the user wants to segment an object that has multiple dynamic regions but some of them do not correspond to the true object boundary. We propose a novel parametric active contour model, the Sparse Multi-Bending snake, which is capable of dividing the contour into a set of contiguous regions with different bending properties. We derive a new energy function that induces such behavior and present a group optimization strategy that can be used to find the optimal bending resistance parameter for each point of the contour. We show the flexibility of our model on a set of synthetic images. In addition, we consider two real applications, lung segmentation in Computerized Tomography data and hand segmentation in depth images. We show how the proposed method is able to improve the segmentations obtained in both applications, when compared with other active contour models.
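The per-point bending resistance the abstract refers to can be illustrated with the discrete bending (second-difference) term of a classic closed snake; this sketch is the textbook Kass-style energy with a per-point weight `beta[i]`, not the paper's full Sparse Multi-Bending formulation:

```python
def bending_energy(pts, beta):
    """Discrete bending term of a closed contour: sum of per-point
    second differences weighted by beta[i], the local bending resistance."""
    n = len(pts)
    energy = 0.0
    for i in range(n):
        px, py = pts[i - 1]            # previous point (wraps at i = 0)
        cx, cy = pts[i]                # current point
        nx, ny = pts[(i + 1) % n]      # next point (wraps at the end)
        ddx, ddy = px - 2 * cx + nx, py - 2 * cy + ny
        energy += beta[i] * (ddx * ddx + ddy * ddy)
    return energy

square = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]
stiff = bending_energy(square, [1.0] * 4)   # corners are penalized
limp = bending_energy(square, [0.0] * 4)    # zero beta: bending is free
```

Setting `beta[i] = 0` over a contiguous stretch lets the contour bend freely there while staying stiff elsewhere, which is the intuition behind region-wise bending properties.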
2019
Authors
Ferreira, PM; Cardoso, JS; Rebelo, A;
Publication
MULTIMEDIA TOOLS AND APPLICATIONS
Abstract
Sign Language Recognition (SLR) has become one of the most important research areas in the field of human-computer interaction. SLR systems are meant to automatically translate sign language into text or speech, in order to reduce the communication gap between deaf and hearing people. The aim of this paper is to exploit multimodal learning techniques for accurate SLR, making use of data provided by Kinect and Leap Motion. In this regard, single-modality approaches as well as different multimodal methods, mainly based on convolutional neural networks, are proposed. Our main contribution is a novel multimodal end-to-end neural network that explicitly models private feature representations that are specific to each modality and shared feature representations that are similar between modalities. By imposing such regularization in the learning process, the underlying idea is to increase the discriminative ability of the learned features and, hence, improve the generalization capability of the model. Experimental results demonstrate that multimodal learning yields an overall improvement in the sign recognition performance. In particular, the novel neural network architecture outperforms the current state-of-the-art methods for the SLR task.
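The abstract does not give the exact regularizers used to separate private and shared representations; two common choices in shared/private multimodal learning (e.g. domain separation networks), assumed here purely for illustration, are a similarity term pulling the modalities' shared codes together and an orthogonality term decorrelating each modality's private code from its shared code:

```python
def similarity_loss(shared_a, shared_b):
    """Mean squared distance between the two modalities' shared codes;
    minimizing it pushes the shared representations to agree."""
    return sum((x - y) ** 2 for x, y in zip(shared_a, shared_b)) / len(shared_a)

def orthogonality_loss(private, shared):
    """Squared dot product of private and shared codes; minimizing it
    encourages the private code to capture modality-specific information."""
    return sum(p * s for p, s in zip(private, shared)) ** 2

# Identical shared codes incur no similarity penalty...
sim = similarity_loss([1.0, 2.0], [1.0, 2.0])
# ...and orthogonal private/shared codes incur no orthogonality penalty.
orth = orthogonality_loss([1.0, 0.0], [0.0, 1.0])
```

In training, these terms would be added to the classification loss with small weights; the Kinect/Leap Motion encoders themselves are not sketched here.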
2019
Authors
Carvalho, DV; Pereira, EM; Cardoso, JS;
Publication
ELECTRONICS
Abstract
Machine learning systems are becoming increasingly ubiquitous. The adoption of these systems has been expanding, accelerating the shift towards a more algorithmic society, meaning that algorithmically informed decisions have greater potential for significant social impact. However, most of these accurate decision support systems remain complex black boxes, meaning their internal logic and inner workings are hidden from the user, and even experts cannot fully understand the rationale behind their predictions. Moreover, new regulations and highly regulated domains have made the audit and verifiability of decisions mandatory, increasing the demand for the ability to question, understand, and trust machine learning systems, for which interpretability is indispensable. The research community has recognized this interpretability problem and focused on developing both interpretable models and explanation methods over the past few years. However, the emergence of these methods shows there is no consensus on how to assess the explanation quality. Which are the most suitable metrics to assess the quality of an explanation? The aim of this article is to provide a review of the current state of the research field of machine learning interpretability, focusing on the societal impact and on the developed methods and metrics. Furthermore, a complete literature review is presented in order to identify future directions of work in this field.
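As one concrete instance of the model-agnostic explanation methods such reviews cover, permutation importance asks how much a black box's accuracy drops when one feature's values are shuffled; a minimal sketch (the helper name and toy model are illustrative, not taken from the article):

```python
import random

def permutation_importance(model, X, y, feature, n_repeats=10, seed=0):
    """Average accuracy drop when one feature column is shuffled:
    a large drop means the model relies on that feature."""
    rng = random.Random(seed)
    def acc(rows):
        return sum(model(r) == t for r, t in zip(rows, y)) / len(y)
    base = acc(X)
    drops = []
    for _ in range(n_repeats):
        col = [r[feature] for r in X]
        rng.shuffle(col)
        shuffled = [list(r) for r in X]       # copy rows before mutating
        for r, v in zip(shuffled, col):
            r[feature] = v
        drops.append(base - acc(shuffled))
    return sum(drops) / n_repeats

# Toy black box that only ever looks at feature 0.
model = lambda row: int(row[0] > 0.5)
X = [[0.0, 9.0], [1.0, 9.0]] * 10
y = [model(r) for r in X]
imp0 = permutation_importance(model, X, y, feature=0)  # relied upon
imp1 = permutation_importance(model, X, y, feature=1)  # ignored
```

The ignored feature scores zero importance, which is exactly the kind of faithfulness property that proposed explanation-quality metrics try to formalize.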
2018
Authors
Rosado, L; Silva, PT; Faria, J; Oliveira, J; Vasconcelos, MJM; Elias, D; da Costa, JMC; Cardoso, JS;
Publication
BIOMEDICAL ENGINEERING SYSTEMS AND TECHNOLOGIES (BIOSTEC 2017)
Abstract
Microscopic examination is the reference diagnostic method for several neglected tropical diseases. However, its quality and availability in rural endemic areas is often limited by the lack of trained personnel and adequate equipment. These drawbacks are closely related to the increasing interest in the development of computer-aided diagnosis systems, particularly distributed solutions that provide access to complex diagnosis in rural areas. In this work we present our most recent advances towards the development of a fully automated 3D-printed smartphone microscope with a motorized stage, termed µSmartScope. The developed prototype allows autonomous acquisition of a pre-defined number of images at 1000x magnification, by using a motorized automated stage fully powered and controlled by a smartphone, without the need for manual focus. In order to validate the prototype as a reliable alternative to conventional microscopy, we evaluated the µSmartScope performance in terms of: resolution; field of view; illumination; motorized stage performance (mechanical movement precision/resolution and power consumption); and automated focus. These results showed similar performance when compared with conventional microscopy, plus the advantage of being low-cost and easy to use, even for non-experts in microscopy. To extract these results, smears infected with blood parasites responsible for the most relevant neglected tropical diseases were used. The acquired images showed that it was possible to detect those agents through images acquired via the µSmartScope, which clearly illustrates the huge potential of this device, especially in developing countries with limited access to healthcare services.
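The abstract mentions automated focus but not the criterion used; a common passive autofocus strategy, assumed here only as an illustration, scores each candidate stage position by image sharpness (e.g. the variance of a discrete Laplacian) and moves to the position that maximizes it:

```python
def focus_measure(img):
    """Variance of a discrete Laplacian over the image interior:
    sharper (in-focus) images have stronger local intensity changes,
    hence a larger score."""
    h, w = len(img), len(img[0])
    vals = []
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            lap = (img[i - 1][j] + img[i + 1][j]
                   + img[i][j - 1] + img[i][j + 1]
                   - 4 * img[i][j])
            vals.append(lap)
    mean = sum(vals) / len(vals)
    return sum((v - mean) ** 2 for v in vals) / len(vals)

flat = [[5] * 5 for _ in range(5)]                               # defocused: uniform
sharp = [[(i + j) % 2 * 10 for j in range(5)] for i in range(5)] # high local contrast
```

An autofocus loop would evaluate `focus_measure` at several stage heights and keep the maximum; the µSmartScope's actual focusing routine is not described in the abstract.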