
Publications by CRIIS

2020

Voice-Based Classification of Amyotrophic Lateral Sclerosis: Where Are We and Where Are We Going? A Systematic Review

Authors
Vieira, H; Costa, N; Sousa, T; Reis, S; Coelho, L;

Publication
NEURODEGENERATIVE DISEASES

Abstract
Background: Amyotrophic lateral sclerosis (ALS) is a fatal progressive motor neuron disease. People with ALS exhibit various speech problems.
Summary: We aim to provide an overview of studies concerning the diagnosis of ALS based on the analysis of voice samples. The main focus is on the feasibility of voice and speech assessment as an effective method to diagnose the disease, in both clinical and pre-clinical conditions, and to monitor its progression. Specifically, we examine current knowledge on: (a) the voice parameters and data models that can most effectively provide robust results; (b) the feasibility and outcomes of semi-automatic or automatic diagnosis; and (c) the factors that can improve or restrict the use of such systems in a real-world context.
Key Messages: Studies on diagnosing ALS from the voice signal are still sparse, but all point to the importance, feasibility and simplicity of this approach. Most cohorts are small, which limits statistical relevance and makes it difficult to infer broader conclusions. The set of features used, although diverse, is quite circumscribed. ALS is difficult to diagnose early because it may mimic several other neurological diseases. Promising results were found for the automatic detection of ALS from speech samples, and this can be feasible even in pre-symptomatic stages. Improved guidelines must be set in order to establish a robust decision model.
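The review's premise, that acoustic measurements feed a decision model, can be made concrete with a toy pipeline. The sketch below is a minimal illustration, not the method of any reviewed study: it assumes WAV recordings under data/als/ and data/control/ (hypothetical paths), extracts a small, commonly used feature set (mean MFCCs plus zero-crossing rate) with librosa, and trains a scikit-learn SVM.

```python
# Minimal, illustrative ALS-vs-control voice classifier (hypothetical
# pipeline, not that of any study in the review). Assumes WAV files laid
# out as data/als/*.wav and data/control/*.wav.
import glob

import librosa
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC


def voice_features(path):
    """Mean MFCCs plus zero-crossing rate: a small, common feature set."""
    y, sr = librosa.load(path, sr=16000)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)
    zcr = librosa.feature.zero_crossing_rate(y)
    return np.concatenate([mfcc.mean(axis=1), [zcr.mean()]])


X, y = [], []
for label, pattern in enumerate(["data/control/*.wav", "data/als/*.wav"]):
    for path in glob.glob(pattern):
        X.append(voice_features(path))
        y.append(label)

X_train, X_test, y_train, y_test = train_test_split(
    np.array(X), np.array(y), test_size=0.3, stratify=y, random_state=0)

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
clf.fit(X_train, y_train)
print(f"held-out accuracy: {clf.score(X_test, y_test):.2f}")
```

The small-cohort caveat raised in the abstract applies directly to such a pipeline: with few speakers, a held-out accuracy figure carries little statistical weight.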

2019

Parallelization of a Vine Trunk Detection Algorithm For a Real Time Robot Localization System

Authors
Azevedo, F; Shinde, P; Santos, L; Mendes, J; Santos, FN; Mendonca, H;

Publication
2019 19TH IEEE INTERNATIONAL CONFERENCE ON AUTONOMOUS ROBOT SYSTEMS AND COMPETITIONS (ICARSC 2019)

Abstract
Developing ground robots for crop monitoring and harvesting in steep slope vineyards is a complex challenge due to two main reasons: the harsh conditions of the terrain and the unstable localization accuracy obtained with the Global Navigation Satellite System (GNSS). In this context, a reliable localization system requires an accurate detector for a high density of natural/artificial features. In previous works, we presented a novel visual detector for Vineyards Trunks and Masts (ViTruDe) with high levels of detection accuracy. However, its implementation on the most common processing units, central processing units (CPUs), using a standard programming language (C/C++), is unable to reach the processing efficiency requirements for real-time operation. In this work, we explored the parallelization capabilities of processing units such as graphics processing units (GPUs) in order to accelerate the processing time of ViTruDe. This work gives a general perspective on how to parallelize a generic problem in a GPU-based solution, while exploring its efficiency when applied to the problem at hand. The ViTruDe detector for GPU was developed considering the constraints of a cost-effective robot that carries out crop monitoring tasks in steep slope vineyard environments. We compared the proposed ViTruDe implementation on GPU, using the Compute Unified Device Architecture (CUDA), against its CPU counterpart, and the achieved solution is over eighty times faster. The training and test data are made public for future research work. This approach is a contribution toward an accurate and reliable localization system that is GNSS-free.
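The abstract describes mapping a per-window detection loop onto many GPU threads. As a generic illustration of that pattern (not the ViTruDe detector itself), the sketch below uses Numba's CUDA backend to score every sliding-window patch of a frame in parallel, one thread per window position; the mean-intensity score, frame size and window size are placeholder assumptions, and a CUDA-capable GPU is required.

```python
# Generic GPU parallelization sketch in the spirit of the paper (NOT the
# ViTruDe detector): each CUDA thread scores one sliding-window patch, so
# all windows are evaluated concurrently instead of in a serial CPU loop.
import numpy as np
from numba import cuda


@cuda.jit
def score_windows(gray, win, scores):
    """One thread per window position; mean intensity as a stand-in score."""
    x, y = cuda.grid(2)
    h, w = gray.shape
    if x + win <= h and y + win <= w:
        acc = 0.0
        for i in range(win):
            for j in range(win):
                acc += gray[x + i, y + j]
        scores[x, y] = acc / (win * win)


gray = np.random.rand(480, 640).astype(np.float32)  # stand-in camera frame
win = 16

d_gray = cuda.to_device(gray)
d_scores = cuda.to_device(np.zeros_like(gray))
threads = (16, 16)
blocks = ((gray.shape[0] + 15) // 16, (gray.shape[1] + 15) // 16)
score_windows[blocks, threads](d_gray, win, d_scores)
print(d_scores.copy_to_host().max())
```

The speedup comes from the same property the paper exploits: window evaluations are independent, so they parallelize with no synchronization beyond the final read-back.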

2019

System-level study on impulse-radio integration-and-fire (IRIF) transceiver

Authors
Kianpour, I; Hussain, B; Mendonca, HS; Tavares, VG;

Publication
AEU-INTERNATIONAL JOURNAL OF ELECTRONICS AND COMMUNICATIONS

Abstract
The integrate-and-fire neuron (IFN) model of a biological neuron is an amplitude-to-time conversion technique that encodes information in the time spacing between action potentials (spikes). In principle, this encoding scheme can be used to modulate signals in an impulse radio ultra-wideband (IR-UWB) transmitter, making it suitable for low-power applications such as wireless sensor networks (WSN) and biomedical monitoring. This paper proposes an architecture based on the IFN encoding method applied to a UWB transceiver scenario, referred to herein as the impulse-radio integrate-and-fire (IRIF) transceiver, followed by a system-level study to attest to its effectiveness. The transmitter is composed of an integrate-and-fire modulator, a digital controller and memory block, followed by a UWB pulse generator and filter. At the receiver side, a low-noise amplifier, a squarer, a low-pass filter and a comparator form an energy-detection receiver. A processor reconstructs the original signal at the receiver, and the quality of the synthesized signal is then verified in terms of the effective number of bits (ENOB). Finally, a link budget analysis is performed. © 2019 Published by Elsevier GmbH.
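The encoding principle is compact enough to state in a few lines: integrate the input amplitude, fire a spike when a threshold is crossed, reset, and let the inter-spike intervals carry the information. The sketch below is a textbook integrate-and-fire encoder with a crude interval-based decoder, not the paper's IRIF transceiver; the sampling rate, threshold and test signal are arbitrary assumptions.

```python
# Textbook integrate-and-fire (IFN) amplitude-to-time encoding (an
# illustration of the principle, not the paper's IRIF transceiver).
# Larger amplitudes fill the integrator faster, so spikes arrive closer
# together: the information ends up in the inter-spike intervals.
import numpy as np

fs = 10_000                                   # sampling rate (Hz), arbitrary
t = np.arange(0, 0.05, 1 / fs)
signal = 1.5 + np.sin(2 * np.pi * 200 * t)    # positive test signal

theta = 0.004                                 # integrator threshold, arbitrary
acc, spike_times = 0.0, []
for ti, x in zip(t, signal):
    acc += x / fs                             # integrate the amplitude
    if acc >= theta:                          # threshold crossed: fire, reset
        spike_times.append(ti)
        acc -= theta

# Crude decoder: the mean amplitude between spikes is theta / interval.
intervals = np.diff(spike_times)
reconstructed = theta / intervals
print(f"{len(spike_times)} spikes; first estimates: {reconstructed[:4]}")
```

In the paper's setting each spike would trigger a UWB pulse, and the receiver-side processor would run the interval-to-amplitude step after energy detection; the ENOB metric then quantifies how faithful that reconstruction is.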

2019

Multi-Protocol LoRaWAN/Wi-Fi Sensor Node Performance Assessment for Industry 4.0 Energy Monitoring

Authors
Ferreira, P; Miranda, RN; Cruz, PM; Mendonca, HS;

Publication
Proceedings of the 2019 9th IEEE-APS Topical Conference on Antennas and Propagation in Wireless Communications, APWC 2019

Abstract
This paper describes the implementation of an end-to-end Internet of Things (IoT) solution, focusing specifically on a multi-protocol sensor node with LoRaWAN and Wi-Fi connectivity options (Pycom's FiPy). A performance assessment is presented, comparing the different protocols (LoRaWAN vs. Wi-Fi) in terms of radio coverage and timing issues, among other aspects. Furthermore, the integration of sensor/actuator circuit blocks for energy metering onto the sensor node, supported by Microchip's ATM90E26 single-phase meter, is investigated. This provides a practical use case in the field of Industry 4.0, leading to preliminary insights into power quality monitoring. © 2019 IEEE.
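One concrete timing difference between the two radios is LoRa's long time-on-air, which grows steeply with the spreading factor. The sketch below evaluates the standard Semtech LoRa modem airtime formula (SX127x data sheet); the payload size and radio settings are illustrative assumptions, not measurements from the paper.

```python
# LoRa time-on-air from the standard Semtech modem formula. The example
# settings (SF7..SF12, 125 kHz, CR 4/5, 20-byte payload) are illustrative
# assumptions, not values measured in the paper.
from math import ceil


def lora_time_on_air(payload_len, sf, bw=125_000, cr=1,
                     preamble=8, crc=True, explicit_header=True):
    """Return airtime in seconds for one LoRa uplink."""
    t_sym = (2 ** sf) / bw
    de = 1 if (bw == 125_000 and sf >= 11) else 0  # low-data-rate optimization
    ih = 0 if explicit_header else 1
    num = 8 * payload_len - 4 * sf + 28 + 16 * int(crc) - 20 * ih
    n_payload = 8 + max(ceil(num / (4 * (sf - 2 * de))) * (cr + 4), 0)
    return (preamble + 4.25) * t_sym + n_payload * t_sym


for sf in range(7, 13):
    print(f"SF{sf}: {1000 * lora_time_on_air(20, sf):7.1f} ms")
```

At these settings a 20-byte uplink occupies the channel for roughly 57 ms at SF7 and well over a second at SF12, whereas a comparable Wi-Fi frame is on the air for orders of magnitude less time, which is the kind of trade-off such a performance assessment has to weigh against LoRaWAN's far greater range and lower power draw.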

2019

Collaborative Welding System using BIM for Robotic Reprogramming and Spatial Augmented Reality

Authors
Tavares, P; Costa, CM; Rocha, L; Malaca, P; Costa, P; Moreira, AP; Sousa, A; Veiga, G;

Publication
AUTOMATION IN CONSTRUCTION

Abstract
Optimizing the information flow from the initial design through the several production stages plays a critical role in ensuring product quality while also reducing manufacturing costs. As such, in this article we present a cooperative welding cell for structural steel fabrication that is capable of leveraging the Building Information Modeling (BIM) standards to automatically orchestrate the necessary tasks to be allocated to a human operator and a welding robot moving on a linear track. We propose a spatial augmented reality system that projects alignment information into the environment, helping the operator tack weld the beam attachments that will later be seam welded by the industrial robot. This ensures maximum flexibility during the beam assembly stage while also improving overall productivity and product quality, since the operator no longer relies on error-prone measurement procedures and receives tasks through an immersive interface, relieved of the burden of analyzing complex manufacturing design specifications. Moreover, no expert robotics knowledge is required to operate our welding cell, because all the necessary information, namely the CAD models and welding sections, is extracted from the Industry Foundation Classes (IFC). This allows our 3D beam perception systems to correct placement errors or beam bending, which, coupled with our motion planning and welding pose optimization system, ensures that the robot performs its tasks without collisions, as efficiently as possible, and with maximum welding quality.
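As a hint of what extracting CAD models from IFC can look like in practice, the sketch below reads an IFC file with the open-source ifcopenshell library and tessellates each beam's geometry. The file name, and the idea of deriving one weld task per beam, are illustrative assumptions; the paper's actual extraction pipeline is not shown here.

```python
# Hypothetical glimpse of IFC-driven task generation (the file name and the
# one-task-per-beam framing are illustrative assumptions). Uses the
# open-source ifcopenshell library to read BIM data.
import ifcopenshell
import ifcopenshell.geom

model = ifcopenshell.open("steel_structure.ifc")   # assumed input file

settings = ifcopenshell.geom.settings()            # tessellation settings
for beam in model.by_type("IfcBeam"):
    shape = ifcopenshell.geom.create_shape(settings, beam)
    verts = shape.geometry.verts                   # flat [x0, y0, z0, x1, ...]
    n_pts = len(verts) // 3
    print(f"beam {beam.GlobalId} ({beam.Name}): {n_pts} mesh vertices")
    # A real cell would derive weld seams and robot poses from this geometry
    # together with the welding sections stored alongside it in the model.
```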

2019

Modeling of video projectors in OpenGL for implementing a spatial augmented reality teaching system for assembly operations

Authors
Costa, CM; Veiga, G; Sousa, A; Rocha, L; Augusto Sousa, AA; Rodrigues, R; Thomas, U;

Publication
2019 19TH IEEE INTERNATIONAL CONFERENCE ON AUTONOMOUS ROBOT SYSTEMS AND COMPETITIONS (ICARSC 2019)

Abstract
Teaching complex assembly and maintenance skills to human operators usually requires extensive reading and the help of tutors. In order to reduce the training period and avoid the need for human supervision, an immersive teaching system using spatial augmented reality was developed for guiding inexperienced operators. The system provides textual and video instructions for each task, while also allowing the operator to navigate between the teaching steps and control the video playback through a bare-hands natural-interaction interface that is projected into the workspace. Moreover, to help the operator during the final validation and inspection phase, the system projects the expected 3D outline of the final product. The proposed teaching system was tested with the assembly of a starter motor and proved to be more intuitive than the traditional user manuals. This proof-of-concept use case served to validate the fundamental technologies and approaches proposed to achieve an intuitive and accurate augmented reality teaching application. Among the main challenges were the proper modeling and calibration of the sensing and projection hardware, along with the 6 DoF pose estimation of objects, for achieving precise overlap between the 3D rendered content and the physical world. Equally critical was the conceptualization of the information flow and how it can be conveyed on demand, for ensuring a smooth and intuitive experience for the operator.
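The projector modeling named in the title amounts to treating the projector as an inverse camera: its calibrated pinhole intrinsics are converted into an OpenGL projection matrix so that rendered content lands on the intended physical pixels. The sketch below shows the standard intrinsics-to-OpenGL conversion; the numeric calibration values are placeholders, and the signs of the principal-point terms depend on the image-origin convention of the calibration.

```python
# Standard recipe for turning calibrated projector intrinsics (pinhole
# model: fx, fy, cx, cy) into a 4x4 OpenGL projection matrix, so the
# projector can be driven as an "inverse camera". Numbers are placeholders;
# the signs of the cx/cy terms depend on whether the calibration's image
# origin is top-left or bottom-left.
import numpy as np


def opengl_projection(fx, fy, cx, cy, width, height, near, far):
    return np.array([
        [2 * fx / width, 0.0, 1.0 - 2 * cx / width, 0.0],
        [0.0, 2 * fy / height, 2 * cy / height - 1.0, 0.0],
        [0.0, 0.0, -(far + near) / (far - near),
         -2 * far * near / (far - near)],
        [0.0, 0.0, -1.0, 0.0],
    ])


# Placeholder 1080p projector calibration (illustrative values only).
P = opengl_projection(fx=2200.0, fy=2200.0, cx=960.0, cy=540.0,
                      width=1920, height=1080, near=0.1, far=10.0)
point = np.array([0.0, 0.0, -2.0, 1.0])   # a point 2 m in front
clip = P @ point
print(clip[:3] / clip[3])                 # normalized device coordinates
```

Combined with the projector's calibrated 6 DoF pose as the OpenGL view matrix, this is what lets the rendered outline overlap the physical product, which is precisely the alignment challenge the abstract highlights.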
