2023
Authors
Romero, A; Carvalho, P; Corte-Real, L; Pereira, A;
Publication
JOURNAL OF IMAGING
Abstract
Gathering sufficiently representative data, such as data about human actions, shapes, and facial expressions, is costly and time-consuming, yet such data are required to train robust models. This has led to techniques such as transfer learning and data augmentation, which are often insufficient. To address this, we propose a semi-automated mechanism for generating and editing visual scenes with synthetic humans performing various actions, with features such as background modification and manual adjustment of the 3D avatars, allowing users to create data with greater variability. We also propose a two-fold evaluation methodology for assessing the results obtained with our method: (i) running an action classifier on the output data produced by the mechanism and (ii) generating masks of the avatars and the actors and comparing them through segmentation. The avatars were robust to occlusion, and their actions were recognizable and faithful to their respective input actors. The results also showed that, even though the action classifier concentrates on the pose and movement of the synthetic humans, it strongly depends on contextual information to recognize the actions precisely. Generating avatars for complex activities also proved problematic, both for action recognition and for the clean and precise formation of the masks.
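The mask-based part of the evaluation boils down to comparing an avatar's segmentation mask against the corresponding actor's. A minimal sketch of such a comparison using Intersection-over-Union (IoU) follows; the function name and the toy masks are illustrative, not taken from the paper:

```python
import numpy as np

def mask_iou(mask_a: np.ndarray, mask_b: np.ndarray) -> float:
    """Intersection-over-Union between two binary segmentation masks."""
    a = mask_a.astype(bool)
    b = mask_b.astype(bool)
    union = np.logical_or(a, b).sum()
    if union == 0:
        return 1.0  # both masks empty: treat as a perfect match
    intersection = np.logical_and(a, b).sum()
    return intersection / union

# Toy 4x4 masks: the "avatar" overlaps the "actor" in 2 of its 4 foreground pixels
actor = np.zeros((4, 4), dtype=bool)
actor[1:3, 1:3] = True           # 4 foreground pixels
avatar = np.zeros((4, 4), dtype=bool)
avatar[2:4, 1:3] = True          # 4 foreground pixels, 2 shared with the actor
print(mask_iou(actor, avatar))   # 2 / 6 ≈ 0.333
```

A higher IoU between avatar and actor masks indicates the synthetic human occupies the scene more faithfully; occlusions and complex activities would show up as lower scores.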
2023
Authors
Magalhaes, SC; dos Santos, FN; Machado, P; Moreira, AP; Dias, J;
Publication
ENGINEERING APPLICATIONS OF ARTIFICIAL INTELLIGENCE
Abstract
Purpose: Visual perception enables robots to perceive their environment. Visual data are processed using computer vision algorithms that are usually computationally expensive and require powerful devices to run in real time, which is unfeasible for open-field robots with limited energy. This work benchmarks the real-time object detection performance of three heterogeneous architectures: embedded GPUs (Graphics Processing Units, such as the NVIDIA Jetson Nano 2 GB and 4 GB, and the NVIDIA Jetson TX2), a TPU (Tensor Processing Unit, as in the Coral Dev Board), and DPUs (Deep Learning Processor Units, as in the AMD-Xilinx ZCU104 Development Board and the AMD-Xilinx Kria KV260 Starter Kit). Methods: The authors used RetinaNet with a ResNet-50 backbone, fine-tuned on the natural VineSet dataset. Afterwards, the trained model was converted and compiled into target-specific hardware formats to improve execution efficiency. Conclusions and Results: The platforms were assessed in terms of evaluation metrics and efficiency (inference time). The GPUs were the slowest devices, running at 3 FPS to 5 FPS, and the Field Programmable Gate Arrays (FPGAs) were the fastest, running at 14 FPS to 25 FPS. The efficiency of the TPU was similar to that of the NVIDIA Jetson TX2. The TPU and the GPUs were the most power-efficient, consuming about 5 W. The differences in evaluation metrics across devices were negligible, with an F1 score of about 70% and a mean Average Precision (mAP) of about 60%.
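The efficiency side of such a benchmark reduces to timing inference over a sequence of frames and reporting frames per second. A minimal, hypothetical sketch (the `benchmark_fps` helper and the fixed-delay stand-in model are illustrative, not the authors' code):

```python
import time

def benchmark_fps(infer, frames, warmup: int = 2) -> float:
    """Average frames per second of an inference callable over a frame list."""
    for f in frames[:warmup]:       # warm-up runs, excluded from timing
        infer(f)
    start = time.perf_counter()
    for f in frames:
        infer(f)
    elapsed = time.perf_counter() - start
    return len(frames) / elapsed

# Stand-in "model": a fixed 10 ms delay per frame, i.e. a bit under 100 FPS
fps = benchmark_fps(lambda f: time.sleep(0.010), frames=list(range(20)))
print(f"{fps:.1f} FPS")
```

Real benchmarks would additionally separate pre/post-processing from the accelerator's inference call and measure power draw alongside timing.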
2023
Authors
Fonseca, SM; Cunha, S; Silva, M; Ramos, M; Azevedo, G; Campos, R; Faria, S; Queirós, C;
Publication
PSICOLOGIA
Abstract
Medical rescuers are on the frontline of COVID-19, and their psychological experience and health are major concerns for our society and healthcare system. This study aims to understand how medical rescuers psychologically experienced this pandemic and to explore the variables contributing to COVID-19 anxiety. Portuguese medical rescuers (n = 203) answered questions about their COVID-19 experience, the COVID-19 Anxiety Scale, the Patient Health Questionnaire, the Perceived Stress Scale, the Obsessive-Compulsive Inventory, and the Well-Being Questionnaire. Rescuers presented low COVID-19 anxiety and low-to-moderate levels of fear. Most had already faced, or were facing, changes in their job-related tasks, had not changed households, and did not feel stigma/discrimination. COVID-19 workplace security measures were considered moderately adequate, and low anxiety, depression, and obsessive-compulsive symptoms, low-to-moderate stress, and moderate well-being were found. Only COVID-19 fear and security measures, anxiety, depression, and obsessive-compulsive symptoms explained COVID-19 anxiety. Overall, the findings showed that these rescuers were psychologically well adjusted during the pandemic's initial stages. © 2023 Associação Portuguesa de Psicologia. All rights reserved.
2023
Authors
Queirós, R; Ferreira, L; Fontes, H; Campos, R;
Publication
SimuTools
Abstract
The increasing complexity of recent Wi-Fi amendments is making traditional algorithms and heuristics unfeasible for addressing the Rate Adaptation (RA) problem, due to the large number of configuration parameter combinations and the high variability of the wireless channel. Recently, several works have proposed using Reinforcement Learning (RL) techniques to address the problem. However, the proposed solutions lack sufficient technical explanation, and the absence of standard frameworks enabling the reproducibility of results, together with the limited availability of source code, makes fair comparison with state-of-the-art approaches a challenge. This paper proposes RateRL, a framework that integrates state-of-the-art libraries with the well-known Network Simulator 3 (ns-3) to enable the implementation and evaluation of RL-based RA algorithms. To the best of our knowledge, RateRL is the first tool available to assist researchers during the implementation, validation, and evaluation phases of RL-based RA algorithms and to enable fair comparison between competing algorithms.
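RateRL itself couples RL libraries with ns-3 and is not reproduced here. As a hedged illustration of the kind of RL-based RA algorithm such a framework hosts, the sketch below trains a tabular Q-learning agent to pick an MCS index from a quantized SNR state under a toy channel model; all states, rewards, and hyperparameters are assumptions:

```python
import random

# State: quantized SNR level; action: MCS index; reward: simulated throughput.
SNR_LEVELS, MCS_COUNT = 4, 4
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.2

Q = [[0.0] * MCS_COUNT for _ in range(SNR_LEVELS)]

def simulated_throughput(snr: int, mcs: int) -> float:
    # Toy channel model: a higher MCS pays off only when the SNR supports it
    return float(mcs + 1) if mcs <= snr else 0.0

random.seed(0)
state = random.randrange(SNR_LEVELS)
for _ in range(20000):
    # epsilon-greedy action selection over MCS indices
    if random.random() < EPSILON:
        action = random.randrange(MCS_COUNT)
    else:
        action = max(range(MCS_COUNT), key=lambda a: Q[state][a])
    reward = simulated_throughput(state, action)
    next_state = random.randrange(SNR_LEVELS)  # channel varies independently
    # Standard Q-learning update
    Q[state][action] += ALPHA * (reward + GAMMA * max(Q[next_state]) - Q[state][action])
    state = next_state

# The learned greedy policy should pick the highest MCS each SNR level supports
policy = [max(range(MCS_COUNT), key=lambda a: Q[s][a]) for s in range(SNR_LEVELS)]
print(policy)  # should converge to [0, 1, 2, 3]
```

A framework like RateRL replaces the toy channel model with ns-3's Wi-Fi simulation and feeds real per-transmission feedback to the agent.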
2023
Authors
Loureiro, JP; Teixeira, FB; Campos, R;
Publication
2023 IEEE 9TH WORLD FORUM ON INTERNET OF THINGS, WF-IOT
Abstract
The exploration of the ocean has attracted increasing interest, including activities such as offshore wind farms and deep-sea mining. However, the ocean environment and the high cost of operations, namely for manned missions, have led to the development of Autonomous Underwater Vehicles (AUVs) and other sensing platforms. AUVs play a vital role in these environments, relying on communications systems to operate and exchange sensor data. Yet, reliable and energy-efficient broadband wireless communications underwater remain an unsolved challenge, despite recent advances in the field. We present a novel multimodal approach, named DURIUS, that takes the movement of the AUV into account to convey the sensor data and selects the most suitable underwater wireless communications technology - acoustic, optical, or radio - according to the underwater context, targeting maximum performance and minimum energy consumption. Our analytical results show that DURIUS increases data throughput and reduces energy consumption when compared with state-of-the-art approaches.
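As a rough illustration of multimodal technology selection (not the DURIUS algorithm itself), the sketch below picks, among the technologies whose range covers the link distance, the one with the best estimated bits per joule; the range, rate, and power figures are order-of-magnitude assumptions, not measured values:

```python
TECHNOLOGIES = {
    # name: (max range in m, data rate in bit/s, transmit power in W) -- assumed
    "acoustic": (2000.0, 10e3, 10.0),
    "radio":    (10.0,   1e6,  2.0),
    "optical":  (50.0,   10e6, 5.0),
}

def select_technology(distance_m):
    """Among technologies that reach distance_m, pick the best bits-per-joule."""
    feasible = {name: rate / power
                for name, (rng, rate, power) in TECHNOLOGIES.items()
                if rng >= distance_m}
    if not feasible:
        return None  # no technology covers this distance
    return max(feasible, key=feasible.get)

print(select_technology(5.0))    # short range: optical wins on bits per joule
print(select_technology(100.0))  # only acoustic reaches this far
```

A context-aware scheme like DURIUS would additionally factor in the AUV's movement, water conditions, and the option of physically carrying data, rather than a static lookup table.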
2023
Authors
Pantaleão, G; Queirós, R; Fontes, H; Campos, R;
Publication
SimuTools
Abstract
With growing connectivity demands, Unmanned Aerial Vehicles (UAVs) have emerged as a prominent component in the deployment of Next Generation On-demand Wireless Networks. However, current UAV positioning solutions typically neglect the impact of Rate Adaptation (RA) algorithms or simplify their effect by considering ideal, non-implementable RA algorithms. This work proposes the Rate Adaptation aware RL-based Flying Gateway Positioning (RARL) algorithm, a positioning method for Flying Gateways that applies Deep Q-Learning, accounting for the dynamic data rate imposed by the underlying RA algorithm. The RARL algorithm aims to maximize the throughput of the flying wireless links serving one or more Flying Access Points, which in turn serve ground terminals. The performance evaluation of the RARL algorithm shows that it is capable of taking into account the effect of the underlying RA algorithm and of achieving the maximum throughput in all analysed static and mobile scenarios.
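The positioning objective can be illustrated with a toy model in which the Flying Gateway's achievable rate toward each Flying Access Point falls in discrete steps with distance, mimicking an RA algorithm, and the gateway position should maximize the bottleneck (minimum) link rate. All positions, thresholds, and rates below are hypothetical, and an exhaustive search stands in for Deep Q-Learning:

```python
import math

FAPS = [(0.0, 0.0), (100.0, 0.0)]  # Flying Access Point positions (m), assumed

def ra_rate(distance_m):
    """Discrete rate steps vs. distance, as an RA algorithm would select."""
    if distance_m <= 30: return 54.0   # Mbit/s
    if distance_m <= 60: return 24.0
    if distance_m <= 90: return 6.0
    return 0.0

def bottleneck_rate(gw):
    """Minimum link rate from gateway position gw to all FAPs."""
    return min(ra_rate(math.dist(gw, fap)) for fap in FAPS)

# Exhaustive search over candidate gateway positions on the line between FAPs
best = max(((x, 0.0) for x in range(0, 101, 10)), key=bottleneck_rate)
print(best, bottleneck_rate(best))
```

Because the rate is a step function of distance, a whole band of central positions is optimal here; an RA-aware learner exploits exactly this structure instead of assuming rate decays smoothly with distance.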