
Publications by CRIIS

2023

Stereo Based 3D Perception for Obstacle Avoidance in Autonomous Wheelchair Navigation

Authors
Gomes, B; Torres, J; Sobral, P; Sousa, A; Reis, LP;

Publication
ROBOT2022: FIFTH IBERIAN ROBOTICS CONFERENCE: ADVANCES IN ROBOTICS, VOL 1

Abstract
In recent years, scientific and technological advances in robotics have enabled the development of disruptive solutions for human interaction with the real world. In particular, the application of robotics to support people with physical disabilities has improved their quality of life, with a high social impact. This paper presents a stereo-image-based perception solution for autonomous wheelchair navigation. It was developed to extend the Intellwheels project, a development platform for intelligent wheelchairs. The current version of Intellwheels relies on a planar scanning sensor, a Laser Range Finder (LRF), to detect surrounding obstacles. Robust navigation requires the robot to perceive not only obstacles but also bumps and holes in the ground. The proposed stereo-based solution, supported by passive stereo ZED cameras, was evaluated in a 3D simulated world designed with a challenging floor. The performance of the wheelchair navigation was compared across three configurations: first using an LRF sensor, next an unfiltered stereo camera, and finally a stereo camera with a speckle filter. The LRF solution was unable to complete the planned navigation. The unfiltered stereo camera completed the challenge but with low navigation quality due to noise. The filtered stereo camera reached the target position with a nearly optimal path.
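The speckle filtering used in the third configuration can be illustrated with a minimal sketch: small isolated blobs in the disparity map, which typically correspond to stereo-matching noise, are detected as connected components and suppressed, similar in spirit to OpenCV's `cv2.filterSpeckles`. The function below is an illustrative assumption, not the paper's implementation.

```python
from collections import deque

def filter_speckles(disp, max_speckle_size, max_diff, new_val=0):
    """Zero out connected components of a disparity map smaller than
    max_speckle_size. Pixels are connected (4-connectivity) if their
    disparities differ by at most max_diff."""
    rows, cols = len(disp), len(disp[0])
    seen = [[False] * cols for _ in range(rows)]
    out = [row[:] for row in disp]
    for r in range(rows):
        for c in range(cols):
            if seen[r][c] or out[r][c] == new_val:
                continue
            # BFS to collect one connected component of similar disparities
            comp, queue = [(r, c)], deque([(r, c)])
            seen[r][c] = True
            while queue:
                y, x = queue.popleft()
                for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    ny, nx = y + dy, x + dx
                    if (0 <= ny < rows and 0 <= nx < cols
                            and not seen[ny][nx]
                            and out[ny][nx] != new_val
                            and abs(out[ny][nx] - out[y][x]) <= max_diff):
                        seen[ny][nx] = True
                        comp.append((ny, nx))
                        queue.append((ny, nx))
            if len(comp) < max_speckle_size:  # too small: treat as noise
                for y, x in comp:
                    out[y][x] = new_val
    return out
```

On a toy disparity map, a one-pixel speckle is removed while a large coherent surface survives, which is exactly the property that cleaned up the navigation in the filtered configuration.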

2023

ENHANCING SAMPLE EFFICIENCY FOR TEMPERATURE CONTROL IN DED WITH REINFORCEMENT LEARNING AND MOOSE FRAMEWORK

Authors
Sousa, J; Darabi, R; Sousa, A; Reis, LP; Brueckner, F; Reis, A; de Sá, JC;

Publication
PROCEEDINGS OF ASME 2023 INTERNATIONAL MECHANICAL ENGINEERING CONGRESS AND EXPOSITION, IMECE2023, VOL 3

Abstract
Directed Energy Deposition (DED) is crucial in additive manufacturing for various industries like aerospace, automotive, and biomedical. Precise temperature control is essential due to high-power lasers and dynamic environmental changes. Employing Reinforcement Learning (RL) can help with temperature control, but challenges arise from standardization and sample efficiency. In this study, a model-based Reinforcement Learning (MBRL) approach is used to train a DED model, improving control and efficiency. Computational models evaluate melt pool geometry and temporal characteristics during the process. The study employs the Allen-Cahn phase field (AC-PF) model using the Finite Element Method (FEM) with the Multi-physics Object-Oriented Simulation Environment (MOOSE). MBRL, specifically Dyna-Q+, outperforms traditional Q-learning, requiring fewer samples. Insights from this research aid in advancing RL techniques for laser metal additive manufacturing.
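Dyna-Q+ improves sample efficiency over plain Q-learning by storing a learned model of observed transitions and replaying them as extra planning updates, with an exploration bonus of kappa * sqrt(tau) for state-action pairs not tried for tau steps. A minimal tabular sketch on a toy chain MDP follows; the environment and all parameter values are invented for illustration and are unrelated to the DED/MOOSE setup.

```python
import math
import random

def dyna_q_plus(n_states=5, episodes=200, planning_steps=10,
                alpha=0.5, gamma=0.95, eps=0.1, kappa=1e-3, seed=0):
    """Tabular Dyna-Q+ on a chain: action 1 moves right, action 0 moves
    left; reaching state n_states-1 yields reward 1, other steps 0."""
    rng = random.Random(seed)
    actions = (0, 1)
    Q = {(s, a): 0.0 for s in range(n_states) for a in actions}
    model, last_tried, t = {}, {}, 0
    for _ in range(episodes):
        s = 0
        while s != n_states - 1:
            t += 1
            a = (rng.choice(actions) if rng.random() < eps
                 else max(actions, key=lambda x: Q[(s, x)]))
            s2 = min(s + 1, n_states - 1) if a else max(s - 1, 0)
            r = 1.0 if s2 == n_states - 1 else 0.0
            # direct RL update from the real transition
            Q[(s, a)] += alpha * (r + gamma * max(Q[(s2, b)] for b in actions) - Q[(s, a)])
            model[(s, a)] = (r, s2)        # deterministic learned model
            last_tried[(s, a)] = t
            # planning: replay simulated transitions with a staleness bonus
            for _ in range(planning_steps):
                (ps, pa), (pr, ps2) = rng.choice(list(model.items()))
                bonus = kappa * math.sqrt(t - last_tried[(ps, pa)])
                Q[(ps, pa)] += alpha * (pr + bonus + gamma * max(Q[(ps2, b)] for b in actions) - Q[(ps, pa)])
            s = s2
    return Q
```

The planning loop is what reduces the number of real samples needed: each real transition is reused many times, which matters when every "sample" is an expensive FEM simulation of the melt pool.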

2023

Deep Learning-Based Tree Stem Segmentation for Robotic Eucalyptus Selective Thinning Operations

Authors
da Silva, DQ; Rodrigues, TF; Sousa, AJ; dos Santos, FN; Filipe, V;

Publication
PROGRESS IN ARTIFICIAL INTELLIGENCE, EPIA 2023, PT II

Abstract
Selective thinning is a crucial operation to reduce forest ignitable material, to control the eucalyptus species and maximise its profitability. The selection and removal of less vigorous stems allows the remaining stems to grow healthier, without competition for water, sunlight and nutrients. This operation is traditionally performed by a human operator and is time-intensive. This work simplifies selective thinning by removing the stem-selection task from the human operator using a computer vision algorithm. For this, two distinct datasets of eucalyptus stems (with and without foliage) were built and manually annotated, and three Deep Learning object detectors (YOLOv5, YOLOv7 and YOLOv8) were tested on real-context images to perform instance segmentation. YOLOv8 was the best at this task, achieving an Average Precision of 74% and 66% on the non-leafy and leafy test datasets, respectively. A computer vision algorithm for automatic stem selection was developed based on the YOLOv8 segmentation output. The algorithm achieved a Precision above 97% and a Recall of 81%. The findings of this work can have a positive impact on future developments for automating selective thinning in forested contexts.
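The automatic stem-selection step can be sketched as a simple rule over the segmentation output: estimate each stem's width from its mask (a proxy for vigour) and flag the thinner stems for thinning. The data layout, threshold, and function below are illustrative assumptions, not the paper's actual selection algorithm.

```python
def select_stems_for_thinning(stem_widths_px, keep_fraction=0.5):
    """Given per-stem width estimates (e.g. mean mask width in pixels),
    return the indices of the thinner stems to remove, keeping the most
    vigorous keep_fraction of the stand."""
    # rank stems from widest (most vigorous) to thinnest
    order = sorted(range(len(stem_widths_px)),
                   key=lambda i: stem_widths_px[i], reverse=True)
    n_keep = max(1, round(len(order) * keep_fraction))
    return sorted(order[n_keep:])  # indices flagged for removal
```

For example, with widths `[30, 12, 25, 8, 20, 10]` and a 50% retention target, the three thinnest stems are flagged while the widest remain.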

2023

Using Deep Reinforcement Learning for Navigation in Simulated Hallways

Authors
Leao, G; Almeida, F; Trigo, E; Ferreira, H; Sousa, A; Reis, LP;

Publication
2023 IEEE INTERNATIONAL CONFERENCE ON AUTONOMOUS ROBOT SYSTEMS AND COMPETITIONS, ICARSC

Abstract
Reinforcement Learning (RL) is a well-suited paradigm for training robots since it does not require any prior information or database to train an agent. This paper explores using Deep Reinforcement Learning (DRL) to train a robot to navigate maps that emulate hallways and contain different sorts of obstacles. Training and testing were performed using the Flatland 2D simulator and a Deep Q-Network (DQN) provided by OpenAI Gym. Different sets of maps were used for training and testing. The experiments illustrate how well the robot navigates maps distinct from the ones used for training by learning new behaviours (namely wall following), and they highlight the key challenges of solving this task with DRL, including the appropriate definition of the state space, the reward function, and the stopping criteria during training.
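The reward-shaping challenge the abstract highlights can be made concrete with a toy stand-in: a tabular Q-learning agent (instead of the paper's DQN) on a grid hallway, where wall collisions are penalised, reaching the far end is rewarded, and a small step cost encourages progress. All environment details and parameters below are invented for illustration; the paper's actual setup uses Flatland and a DQN.

```python
import random

def train_hallway_agent(width=5, length=8, episodes=400,
                        alpha=0.5, gamma=0.9, eps=0.15, seed=1):
    """Q-learning in a width x length hallway: start at one end (y=0),
    reach the far end (y=length-1). Actions: forward, back, right, left."""
    rng = random.Random(seed)
    moves = ((0, 1), (0, -1), (1, 0), (-1, 0))
    Q = {}
    qv = lambda s, a: Q.get((s, a), 0.0)
    for _ in range(episodes):
        x, y = width // 2, 0
        for _ in range(6 * length):
            s = (x, y)
            a = (rng.randrange(4) if rng.random() < eps
                 else max(range(4), key=lambda b: qv(s, b)))
            dx, dy = moves[a]
            nx, ny = x + dx, y + dy
            if ny < 0 or not 0 <= nx < width:   # hit a wall: penalty, stay put
                r, nx, ny = -1.0, x, y
            elif ny == length - 1:              # reached the far end
                r = 5.0
            else:
                r = -0.05                       # step cost encourages progress
            best_next = max(qv((nx, ny), b) for b in range(4))
            Q[(s, a)] = qv(s, a) + alpha * (r + gamma * best_next - qv(s, a))
            x, y = nx, ny
            if y == length - 1:
                break
    return Q
```

Even in this toy, the relative magnitudes of the collision penalty, goal reward, and step cost determine whether the agent learns to hug walls or drive down the centre, which mirrors the reward-function design challenge discussed in the paper.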

2023

Sensor Placement Optimization using Random Sample Consensus for Best Views Estimation

Authors
Costa, CM; Veiga, G; Sousa, A; Thomas, U; Rocha, L;

Publication
2023 IEEE INTERNATIONAL CONFERENCE ON AUTONOMOUS ROBOT SYSTEMS AND COMPETITIONS, ICARSC

Abstract
Estimating a 3D sensor constellation that maximises the observable surface area of a given set of target objects is a challenging, combinatorially explosive problem with a wide range of applications in perception tasks that may require gathering sensor information from multiple views due to environment occlusions. To tackle this problem, the Gazebo simulator was configured to accurately model 8 types of depth cameras with different hardware characteristics, such as image resolution, field of view, measurement range and acquisition rate. Several populations of depth sensors were then deployed within 4 different testing environments targeting object recognition and bin-picking applications with increasing levels of occlusion and geometric complexity. The sensor populations were either uniformly or randomly inserted into a set of regions of interest in which useful sensor data could be retrieved and in which the real sensors could be installed or moved by a robotic arm. The proposed approach fuses 3D point clouds from multiple sensors, using colour segmentation and voxel grid merging for fast surface-area coverage computation, coupled with a random sample consensus algorithm for best-views estimation. It quickly estimated useful sensor constellations for maximising the observable surface area of a set of target objects, making it suitable for deciding the type and spatial disposition of sensors and for guiding movable 3D cameras to avoid environment occlusions.
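The best-views estimation can be sketched as RANSAC-style random sampling: repeatedly draw a candidate constellation of k sensors, score it by the surface points it covers, and keep the best. In the sketch below, precomputed coverage sets stand in for the point-cloud fusion and voxel-merging step; all names and data are illustrative assumptions, not the paper's implementation.

```python
import random

def best_constellation(coverage, k, iterations=500, seed=0):
    """coverage: dict mapping sensor id -> set of surface points that
    sensor can observe. Randomly sample k-sensor constellations
    (RANSAC-style) and return the best one with its coverage count."""
    rng = random.Random(seed)
    sensors = list(coverage)
    best, best_score = None, -1
    for _ in range(iterations):
        cand = rng.sample(sensors, k)
        # score = size of the union of the candidate sensors' coverage
        score = len(set().union(*(coverage[s] for s in cand)))
        if score > best_score:
            best, best_score = sorted(cand), score
    return best, best_score
```

Random sampling sidesteps the combinatorial explosion: instead of enumerating all sensor subsets, only a fixed number of candidate constellations are scored, which is what makes the estimation fast enough to guide sensor placement.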

2023

Computational intelligence advances in educational robotics

Authors
Bellas, F; Sousa, A;

Publication
FRONTIERS IN ROBOTICS AND AI

Abstract
