About

Armando Sousa received his Ph.D. degree in Robotics from the University of Porto, Portugal, in 2004.
He is currently an Assistant Professor at the same university and an integrated researcher at INESC TEC (Institute for Systems and Computer Engineering, Technology and Science).
He has received several international awards in robotic soccer under the RoboCup Federation (mainly in the Small Size League), as well as the Pedagogical Excellence Award of the University of Porto in 2015.
His main research interests include education, robotics, data fusion and vision systems. He has co-authored over 50 international peer-reviewed publications and participated in over 10 international projects in the areas of education and robotics.

Publications

2019

Collaborative Welding System using BIM for Robotic Reprogramming and Spatial Augmented Reality

Authors
Tavares, P; Costa, CM; Rocha, L; Malaca, P; Costa, P; Moreira, AP; Sousa, A; Veiga, G;

Publication
Automation in Construction

Abstract
Optimizing the information flow from the initial design through the several production stages plays a critical role in ensuring product quality while also reducing manufacturing costs. In this article we present a cooperative welding cell for structural steel fabrication that leverages the Building Information Modeling (BIM) standards to automatically orchestrate the tasks allocated to a human operator and to a welding robot moving on a linear track. We propose a spatial augmented reality system that projects alignment information into the environment, helping the operator tack weld the beam attachments that will later be seam welded by the industrial robot. This ensures maximum flexibility during the beam assembly stage while also improving overall productivity and product quality, since the operator no longer needs to rely on error-prone measurement procedures and receives tasks through an immersive interface, relieved of the burden of analyzing complex manufacturing design specifications. Moreover, no expert robotics knowledge is required to operate our welding cell, because all the necessary information is extracted from the Industry Foundation Classes (IFC), namely the CAD models and welding sections. This allows our 3D beam perception systems to correct placement errors or beam bending, which, coupled with our motion planning and welding pose optimization system, ensures that the robot performs its tasks without collisions and as efficiently as possible while maximizing weld quality. © 2019 Elsevier B.V.

2019

Modeling of video projectors in OpenGL for implementing a spatial augmented reality teaching system for assembly operations

Authors
Costa, CM; Veiga, G; Sousa, A; Rocha, L; Sousa, AA; Rodrigues, R; Thomas, U;

Publication
19th IEEE International Conference on Autonomous Robot Systems and Competitions, ICARSC 2019

Abstract
Teaching complex assembly and maintenance skills to human operators usually requires extensive reading and the help of tutors. In order to reduce the training period and avoid the need for human supervision, an immersive teaching system using spatial augmented reality was developed for guiding inexperienced operators. The system provides textual and video instructions for each task while also allowing the operator to navigate between the teaching steps and control the video playback using a bare-hands natural interaction interface that is projected into the workspace. Moreover, to help the operator during the final validation and inspection phase, the system projects the expected 3D outline of the final product. The proposed teaching system was tested with the assembly of a starter motor and proved to be more intuitive than reading the traditional user manuals. This proof-of-concept use case served to validate the fundamental technologies and approaches that were proposed to achieve an intuitive and accurate augmented reality teaching application. Among the main challenges were the proper modeling and calibration of the sensing and projection hardware, along with the 6 DoF pose estimation of objects for achieving precise overlap between the 3D rendered content and the physical world. The conceptualization of the information flow, and how it can be conveyed on demand to the operator, was also of critical importance for ensuring a smooth and intuitive experience. © 2019 IEEE.
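The projector modeling referenced in the title can be understood by treating the video projector as an inverse pinhole camera: its intrinsic parameters define an OpenGL perspective projection matrix, so rendering from the projector's pose produces imagery that overlaps the physical scene once projected. A minimal sketch of that conversion, using hypothetical intrinsics and one common sign convention (not code from the paper):

```python
import numpy as np

def projector_projection_matrix(fx, fy, cx, cy, width, height, near, far):
    """Build an OpenGL-style 4x4 perspective matrix from pinhole intrinsics.

    Treating the projector as an inverse camera, rendering with this matrix
    from the projector's calibrated pose yields an image that, when
    projected, lands on the intended physical surfaces.
    """
    P = np.zeros((4, 4))
    P[0, 0] = 2.0 * fx / width          # horizontal focal scaling to NDC
    P[0, 2] = 1.0 - 2.0 * cx / width    # principal point offset (x)
    P[1, 1] = 2.0 * fy / height         # vertical focal scaling to NDC
    P[1, 2] = 2.0 * cy / height - 1.0   # principal point offset (y)
    P[2, 2] = -(far + near) / (far - near)      # depth mapping to [-1, 1]
    P[2, 3] = -2.0 * far * near / (far - near)
    P[3, 2] = -1.0                      # perspective divide by -z
    return P
```

Points on the near plane map to NDC depth -1 and points on the far plane to +1, matching OpenGL's clip-space convention; the y-axis sign may need flipping depending on the calibration toolkit's image origin.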

2019

Monocular Visual Odometry Benchmarking and Turn Performance Optimization

Authors
Aguiar, A; Sousa, A; dos Santos, FN; Oliveira, M;

Publication
19th IEEE International Conference on Autonomous Robot Systems and Competitions, ICARSC 2019

Abstract
Developing ground robots for crop monitoring and harvesting in steep-slope vineyards is a complex challenge for two main reasons: the harsh terrain conditions and the unstable localization accuracy obtained with the Global Navigation Satellite System (GNSS). In this context, a reliable localization system requires accurate information that is redundant with GNSS and wheel-odometry-based systems. To pursue this goal, we benchmark three well-known Visual Odometry methods on two datasets. Two of these are feature-based Visual Odometry algorithms, Libviso2 and SVO 2.0; the third is an appearance-based Visual Odometry algorithm called DSO. Monocular Visual Odometry faces two main problems: pure rotations and scale estimation. In this paper, we focus on the first issue. To do so, we propose a Kalman Filter that fuses a single gyroscope with the output pose of monocular Visual Odometry while continuously estimating the gyroscope's bias. In this approach we propose a non-linear noise variation that ensures that the bias estimation is not affected by the rotations produced by Visual Odometry. We compare and discuss the three unchanged methods and the three methods with the proposed additional Kalman Filter. For the tests, two datasets are used: the KITTI dataset and one built in-house. Results show that our additional Kalman Filter substantially improves Visual Odometry performance in rotation movements. © 2019 IEEE.
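The gyroscope/Visual-Odometry fusion described above can be sketched, in heavily simplified form, as a two-state Kalman filter over yaw only: the state holds the heading and the gyro bias, the gyro rate drives the prediction, and VO headings correct both states. The constant noise parameters below are hypothetical stand-ins for the paper's non-linear noise variation:

```python
import numpy as np

def fuse_heading(gyro_rates, vo_headings, dt, q_theta=1e-4, q_bias=1e-6, r_vo=1e-2):
    """1D Kalman filter fusing gyro yaw rate with monocular VO headings.

    State x = [heading, gyro_bias]. The bias-corrected gyro integrates the
    heading in the prediction step; each VO heading measurement then
    corrects the heading and slowly pins down the drifting bias.
    """
    x = np.zeros(2)                             # [theta, bias]
    P = np.eye(2)                               # state covariance
    F = np.array([[1.0, -dt], [0.0, 1.0]])      # theta += dt*(rate - bias)
    Q = np.diag([q_theta, q_bias])              # process noise
    H = np.array([[1.0, 0.0]])                  # VO observes heading only
    out = []
    for rate, z in zip(gyro_rates, vo_headings):
        # Predict: integrate the bias-corrected gyro rate.
        x = F @ x + np.array([dt * rate, 0.0])
        P = F @ P @ F.T + Q
        # Update with the VO heading measurement.
        y = z - H @ x
        S = H @ P @ H.T + r_vo
        K = P @ H.T / S
        x = x + (K * y).ravel()
        P = (np.eye(2) - K @ H) @ P
        out.append(x.copy())
    return np.array(out)
```

With a constant true rate and an exact VO heading, the bias estimate converges to the injected gyro bias, which is the mechanism that keeps pure rotations from corrupting the fused pose.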

2019

Learning low level skills from scratch for humanoid robot soccer using deep reinforcement learning

Authors
Abreu, M; Lau, N; Sousa, A; Reis, LP;

Publication
19th IEEE International Conference on Autonomous Robot Systems and Competitions, ICARSC 2019

Abstract
Reinforcement learning algorithms are now more appealing than ever. Recent approaches bring power and tuning simplicity to everyday workstations. The possibilities are endless, and the idea of automating learning without domain knowledge is quite tempting for many researchers. However, in competitive environments such as the RoboCup 3D Soccer Simulation League, there is a lot to be done regarding human-like behaviors. Current teams use many mechanical movements to perform basic skills, such as running and dribbling the ball. This paper aims to use the PPO algorithm to optimize those skills, achieving natural gaits without sacrificing performance. We use SimSpark to simulate a NAO humanoid robot, using visual and body sensors to control its actuators. Based on our results, we propose an indirect control approach and detailed parameter setups to obtain natural running and dribbling behaviors. The obtained performance is in some cases comparable to or better than that of the top RoboCup teams. However, some skills are not yet ready for competitive environments due to instability. This work contributes towards the improvement of RoboCup and some related technical challenges. © 2019 IEEE.
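The PPO algorithm mentioned in the abstract optimizes a clipped surrogate objective: the probability ratio between the new and old policies is clipped so that each update stays close to the behavior policy, which is what makes PPO stable enough to learn locomotion skills from scratch. A minimal NumPy sketch of that loss (illustrative only, not the authors' training code):

```python
import numpy as np

def ppo_clip_loss(new_logp, old_logp, advantages, clip_eps=0.2):
    """PPO clipped surrogate loss (to be minimized by the optimizer).

    new_logp/old_logp are per-action log-probabilities under the current
    and behavior policies; advantages are the estimated action advantages.
    """
    ratio = np.exp(new_logp - old_logp)          # pi_new(a|s) / pi_old(a|s)
    unclipped = ratio * advantages
    clipped = np.clip(ratio, 1.0 - clip_eps, 1.0 + clip_eps) * advantages
    # Pessimistic bound: take the smaller objective, negate for a loss.
    return -np.mean(np.minimum(unclipped, clipped))
```

When the ratio is 1 (policies identical) the loss reduces to the negated mean advantage; when the ratio strays beyond 1 ± clip_eps, the clipped term caps the incentive to move further.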

Supervised Theses

2018

Heurísticas para problemas de corte de formas irregulares (Heuristics for irregular-shape cutting problems)

Author
Duarte Nuno de Azevedo Fonseca

Institution
UP-FEUP

2018

Localização e Navegação de AGVs Industriais (Localization and Navigation of Industrial AGVs)

Author
Emanuel Pereira Teixeira

Institution
UP-FEUP

2017

Vision Methods to Find Uniqueness Within a Class of Objects

Author
Valter Joaquim Ramos Costa

Institution
UP-FEUP

2017

Atualização de Simulador Físico de Condução Automóvel por Integração de Sistema de Realidade Virtual (Upgrading a Physical Driving Simulator by Integrating a Virtual Reality System)

Author
André Jesus de Carvalho Pinto

Institution
UP-FEUP

2017

Vision-based Feature matching as a tool for Robotic Localization

Author
Nolasco Amado dos Santos Napoleão

Institution
UP-FEUP