About

Armando Sousa received his Ph.D. degree in Robotics from the University of Porto, Portugal, in 2004.
He is currently an Assistant Professor at the same university and an integrated researcher at INESC TEC (Institute for Systems and Computer Engineering, Technology and Science).
He has received several international awards in robotic soccer under the RoboCup Federation (mainly in the Small Size League), as well as the Pedagogical Excellence Award of the University of Porto in 2015.
His main research interests include education, robotics, data fusion, and vision systems. He has co-authored over 50 international peer-reviewed publications and participated in over 10 international projects in the areas of education and robotics.

Topics of interest
Details

  • Name

    Armando Sousa
  • Position

    Senior Researcher
  • Since

    01 June 2009
  • Nationality

    Portugal
  • Contacts

    +351220413317
    armando.sousa@inesctec.pt
Publications

2024

Inspection of Part Placement Within Containers Using Point Cloud Overlap Analysis for an Automotive Production Line

Authors
Costa, M; Dias, J; Nascimento, R; Rocha, C; Veiga, G; Sousa, A; Thomas, U; Rocha, L;

Publication
Lecture Notes in Mechanical Engineering

Abstract
Reliable operation of production lines without unscheduled disruptions is of paramount importance for ensuring the proper operation of automated working cells involving robotic systems. This article addresses the issue of preventing disruptions to an automotive production line that can arise from incorrect placement of aluminum car parts by a human operator in a feeding container with 4 indexing pins for each part. The detection of misplaced parts is critical for avoiding collisions between the containers and a high-pressure washing machine, and also between the parts and a robotic arm that feeds parts to an air leakage inspection machine. The proposed inspection system relies on a 3D sensor for scanning the parts inside a container; it then estimates the 6 DoF pose of the container, followed by an analysis of the overlap percentage between each part's reference point cloud and the 3D sensor data. When the overlap percentage is below a given threshold, the part is considered misplaced and the operator is alerted to fix the part placement in the container. The deployment of the inspection system on an automotive production line for 22 weeks has shown promising results by avoiding 18 hours of disruptions: it detected 407 containers with misplaced parts in 4524 inspections, of which 12 were false negatives, while no false positives were reported. This allowed the elimination of disruptions to the production line at the cost of manual reinspection, by the operator, of the 0.27% of containers that were false negatives. © 2024, The Author(s), under exclusive license to Springer Nature Switzerland AG.
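The overlap test described in the abstract can be sketched as a nearest-neighbour check between a part's reference point cloud and the sensor data. The following is a minimal illustration only; the tolerance, threshold, and brute-force matching are assumptions for the sketch, not the paper's actual implementation:

```python
import numpy as np

def overlap_percentage(reference, scan, tol=0.005):
    """Fraction of reference points that have a scan point within `tol` metres.

    Brute-force nearest neighbour via broadcasting; a real system would
    typically use a k-d tree for large clouds.
    """
    dists = np.linalg.norm(reference[:, None, :] - scan[None, :, :], axis=2)
    return float((dists.min(axis=1) <= tol).mean())

def part_is_misplaced(reference, scan, threshold=0.8, tol=0.005):
    # Flag the part when too few of its reference points are matched
    # by the 3D sensor data.
    return overlap_percentage(reference, scan, tol) < threshold
```

With `scan` identical to `reference` the overlap is 1.0 and the part passes; shifting every scanned point by a few centimetres drives the overlap below the threshold and flags the part.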

2023

Stereo Based 3D Perception for Obstacle Avoidance in Autonomous Wheelchair Navigation

Authors
Gomes, B; Torres, J; Sobral, P; Sousa, A; Reis, LP;

Publication
ROBOT2022: FIFTH IBERIAN ROBOTICS CONFERENCE: ADVANCES IN ROBOTICS, VOL 1

Abstract
In recent years, scientific and technological advances in robotics have enabled the development of disruptive solutions for human interaction with the real world. In particular, the application of robotics to support people with physical disabilities has improved their quality of life, with high social impact. This paper presents a stereo-image-based perception solution for autonomous wheelchair navigation. It was developed to extend the Intellwheels project, a development platform for intelligent wheelchairs. The current version of Intellwheels relies on a planar scanning sensor, the Laser Range Finder (LRF), to detect surrounding obstacles. The need for robust navigation capabilities means that the robot is required to perceive not only obstacles but also bumps and holes in the ground. The proposed stereo-based solution, supported by passive ZED stereo cameras, was evaluated in a 3D simulated world designed with a challenging floor. The performance of the wheelchair navigation was compared across three configurations: first using an LRF sensor, next an unfiltered stereo camera, and finally a stereo camera with a speckle filter. The LRF solution was unable to complete the planned navigation. The unfiltered stereo camera completed the challenge, but with low navigation quality due to noise. The filtered stereo camera reached the target position with a nearly optimal path.
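A speckle filter of the kind mentioned in the third configuration removes small, isolated blobs from a disparity map before it is used for obstacle detection. Below is a simplified sketch of one such filter; the connectivity rule and parameter values are illustrative assumptions, not the paper's or the ZED SDK's implementation:

```python
import numpy as np
from collections import deque

def filter_speckles(disp, max_size=4, max_diff=1.0, invalid=0.0):
    """Invalidate connected disparity blobs of at most `max_size` pixels.

    Neighbouring pixels belong to the same blob when their disparities
    differ by at most `max_diff` (4-connectivity). Tiny blobs are treated
    as speckle noise and overwritten with `invalid`.
    """
    h, w = disp.shape
    out = disp.astype(float).copy()
    seen = np.zeros((h, w), dtype=bool)
    for sy in range(h):
        for sx in range(w):
            if seen[sy, sx]:
                continue
            # Breadth-first search to collect one connected blob.
            blob, queue = [(sy, sx)], deque([(sy, sx)])
            seen[sy, sx] = True
            while queue:
                y, x = queue.popleft()
                for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                    if (0 <= ny < h and 0 <= nx < w and not seen[ny, nx]
                            and abs(out[ny, nx] - out[y, x]) <= max_diff):
                        seen[ny, nx] = True
                        blob.append((ny, nx))
                        queue.append((ny, nx))
            if len(blob) <= max_size:
                for y, x in blob:
                    out[y, x] = invalid  # too small: treat as noise
    return out
```

An isolated pixel whose disparity differs sharply from its neighbours forms a one-pixel blob and is invalidated, while the large consistent background region is left untouched.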

2023

Intelligent Wheelchairs Rolling in Pairs Using Reinforcement Learning

Authors
Rodrigues, N; Sousa, A; Reis, LP; Coelho, A;

Publication
ROBOT2022: FIFTH IBERIAN ROBOTICS CONFERENCE: ADVANCES IN ROBOTICS, VOL 2

Abstract
Intelligent wheelchairs aim to mitigate mobility limitations by providing ingenious mechanisms to control and move the chair. This paper aims to enhance the autonomy level of intelligent wheelchair navigation by applying reinforcement learning algorithms to move the chair to a desired location. A second objective is to add one more chair and move both chairs as a pair, to promote group social activities. The experimental setup is based on a simulated environment using Gazebo and ROS, where a leader chair moves towards a goal and the follower chair should navigate near the leader chair. The collected metrics (time to complete the task and the trajectories of the chairs) demonstrated that Deep Q-Network (DQN) achieved better results than the Q-Learning algorithm, being the only algorithm to accomplish the pair navigation behaviour between the two chairs.
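The tabular Q-Learning baseline compared against DQN updates a state-action table with the standard temporal-difference rule. A toy 1-D corridor illustrates that update; the environment, rewards, and hyperparameters here are invented for the sketch and are not the paper's setup:

```python
import random

def train(size=8, goal=7, episodes=500, alpha=0.5, gamma=0.9, eps=0.1, seed=0):
    """Tabular Q-learning on a 1-D corridor: learn to walk from cell 0 to `goal`."""
    rng = random.Random(seed)
    Q = [[0.0, 0.0] for _ in range(size)]       # actions: 0 = left, 1 = right
    for _ in range(episodes):
        s = 0
        for _ in range(4 * size):               # cap the episode length
            # Epsilon-greedy action selection.
            a = rng.randrange(2) if rng.random() < eps else int(Q[s][1] >= Q[s][0])
            s2 = max(0, min(size - 1, s + (1 if a == 1 else -1)))
            r = 1.0 if s2 == goal else -0.01    # small step cost, goal bonus
            # Temporal-difference update: Q(s,a) += alpha * (target - Q(s,a)).
            Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
            s = s2
            if s == goal:
                break
    return Q

def greedy_path(Q, start=0, goal=7):
    """Follow the learned greedy policy from `start` until `goal` (or a step cap)."""
    size, s, path = len(Q), start, [start]
    while s != goal and len(path) < 2 * size:
        s = max(0, min(size - 1, s + (1 if Q[s][1] >= Q[s][0] else -1)))
        path.append(s)
    return path
```

After training, the greedy policy walks straight to the goal cell; DQN replaces the explicit table with a neural network that approximates the same Q-values.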

2023

Teaching ROS1/2 and Reinforcement Learning using a Mobile Robot and its Simulation

Authors
Ventuzelos, V; Leao, G; Sousa, A;

Publication
ROBOT2022: FIFTH IBERIAN ROBOTICS CONFERENCE: ADVANCES IN ROBOTICS, VOL 1

Abstract
Robotics is an ever-growing field, used in countless applications from domestic to industrial, and taught in advanced courses at multiple higher education institutions. Robot Operating System (ROS), the most prominent robotics architecture, integrates several of these applications and has recently moved to a new iteration in the form of ROS2. This project aims to design a complete educational package for teaching intelligent robotics in ROS1 and ROS2. A foundation for the package was constructed, using a small differential-drive robot equipped with camera-based virtual sensors, a representation in the Flatland simulator, and introductory lessons on both ROS versions and on Reinforcement Learning (RL) in robotics. To evaluate the package's pertinence, expected learning outcomes were set and the lessons were tested with users from varying backgrounds and levels of robotics experience. Encouraging results were obtained, especially in the ROS1 and ROS2 lessons, while the feedback from the RL lesson provided clear indications for future improvements. Therefore, this work provides solid groundwork for a more comprehensive educational package on robotics and ROS.

2023

Tree Trunks Cross-Platform Detection Using Deep Learning Strategies for Forestry Operations

Authors
da Silva, DQ; dos Santos, FN; Filipe, V; Sousa, AJ;

Publication
ROBOT2022: FIFTH IBERIAN ROBOTICS CONFERENCE: ADVANCES IN ROBOTICS, VOL 1

Abstract
To tackle wildfires and improve forest biomass management, cost-effective and reliable mowing and pruning robots are required. However, the development of visual perception systems for forestry robotics needs to be researched and explored to achieve safe solutions. This paper presents two main contributions: an annotated dataset and a benchmark between edge-computing hardware and deep learning models. The dataset is composed of nearly 5,400 annotated images and enabled the training of nine object detectors: four SSD MobileNets, one EfficientDet, three YOLO-based detectors, and YOLOR. These detectors were deployed and tested on three edge-computing platforms (TPU, CPU, and GPU) and evaluated in terms of detection precision and inference time. The results showed that YOLOR was the best trunk detector, achieving nearly 90% F1 score with an average inference time of 13.7 ms on GPU. This work will favour the development of advanced vision perception systems for robotics in forestry operations.
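The F1 score used to rank the detectors is the harmonic mean of precision and recall. A minimal helper computing it from raw counts (illustrative only, not tied to the paper's evaluation code):

```python
def f1_score(tp, fp, fn):
    """F1 from counts of true positives, false positives, and false negatives."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    if precision + recall == 0:
        return 0.0
    # Harmonic mean of precision and recall.
    return 2 * precision * recall / (precision + recall)
```

For example, a detector with 90 true positives, 10 false positives, and 10 false negatives scores F1 = 0.9, in line with the roughly 90% reported for YOLOR.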

Supervised theses

2023

Teaching Robot Learning in ROS2

Author
Filipe Reis Almeida

Institution
UP-FEUP

2023

Robotic bin picking of flexible entangled tubes

Author
Gonçalo da Mota Laranjeira Torres Leão

Institution
UP-FEUP

2023

Generation and load forecasting for optimization of battery energy management in the context of a nanogrid

Author
João Pedro de Bastos Ferreira

Institution
UP-FEUP

2023

Three-dimensional ball tracking through a low-cost vision system for application in indoor sports

Author
José Carlos Lobinho Gomes

Institution
UP-FEUP

2023

AI-Based, Real-Time Object Detection in the Public Landscape

Author
André Vilhena da Costa

Institution
UP-FEUP