2023
Authors
Oliveira, A; Dias, A; Santos, T; Rodrigues, P; Martins, A; Silva, E; Almeida, J;
Publication
OCEANS 2023 - LIMERICK
Abstract
Offshore wind farms are becoming the main alternative to fossil fuels and a key path to mitigating climate change through energy sustainability. Despite favorable indicators in almost every environmental index, these structures operate under varying and dynamic environmental conditions, leading to efficiency losses and sudden failures. It is therefore fundamental to develop autonomous solutions that monitor the health condition of the structural components, preventing damage and accidents. This paper introduces a new simulation environment for testing and training autonomous inspection techniques under a more realistic offshore wind farm scenario. Combining the Gazebo simulator with ROS, the framework supports multiple robots with different sensors operating in a customizable simulation environment with configurable external elements (fog, wind, buoyancy, ...). The paper also presents a use case based on a 3D LiDAR technique for autonomous wind turbine inspection with a UAV, including point cloud clustering, model estimation, and preliminary results obtained with this simulation framework in a mixed setup (offshore simulation with a real UAV platform).
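As an illustration of the point cloud clustering step mentioned in the abstract above, the following Python sketch groups a simulated LiDAR scan into candidate structures with DBSCAN from scikit-learn; the choice of DBSCAN and all parameter values are illustrative assumptions, not the authors' implementation.

    # Hypothetical sketch: cluster a LiDAR point cloud into candidate structures
    # (e.g., tower, blades) as one possible realisation of the clustering step.
    import numpy as np
    from sklearn.cluster import DBSCAN

    def cluster_point_cloud(points, eps=2.0, min_points=10):
        """points: (N, 3) array of LiDAR returns in metres."""
        labels = DBSCAN(eps=eps, min_samples=min_points).fit_predict(points)
        # Keep every labelled cluster, discard DBSCAN noise (label -1).
        return [points[labels == k] for k in set(labels) if k != -1]

    # Example: a synthetic vertical "tower" plus sparse random noise.
    tower = np.column_stack([np.random.normal(0, 0.1, 500),
                             np.random.normal(0, 0.1, 500),
                             np.random.uniform(0, 80, 500)])
    noise = np.random.uniform(-20, 20, (50, 3))
    clusters = cluster_point_cloud(np.vstack([tower, noise]))
    print(f"{len(clusters)} cluster(s) found")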
2023
Authors
Moura, A; Antunes, J; Martins, JJ; Dias, A; Martins, A; Almeida, JM; Silva, E;
Publication
OCEANS 2023 - LIMERICK
Abstract
The use of autonomous vehicles in maritime operations is a technological challenge. In the particular case of autonomous aerial vehicles (UAVs), applications range from inspection and surveillance of offshore power plants and marine life observation to search and rescue missions. Manually landing UAVs onboard water vessels can be very challenging due to limited deck space and wave-induced motion. This paper proposes an autonomous solution for landing commercial multicopter UAVs with onboard cameras on water vessels, based on the detection of a custom landing platform with computer vision techniques. The autonomous landing behavior was tested in real conditions on a research vessel at sea, where the UAV was able to detect, locate, and safely land on the developed landing platform.
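A minimal sketch of the vision-based platform detection step, assuming a fiducial (ArUco) marker on the landing platform and the legacy cv2.aruco API from opencv-contrib-python (4.6 or earlier); the marker dictionary, marker size, and camera parameters are placeholders, not the authors' actual pattern or pipeline.

    # Hypothetical sketch: detect an ArUco marker on the landing platform and
    # estimate its position relative to the UAV camera.
    import cv2

    def detect_platform(frame, camera_matrix, dist_coeffs, marker_len=0.5):
        # Detect markers from an assumed 4x4 dictionary in the camera frame.
        dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
        corners, ids, _ = cv2.aruco.detectMarkers(frame, dictionary)
        if ids is None:
            return None                    # no platform marker in view
        # Estimate the marker pose (marker side length in metres).
        rvecs, tvecs, _ = cv2.aruco.estimatePoseSingleMarkers(
            corners, marker_len, camera_matrix, dist_coeffs)
        return tvecs[0].ravel()            # (x, y, z) of the platform in the camera frame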
1995
Authors
Sousa, JB; Pereira, FL; daSilva, EP;
Publication
PROCEEDINGS OF THE 34TH IEEE CONFERENCE ON DECISION AND CONTROL, VOLS 1-4
Abstract
This paper presents a dynamically configurable architecture for the control of autonomous mobile robots, based on a hierarchical structure whose three levels (organization, coordination, and functional) are organized linguistically. The main contribution is the concept of dynamic reconfigurability, in which the notion of an architecture coordinator plays a crucial role. The design of the functional layer establishes the primitives for dynamic configuration.
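The following is a minimal Python sketch, not taken from the paper, of how a coordinator level can re-bind functional-layer primitives at run time, which is the essence of the dynamic reconfigurability described above; all class and method names are illustrative.

    # Illustrative sketch: a coordinator maps symbolic commands from the
    # organization level onto functional-layer primitives and can swap them
    # at run time without restarting the system.
    class FunctionalPrimitive:
        def __init__(self, name, action):
            self.name, self.action = name, action
        def execute(self, state):
            return self.action(state)

    class Coordinator:
        def __init__(self):
            self.active = {}                        # command -> FunctionalPrimitive
        def reconfigure(self, command, primitive):
            self.active[command] = primitive        # dynamic re-binding of behaviours
        def dispatch(self, command, state):
            return self.active[command].execute(state)

    coordinator = Coordinator()
    coordinator.reconfigure("goto", FunctionalPrimitive("goto", lambda s: {**s, "moving": True}))
    print(coordinator.dispatch("goto", {"x": 0.0, "y": 0.0}))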
2007
Authors
Silva, H; Almeida, JM; Lima, L; Martins, A; Silva, EP; Patacho, A;
Publication
COMPUTATIONAL MODELLING OF OBJECTS REPRESENTED IN IMAGES: FUNDAMENTALS, METHODS AND APPLICATIONS
Abstract
This paper proposes a real-time vision architecture for mobile robotics and describes a current implementation characterised by low computational cost, low latency, low power, high modularity, configurability, adaptability, and scalability. A pipeline structure further reduces latency and allows a parallel hardware implementation. A dedicated hardware vision sensor was developed to take advantage of the proposed architecture. A new method based on run-length-encoded (RLE) colour transitions allows real-time edge determination at low computational cost. The real-time characteristics and the partial hardware implementation, coupled with low energy consumption, address typical autonomous-systems applications.
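A small Python sketch of the run-length-encoding idea described above, assuming pixels have already been classified into colour classes: each image row is compressed into runs, and colour transitions between runs give edge positions cheaply. It is illustrative only and does not reproduce the paper's hardware implementation.

    # Hypothetical sketch: run-length encode one scanline of colour labels;
    # boundaries between runs of different labels mark edge pixels.
    import numpy as np

    def rle_scanline(labels):
        """labels: 1-D array of per-pixel colour-class labels for one image row."""
        labels = np.asarray(labels)
        change = np.flatnonzero(np.diff(labels)) + 1          # indices where the label changes
        starts = np.concatenate(([0], change))
        lengths = np.diff(np.concatenate((starts, [len(labels)])))
        runs = list(zip(starts, lengths, labels[starts]))     # (start, length, label)
        return runs, change                                   # colour transitions = edges

    row = [0, 0, 0, 2, 2, 2, 2, 1, 1, 0]
    runs, edges = rle_scanline(row)
    print(runs)    # [(0, 3, 0), (3, 4, 2), (7, 2, 1), (9, 1, 0)]
    print(edges)   # [3 7 9]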
2007
Authors
Martins, A; Almeida, JM; Ferreira, H; Silva, H; Dias, N; Dias, A; Almeida, C; Silva, EP;
Publication
PROCEEDINGS OF THE 2007 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION, VOLS 1-10
Abstract
This work presents a hybrid coordinated manoeuvre for docking an autonomous surface vehicle (ASV) with an autonomous underwater vehicle (AUV). The control manoeuvre uses visual information to estimate the AUV's position and attitude relative to the ASV and steers the ASV in order to dock with the AUV. The AUV is assumed to be at the surface with only a small fraction of its volume visible. The system is implemented on the autonomous surface vehicle ROAZ, developed by LSA-ISEP to perform missions in river environments, to test autonomous AUV docking capabilities, and to carry out multiple AUV/ASV coordinated missions. Information from a low-cost embedded robotics vision system (LSAVision), along with inertial navigation sensors, is fused in an extended Kalman filter and used to determine the AUV's position and orientation relative to the surface vehicle. The real-time vision processing system is described and results are presented for an operational scenario.
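The following is a minimal filter sketch for the sensor-fusion step described above; for brevity it uses a linear constant-velocity model in the horizontal plane rather than the full extended Kalman filter, and all noise parameters are assumed values, not those of the LSAVision system.

    # Illustrative sketch: predict the AUV position relative to the ASV with a
    # constant-velocity model and correct it with vision-based measurements.
    import numpy as np

    class RelativePositionFilter:
        def __init__(self, dt=0.1):
            self.x = np.zeros(4)                       # state: [px, py, vx, vy]
            self.P = np.eye(4)
            self.F = np.eye(4); self.F[0, 2] = self.F[1, 3] = dt
            self.Q = 0.01 * np.eye(4)                  # process noise (assumed)
            self.H = np.array([[1., 0., 0., 0.],
                               [0., 1., 0., 0.]])      # vision measures position only
            self.R = 0.05 * np.eye(2)                  # measurement noise (assumed)

        def predict(self):
            self.x = self.F @ self.x
            self.P = self.F @ self.P @ self.F.T + self.Q

        def update(self, z):                           # z: vision-based [px, py]
            y = z - self.H @ self.x
            S = self.H @ self.P @ self.H.T + self.R
            K = self.P @ self.H.T @ np.linalg.inv(S)
            self.x = self.x + K @ y
            self.P = (np.eye(4) - K @ self.H) @ self.P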
2011
Authors
Silva, H; Dias, A; Almeida, JM; Martins, A; da Silva, EP;
Publication
RoboCup 2011: Robot Soccer World Cup XV [papers from the 15th Annual RoboCup International Symposium, Istanbul, Turkey, July 2011]
Abstract
This paper proposes a novel architecture for real-time 3D ball trajectory estimation with a monocular camera in the Middle Size League scenario. The proposed system detects multiple possible ball candidates in the image, which are filtered in a multi-target data association layer. Validated ball candidates have their 3D trajectory estimated by a Maximum Likelihood method (MLM), followed by recursive refinement with an Extended Kalman Filter (EKF). The approach was validated in a real RoboCup scenario and evaluated against ground-truth information obtained by alternative methods, allowing overall performance and quality assessment.
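As a sketch of the batch-initialisation idea: under Gaussian measurement noise, a least-squares fit of a ballistic model to a window of 3D ball observations is the maximum-likelihood estimate of the initial position and velocity, which could then seed a recursive EKF-style refinement. The function below is illustrative and not the paper's implementation.

    # Hypothetical sketch: maximum-likelihood ballistic fit from a few timed
    # 3-D observations, solving p(t) = p0 + v0*t + 0.5*g*t^2 by least squares.
    import numpy as np

    G = np.array([0.0, 0.0, -9.81])                       # gravity, m/s^2

    def ml_ballistic_fit(times, positions):
        """times: (N,), positions: (N, 3). Returns (p0, v0)."""
        t = np.asarray(times)[:, None]
        A = np.hstack([np.ones_like(t), t])               # unknowns: p0 and v0
        b = np.asarray(positions) - 0.5 * G * t**2        # remove the known gravity term
        sol, *_ = np.linalg.lstsq(A, b, rcond=None)
        return sol[0], sol[1]                             # p0, v0 (each a 3-vector)

    ts = np.linspace(0, 0.5, 6)
    true_p0, true_v0 = np.array([0., 0., 0.3]), np.array([2., 1., 4.])
    obs = true_p0 + true_v0 * ts[:, None] + 0.5 * G * ts[:, None]**2
    p0, v0 = ml_ballistic_fit(ts, obs + np.random.normal(0, 0.01, obs.shape))
    print(p0, v0)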