2025
Authors
Sousa, J; Sousa, A; Brueckner, F; Reis, LP; Reis, A;
Publication
ROBOTICS AND COMPUTER-INTEGRATED MANUFACTURING
Abstract
Directed Energy Deposition (DED) is a free-form metal additive manufacturing process characterized as toolless, flexible, and energy-efficient compared to traditional processes. However, it is a complex system with a highly dynamic nature that presents challenges for modeling and optimization due to its multiphysics and multiscale characteristics. Additionally, multiple factors such as different machine setups and materials require extensive testing through single-track depositions, which can be time- and resource-intensive. Single-track experiments are the foundation for establishing optimal initial parameters and comprehensively characterizing bead geometry, ensuring the accuracy and efficiency of computer-aided design and process quality validation. We digitized a DED setup using the Robot Operating System (ROS 2) and employed a thermal camera for real-time monitoring and evaluation to streamline the experimentation process. With the laser power and velocity as inputs, we optimized the dimensions and stability of the melt pool and evaluated different objective functions and approaches using a Response Surface Model (RSM). The three-objective approach achieved better rewards in all iterations and, when implemented in a real setup, allowed us to reduce the number of experiments and shorten setup time. Our approach can minimize waste, increase the quality and reliability of DED, and enhance and simplify human-process interaction by leveraging the collaboration between human knowledge and model predictions.
2025
Authors
Sousa, J; Brandau, B; Darabi, R; Sousa, A; Brueckner, F; Reis, A; Reis, LP;
Publication
IEEE ACCESS
Abstract
Laser-based additive manufacturing (LAM) offers the ability to produce near-net-shape metal parts with unparalleled energy efficiency and flexibility in both geometry and material selection. Despite these advantages, the processes are inherently complex, as they are characterized by multiphysics interactions, multiscale phenomena, and highly dynamic behaviors, making their modeling and optimization particularly challenging. Artificial intelligence (AI) has emerged as a promising tool for enhancing the monitoring and control of additive manufacturing. This paper presents a systematic review of AI applications for real-time control of laser-based manufacturing processes, analyzing 16 relevant articles sourced from the Scopus, IEEE Xplore, and Web of Science databases. The primary objective of this work is to contribute to the advancement of autonomous manufacturing systems capable of self-monitoring and self-correction, ensuring optimal part quality, enhanced efficiency, and reduced human intervention. Our findings indicate that 62.5% of the 16 analyzed studies have deployed AI-driven controllers in real-world scenarios, with over 56% using AI for control strategies such as Reinforcement Learning. Furthermore, 62.5% of the studies employed AI for process modeling or monitoring, which was integral to the development or data pipelines of the controllers. By defining a groundwork for future developments, this review not only highlights current advancements but also hints at future innovations that will likely include AI-based controllers.
2025
Authors
Simoes, I; Sousa, AJ; Baltazar, A; Santos, F;
Publication
AGRICULTURE-BASEL
Abstract
Precision agriculture seeks to optimize crop yields while minimizing resource use. A key challenge is achieving uniform pesticide spraying to prevent crop damage and environmental contamination. Water-sensitive paper (WSP) is a common tool used for assessing spray quality, as it visually registers droplet impacts through color change. This work introduces a smartphone-based solution for capturing WSP images within vegetation, offering a tool for farmers to assess spray quality in real-world conditions. To achieve this, two approaches were explored: classical computer vision techniques and machine learning (ML) models (YOLOv8, Mask-RCNN, and Cellpose). Addressing the challenges of limited real-world data and the complexity of manual annotation, a programmatically generated synthetic dataset was employed to enable sim-to-real transfer learning. For the task of WSP segmentation within vegetation, YOLOv8 achieved an average Intersection over Union of 97.76%. In the droplet detection task, which involves identifying individual droplets on WSP, Cellpose achieved the highest precision, 96.18%, even in the presence of overlapping droplets. While classical computer vision techniques provided a reliable baseline, they struggled with complex cases. Additionally, ML models, particularly Cellpose, demonstrated accurate droplet detection even without fine-tuning.
2025
Authors
Ferreira, J; Darabi, R; Sousa, A; Brueckner, F; Reis, LP; Reis, A; Tavares, RS; Sousa, J;
Publication
Journal of Intelligent Manufacturing
Abstract
This work introduces Gen-JEMA, a generative approach based on joint embedding with multimodal alignment (JEMA), to enhance feature extraction in the embedding space and improve the explainability of its predictions. Gen-JEMA addresses the challenges of multimodal process monitoring by leveraging multimodal data, including multi-view images and metadata such as process parameters, to learn transferable semantic representations. Gen-JEMA enables more explainable and enriched predictions by learning a decoder from the embedding. This novel co-learning framework, tailored for directed energy deposition (DED), integrates multiple data sources to learn a unified data representation and predict melt pool images from the primary sensor. The proposed approach enables real-time process monitoring using only the primary modality, simplifying hardware requirements and reducing computational overhead. The effectiveness of Gen-JEMA for DED process monitoring was evaluated, focusing on its generalization to downstream tasks such as melt pool geometry prediction and the generation of external melt pool representations using off-axis sensor data. To generate these external representations, autoencoder (AE) and variational autoencoder (VAE) architectures were optimized using Bayesian optimization. The AE outperformed other approaches, achieving a 38% improvement in melt pool geometry prediction compared to the baseline and an 88% improvement in data generation compared with the VAE. The proposed framework establishes the foundation for integrating multisensor data with metadata through a generative approach, enabling various downstream tasks within the DED domain and achieving a small embedding, allowing efficient process control based on model predictions and embeddings.
2025
Authors
Martins, JG; Nutonen, K; Costa, P; Kuts, V; Otto, T; Sousa, A; Petry, MR;
Publication
2025 IEEE INTERNATIONAL CONFERENCE ON AUTONOMOUS ROBOT SYSTEMS AND COMPETITIONS, ICARSC
Abstract
Digital twins enable real-time modeling, simulation, and monitoring of complex systems, driving advancements in automation, robotics, and industrial applications. This study presents a large-scale digital twin-testing facility for evaluating mobile robots and pilot robotic systems in a research laboratory environment. The platform integrates high-fidelity physical and environmental models, providing a controlled yet dynamic setting for analyzing robotic behavior. A key feature of the system is its comprehensive data collection framework, capturing critical parameters such as position, orientation, and velocity, which can be leveraged for machine learning, performance optimization, and decision-making. The facility also supports the simulation of discrete operational systems, using predictive modeling to bridge informational gaps when real-time data updates are unavailable. The digital twin was validated through a matrix manufacturing system simulation, with an Augmented Reality (AR) interface on the HoloLens 2 to overlay digital information onto mobile platform controllers, enhancing situational awareness. The main contributions include a digital twin framework for deploying data-driven robotic systems and three key AR/VR integration optimization methods. Demonstrated in a laboratory setting, the system is a versatile tool for research and industrial applications, fostering insights into robotic automation and digital twin scalability while reducing costs and risks associated with real-world testing.
2025
Authors
Rema, C; Sousa, A; Sobreira, H; Costa, P; Silva, MF;
Publication
2025 IEEE INTERNATIONAL CONFERENCE ON AUTONOMOUS ROBOT SYSTEMS AND COMPETITIONS, ICARSC
Abstract
The rise of Industry 4.0 has revolutionized manufacturing by integrating real-time data analysis, artificial intelligence (AI), automation, and interconnected systems, enabling adaptive and resilient smart factories. Autonomous Mobile Robots (AMRs), with their advanced mobility and navigation capabilities, are a pillar of this transformation. However, their deployment in job shop environments adds complexity to the already challenging Job Shop Scheduling Problem (JSSP), expanding it to include task allocation, robot scheduling, and travel time optimization, creating a multi-faceted, non-deterministic polynomial-time hard (NP-hard) problem. Traditional approaches such as heuristics, meta-heuristics, and mixed integer linear programming (MILP) are commonly used. Recent AI advancements, particularly large language models (LLMs), have shown potential in addressing these scheduling challenges due to significant improvements in reasoning and decision-making from textual data. This paper examines the application of LLMs to tackle scheduling complexities in smart job shops with mobile robots. Guided by tailored prompts inserted manually, LLMs are employed to generate scheduling solutions, which are then compared to a heuristic-based method. The results indicate that LLMs currently have limitations in solving complex combinatorial problems, such as task scheduling with mobile robots. Due to issues with consistency and repeatability, they are not yet reliable enough for practical implementation in industrial environments. However, they offer a promising foundation for augmenting traditional approaches in the future.