2007
Authors
Veiga, G; Pires, JN; Nilsson, K;
Publication
IFAC Proceedings Volumes (IFAC-PapersOnline)
Abstract
The integration of different robot automation technologies, with the aim of reusing available production solutions, is a major obstacle for the deployment of low-cost components into productive (high-performance) systems. Technologies demanding high processing power, like machine vision or voice recognition systems, are normally easy to program but require proprietary languages and platforms, which constitutes an important problem during communication and setup. Instead of the current need for trained specialists, flexible manufacturing, in particular in SMEs, calls for solutions that are easy to use and (re)configure. One attempt in that direction is the service-oriented architecture (SOA) approach, which here is accomplished by the use of Universal Plug-and-Play (UPnP) technologies and confronted with real robot application demands represented by an experimental manufacturing cell. Contributions include the way of building software applications to program manufacturing cells whose building blocks are represented by UPnP devices. Such devices encapsulate both manufacturing equipment and interaction methods. The latter is exemplified by a speech recognition system, for which a tool for automatic generation of UPnP devices based on the information contained in speech recognition XML grammars is presented. Experience from the experiments confirms the desired efficiency and simplicity when setting up advanced manufacturing equipment. © 2007 IFAC.
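As a rough, hedged illustration of the grammar-to-device idea outlined in the abstract (not the authors' actual tool), the following Python sketch parses rule ids from an SRGS-style speech recognition XML grammar and emits a minimal UPnP-style action list; the grammar content and the helper names such as make_upnp_actions are hypothetical.

```python
# Illustrative sketch only: maps rule ids from an SRGS-like speech recognition
# grammar to a minimal UPnP-style service description. The grammar structure
# and the generated XML are simplified assumptions, not the paper's tool.
import xml.etree.ElementTree as ET

SRGS_NS = "{http://www.w3.org/2001/06/grammar}"

def grammar_rule_ids(grammar_xml: str) -> list[str]:
    """Collect the ids of all <rule> elements in the grammar."""
    root = ET.fromstring(grammar_xml)
    return [r.get("id") for r in root.iter(f"{SRGS_NS}rule") if r.get("id")]

def make_upnp_actions(rule_ids: list[str]) -> str:
    """Emit a minimal UPnP-style actionList, one action per grammar rule."""
    scpd = ET.Element("scpd")
    actions = ET.SubElement(scpd, "actionList")
    for rule in rule_ids:
        action = ET.SubElement(actions, "action")
        ET.SubElement(action, "name").text = rule
    return ET.tostring(scpd, encoding="unicode")

if __name__ == "__main__":
    grammar = """<grammar xmlns="http://www.w3.org/2001/06/grammar" root="cmd">
      <rule id="StartConveyor"><item>start conveyor</item></rule>
      <rule id="StopRobot"><item>stop robot</item></rule>
    </grammar>"""
    print(make_upnp_actions(grammar_rule_ids(grammar)))
```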
2022
Authors
Teixeira, S; Arrais, R; Dias, R; Veiga, G;
Publication
Procedia Computer Science
Abstract
2023
Authors
Dias, J; Simoes, P; Soares, N; Costa, CM; Petry, MR; Veiga, G; Rocha, LF;
Publication
SENSORS
Abstract
Machine vision systems are widely used in assembly lines for providing sensing abilities to robots to allow them to handle dynamic environments. This paper presents a comparison of 3D sensors for evaluating which one is best suited for usage in a machine vision system for robotic fastening operations within an automotive assembly line. The perception system is necessary for taking into account the position uncertainty that arises from the vehicles being transported in an aerial conveyor. Three sensors with different working principles were compared, namely laser triangulation (SICK TriSpector1030), structured light with sequential stripe patterns (Photoneo PhoXi S) and structured light with infrared speckle pattern (Asus Xtion Pro Live). The accuracy of the sensors was measured by computing the root mean square error (RMSE) of the point cloud registrations between their scans and two types of reference point clouds, namely, CAD files and 3D sensor scans. Overall, the RMSE was lower when using sensor scans, with the SICK TriSpector1030 achieving the best results (0.25 mm +/- 0.03 mm), the Photoneo PhoXi S having intermediate performance (0.49 mm +/- 0.14 mm) and the Asus Xtion Pro Live obtaining the highest RMSE (1.01 mm +/- 0.11 mm). Considering the use case requirements, the final machine vision system relied on the SICK TriSpector1030 sensor and was integrated with a collaborative robot, which was successfully deployed in a vehicle assembly line, achieving 94% success in 53,400 screwing operations.
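As a hedged illustration of the registration-error metric described in the abstract (not the authors' evaluation code), the sketch below computes the RMSE of nearest-neighbour distances between an already-registered scan and a reference point cloud; the (N, 3) array shapes and the synthetic data are assumptions for the example.

```python
# Minimal sketch of the RMSE metric described above: root mean square of the
# nearest-neighbour distances between a registered scan and a reference point
# cloud (CAD-derived or scanned). Inputs are assumed to be (N, 3) float arrays
# already expressed in the same coordinate frame.
import numpy as np
from scipy.spatial import cKDTree

def registration_rmse(scan_xyz: np.ndarray, reference_xyz: np.ndarray) -> float:
    """Return the RMSE (in the cloud's units, e.g. mm) of the distances
    from each scan point to its closest reference point."""
    tree = cKDTree(reference_xyz)
    distances, _ = tree.query(scan_xyz)          # nearest-neighbour distances
    return float(np.sqrt(np.mean(distances ** 2)))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    reference = rng.uniform(size=(1000, 3))
    scan = reference + rng.normal(scale=0.001, size=reference.shape)
    print(f"RMSE: {registration_rmse(scan, reference):.4f}")
```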
2023
Authors
Moutinho, D; Rocha, LF; Costa, CM; Teixeira, LF; Veiga, G;
Publication
ROBOTICS AND COMPUTER-INTEGRATED MANUFACTURING
Abstract
Human-Robot Collaboration is a critical component of Industry 4.0, contributing to a transition towards more flexible production systems that are quickly adjustable to changing production requirements. This paper aims to increase the natural collaboration level of a robotic engine assembly station by proposing a cognitive system powered by computer vision and deep learning to interpret implicit communication cues of the operator. The proposed system, which is based on a residual convolutional neural network with 34 layers and a long short-term memory recurrent neural network (ResNet-34 + LSTM), obtains assembly context through action recognition of the tasks performed by the operator. The assembly context was then integrated in a collaborative assembly plan capable of autonomously commanding the robot tasks. The proposed model showed great performance, achieving an accuracy of 96.65% and a temporal mean intersection over union (mIoU) of 94.11% for the action recognition of the considered assembly. Moreover, a task-oriented evaluation showed that the proposed cognitive system was able to leverage the performed human action recognition to command the adequate robot actions with near-perfect accuracy. As such, the proposed system was considered successful at increasing the natural collaboration level of the considered assembly station.
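As a rough sketch of the ResNet-34 + LSTM pipeline named in the abstract (layer sizes, number of classes and the pooling of the last LSTM output are assumptions, not the paper's configuration), the PyTorch snippet below extracts per-frame features with a ResNet-34 backbone and classifies the frame sequence with an LSTM head.

```python
# Illustrative PyTorch sketch of a ResNet-34 + LSTM action-recognition model
# as described above. Hidden size, class count and the use of the final LSTM
# output are assumptions for the example, not the paper's exact setup.
import torch
import torch.nn as nn
from torchvision.models import resnet34

class ActionRecognizer(nn.Module):
    def __init__(self, num_classes: int = 10, hidden_size: int = 256):
        super().__init__()
        backbone = resnet34(weights=None)          # per-frame feature extractor
        backbone.fc = nn.Identity()                # keep the 512-d pooled features
        self.backbone = backbone
        self.lstm = nn.LSTM(input_size=512, hidden_size=hidden_size, batch_first=True)
        self.classifier = nn.Linear(hidden_size, num_classes)

    def forward(self, clips: torch.Tensor) -> torch.Tensor:
        # clips: (batch, time, channels, height, width)
        b, t, c, h, w = clips.shape
        features = self.backbone(clips.reshape(b * t, c, h, w)).reshape(b, t, -1)
        outputs, _ = self.lstm(features)           # temporal modelling over the clip
        return self.classifier(outputs[:, -1])     # logits from the last time step

if __name__ == "__main__":
    model = ActionRecognizer()
    logits = model(torch.randn(2, 8, 3, 224, 224))  # 2 clips of 8 frames
    print(logits.shape)                              # torch.Size([2, 10])
```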