Publications

Publications by CTM

2022

Hybrid Quality Inspection for the Automotive Industry: Replacing the Paper-Based Conformity List through Semi-Supervised Object Detection and Simulated Data

Authors
Rio-Torto, I; Campanico, AT; Pinho, P; Filipe, V; Teixeira, LF;

Publication
APPLIED SCIENCES-BASEL

Abstract
The still-prevalent use of paper conformity lists in the automotive industry has a serious negative impact on the performance of quality-control inspectors. We propose instead a hybrid quality inspection system that combines automated detection with human feedback, in order to increase worker performance by reducing mental and physical fatigue, and to improve the adaptability and responsiveness of the assembly line to change. The system integrates hierarchical automatic detection of non-conforming vehicle parts with information visualization on a wearable device, which presents the results to the factory worker and obtains human confirmation. Besides designing a novel 3D vehicle generator to create a digital representation of the non-conformity list and to collect automatically annotated training data, we apply and aggregate state-of-the-art domain adaptation and pseudo-labeling methods in a novel way in our real application scenario, in order to bridge the gap between the labeled data produced by the vehicle generator and the real, unlabeled data collected on the factory floor. This methodology allows us to obtain, without any manual annotation of the real dataset, an example-based F1 score of 0.565 in an unconstrained scenario and 0.601 in a fixed-camera setup (improvements of 11 and 14.6 percentage points, respectively, over a baseline trained with purely simulated data). Feedback obtained from factory workers highlighted the usefulness of the proposed solution and showed that a truly hybrid assembly line, where machine and human work in symbiosis, increases both efficiency and accuracy in automotive quality control.
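As a minimal sketch of the example-based F1 metric quoted above, assuming each vehicle's ground truth and prediction are represented as sets of non-conforming part identifiers (a hypothetical representation, not the paper's data format):

# Example-based (per-sample) F1: average the per-vehicle F1 over the dataset.
def example_based_f1(ground_truths, predictions):
    scores = []
    for gt, pred in zip(ground_truths, predictions):
        if not gt and not pred:          # both empty: perfect agreement
            scores.append(1.0)
            continue
        tp = len(gt & pred)              # correctly detected parts
        precision = tp / len(pred) if pred else 0.0
        recall = tp / len(gt) if gt else 0.0
        if precision + recall == 0:
            scores.append(0.0)
        else:
            scores.append(2 * precision * recall / (precision + recall))
    return sum(scores) / len(scores)

# Hypothetical usage with two vehicles:
gts = [{"mirror_left", "badge_rear"}, {"antenna"}]
preds = [{"mirror_left"}, {"antenna", "spoiler"}]
print(example_based_f1(gts, preds))      # mean of the per-vehicle F1 values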

2022

Detection of Epilepsy in EEGs Using Deep Sequence Models - A Comparative Study

Authors
Marques, M; Lourenco, CD; Teixeira, LF;

Publication
PATTERN RECOGNITION AND IMAGE ANALYSIS (IBPRIA 2022)

Abstract
The automated detection of interictal epileptiform discharges (IEDs) through deep learning models can increase assertiveness and reduce the time spent on epilepsy diagnosis, making the process faster and more reliable. We demonstrate that deep sequence networks can be a useful class of algorithm to effectively detect IEDs. Several deep networks were tested, of which the best three architectures reached average AUC values of 0.96, 0.95 and 0.94, with test specificity and sensitivity both converging around 90%, indicating a good ability to detect IED samples in EEG records.
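A minimal sketch of a deep sequence classifier for this kind of IED detection, assuming fixed-length EEG windows shaped (batch, time steps, channels); the architecture and hyperparameters are illustrative and not the ones benchmarked in the paper:

import torch
import torch.nn as nn

class IEDDetector(nn.Module):
    def __init__(self, n_channels=19, hidden=64):
        super().__init__()
        # Bidirectional LSTM over the EEG window, followed by a linear head.
        self.lstm = nn.LSTM(n_channels, hidden, num_layers=2,
                            batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, 1)   # binary logit: IED vs. background

    def forward(self, x):
        out, _ = self.lstm(x)                  # (batch, time, 2*hidden)
        return self.head(out[:, -1, :])        # classify from the last time step

# Example forward pass on a random batch of 1-second windows at 200 Hz.
model = IEDDetector()
logits = model(torch.randn(8, 200, 19))
probs = torch.sigmoid(logits)                  # probability of an IED per window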

2022

Pattern Recognition and Image Analysis - 10th Iberian Conference, IbPRIA 2022, Aveiro, Portugal, May 4-6, 2022, Proceedings

Authors
Pinho, AJ; Georgieva, P; Teixeira, LF; Sánchez, JA;

Publication
IbPRIA

Abstract

2022

Classification of Facial Expressions Under Partial Occlusion for VR Games

Authors
Rodrigues, ASF; Lopes, JC; Lopes, RP; Teixeira, LF;

Publication
OPTIMIZATION, LEARNING ALGORITHMS AND APPLICATIONS, OL2A 2022

Abstract
Facial expressions are one of the most common ways to externalize our emotions. However, the same emotion can manifest differently in the same person and varies across different people. Based on this, we developed a system capable of detecting a person's facial expressions in real time with the eyes occluded (simulating the use of virtual reality glasses). To estimate the position of the eyes, in order to occlude them, Multi-task Cascaded Convolutional Networks (MTCNN) were used. A residual network, a VGG, and the combination of both models were used to classify 7 types of facial expressions (Angry, Disgust, Fear, Happy, Sad, Surprise, Neutral) on the occluded and non-occluded datasets. The combination of both models achieved an accuracy of 64.9% on the occluded dataset and 62.8% without occlusion, using the FER-2013 dataset. The primary goal of this work was to evaluate the influence of occlusion, and the results show that the majority of the classification relies on the mouth and chin. Nevertheless, the results remain far from the state of the art, and we expect them to improve, mainly by adjusting the MTCNN.
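A minimal sketch of the eye-occlusion step, assuming the `mtcnn` Python package and images supplied as RGB NumPy arrays; the padding around the eyes is an illustrative choice, not the value used in the paper:

import numpy as np
from mtcnn import MTCNN

detector = MTCNN()

def occlude_eyes(image, pad=25):
    # Black out a rectangle around the detected eyes (VR headset proxy).
    for face in detector.detect_faces(image):
        lx, ly = face["keypoints"]["left_eye"]
        rx, ry = face["keypoints"]["right_eye"]
        x0, x1 = min(lx, rx) - pad, max(lx, rx) + pad
        y0, y1 = min(ly, ry) - pad, max(ly, ry) + pad
        image[max(y0, 0):y1, max(x0, 0):x1] = 0   # simulated headset occlusion
    return image

The occluded images can then be fed to the expression classifier exactly as the unmodified ones, which is what makes the comparison between the two settings possible.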

2022

Pattern Recognition and Image Analysis

Authors
Pinho, AJ; Georgieva, P; Teixeira, LF; Sánchez, JA;

Publication
Lecture Notes in Computer Science

Abstract

2022

Boosting color similarity decisions using the CIEDE2000_PF Metric

Authors
Pereira, A; Carvalho, P; Corte Real, L;

Publication
SIGNAL IMAGE AND VIDEO PROCESSING

Abstract
Color comparison is a key aspect in many areas of application, including industrial applications, and different metrics have been proposed. In many applications, this comparison is required to be closely related to human perception of color differences, which adds complexity to the process. To tackle this, different approaches have been proposed over the years, culminating in the CIEDE2000 formulation. In our previous work, we showed that simple color properties could be used to reduce the computational time of a color similarity decision process that employed this metric, which is recognized as having high computational complexity. In this paper, we show mathematically and experimentally that these findings can be adapted and extended to the recently proposed CIEDE2000_PF metric, which has been recommended by the CIE for industrial applications. Moreover, we propose new efficient models that not only achieve lower error rates, but also outperform the results obtained for the CIEDE2000 metric.
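An illustrative sketch of a thresholded color-similarity decision using the baseline CIEDE2000 formula available in scikit-image; the CIEDE2000_PF variant applies a further parametric correction on top of this value, which is not implemented in scikit-image and is omitted here, and the threshold below is a hypothetical just-noticeable-difference value, not the one tuned in the paper:

import numpy as np
from skimage.color import rgb2lab, deltaE_ciede2000

def similar_colors(rgb_a, rgb_b, threshold=2.3):
    # Convert both sRGB colors to CIELAB and compare their CIEDE2000 distance.
    lab_a = rgb2lab(np.array([[rgb_a]], dtype=float) / 255.0)
    lab_b = rgb2lab(np.array([[rgb_b]], dtype=float) / 255.0)
    return float(deltaE_ciede2000(lab_a, lab_b)[0, 0]) <= threshold

print(similar_colors((200, 30, 30), (205, 28, 35)))  # near-identical reds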
