2025
Authors
Khanal, SR; Sharma, P; Thapa, K; Fernandes, H; Barroso, JMP; Filipe, V;
Publication
Proceedings of the 11th International Conference on Software Development and Technologies for Enhancing Accessibility and Fighting Info-exclusion
Abstract
Facial expression is a form of communication that can be used to interact with computers and other electronic devices, and the recognition of emotion from faces is an emerging practice with applications in many fields. Many cloud-based vision application programming interfaces (APIs) are available that recognize emotion from facial images and video. In this article, the performance of two well-known APIs is compared using a public dataset of 980 images of facial emotions. For these experiments, a client program was developed that iterates over the image set, calls the cloud services, and caches the emotion-detection result for each image. Performance was evaluated for each class of emotions using prediction accuracy. It was found that the prediction accuracy for each emotion varies according to the cloud service being used. Similarly, each service provider shows a strong variation in performance according to the class being analyzed, as detailed in this article.
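The evaluation loop described in the abstract — iterate over the image set, call the cloud service, cache each prediction, and compute per-class accuracy — can be sketched as follows. This is a minimal illustration, not the authors' actual client: the `detect_emotion` callable stands in for whichever cloud API is being tested, and the cache layout is an assumption.

```python
import json
from collections import defaultdict
from pathlib import Path

def evaluate(images, detect_emotion, cache_path="cache.json"):
    """Run the emotion detector over (image_path, true_label) pairs,
    caching predictions so each image is sent to the cloud only once,
    and return per-class prediction accuracy."""
    cache_file = Path(cache_path)
    cache = json.loads(cache_file.read_text()) if cache_file.exists() else {}
    correct, total = defaultdict(int), defaultdict(int)
    for path, true_label in images:
        if path not in cache:              # avoid repeating (paid) API calls
            cache[path] = detect_emotion(path)
        total[true_label] += 1
        if cache[path] == true_label:
            correct[true_label] += 1
    cache_file.write_text(json.dumps(cache))   # persist results between runs
    return {label: correct[label] / total[label] for label in total}
```

Caching the raw responses, as the paper's client does, means the (rate-limited, billed) cloud calls happen once, while the per-class accuracy analysis can be rerun freely.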
2023
Authors
Cosme, J; Ribeiro, A; Filipe, V; Amorim, EV; Pinto, R;
Publication
Web Information Systems and Technologies - 19th International Conference, WEBIST 2023, Rome, Italy, November 15-17, 2023, Revised Selected Papers
Abstract
The Digital Twin concept involves the transition to digital representations of factory-floor equipment, the computerized simulation of processes, and the visualization of data in real time. This type of digital transformation can be considered radical, encountering barriers to its implementation either due to resistance to change by the different elements that make up the industry or due to the disruption it can cause in the production process. The start of production on an assembly line is usually preceded by a procedure that checks parameters and conditions of the equipment on the line, using a sheet of paper containing the list of items to check and validate. In this article we describe the adoption of a paperless checklist to verify the configuration of assembly-line equipment at production bootstrapping. A training program to coach employees toward a successful digital transition is also presented and discussed. Both the digital checklist and the training program are validated in a real-world industrial scenario. The results highlight the advantages of the digital checklist, with multi-access viewing and retention of data for later analysis, while the training plan proved effective in breaking down barriers and resistance to the adoption of a new working method.
2025
Authors
Fernandes, T; Silva, T; Vaz, J; Silva, J; Cruz, G; Sousa, A; Barroso, J; Martins, P; Filipe, V;
Publication
Communications in Computer and Information Science - Technology and Innovation in Learning, Teaching and Education
Abstract
2025
Authors
Franco-Gonçalo, P; Leite, P; Alves-Pimenta, S; Colaço, B; Gonçalves, L; Filipe, V; McEvoy, F; Ferreira, M; Ginja, M;
Publication
APPLIED SCIENCES-BASEL
Abstract
Canine hip dysplasia (CHD) screening relies on radiographic assessment, but traditional scoring methods often lack consistency due to inter-rater variability. This study presents an AI-driven system for automated measurement of the femoral head center to dorsal acetabular edge (FHC/DAE) distance, a key metric in CHD evaluation. Unlike most AI models that directly classify CHD severity using convolutional neural networks, this system provides an interpretable, measurement-based output to support a more transparent evaluation. The system combines a keypoint regression model for femoral head center localization with a U-Net-based segmentation model for acetabular edge delineation. It was trained on 7967 images for hip joint detection, 571 for keypoints, and 624 for acetabulum segmentation, all from ventrodorsal hip-extended radiographs. On a test set of 70 images, the keypoint model achieved high precision (Euclidean Distance = 0.055 mm; Mean Absolute Error = 0.0034 mm; Mean Squared Error = 2.52 × 10⁻⁵ mm²), while the segmentation model showed strong performance (Dice Score = 0.96; Intersection over Union = 0.92). Comparison with expert annotations demonstrated strong agreement (Intraclass Correlation Coefficients = 0.97 and 0.93; Weighted Kappa = 0.86 and 0.79; Standard Error of Measurement = 0.92 to 1.34 mm). By automating anatomical landmark detection, the system enhances standardization, reproducibility, and interpretability in CHD radiographic assessment. Its strong alignment with expert evaluations supports its integration into CHD screening workflows for more objective and efficient diagnosis and CHD scoring.
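The metrics reported above (Dice score and Intersection over Union for the segmentation model, Euclidean distance for the keypoint model) can be computed from binary masks and landmark coordinates as in this minimal NumPy sketch; the function names are illustrative, not from the paper's codebase.

```python
import numpy as np

def dice_iou(pred_mask, true_mask):
    """Dice score and Intersection over Union for two binary masks."""
    pred = np.asarray(pred_mask, dtype=bool)
    true = np.asarray(true_mask, dtype=bool)
    inter = np.logical_and(pred, true).sum()
    union = np.logical_or(pred, true).sum()
    dice = 2.0 * inter / (pred.sum() + true.sum())
    return dice, inter / union

def keypoint_error(pred_xy, true_xy):
    """Euclidean distance between a predicted and a reference landmark
    (in pixels; multiply by the radiograph's mm-per-pixel scale for mm)."""
    return float(np.linalg.norm(np.asarray(pred_xy) - np.asarray(true_xy)))
```

Dice weights the overlap against the mean mask size, while IoU weights it against the union, so Dice is always at least as large as IoU for the same pair of masks, consistent with the reported 0.96 versus 0.92.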