2025
Authors
Baccega, D; Aguilar, J; Baquero, C; Fernández Anta, A; Ramirez, JM;
Publication
Abstract
2025
Authors
Tame, ID; Tolosana, R; Melzi, P; Rodríguez, RV; Kim, M; Rathgeb, C; Liu, X; Gomez, LF; Morales, A; Fierrez, J; Garcia, JO; Zhong, Z; Huang, Y; Mi, Y; Ding, S; Zhou, S; He, S; Fu, L; Cong, H; Zhang, R; Xiao, Z; Smirnov, E; Pimenov, A; Grigorev, A; Timoshenko, D; Asfaw, KM; Low, CY; Liu, H; Wang, C; Zuo, Q; He, Z; Shahreza, HO; George, A; Unnervik, A; Rahimi, P; Marcel, S; Neto, PC; Huber, M; Kolf, JN; Damer, N; Boutros, F; Cardoso, JS; Sequeira, AF; Atzori, A; Fenu, G; Marras, M; Struc, V; Yu, J; Li, Z; Li, J; Zhao, W; Lei, Z; Zhu, X; Zhang, X; Biesseck, B; Vidal, P; Coelho, L; Granada, R; Menotti, D;
Publication
Information Fusion
Abstract
Synthetic data is gaining increasing popularity for face recognition technologies, mainly due to privacy concerns and the challenges associated with obtaining real data, including diverse scenarios, quality, and demographic groups, among others. It also offers some advantages over real data, such as the large amount of data that can be generated or the ability to customize it to adapt to specific problem-solving needs. To effectively use such data, face recognition models should also be specifically designed to exploit synthetic data to its fullest potential. To promote the proposal of novel Generative AI methods and synthetic data, and to investigate the application of synthetic data to better train face recognition systems, we introduce the 2nd FRCSyn-onGoing challenge, based on the 2nd Face Recognition Challenge in the Era of Synthetic Data (FRCSyn), originally launched at CVPR 2024. This ongoing challenge provides researchers with an accessible platform to benchmark (i) novel Generative AI methods and synthetic data, and (ii) novel face recognition systems specifically designed to take advantage of synthetic data. We focus on exploring the use of synthetic data, both individually and in combination with real data, to solve current challenges in face recognition such as demographic bias, domain adaptation, and performance constraints in demanding situations, such as age disparities between training and testing, changes in pose, or occlusions. This second edition yields notable findings, including a direct comparison with the first edition, in which synthetic databases were restricted to DCFace and GANDiffFace. © 2025
2025
Authors
Guimarães, M; Carneiro, D; Soares, L; Ribeiro, M; Loureiro, G;
Publication
Advances in Information and Communication - Proceedings of the 2025 Future of Information and Communication Conference (FICC), Volume 1, Berlin, Germany, 27-28 April 2025.
Abstract
The interaction between humans and technology has always been a key determinant of adoption and efficiency. This is true whether the interaction is with hardware, software, or data. In the particular case of Information Retrieval (IR), recent developments in Deep Learning and Natural Language Processing (NLP) have opened the door to more natural and efficient IR, no longer based on keywords or similarity metrics but on a distributed representation of meaning. In this paper we propose an agent-based architecture to serve as an interface with industrial systems, in which agents are powered by specific Large Language Models (LLMs). Its main goal is to make the interaction with such systems (e.g. data sources, production systems, machines) natural, allowing users to execute complex tasks with simple prompts. To this end, key aspects considered in the architecture are human-centricity and context-awareness. This paper provides a high-level description of this architecture and then focuses on the development and evaluation of one of its key agents, responsible for information retrieval. For this purpose, we detail three application scenarios and evaluate the ability of this agent to select the appropriate data sources to answer a specific prompt. Depending on the scenario and on the underlying model, results show an accuracy of up to 80%, demonstrating that the proposed agent can autonomously select from among several available data sources to answer a specific information need. © The Author(s), under exclusive license to Springer Nature Switzerland AG 2025.
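As a rough illustration of the source-selection step this abstract describes, the sketch below routes a user prompt to one of several registered data sources via a generic LLM completion function. All names here (DataSource, select_source, the prompt format) are hypothetical; the paper does not specify its implementation, and a real deployment would plug an actual LLM client into the `complete` callable.

```python
# Minimal sketch of a data-source-selection agent. Assumes a generic
# complete(prompt) -> str LLM call; names and prompt format are illustrative,
# not taken from the paper.
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class DataSource:
    name: str
    description: str  # natural-language summary shown to the LLM

def select_source(prompt: str,
                  sources: List[DataSource],
                  complete: Callable[[str], str]) -> DataSource:
    """Ask the LLM which registered source best answers the user prompt."""
    catalog = "\n".join(f"- {s.name}: {s.description}" for s in sources)
    instruction = (
        "You route user questions to data sources.\n"
        f"Available sources:\n{catalog}\n"
        f"Question: {prompt}\n"
        "Reply with the name of the single most appropriate source."
    )
    answer = complete(instruction).strip().lower()
    for s in sources:  # match the reply back to a registered source
        if s.name.lower() in answer:
            return s
    return sources[0]  # fall back if the reply matches nothing

# Example with a stubbed LLM that always picks the telemetry source.
sources = [
    DataSource("erp", "orders, inventory and billing records"),
    DataSource("telemetry", "real-time sensor readings from production machines"),
]
stub = lambda _: "telemetry"
print(select_source("What is the spindle temperature on line 3?", sources, stub).name)
```

Evaluating such an agent then reduces to checking, over a set of prompts with known correct sources, how often the returned source matches, which is the accuracy figure the abstract reports.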
2025
Authors
Ribeiro, J; Brilhante, M; Matos, DM; Silva, CA; Sobreira, H; Costa, P;
Publication
2025 IEEE International Conference on Autonomous Robot Systems and Competitions (ICARSC)
Abstract
2025
Authors
Cerqueira, V; Roque, L; Soares, C;
Publication
Discovery Science, DS 2024, Part I
Abstract
Accurate evaluation of forecasting models is essential for ensuring reliable predictions. Current practices for evaluating and comparing forecasting models focus on summarising performance into a single score, using metrics such as SMAPE. We hypothesize that averaging performance over all samples dilutes relevant information about the relative performance of models, in particular the conditions under which this relative performance differs from the overall accuracy. We address this limitation by proposing a novel framework for evaluating univariate time series forecasting models from multiple perspectives, such as one-step ahead versus multi-step ahead forecasting. We show the advantages of this framework by comparing a state-of-the-art deep learning approach with classical forecasting techniques. While classical methods (e.g. ARIMA) are long-standing approaches to forecasting, deep neural networks (e.g. NHITS) have recently shown state-of-the-art forecasting performance on benchmark datasets. We conducted extensive experiments showing that NHITS generally performs best, but its superiority varies with forecasting conditions. For instance, concerning the forecasting horizon, NHITS only outperforms classical approaches for multi-step ahead forecasting. Another relevant insight is that, when dealing with anomalies, NHITS is outperformed by methods such as Theta. These findings highlight the importance of evaluating forecasts across multiple dimensions.
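To make the contrast with single-score evaluation concrete, the sketch below computes SMAPE separately for each horizon step instead of averaging over all samples, so one-step and multi-step performance can be inspected side by side. The SMAPE variant (0-200 scale) and the array shapes are assumptions for illustration, not the paper's exact protocol.

```python
# Horizon-wise SMAPE sketch. Assumes forecasts stored as numpy arrays of
# shape (n_series, horizon); this SMAPE definition is one common variant.
import numpy as np

def smape_per_step(y: np.ndarray, y_hat: np.ndarray) -> np.ndarray:
    """SMAPE at each horizon step, averaged over series."""
    num = 2.0 * np.abs(y - y_hat)
    den = np.abs(y) + np.abs(y_hat) + 1e-8  # guard against zero denominators
    return 100.0 * (num / den).mean(axis=0)  # shape: (horizon,)

rng = np.random.default_rng(0)
y = rng.uniform(1, 10, size=(100, 12))        # 100 series, 12-step horizon
y_hat = y + rng.normal(0, 0.5, size=y.shape)  # noisy forecasts
per_step = smape_per_step(y, y_hat)
print("one-step SMAPE:", per_step[0])
print("multi-step SMAPE (steps 2-12):", per_step[1:].mean())
```

A single averaged score would collapse `per_step` into one number, hiding exactly the kind of horizon-dependent differences the abstract reports between NHITS and classical methods.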
2025
Authors
Gaudio, A; Giordano, N; Elhilali, M; Schmidt, S; Renna, F;
Publication
IEEE Transactions on Biomedical Engineering
Abstract
The detection of Pulmonary Hypertension (PH) from the computer analysis of digitized heart sounds is a low-cost and non-invasive solution for early PH detection and screening. We present an extensive cross-domain evaluation methodology with varying animals (humans and porcine animals) and varying auscultation technologies (phonocardiography and seismocardiography) evaluated across four methods. We introduce PH-ELM, a resource-efficient PH detection model based on the extreme learning machine that is smaller (300× fewer parameters), more energy efficient (532× fewer watts of power), faster (36× faster to train, 44× faster at inference), and more accurate on out-of-distribution testing (improving median accuracy by 0.09 area under the ROC curve (auROC)) in comparison to a previously best-performing deep network. We make four observations from our analysis: (a) digital auscultation is a promising technology for the detection of pulmonary hypertension; (b) seismocardiography (SCG) signals and phonocardiography (PCG) signals are interchangeable for training PH detectors; (c) porcine heart sounds in the training data can be used to evaluate PH from human heart sounds (the PH-ELM model preserves 88 to 95% of the best in-distribution baseline performance); (d) the predictive performance of PH detection can be mostly preserved with as few as 10 heartbeats, and capturing up to approximately 200 heartbeats per subject can further improve performance. © 1964-2012 IEEE.
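For readers unfamiliar with extreme learning machines, the sketch below shows the core idea behind models like PH-ELM: a fixed random hidden layer followed by output weights fitted in closed form by least squares, which is what keeps the parameter count small and training fast. The feature dimensions, class, and toy data here are illustrative assumptions, not the paper's PH-ELM.

```python
# Minimal extreme-learning-machine sketch in numpy: a random, untrained
# nonlinear projection plus output weights solved by least squares.
import numpy as np

class ELM:
    def __init__(self, n_hidden: int = 300, seed: int = 0):
        self.n_hidden = n_hidden
        self.rng = np.random.default_rng(seed)

    def _hidden(self, X: np.ndarray) -> np.ndarray:
        return np.tanh(X @ self.W + self.b)  # fixed random projection

    def fit(self, X: np.ndarray, y: np.ndarray) -> "ELM":
        self.W = self.rng.normal(size=(X.shape[1], self.n_hidden))
        self.b = self.rng.normal(size=self.n_hidden)
        H = self._hidden(X)
        # Only the output weights are trained, in closed form.
        self.beta, *_ = np.linalg.lstsq(H, y, rcond=None)
        return self

    def decision(self, X: np.ndarray) -> np.ndarray:
        return self._hidden(X) @ self.beta

# Toy usage: feature vectors standing in for per-heartbeat acoustic features.
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 40))
y = (X[:, 0] + 0.1 * rng.normal(size=200) > 0).astype(float)
scores = ELM().fit(X, y).decision(X)
print("training accuracy:", ((scores > 0.5) == y).mean())
```

Because only `beta` is learned, training amounts to one least-squares solve, consistent with the large speed and energy savings the abstract reports relative to a deep network.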