2024
Authors
Torres, JM; Oliveira, S; Sobral, PM; Moreira, RS; Soares, C;
Publication
SN Comput. Sci.
Abstract
We spend about one-third of our lives either sleeping or attempting to do so. Sleep is key to most human body processes, affecting physical and mental health and the ability to fight disease, develop immunity and control metabolism. Monitoring human sleep quality is therefore extremely important for detecting possible sleep disorders. Several technologies exist to achieve this goal; however, most are expensive proprietary systems, some require hospitalization, and many use intrusive equipment that can, by itself, affect sleep quality. This paper presents an intelligent system, a complete low-cost hardware and software solution, for monitoring the sleep quality of an individual in a home environment. User privacy is guaranteed as all processing is done at the edge and no audio or video is stored. The system monitors several fundamental aspects of sleeping periods in real time using a low-cost single-board computer for processing, a camera for body motion detection (MD module) and eye/sleep status detection (SSD module), and a microphone for audio recognition (AUDR module) covering breath pattern analysis and snore detection. It can be strategically placed near the bed to avoid interfering with the natural sleep pattern. For each sleeping period, the system produces a final report that can be a valuable aid for improving the sleeping health of the monitored person. Functional unit tests were carried out successfully on the selected low-cost hardware platform (Raspberry Pi). The entire process was validated by an expert clinical psychologist, ensuring the reliability and effectiveness of the system. The visual and sound modules use sophisticated computer vision and machine learning techniques suitable for edge computing devices.
Each of the system’s features has been independently tested, using properly organized audio and video datasets and the well-established metrics of precision, recall and F1 score, to evaluate the binary classifiers in each of the three modules. The accuracy values obtained were 90.2% (MD), 79.1% (SSD) and 81.3% (AUDR), demonstrating the great application potential of our solution.
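As an illustration of the evaluation protocol described above, precision, recall and F1 score for a binary classifier can be computed from true/predicted labels as follows. This is a generic sketch; the label vectors are invented for the example and are not taken from the paper's datasets.

```python
def precision_recall_f1(y_true, y_pred):
    """Precision, recall and F1 for binary labels (1 = positive class)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

# Hypothetical motion-detection labels (1 = motion detected, 0 = no motion)
y_true = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred = [1, 0, 1, 0, 0, 1, 1, 0]
p, r, f = precision_recall_f1(y_true, y_pred)  # 0.75, 0.75, 0.75
```

The same routine would be applied per module (MD, SSD, AUDR) over its held-out test set.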
2024
Authors
Gomes, B; Soares, C; Torres, JM; Karmali, K; Karmali, S; Moreira, RS; Sobral, P;
Publication
SENSORS
Abstract
In Portugal, more than 98% of domestic cooking oil is disposed of improperly every day. This prevents its recycling or reconversion into another form of energy, and it may also become a potentially harmful contaminant of soil and water. Driven by the utility of recycled cooking oil, and leveraging the exponential growth of ubiquitous computing approaches, we propose an IoT smart solution for domestic used cooking oil (UCO) collection bins. We call this approach SWAN, which stands for Smart Waste Accumulation Network; it is deployed and evaluated in Portugal. It consists of a countrywide network of collection bin units available in public areas. Two metrics are considered to evaluate the system's success: (i) user engagement, and (ii) used cooking oil collection efficiency. The presented system should (i) perform under scenarios of temporary communication network failures, and (ii) be scalable to accommodate an ever-growing number of installed collection units. We therefore depart from the traditional cloud computing paradigm and rely instead on edge node infrastructure to process, store, and act upon the locally collected data, treating communication as a delay-tolerant task, i.e., an edge computing solution. We conduct a comparative analysis revealing the benefits of the edge-computing-enabled collection bin versus a cloud computing solution. The studied period covers four years of collected data. An exponential increase in the amount of used cooking oil collected is identified, with the developed solution being responsible for surpassing the national collection totals of previous years. During the same period, we also improved the collection process, as we were able to more accurately estimate the optimal collection and system maintenance intervals.
2024
Authors
Teixeira, M; Oliveira, JM; Ramos, P;
Publication
MACHINE LEARNING AND KNOWLEDGE EXTRACTION
Abstract
Retailers depend on accurate sales forecasts to effectively plan operations and manage supply chains. These forecasts are needed across various levels of aggregation, making hierarchical forecasting methods essential for the retail industry. As competition intensifies, the use of promotions has become a widespread strategy, significantly impacting consumer purchasing behavior. This study seeks to improve forecast accuracy by incorporating promotional data into hierarchical forecasting models. Using a sales dataset from a major Portuguese retailer, base forecasts are generated for different hierarchical levels using ARIMA models and Multi-Layer Perceptron (MLP) neural networks. Reconciliation methods including bottom-up, top-down, and optimal reconciliation with OLS and WLS (struct) estimators are employed. The results show that MLPs outperform ARIMA models for forecast horizons longer than one day. While the addition of regressors enhances ARIMA's accuracy, it does not yield similar improvements for MLP. MLPs present a compelling balance of simplicity and efficiency, outperforming ARIMA in flexibility while offering faster training times and lower computational demands compared to more complex deep learning models, making them highly suitable for practical retail forecasting applications.
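The reconciliation methods mentioned in this abstract all coerce base forecasts into coherence across the hierarchy. As a minimal sketch (with an invented two-store hierarchy, not the retailer's data), bottom-up reconciliation sums bottom-level forecasts through an aggregation matrix S so that every level is consistent:

```python
import numpy as np

# Toy hierarchy: total = store_A + store_B
# Rows of S map the 2 bottom-level series to all 3 series in the hierarchy.
S = np.array([
    [1, 1],   # total
    [1, 0],   # store A
    [0, 1],   # store B
])

bottom_forecasts = np.array([120.0, 80.0])  # base forecasts for the two stores
coherent = S @ bottom_forecasts             # forecasts at every level, guaranteed to add up
```

Top-down and optimal (OLS/WLS) reconciliation differ only in how the bottom-level values are derived before this summation step.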
2024
Authors
Oliveira, JM; Ramos, P;
Publication
MATHEMATICS
Abstract
This study investigates the effectiveness of Transformer-based models for retail demand forecasting. We evaluated vanilla Transformer, Informer, Autoformer, PatchTST, and temporal fusion Transformer (TFT) against traditional baselines like AutoARIMA and AutoETS. Model performance was assessed using mean absolute scaled error (MASE) and weighted quantile loss (WQL). The M5 competition dataset, comprising 30,490 time series from 10 stores, served as the evaluation benchmark. The results demonstrate that Transformer-based models significantly outperform traditional baselines, with Transformer, Informer, and TFT leading the performance metrics. These models achieved MASE improvements of 26% to 29% and WQL reductions of up to 34% compared to the seasonal Naïve method, particularly excelling in short-term forecasts. While Autoformer and PatchTST also surpassed traditional methods, their performance was slightly lower, indicating the potential for further tuning. Additionally, this study highlights a trade-off between model complexity and computational efficiency, with Transformer models, though computationally intensive, offering superior forecasting accuracy compared to the significantly slower traditional models like AutoARIMA. These findings underscore the potential of Transformer-based approaches for enhancing retail demand forecasting, provided the computational demands are managed effectively.
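For reference, the MASE metric used in this study scales the forecast's mean absolute error by the in-sample error of the seasonal naive method. The sketch below uses invented numbers purely to show the computation (seasonal period m = 1 gives the plain naive scaling):

```python
def mase(y_true, y_pred, y_train, m=1):
    """Mean absolute scaled error: forecast MAE divided by the
    in-sample MAE of the seasonal naive forecast with period m."""
    mae = sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)
    scale = sum(abs(y_train[i] - y_train[i - m])
                for i in range(m, len(y_train))) / (len(y_train) - m)
    return mae / scale

# Illustrative series: training history and a 2-step-ahead forecast
score = mase(y_true=[18, 20], y_pred=[17, 21], y_train=[10, 12, 14, 16])  # 0.5
```

A value below 1 means the model beats the (seasonal) naive baseline on average, which is how the reported 26-29% improvements should be read.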
2024
Authors
Beirão, MM; Matos, J; Gonçalves, T; Kase, C; Nakayama, LF; Freitas, Dd; Cardoso, JS;
Publication
CoRR
Abstract
2024
Authors
Rio Torto, I; Cardoso, JS; Teixeira, LF;
Publication
MEDICAL IMAGING WITH DEEP LEARNING
Abstract
The increased interest in and importance of explaining neural networks' predictions, especially in the medical community, together with the known unreliability of saliency maps (the most common explainability method), has sparked research into other types of explanations. Natural Language Explanations (NLEs) emerge as an alternative, with the advantage of being inherently understandable by humans and the standard way that radiologists explain their diagnoses. We extend previous work on NLE generation for multi-label chest X-ray diagnosis by replacing the traditional decoder-only NLE generator with an encoder-decoder architecture. This constitutes a first step towards Reinforcement Learning-free adversarial generation of NLEs when no (or few) ground-truth NLEs are available for training, since the generation is done in the continuous encoder latent space instead of in the discrete decoder output space. However, in the current scenario, large amounts of annotated examples are still required, which are especially costly to obtain in the medical domain, given that they need to be provided by clinicians. Thus, we explore how the recent developments in Parameter-Efficient Fine-Tuning (PEFT) can be leveraged for this use case. We compare different PEFT methods and find that integrating the visual information into the NLE generator layers, instead of only at the input, achieves the best results, even outperforming the fully fine-tuned encoder-decoder-based model, while only training 12% of the model parameters. Additionally, we empirically demonstrate the viability of supervising the NLE generation process on the encoder latent space, thus laying the foundation for RL-free adversarial training in low ground-truth NLE availability regimes. The code is publicly available at https://github.com/icrto/peft-nles.