Publications

2025

Extending the Quantitative Pattern-Matching Paradigm

Authors
Alves, S; Kesner, D; Ramos, M;

Publication
PROGRAMMING LANGUAGES AND SYSTEMS, APLAS 2024

Abstract
We show how (well-established) type systems based on non-idempotent intersection types can be extended to characterize termination properties of functional programming languages with pattern matching features. To model such programming languages, we use a (weak and closed) lambda-calculus integrating a pattern matching mechanism on algebraic data types (ADTs). Remarkably, we also show that this language not only encodes Plotkin's CBV and CBN lambda-calculus as well as other subsuming frameworks, such as the bang-calculus, but can also be used to interpret the semantics of effectful languages with exceptions. After a thorough study of the untyped language, we introduce a type system based on intersection types, and we show through purely logical methods that the set of terminating terms of the language corresponds exactly to that of well-typed terms. Moreover, by considering non-idempotent intersection types, this characterization turns out to be quantitative, i.e. the size of the type derivation of a term t gives an upper bound for the number of evaluation steps from t to its normal form.
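To make the setting concrete, the sketch below (Python, purely illustrative; it is not the paper's calculus or type system, and all names and the term representation are assumptions made for the example) implements a weak, closed call-by-value lambda-calculus with constructors and pattern matching on algebraic data types, instrumented to count beta and match steps. In the paper, the size of a non-idempotent intersection type derivation gives an upper bound on exactly this kind of step count.

from dataclasses import dataclass

@dataclass
class Var:
    name: str

@dataclass
class Lam:
    param: str
    body: object

@dataclass
class App:
    fun: object
    arg: object

@dataclass
class Con:          # ADT constructor application, e.g. Con("S", (Con("Z", ()),))
    tag: str
    args: tuple

@dataclass
class Match:        # branches: tag -> (parameter names, branch body)
    scrut: object
    branches: dict

def subst(term, name, value):
    # Substitution of a closed value, so no variable capture can occur.
    match term:
        case Var(n):
            return value if n == name else term
        case Lam(p, b):
            return term if p == name else Lam(p, subst(b, name, value))
        case App(f, a):
            return App(subst(f, name, value), subst(a, name, value))
        case Con(t, args):
            return Con(t, tuple(subst(a, name, value) for a in args))
        case Match(s, brs):
            return Match(subst(s, name, value),
                         {t: (ps, b if name in ps else subst(b, name, value))
                          for t, (ps, b) in brs.items()})

def eval_cbv(term, steps=0):
    # Weak CBV evaluation of a closed term; returns (value, beta/match step count).
    match term:
        case Lam(_, _):
            return term, steps
        case Con(t, args):
            vals = []
            for a in args:
                v, steps = eval_cbv(a, steps)
                vals.append(v)
            return Con(t, tuple(vals)), steps
        case App(f, a):
            fv, steps = eval_cbv(f, steps)
            av, steps = eval_cbv(a, steps)
            return eval_cbv(subst(fv.body, fv.param, av), steps + 1)   # beta step
        case Match(s, brs):
            sv, steps = eval_cbv(s, steps)
            params, body = brs[sv.tag]
            for p, v in zip(params, sv.args):
                body = subst(body, p, v)
            return eval_cbv(body, steps + 1)                           # match step

# Predecessor on Peano numerals via pattern matching: one beta step, one match step.
pred = Lam("n", Match(Var("n"), {"Z": ((), Con("Z", ())), "S": (("m",), Var("m"))}))
two = Con("S", (Con("S", (Con("Z", ()),)),))
print(eval_cbv(App(pred, two)))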

2025

CBVLM: Training-free Explainable Concept-based Large Vision Language Models for Medical Image Classification

Authors
Patrício, C; Torto, IR; Cardoso, JS; Teixeira, LF; Neves, JC;

Publication
CoRR

Abstract

2025

FOMO as a Trigger to Embrace the Digital Nomad Lifestyle

Authors
de Almeida, MA; de Souza Nascimento, MG; Correia, A; Barbosa, CE; de Souza, JM; Schneider, D;

Publication
2025 28th International Conference on Computer Supported Cooperative Work in Design (CSCWD)

Abstract

2025

Cognitive Ethical Design and Evaluation of Productive Reinforcing Spiral Model to Mitigate the Challenge of Extreme Polarization

Authors
Camargo Pimentel, AP; Motta, C; Correia, A; De Souza, JM; Schneider, D;

Publication
2025 28th International Conference on Computer Supported Cooperative Work in Design (CSCWD)

Abstract

2025

Stress-Testing of Multimodal Models in Medical Image-Based Report Generation

Authors
Carvalhido, F; Cardoso, HL; Cerqueira, V;

Publication
AAAI-25, Sponsored by the Association for the Advancement of Artificial Intelligence, February 25 - March 4, 2025, Philadelphia, PA, USA

Abstract
Multimodal models, namely vision-language models, present unique possibilities through the seamless integration of different information mediums for data generation. These models mostly act as black boxes, lacking transparency and explainability. Reliable results require accountable and trustworthy Artificial Intelligence (AI), particularly when used for critical tasks such as the automatic generation of medical imaging reports for healthcare diagnosis. By exploring stress-testing techniques, multimodal generative models can become more transparent by disclosing their shortcomings, further supporting their responsible use in the medical field.
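The abstract does not detail a specific protocol, but a generic perturbation-based stress test can be sketched as follows (Python; generate_report is a hypothetical stand-in for the vision-language model under test, and the Gaussian-noise corruption and string-similarity measure are illustrative assumptions): perturb the input image and measure how far the generated report drifts from the clean-image report.

from difflib import SequenceMatcher
import numpy as np

def generate_report(image: np.ndarray) -> str:
    # Hypothetical stand-in: image in, free-text report out.
    raise NotImplementedError("plug in the multimodal model under test")

def add_gaussian_noise(image: np.ndarray, sigma: float) -> np.ndarray:
    # Simple corruption: additive Gaussian noise, clipped to the [0, 1] range.
    noisy = image + np.random.normal(0.0, sigma, size=image.shape)
    return np.clip(noisy, 0.0, 1.0)

def stress_test(image: np.ndarray, sigmas=(0.01, 0.05, 0.1)) -> dict:
    # Text similarity (0..1) between the clean-image report and each perturbed one;
    # a sharp drop under mild corruption exposes a shortcoming of the model.
    baseline = generate_report(image)
    return {sigma: SequenceMatcher(None, baseline,
                                   generate_report(add_gaussian_noise(image, sigma))).ratio()
            for sigma in sigmas}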

2025

Sampling approaches to reduce very frequent seasonal time series

Authors
Baldo, A; Ferreira, PJS; Mendes Moreira, J;

Publication
EXPERT SYSTEMS

Abstract
With technological advancements, much data is being captured by sensors, smartphones, wearable devices, and so forth. These vast datasets are stored in data centres and used to build data-driven models for the condition monitoring of infrastructures and systems through future data mining tasks. However, these datasets often surpass the processing capabilities of traditional information systems and methodologies due to their significant size. Additionally, not all samples within these datasets contribute valuable information during the model training phase, leading to inefficiencies. Processing and training Machine Learning algorithms becomes time-consuming, and storing all the data demands excessive space, contributing to the Big Data challenge. In this paper, we propose two novel techniques to reduce large time-series datasets into more compact versions without undermining the predictive performance of the resulting models. These methods also aim to decrease the time required for training the models and the storage space needed for the condensed datasets. We evaluated our techniques on five public datasets, employing three Machine Learning algorithms: Holt-Winters, SARIMA, and LSTM. The outcomes indicate that, for most of the datasets examined, our techniques maintain, and in several instances enhance, the forecasting accuracy of the models. Moreover, we significantly reduced the time required to train the Machine Learning algorithms employed.
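The paper's two reduction techniques are not reproduced here, but the general idea of shrinking a very frequent seasonal series before model training can be illustrated as follows (Python with pandas and statsmodels; the synthetic series, frequencies, and seasonal periods are assumptions made for the example).

import numpy as np
import pandas as pd
from statsmodels.tsa.holtwinters import ExponentialSmoothing

# Synthetic 15-minute series with a daily seasonal pattern (period of 96 steps).
idx = pd.date_range("2024-01-01", periods=28 * 96, freq="15min")
y = pd.Series(10 + 3 * np.sin(2 * np.pi * np.arange(len(idx)) / 96)
              + np.random.normal(0, 0.3, len(idx)), index=idx)

# Reduction step: hourly means shrink the dataset four-fold while preserving the
# daily seasonal shape (the period drops from 96 quarter-hours to 24 hours).
y_reduced = y.resample("h").mean()

# Fit Holt-Winters on the full and reduced series and compare next-hour forecasts.
full_model = ExponentialSmoothing(y, seasonal="add", seasonal_periods=96).fit()
reduced_model = ExponentialSmoothing(y_reduced, seasonal="add", seasonal_periods=24).fit()
print(full_model.forecast(4).mean(), reduced_model.forecast(1).iloc[0])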
