Publications by CRAS

2025

Towards adaptive and transparent tourism recommendations: A survey

Authors
Leal, F; Veloso, B; Malheiro, B; Burguillo, JC;

Publication
EXPERT SYSTEMS

Abstract
Crowdsourced data streams are popular and extremely valuable in several domains, namely in tourism. Tourism crowdsourcing platforms rely on past tourist and business inputs to provide tailored recommendations to current users in real time. The continuous, open, dynamic and non-curated nature of crowd-originated data demands specific stream mining techniques to support online profiling, recommendation, change detection and adaptation, explanation and evaluation. The sought techniques must not only continuously improve and adapt profiles and models, but must also be transparent, overcome biases, prioritize preferences and master huge data volumes, all in real time. This article surveys the state of the art of adaptive and explainable stream recommendation, extends the taxonomy of explainable recommendations from the offline to the stream-based scenario, and identifies future research opportunities.

2025

Unraveling Emotions with Pre-Trained Models

Authors
Sanmartín, AP; Arriba Pérez, Fd; Méndez, SG; Leal, F; Malheiro, B; Burguillo Rial, JC;

Publication
CoRR

Abstract

2025

Let's Talk About It: Making Scientific Computational Reproducibility Easier

Authors
Costa, L; Barbosa, S; Cunha, J;

Publication
VL/HCC

Abstract

2025

CompRep: A Dataset For Computational Reproducibility

Authors
Costa, L; Barbosa, S; Cunha, J;

Publication
PROCEEDINGS OF THE 3RD ACM CONFERENCE ON REPRODUCIBILITY AND REPLICABILITY, ACM REP 2025

Abstract
Reproducibility in computational science is increasingly dependent on the ability to faithfully re-execute experiments involving code, data, and software environments. However, assessing the effectiveness of reproducibility tools is difficult due to the lack of standardized benchmarks. To address this, we collected 38 computational experiments from diverse scientific domains and attempted to reproduce each using 8 different reproducibility tools. From this initial pool, we identified 18 experiments that could be successfully reproduced using at least one tool. These experiments form our curated benchmark dataset, which we release along with reproducibility packages to support ongoing evaluation efforts. This article introduces the curated dataset, incorporating details about software dependencies, execution steps, and configurations necessary for accurate reproduction. The dataset is structured to reflect diverse computational requirements and methodologies, ranging from simple scripts to complex, multi-language workflows, ensuring it represents the wide range of challenges researchers face in reproducing computational studies. It provides a universal benchmark by establishing a standardized dataset for objectively evaluating and comparing the effectiveness of reproducibility tools. Each experiment included in the dataset is carefully documented to ensure ease of use. We added clear instructions following a standard, so each experiment has the same kind of instructions, making it easier for researchers to run each of them with their own reproducibility tool. The utility of the dataset is demonstrated through extensive evaluations using multiple reproducibility tools.

2025

Mind the gap: The missing features of the tools to support user studies in software engineering

Authors
Costa, L; Barbosa, S; Cunha, J;

Publication
JOURNAL OF COMPUTER LANGUAGES

Abstract
User studies are paramount for advancing research in software engineering, particularly when evaluating tools and techniques involving programmers. However, researchers face several barriers when performing them despite the existence of supporting tools. We base our study on a set of tools and researcher-reported barriers identified in prior work on user studies in software engineering. In this work, we study how existing tools and their features cope with previously identified barriers. Moreover, we propose new features for the barriers that lack support. We validated our proposal with 102 researchers, achieving statistically significant positive support for all but one feature. We study the current gap between tools and barriers, using features as the bridge. We show there is a significant lack of support for several barriers, as some have no single tool to support them.

2025

Recent decoupling of global mean sea level rise from decadal scale climate variability

Authors
Donner, RV; Barbosa, SM;

Publication

Abstract
