Details

  • Name

    Jácome Costa Cunha
  • Position

    Researcher
  • Since

    01 November 2011
Publications

2026

A framework for supporting the reproducibility of computational experiments in multiple scientific domains

Authors
Costa, L; Barbosa, S; Cunha, J;

Publication
Future Gener. Comput. Syst.

Abstract
In recent years, the research community, but also the general public, has raised serious questions about the reproducibility and replicability of scientific work. Since many studies include some kind of computational work, these issues are also a technological challenge, not only in computer science, but also in most research domains. Computational replicability and reproducibility are not easy to achieve due to the variety of computational environments that can be used. Indeed, it is challenging to recreate the same environment via the same frameworks, code, programming languages, dependencies, and so on. We propose a framework, known as SciRep, that supports the configuration, execution, and packaging of computational experiments by defining their code, data, programming languages, dependencies, databases, and commands to be executed. After the initial configuration, the experiments can be executed any number of times, always producing exactly the same results. Our approach allows the creation of a reproducibility package for experiments from multiple scientific fields, from medicine to computer science, which can be re-executed on any computer. The produced package acts as a capsule, holding absolutely everything necessary to re-execute the experiment. To evaluate our framework, we compare it with three state-of-the-art tools and use it to reproduce 18 experiments extracted from published scientific articles. With our approach, we were able to execute 16 (89%) of those experiments, while the other tools reached only 61%, thus showing that our approach is effective. Moreover, all the experiments that were executed produced the results presented in the original publication. Thus, SciRep was able to reproduce 100% of the experiments it could run.
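As an illustration only, and not SciRep's actual interface or package format (which is defined in the article), the following is a minimal Python sketch of the kind of experiment description the abstract mentions: code, data, dependencies, databases, and the commands to execute. All names and paths below are hypothetical.

# Hypothetical sketch, not SciRep's real API: it only mirrors the configuration
# elements named in the abstract (code, data, dependencies, databases, commands).
from dataclasses import dataclass, field
from typing import List
import subprocess

@dataclass
class ExperimentConfig:
    code_dir: str                                           # experiment source code
    data_files: List[str] = field(default_factory=list)     # input datasets
    dependencies: List[str] = field(default_factory=list)   # pinned packages, e.g. "numpy==1.26.4"
    databases: List[str] = field(default_factory=list)      # database dumps to restore
    commands: List[str] = field(default_factory=list)       # commands to run, in order

def execute(config: ExperimentConfig) -> None:
    """Re-run the configured commands inside the experiment directory."""
    for cmd in config.commands:
        subprocess.run(cmd, shell=True, check=True, cwd=config.code_dir)

# Describe the experiment once; re-executing it then always follows the same steps.
experiment = ExperimentConfig(
    code_dir="experiment/",
    data_files=["data/input.csv"],
    dependencies=["numpy==1.26.4"],
    commands=["python3 analysis.py data/input.csv"],
)
# execute(experiment)

The sketch only shows the configure-once, re-execute-many-times workflow; a real reproducibility package would also have to pin the full environment (language versions, dependencies, databases) to guarantee identical results, as the abstract describes.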

2025

Addressing the Agony of Recruitment for Human-centric Computing Studies

Authors
Madampe, K; Grundy, J; Good, J; Hidellaarachchi, D; Cunha, J; Brown, C; Kuang, P; Tamime, RA; Anik, AI; Sarkar, A; Zhou, W; Khalid, S; Turchi, T; Wickramathilaka, S; Jiang, Y;

Publication
ACM SIGSOFT Softw. Eng. Notes

Abstract

2025

Let's Talk About It: Making Scientific Computational Reproducibility Easy

Authors
Costa, L; Barbosa, S; Cunha, J;

Publication
CoRR

Abstract

2025

CompRep: A Dataset For Computational Reproducibility

Authors
Costa, L; Barbosa, S; Cunha, J;

Publication
Proceedings of the 3rd ACM Conference on Reproducibility and Replicability, ACM REP 2025

Abstract
Reproducibility in computational science is increasingly dependent on the ability to faithfully re-execute experiments involving code, data, and software environments. However, assessing the effectiveness of reproducibility tools is difficult due to the lack of standardized benchmarks. To address this, we collected 38 computational experiments from diverse scientific domains and attempted to reproduce each using 8 different reproducibility tools. From this initial pool, we identified 18 experiments that could be successfully reproduced using at least one tool. These experiments form our curated benchmark dataset, which we release along with reproducibility packages to support ongoing evaluation efforts. This article introduces the curated dataset, incorporating details about software dependencies, execution steps, and configurations necessary for accurate reproduction. The dataset is structured to reflect diverse computational requirements and methodologies, ranging from simple scripts to complex, multi-language workflows, ensuring it represents the wide range of challenges researchers face in reproducing computational studies. It provides a universal benchmark by establishing a standardized dataset for objectively evaluating and comparing the effectiveness of reproducibility tools. Each experiment included in the dataset is carefully documented to ensure ease of use. We added clear instructions following a common standard, so every experiment is documented in the same way, making it easier for researchers to run each of them with their own reproducibility tool. The utility of the dataset is demonstrated through extensive evaluations using multiple reproducibility tools.
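As an illustration only, and not the actual layout of the released CompRep packages, here is a short Python sketch of how a benchmark of experiments with standardized, machine-readable instructions could be re-executed in bulk; the directory layout, the manifest.json file name, and the "steps" key are all assumptions.

# Hypothetical sketch: the real dataset layout is defined by the released packages.
import json
import pathlib
import subprocess

BENCHMARK_ROOT = pathlib.Path("comprep/")  # assumed local copy of the dataset

def reproduce_all(root: pathlib.Path) -> dict:
    """Attempt to re-run every experiment and record whether it completed."""
    results = {}
    for manifest_path in sorted(root.glob("*/manifest.json")):  # assumed manifest name
        manifest = json.loads(manifest_path.read_text())
        experiment_dir = manifest_path.parent
        ok = True
        for step in manifest.get("steps", []):  # assumed key: ordered shell commands
            if subprocess.run(step, shell=True, cwd=experiment_dir).returncode != 0:
                ok = False
                break
        results[experiment_dir.name] = ok
    return results

# results = reproduce_all(BENCHMARK_ROOT)
# print(sum(results.values()), "of", len(results), "experiments re-executed successfully")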

2025

Mind the gap: The missing features of the tools to support user studies in software engineering

Authors
Costa, L; Barbosa, S; Cunha, J;

Publication
Journal of Computer Languages

Abstract
User studies are paramount for advancing research in software engineering, particularly when evaluating tools and techniques involving programmers. However, researchers face several barriers when performing them despite the existence of supporting tools. We base our study on a set of tools and researcher-reported barriers identified in prior work on user studies in software engineering. In this work, we study how existing tools and their features cope with previously identified barriers. Moreover, we propose new features for the barriers that lack support. We validated our proposal with 102 researchers, achieving statistically significant positive support for all but one feature. We study the current gap between tools and barriers, using features as the bridge. We show there is a significant lack of support for several barriers, as some have no single tool to support them.