Publications

Publications by Bruno Carvalhido Lima

2023

Towards Computer Assisted Compliance Assessment in the Development of Software as a Medical Device

Authors
Farshid, S; Lima, B; Faria, JP;

Publication
Proceedings of the 18th International Conference on Software Technologies, ICSOFT 2023, Rome, Italy, July 10-12, 2023.


2023

Automatic Test-Based Assessment of Assembly Programs

Authors
Tavares, L; Lima, B; Araújo, A;

Publication
Proceedings of the 18th International Conference on Software Technologies


2020

DCO Analyzer: Local Controllability and Observability Analysis and Enforcement of Distributed Test Scenarios

Authors
Lima, B; Faria, JP;

Publication
CoRR


2024

PlayField: An Adaptable Framework for Integrative Sports Data Analysis

Authors
Pinto, F; Lima, B;

Publication
2024 IEEE/ACM International Conference on Big Data Computing, Applications and Technologies (BDCAT)

Abstract
As sports analytics evolve to include a broad spectrum of data from diverse sources, the challenge of integrating heterogeneous data becomes pronounced. Current methods struggle with flexibility and rapid adaptation to new data formats, risking data integrity and accuracy. This paper introduces PlayField, a framework designed to robustly handle diverse sports data through adaptable configuration and an automated API. PlayField ensures precise data integration and supports manual interventions for data integrity, making it essential for accurate and comprehensive sports analysis. A case study with ZeroZero demonstrates the framework's capability to improve data integration efficiency significantly, showcasing its potential for advanced analytics in sports.

2025

Acceptance Test Generation with Large Language Models: An Industrial Case Study

Authors
Ferreira, M; Viegas, L; Faria, JP; Lima, B;

Publication
2025 IEEE/ACM International Conference on Automation of Software Test (AST)

Abstract
Large language model (LLM)-powered assistants are increasingly used for generating program code and unit tests, but their application in acceptance testing remains underexplored. To help address this gap, this paper explores the use of LLMs for generating executable acceptance tests for web applications through a two-step process: (i) generating acceptance test scenarios in natural language (in Gherkin) from user stories, and (ii) converting these scenarios into executable test scripts (in Cypress), knowing the HTML code of the pages under test. This two-step approach supports acceptance test-driven development, enhances tester control, and improves test quality. The two steps were implemented in the AutoUAT and Test Flow tools, respectively, powered by GPT-4 Turbo, and integrated into a partner company's workflow and evaluated on real-world projects. The users found the acceptance test scenarios generated by AutoUAT helpful 95% of the time, even revealing previously overlooked cases. Regarding Test Flow, 92% of the acceptance test cases generated by Test Flow were considered helpful: 60% were usable as generated, 8% required minor fixes, and 24% needed to be regenerated with additional inputs; the remaining 8% were discarded due to major issues. These results suggest that LLMs can, in fact, help improve the acceptance test process, with appropriate tooling and supervision.
