
Publications by João Pascoal Faria

2016

A model for analyzing performance problems and root causes in the personal software process

Authors
Raza, M; Faria, JP;

Publication
JOURNAL OF SOFTWARE-EVOLUTION AND PROCESS

Abstract
High-maturity software development processes, such as the Team Software Process and the accompanying Personal Software Process (PSP), can generate significant amounts of data that can be periodically analyzed to identify performance problems, determine their root causes, and devise improvement actions. However, there is a lack of tool support for automating that type of analysis and hence reducing the manual effort and expert knowledge required. Therefore, in this paper we propose a comprehensive performance model, addressing time estimation accuracy, quality, and productivity, to enable the automated (tool-based) analysis of performance data produced by PSP developers, namely, to identify and rank performance problems and their root causes. A PSP data set referring to more than 30,000 projects was used to validate and calibrate the model. Copyright (c) 2015 John Wiley & Sons, Ltd.
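The model relates base measures recorded for each project to derived performance indicators such as time estimation accuracy, defect density, and productivity. As a rough illustration only (the record fields and formulas below are simplifying assumptions, not the paper's exact definitions), such indicators could be computed from per-project data as follows:

```python
from dataclasses import dataclass

@dataclass
class ProjectRecord:
    """One PSP project, with illustrative base measures (assumed field names)."""
    estimated_minutes: float
    actual_minutes: float
    size_loc: int          # added and modified lines of code
    defects_in_test: int   # defects removed in unit testing

def time_estimation_error(p: ProjectRecord) -> float:
    """Relative time estimation error; 0 means a perfect estimate."""
    return (p.actual_minutes - p.estimated_minutes) / p.estimated_minutes

def test_defect_density(p: ProjectRecord) -> float:
    """Defects found in unit testing per KLOC (a common PSP quality indicator)."""
    return p.defects_in_test / (p.size_loc / 1000.0)

def productivity(p: ProjectRecord) -> float:
    """Size produced per hour of total development time."""
    return p.size_loc / (p.actual_minutes / 60.0)

if __name__ == "__main__":
    p = ProjectRecord(estimated_minutes=300, actual_minutes=390,
                      size_loc=250, defects_in_test=5)
    print(f"estimation error: {time_estimation_error(p):+.0%}")        # +30%
    print(f"test defect density: {test_defect_density(p):.1f}/KLOC")   # 20.0/KLOC
    print(f"productivity: {productivity(p):.1f} LOC/hour")             # 38.5 LOC/hour
```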

2016

A Model-Based Approach for Product Testing and Certification in Digital Ecosystems

Authors
Lima, B; Faria, JP;

Publication
2016 IEEE NINTH INTERNATIONAL CONFERENCE ON SOFTWARE TESTING, VERIFICATION AND VALIDATION WORKSHOPS (ICSTW)

Abstract
In a growing number of domains, such as ambient-assisted living (AAL) and e-health, the provisioning of end-to-end services to the users depends on the proper interoperation of multiple products from different vendors, forming a digital ecosystem. To ensure interoperability and the integrity of the ecosystem, it is important that candidate products are independently tested and certified against applicable interoperability requirements. Based on the experience acquired in the AAL4ALL project, we propose in this paper a model-based approach to systematize, automate and increase the assurance of such testing and certification activities. The approach encompasses the construction of several models: a feature model, an interface model, a product model, and unit and integration test models. The abstract syntax and consistency rules of these models are specified by means of metamodels written in UML and Alloy and automatically checked with Alloy Analyzer. Using the model finding capabilities of Alloy Analyzer, integration tests can be automatically generated from the remaining models, through the composition and instantiation of unit tests. Examples of concrete models from the AAL4ALL project are also presented.
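The UML and Alloy metamodels themselves are not reproduced here; the sketch below only illustrates, over an assumed simplified product and interface structure, the kind of consistency rule (every required interface must be provided somewhere in the ecosystem) and integration-test pairing that the approach automates with Alloy Analyzer:

```python
from dataclasses import dataclass, field

@dataclass
class Product:
    """Illustrative product model: interfaces are plain names (assumed structure)."""
    name: str
    provides: set = field(default_factory=set)
    requires: set = field(default_factory=set)

def unmatched_requirements(products):
    """Consistency-rule sketch: report required interfaces that no product provides."""
    all_provided = set().union(*(p.provides for p in products))
    return {p.name: p.requires - all_provided
            for p in products if p.requires - all_provided}

def integration_pairs(products):
    """Candidate integration-test pairs: (consumer, provider, shared interface)."""
    return [(c.name, s.name, i)
            for c in products for s in products if s is not c
            for i in c.requires & s.provides]

if __name__ == "__main__":
    sensor = Product("FallSensor", provides={"AlarmEvent"})
    portal = Product("CarePortal", requires={"AlarmEvent", "VitalSigns"})
    print(unmatched_requirements([sensor, portal]))  # {'CarePortal': {'VitalSigns'}}
    print(integration_pairs([sensor, portal]))       # [('CarePortal', 'FallSensor', 'AlarmEvent')]
```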

2015

An Approach for Automated Scenario-based Testing of Distributed and Heterogeneous Systems

Authors
Lima, B; Faria, JP;

Publication
ICSOFT-EA 2015 - Proceedings of the 10th International Conference on Software Engineering and Applications, Colmar, Alsace, France, 20-22 July, 2015.

Abstract
The growing dependence of our society on increasingly complex software systems makes software testing ever more important and challenging. In many domains, such as healthcare and transportation, several independent systems, forming a heterogeneous and distributed system of systems, are involved in the provisioning of end-to-end services to users. However, existing testing techniques, namely in the model-based testing field, provide little tool support for properly testing such systems. Hence, in this paper, we propose an approach and a toolset architecture for automating the testing of end-to-end services in distributed and heterogeneous systems. The tester interacts with a visual modeling frontend to describe key behavioral scenarios, invoke test generation and execution, and visualize test results and coverage information back in the model. The visual modeling notation is converted to a formal notation amenable to runtime interpretation in the backend. A distributed test monitoring and control infrastructure is responsible for interacting with the components of the system under test, acting as test driver, monitor, and stub. At the core of the toolset, a test execution engine coordinates test execution and checks the conformance of the observed execution trace with the expectations derived from the visual model. A real-world example from the Ambient Assisted Living domain is presented to illustrate the approach.
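As a rough sketch of the conformance check performed by such a test execution engine (the event representation below is an assumption for illustration, not the toolset's actual format), one can verify that an expected scenario occurs, in order, within the trace collected by the distributed monitors:

```python
from typing import Iterable, List, Tuple

Event = Tuple[str, str, str]  # (sender, receiver, message) -- assumed event shape

def conforms(observed: Iterable[Event], expected: List[Event]) -> bool:
    """Check that the expected scenario occurs, in order, within the observed
    trace; unrelated events recorded by the monitors may be interleaved."""
    pending = iter(expected)
    nxt = next(pending, None)
    for event in observed:
        if event == nxt:
            nxt = next(pending, None)
    return nxt is None  # all expected events were observed in order

if __name__ == "__main__":
    scenario = [("Sensor", "Gateway", "fallDetected"),
                ("Gateway", "CareCenter", "raiseAlarm")]
    trace = [("Sensor", "Gateway", "heartbeat"),
             ("Sensor", "Gateway", "fallDetected"),
             ("Gateway", "CareCenter", "raiseAlarm")]
    print(conforms(trace, scenario))  # True
```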

2016

Automated Testing of Distributed and Heterogeneous Systems Based on UML Sequence Diagrams

Authors
Lima, B; Faria, JP;

Publication
SOFTWARE TECHNOLOGIES (ICSOFT 2015)

Abstract
The growing dependence of our society on increasingly complex software systems makes software testing ever more important and challenging. In many domains, several independent systems, forming a distributed and heterogeneous system of systems, are involved in the provisioning of end-to-end services to users. However, existing test automation techniques provide little tool support for properly testing such systems. Hence, we propose an approach and toolset architecture for automating the testing of end-to-end services in distributed and heterogeneous systems, comprising a visual modeling environment, a test execution engine, and a distributed test monitoring and control infrastructure. The only manual activity required is the description of the participants and behavior of the services under test with UML sequence diagrams, which are translated to extended Petri nets for efficient test input generation and test output checking at runtime. A real world example from the Ambient Assisted Living domain illustrates the approach.
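The extended Petri nets used in the approach are not reproduced here; the following sketch only shows, with an assumed minimal place/transition net, how observed messages can be replayed as transition firings to check a scenario at runtime:

```python
from collections import Counter

class PetriNet:
    """Tiny place/transition net used as a runtime oracle sketch: each observed
    message tries to fire the transition carrying the same label."""
    def __init__(self, marking, transitions):
        self.marking = Counter(marking)
        self.transitions = transitions  # label -> (input places, output places)

    def fire(self, label):
        ins, outs = self.transitions[label]
        if any(self.marking[p] < 1 for p in ins):
            return False              # message observed out of order
        for p in ins:
            self.marking[p] -= 1
        for p in outs:
            self.marking[p] += 1
        return True

if __name__ == "__main__":
    # Hypothetical net for a two-message AAL scenario: fallDetected then raiseAlarm.
    net = PetriNet({"p0": 1},
                   {"fallDetected": (["p0"], ["p1"]),
                    "raiseAlarm":   (["p1"], ["p2"])})
    print(net.fire("raiseAlarm"))    # False: out of order, p1 is not yet marked
    print(net.fire("fallDetected"))  # True
    print(net.fire("raiseAlarm"))    # True
```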

2016

Empirical Evaluation of the ProcessPAIR Tool for Automated Performance Analysis

Authors
Raza, Mushtaq; Faria, João Pascoal; Salazar, Rafael;

Publication
The 28th International Conference on Software Engineering and Knowledge Engineering, SEKE 2016, Redwood City, San Francisco Bay, USA, July 1-3, 2016.

Abstract
Software development processes can generate significant amounts of data that can be periodically analyzed to identify performance problems, determine their root causes and devise improvement actions. However, conducting that analysis manually is challenging because of the potentially large amount of data to analyze and the effort and expertise required. ProcessPAIR is a novel tool designed to help developers analyze their performance data with less effort, by automatically identifying and ranking performance problems and potential root causes. The analysis is based on performance models derived from the performance data of a large community of developers. In this paper, we present the results of an experiment conducted in the context of Personal Software Process (PSP) training, to show that ProcessPAIR is able to accurately identify and rank performance problems and potential root causes of individual developers so that subsequent manual analysis for the identification of deeper causes and improvement actions can be properly focused.
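The calibrated models themselves are not reproduced here; as a minimal sketch of the ranking idea (the indicator names, calibration values, and the "higher is worse" convention are illustrative assumptions), a developer's indicators can be ranked by their percentile within the community data used for calibration:

```python
from bisect import bisect_left

def percentile_rank(value, calibration):
    """Fraction of the sorted calibration sample lying below `value`."""
    return bisect_left(calibration, value) / len(calibration)

def rank_problems(indicators, calibration):
    """Order a developer's indicators from most to least problematic, assuming
    higher values are worse and the calibration lists are sorted ascending."""
    ranked = [(name, percentile_rank(value, calibration[name]))
              for name, value in indicators.items()]
    return sorted(ranked, key=lambda item: item[1], reverse=True)

if __name__ == "__main__":
    community = {"time_estimation_error": [0.05, 0.10, 0.20, 0.35, 0.60],
                 "test_defect_density":   [5.0, 10.0, 20.0, 40.0, 80.0]}
    developer = {"time_estimation_error": 0.50, "test_defect_density": 12.0}
    for name, pct in rank_problems(developer, community):
        print(f"{name}: worse than {pct:.0%} of the calibration sample")
```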

2013

Inferring UI Patterns with Inductive Logic Programming

Authors
Nabuco, M; Paiva, ACR; Camacho, R; Faria, JP;

Publication
PROCEEDINGS OF THE 2013 8TH IBERIAN CONFERENCE ON INFORMATION SYSTEMS AND TECHNOLOGIES (CISTI 2013)

Abstract
This paper presents an approach to infer the UI patterns present in a web application. This reverse engineering process is performed in two steps. First, execution traces are collected from user interactions using the Selenium software. Second, the UI patterns existing within those traces are identified using machine learning inference with the Aleph ILP system. The paper describes and illustrates the proposed methodology on a case study over the Amazon web site.
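As a minimal sketch of the first step only, collecting an execution trace with Selenium's Python bindings (the element id, trace format, and a locally available ChromeDriver are assumptions for illustration; the paper's actual trace encoding for Aleph is not reproduced):

```python
from selenium import webdriver
from selenium.webdriver.common.by import By

trace = []  # recorded (action, locator, value) events, to be encoded later for the ILP step

def log(action, locator, value=""):
    """Record one user interaction as a trace event."""
    trace.append((action, locator, value))

driver = webdriver.Chrome()                # assumes ChromeDriver is installed locally
driver.get("https://www.amazon.com")       # case-study site used in the paper

search = driver.find_element(By.ID, "twotabsearchtextbox")  # assumed element id
search.send_keys("usb cable")
log("fill", "twotabsearchtextbox", "usb cable")

search.submit()
log("submit", "twotabsearchtextbox")

driver.quit()
print(trace)  # e.g. [('fill', 'twotabsearchtextbox', 'usb cable'), ('submit', ...)]
```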
