Publications

Publications by HumanISE

2016

A Model-Based Approach for Product Testing and Certification in Digital Ecosystems

Authors
Lima, B; Faria, JP;

Publication
2016 IEEE NINTH INTERNATIONAL CONFERENCE ON SOFTWARE TESTING, VERIFICATION AND VALIDATION WORKSHOPS (ICSTW)

Abstract
In a growing number of domains, such as ambient-assisted living (AAL) and e-health, the provisioning of end-to-end services to the users depends on the proper interoperation of multiple products from different vendors, forming a digital ecosystem. To ensure interoperability and the integrity of the ecosystem, it is important that candidate products are independently tested and certified against applicable interoperability requirements. Based on the experience acquired in the AAL4ALL project, we propose in this paper a model-based approach to systematize, automate and increase the assurance of such testing and certification activities. The approach encompasses the construction of several models: a feature model, an interface model, a product model, and unit and integration test models. The abstract syntax and consistency rules of these models are specified by means of metamodels written in UML and Alloy and automatically checked with Alloy Analyzer. Using the model finding capabilities of Alloy Analyzer, integration tests can be automatically generated from the remaining models, through the composition and instantiation of unit tests. Examples of concrete models from the AAL4ALL project are also presented.
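The generation step described above — composing integration tests from unit tests over compatible interfaces — can be illustrated with a much simpler sketch in Python. Note that the paper uses Alloy Analyzer's model finding for this, not code like the following, and the product, interface, and test names here are invented for illustration.

```python
# Products declare the interfaces they provide and require; an integration
# test is generated wherever one product's provided interface matches
# another product's required interface, by composing the per-interface
# unit tests of the provider and the consumer.

PRODUCTS = {
    "FallSensor":  {"provides": ["AlarmEvent"], "requires": []},
    "AlarmCenter": {"provides": [],             "requires": ["AlarmEvent"]},
}

UNIT_TESTS = {
    # interface -> (unit test of the provider side, unit test of the consumer side)
    "AlarmEvent": ("test_sensor_emits_alarm", "test_center_handles_alarm"),
}

def integration_tests(products, unit_tests):
    """Pair products over shared interfaces and compose their unit tests."""
    tests = []
    for p, spec in products.items():
        for q, other in products.items():
            for iface in spec["provides"]:
                if iface in other["requires"]:
                    provider_test, consumer_test = unit_tests[iface]
                    tests.append((p, q, iface, [provider_test, consumer_test]))
    return tests

print(integration_tests(PRODUCTS, UNIT_TESTS))
# [('FallSensor', 'AlarmCenter', 'AlarmEvent',
#   ['test_sensor_emits_alarm', 'test_center_handles_alarm'])]
```

Alloy's model finder plays the role of the nested loops here: it searches for instances (test compositions) satisfying the consistency rules of the metamodels.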

2016

Automated Testing of Distributed and Heterogeneous Systems Based on UML Sequence Diagrams

Authors
Lima, B; Faria, JP;

Publication
SOFTWARE TECHNOLOGIES (ICSOFT 2015)

Abstract
The growing dependence of our society on increasingly complex software systems makes software testing ever more important and challenging. In many domains, several independent systems, forming a distributed and heterogeneous system of systems, are involved in the provisioning of end-to-end services to users. However, existing test automation techniques provide little tool support for properly testing such systems. Hence, we propose an approach and toolset architecture for automating the testing of end-to-end services in distributed and heterogeneous systems, comprising a visual modeling environment, a test execution engine, and a distributed test monitoring and control infrastructure. The only manual activity required is the description of the participants and behavior of the services under test with UML sequence diagrams, which are translated to extended Petri nets for efficient test input generation and test output checking at runtime. A real world example from the Ambient Assisted Living domain illustrates the approach.
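As a rough illustration of the runtime checking sketched above — not the paper's extended Petri nets, just a minimal plain-Petri-net sketch with invented place and transition names — an observed execution trace conforms if every observed event fires an enabled transition:

```python
# Minimal Petri net: each transition consumes one token from each of its
# input places and produces one token in each of its output places.

class PetriNet:
    def __init__(self, transitions, marking):
        # transitions: name -> (input places, output places)
        self.transitions = transitions
        self.marking = dict(marking)  # place -> token count

    def enabled(self, name):
        inputs, _ = self.transitions[name]
        return all(self.marking.get(p, 0) > 0 for p in inputs)

    def fire(self, name):
        if not self.enabled(name):
            return False
        inputs, outputs = self.transitions[name]
        for p in inputs:
            self.marking[p] -= 1
        for p in outputs:
            self.marking[p] = self.marking.get(p, 0) + 1
        return True

    def conforms(self, trace):
        # The trace conforms if every observed event fires an enabled transition.
        return all(self.fire(event) for event in trace)

# A toy request/response scenario between two participants of a service.
net = PetriNet(
    transitions={
        "send_request":  (["idle"],    ["waiting"]),
        "recv_response": (["waiting"], ["done"]),
    },
    marking={"idle": 1},
)
print(net.conforms(["send_request", "recv_response"]))  # True
```

A sequence diagram translated this way yields one such net per scenario; observing `recv_response` before `send_request` would fail conformance because the transition is not enabled.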

2016

Empirical Evaluation of the ProcessPAIR Tool for Automated Performance Analysis

Authors
Raza, Mushtaq; Faria, João Pascoal; Salazar, Rafael;

Publication
The 28th International Conference on Software Engineering and Knowledge Engineering, SEKE 2016, Redwood City, San Francisco Bay, USA, July 1-3, 2016.

Abstract
Software development processes can generate significant amounts of data that can be periodically analyzed to identify performance problems, determine their root causes and devise improvement actions. However, conducting that analysis manually is challenging because of the potentially large amount of data to analyze and the effort and expertise required. ProcessPAIR is a novel tool designed to help developers analyze their performance data with less effort, by automatically identifying and ranking performance problems and potential root causes. The analysis is based on performance models derived from the performance data of a large community of developers. In this paper, we present the results of an experiment conducted in the context of Personal Software Process (PSP) training, to show that ProcessPAIR is able to accurately identify and rank performance problems and potential root causes of individual developers so that subsequent manual analysis for the identification of deeper causes and improvement actions can be properly focused.
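The kind of automated identification and ranking described above can be sketched as follows. This is only an illustration: the indicator names and percentile thresholds are invented here, whereas ProcessPAIR's actual performance models are calibrated from the data of a large community of developers.

```python
# Flag and rank performance problems by comparing a developer's indicator
# values against community percentiles (higher values are better for the
# illustrative indicators used here).

def rank_problems(indicators, benchmarks):
    """Rank indicators by how far they fall below the community median.

    indicators: name -> developer's value
    benchmarks: name -> (25th percentile, median) from the community
    """
    problems = []
    for name, value in indicators.items():
        p25, median = benchmarks[name]
        if value < p25:
            severity = "red"      # clear performance problem
        elif value < median:
            severity = "yellow"   # potential performance problem
        else:
            continue              # no problem for this indicator
        problems.append((median - value, severity, name))
    # Largest deviations from the community median first.
    return [(name, severity) for _, severity, name in sorted(problems, reverse=True)]

ranked = rank_problems(
    indicators={"process_yield": 20.0, "productivity": 25.0},
    benchmarks={"process_yield": (30.0, 50.0), "productivity": (10.0, 20.0)},
)
print(ranked)  # [('process_yield', 'red')]
```

The ranked list focuses the subsequent manual analysis: the developer drills into the top-ranked indicators to find deeper causes and improvement actions.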

2016

Towards the Online Testing of Distributed and Heterogeneous Systems with Extended Petri Nets

Authors
Lima, B; Faria, JP;

Publication
PROCEEDINGS 2016 10TH INTERNATIONAL CONFERENCE ON THE QUALITY OF INFORMATION AND COMMUNICATIONS TECHNOLOGY (QUATIC)

Abstract
The growing dependence of our society on increasingly complex software systems makes software testing ever more important and challenging. In many domains, such as healthcare and transportation, several independent systems, forming a heterogeneous and distributed system of systems, are involved in the provisioning of end-to-end services to users. However, existing testing techniques, namely in the model-based testing field, provide little support for properly testing such systems. To bridge the gaps identified in the state of the art we intend to develop a research work where the main goal is to significantly reduce the cost of testing distributed and heterogeneous systems, from the standpoint of time, resources and expertise required, as compared to existing approaches. For that, we propose a preliminary approach and a toolset architecture for automating the testing of end-to-end services in distributed and heterogeneous systems. The tester interacts with a visual modeling frontend to describe key behavioral scenarios, invoke test generation and execution, and visualize test results and coverage information back in the model. The visual modeling notation is converted to a formal notation amenable for runtime interpretation in the backend. A distributed test monitoring and control infrastructure is responsible for interacting with the components of the system under test, as test driver, monitor and stub. At the core of the toolset, a test execution engine coordinates test execution and checks the conformance of the observed execution trace with the expectations derived from the visual model. A real world example from the Ambient Assisted Living domain is presented to illustrate the approach. As future work we intend to develop distributed and incremental algorithms for online testing of distributed and heterogeneous systems based on Extended Petri Nets at runtime and validate them in real world case studies.

2016

MT4A: a no-programming test automation framework for Android applications

Authors
Coelho, Tiago; Lima, Bruno; Faria, João Pascoal;

Publication
Proceedings of the 7th International Workshop on Automating Test Case Design, Selection, and Evaluation, A-TEST@SIGSOFT FSE 2016, Seattle, WA, USA, November 18, 2016

Abstract
The growing dependency of our society on increasingly complex software systems, combining mobile and cloud-based applications and services, makes test activities even more important and challenging. However, software tests are sometimes not properly performed due to tight deadlines, the time and skills required to develop and execute the tests, or because developers are too optimistic about possible faults in their own code. Although there are several frameworks for mobile test automation, they usually require programming skills or complex configuration steps. Hence, in this paper, we propose a framework that allows creating and executing tests for Android applications without requiring programming skills. It is possible to create automated tests based on a set of pre-defined actions, and it is also possible to inject data into device sensors. An experiment with programmers and non-programmers showed that both can develop and execute tests in a similar amount of time. A real world example using a fall detection application is presented to illustrate the approach. © 2016 ACM.
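The "no programming" idea above — a test described as data over a set of predefined actions, interpreted by an engine — can be sketched as below. The action names and the fake in-memory device are invented for illustration; the actual framework drives real Android applications and device sensors.

```python
# A tiny interpreter for declarative test scripts: each step names a
# predefined action and its arguments, so writing a test requires no code.

FAKE_SENSOR = {}
SCREEN = {"fall_alert": False}

def do_inject_sensor(sensor, value):
    """Inject a value into a (fake) device sensor."""
    FAKE_SENSOR[sensor] = value
    # A fall-detection app would raise an alert on a large acceleration spike.
    if sensor == "accelerometer" and value > 30.0:
        SCREEN["fall_alert"] = True

def do_assert_visible(element):
    """Check that a UI element is visible on the (fake) screen."""
    assert SCREEN.get(element), f"expected {element!r} to be visible"

ACTIONS = {"inject_sensor": do_inject_sensor, "assert_visible": do_assert_visible}

def run_test(steps):
    """Execute a test described as data, one predefined action per step."""
    for step in steps:
        name, *args = step
        ACTIONS[name](*args)

# The fall-detection example: inject an acceleration spike, expect an alert.
run_test([
    ("inject_sensor", "accelerometer", 45.0),
    ("assert_visible", "fall_alert"),
])
print("test passed")
```

Because the test is plain data, a visual editor can generate it, which is what lets non-programmers author tests at a pace comparable to programmers.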

2016

ProcessPAIR: A Tool for Automated Performance Analysis and Improvement Recommendation in Software Development

Authors
Raza, M; Faria, JP;

Publication
2016 31ST IEEE/ACM INTERNATIONAL CONFERENCE ON AUTOMATED SOFTWARE ENGINEERING (ASE)

Abstract
High-maturity software development processes can generate significant amounts of data that can be periodically analyzed to identify performance problems, determine their root causes and devise improvement actions. However, conducting that analysis manually is challenging because of the potentially large amount of data to analyze and the effort and expertise required. In this paper, we present ProcessPAIR, a novel tool designed to help developers analyze their performance data with less effort, by automatically identifying and ranking performance problems and potential root causes, so that subsequent manual analysis for the identification of deeper causes and improvement actions can be properly focused. The analysis is based on performance models defined manually by process experts and calibrated automatically from the performance data of many developers. We also show how ProcessPAIR was successfully applied in the Personal Software Process (PSP). A video about ProcessPAIR is available at https://youtu.be/dEk3fhhkduo.
