Publications

2018

The Computational Power of Parsing Expression Grammars

Authors
Loff, B; Moreira, N; Reis, R;

Publication
Developments in Language Theory - 22nd International Conference, DLT 2018, Tokyo, Japan, September 10-14, 2018, Proceedings

Abstract
We propose a new computational model, the scaffolding automaton, which exactly characterises the computational power of parsing expression grammars (PEGs). Using this characterisation we show that: (1) PEGs have unexpected power and semantics: we present several PEGs with surprising behaviour, and languages which, unexpectedly, have PEGs, including a PEG for the language of palindromes whose length is a power of two. (2) PEGs are computationally "universal", in the following sense: take any computable function f : {0,1}* → {0,1}*; then there exists a computable function g : {0,1}* → ℕ such that the language { f(x) #^g(x) x : x ∈ {0,1}* } has a PEG. (3) There can be no pumping lemma for PEGs: there is no total computable function A with the following property: for every well-formed PEG G, there exists n₀ such that for every string x ∈ L(G) of size |x| ≥ n₀, the output y = A(G, x) is in L(G) and has |y| > |x|. (4) PEGs are strongly non-real-time for Turing machines: there exists a language with a PEG such that neither it nor its reverse can be recognised by any multi-tape online Turing machine which is allowed to do only o(n / log n) steps after reading each input symbol. © 2018, Springer Nature Switzerland AG.
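As an illustrative sketch (not taken from the paper), the classic example of PEGs exceeding context-free power is Ford's grammar for { aⁿbⁿcⁿ : n ≥ 1 }, which uses a syntactic predicate: S ← &(A 'c') 'a'+ B !. with A ← 'a' A? 'b' and B ← 'b' B? 'c'. A minimal hand-written recogniser for it:

```python
# PEG recogniser for { a^n b^n c^n | n >= 1 }, a non-context-free language.
# Grammar: S <- &(A 'c') 'a'+ B !.
#          A <- 'a' A? 'b'
#          B <- 'b' B? 'c'

def A(s, i):
    # A <- 'a' A? 'b' : return position after the match, or None on failure.
    if i < len(s) and s[i] == 'a':
        j = A(s, i + 1)
        k = j if j is not None else i + 1  # A? : optional, never fails
        if k < len(s) and s[k] == 'b':
            return k + 1
    return None

def B(s, i):
    # B <- 'b' B? 'c'
    if i < len(s) and s[i] == 'b':
        j = B(s, i + 1)
        k = j if j is not None else i + 1
        if k < len(s) and s[k] == 'c':
            return k + 1
    return None

def S(s):
    # &(A 'c') : lookahead predicate, checks #a == #b without consuming input.
    j = A(s, 0)
    if j is None or j >= len(s) or s[j] != 'c':
        return False
    # 'a'+ : consume the leading block of a's.
    i = 0
    while i < len(s) and s[i] == 'a':
        i += 1
    if i == 0:
        return False
    # B checks #b == #c; !. requires end of input.
    return B(s, i) == len(s)
```

The predicate &(A 'c') is verified and then "forgotten" before matching resumes at the start, which is exactly the kind of re-reading behaviour the scaffolding automaton model captures.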

2018

Forecasting Traffic Flow in Big Cities Using Modified Tucker Decomposition

Authors
Bhanu, M; Priya, S; Dandapat, SK; Chandra, J; Moreira, JM;

Publication
Advanced Data Mining and Applications - 14th International Conference, ADMA 2018, Nanjing, China, November 16-18, 2018, Proceedings

Abstract
An efficient traffic network is an essential demand for any smart city. Usually, city traffic forms a huge network with millions of locations and trips. Traffic flow prediction using such large data is a classical problem in intelligent transportation systems (ITS). Many existing models, such as ARIMA, SVR, and ANN, are deployed to retrieve important characteristics of the traffic network and to forecast mobility. However, these methods suffer from an inability to handle high data dimensionality. The tensor-based approach has recently gained success over existing methods due to its ability to decompose high-dimensional data into factor components. We present a modified Tucker decomposition method which predicts traffic mobility by approximating very large networks so as to handle the dimensionality problem. Our experiments on two big-city traffic networks show that our method reduces the forecasting error, for horizons of up to 7 days, by around 80% compared to existing state-of-the-art methods. Further, our method also handles the data dimensionality problem more efficiently than existing methods. © 2018, Springer Nature Switzerland AG.
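For readers unfamiliar with the underlying technique: a standard (unmodified) Tucker decomposition factors a tensor into a small core plus one factor matrix per mode, which is what lets tensor methods compress a large (location × location × time) traffic tensor. A minimal sketch via truncated higher-order SVD, using numpy and a toy low-rank tensor (this is not the authors' modified algorithm):

```python
import numpy as np

def tucker_hosvd(T, ranks):
    """Truncated HOSVD: Tucker-decompose T into core G and factor matrices."""
    factors = []
    for mode, r in enumerate(ranks):
        # Unfold T along `mode` and keep the top-r left singular vectors.
        unfolded = np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)
        U, _, _ = np.linalg.svd(unfolded, full_matrices=False)
        factors.append(U[:, :r])
    # Core tensor: project T onto each factor subspace (mode-n products).
    G = T
    for mode, U in enumerate(factors):
        G = np.moveaxis(np.tensordot(U.T, np.moveaxis(G, mode, 0), axes=1), 0, mode)
    return G, factors

def tucker_reconstruct(G, factors):
    """Rebuild the full tensor from the core and factor matrices."""
    T = G
    for mode, U in enumerate(factors):
        T = np.moveaxis(np.tensordot(U, np.moveaxis(T, mode, 0), axes=1), 0, mode)
    return T

# Toy (origin x destination x hour) traffic tensor of multilinear rank (2,2,2).
rng = np.random.default_rng(0)
A = rng.normal(size=(10, 2))
B = rng.normal(size=(10, 2))
C = rng.normal(size=(24, 2))
T = np.einsum('ir,jr,kr->ijk', A, B, C)

G, Us = tucker_hosvd(T, ranks=(2, 2, 2))
err = np.linalg.norm(T - tucker_reconstruct(G, Us)) / np.linalg.norm(T)
```

Here the 10×10×24 tensor (2,400 entries) is represented by a 2×2×2 core plus factor matrices (96 entries), and because the toy tensor is exactly low-rank the reconstruction error is negligible.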

2018

ANÁLISE FATORIAL CONFIRMATÓRIA DA VERSÃO PORTUGUESA DO PITTSBURG SLEEP QUALITY INDEX

Authors
Teixeira, C; Caçador, A; Ferreira, T; Vasconcelos-Raposo, J;

Publication
PSYCHTECH & HEALTH JOURNAL

Abstract

2018

Upframing Service Design and Innovation for Research Impact

Authors
Patricio, L; Gustafsson, A; Fisk, R;

Publication
JOURNAL OF SERVICE RESEARCH

Abstract
Service design and innovation are receiving greater attention from the service research community because they play crucial roles in creating new forms of value cocreation with customers, organizations, and societal actors in general. Service innovation involves a new process or service offering that creates value for one or more actors in a service network. Service design brings new service ideas to life through a human-centered and holistic design thinking approach. However, service design and innovation build on dispersed multidisciplinary contributions that are still poorly understood. The special issue that follows offers important contributions through the examination of service design and innovation literature, the links between service design and innovation, the role of customers in service design and innovation, and service design and innovation for well-being. Building on these contributions, this article develops a future research agenda in three areas: (1) reinforcing and expanding the foundations of service design and innovation by integrating multiple perspectives and methods; (2) advancing service design and innovation by improving the connection between the two areas, deepening actor involvement, and leveraging the role of technology; and (3) upframing service design and innovation to strengthen research impact by innovating complex value networks and service ecosystems and by building a cornerstone for transformative service research.

2018

Impact of Vectorization Over 16-bit Data-Types on GPUs

Authors
Reis, L; Nobre, R; Cardoso, JMP;

Publication
PARMA-DITAM 2018: 9TH WORKSHOP ON PARALLEL PROGRAMMING AND RUNTIME MANAGEMENT TECHNIQUES FOR MANY-CORE ARCHITECTURES AND 7TH WORKSHOP ON DESIGN TOOLS AND ARCHITECTURES FOR MULTICORE EMBEDDED COMPUTING PLATFORMS

Abstract
Since the introduction of Single Instruction Multiple Thread (SIMT) GPU architectures, vectorization has seldom been recommended. However, for efficient use of 8-bit and 16-bit data types, vector types are necessary even on these GPUs. When only integer types were natively supported in sizes of less than 32 bits, the usefulness of vectors was limited, but the introduction of hardware support for packed half-precision floating-point computations in recent GPU architectures changes this, as now floating-point programs can also benefit from vector types. Given a GPU kernel, using smaller data types might not be sufficient to achieve optimal performance for a given device, even on hardware with native support for half-precision, because the compiler targeting the GPU may not be able to automatically vectorize the code. In this paper, we present a number of examples that make use of the OpenCL vector data types, which we are currently implementing in our tool for automatic vectorization. We present a number of experiments targeting a graphics card with an AMD Vega 10 XT GPU, which has 2x the peak arithmetic throughput with half-precision compared to single-precision. For comparison, we also target an older GPU architecture without native support for half-precision arithmetic. We found that, on an AMD Vega 10 XT GPU, half-precision vectorization leads to performance improvements over the scalar version using the same precision (geometric mean speedup of 1.50x), which can be attributed to the GPU being able to make use of native support for arithmetic over packed half-precision data. However, we found that most of the performance improvement of vectorization is caused by related transformations, such as thread coarsening or loop unrolling.
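To illustrate the idea behind packed 16-bit vector types (a conceptual sketch only, not the authors' tool or an OpenCL kernel): an OpenCL `half2`/`half4` packs multiple 16-bit floats into one register so a single instruction operates on all lanes at once. The data layout and the scalar-vs-lane-wise equivalence can be mimicked with numpy float16 arrays:

```python
import numpy as np

# Eight half-precision (16-bit) values; two of them fit in the storage of
# one 32-bit float, which is what packed-half GPU instructions exploit.
a = np.arange(8, dtype=np.float16)
b = np.full(8, 0.5, dtype=np.float16)

# Scalar view: one element per "instruction", like an unvectorized kernel body.
scalar = np.array([a[i] + b[i] for i in range(8)], dtype=np.float16)

# Vector view: reshape into half4-like lanes and add whole lanes at once.
vec = (a.reshape(-1, 4) + b.reshape(-1, 4)).reshape(-1)

# Same results; the difference on real hardware is throughput, not values.
storage_ratio = np.dtype(np.float32).itemsize // np.dtype(np.float16).itemsize
```

On a GPU with native packed-half support, the lane-wise form maps to fewer instructions and half the memory traffic of float32, which is the source of the 2x peak-throughput figure cited in the abstract.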

2018

Energyware Analysis

Authors
Pereira, R; Couto, M; Ribeiro, F; Rua, R; Saraiva, J;

Publication
SQAMIA

Abstract
This document introduces "Energyware" as a software engineering discipline aiming at defining, analyzing and optimizing the energy consumption of software systems. In this paper we present energyware analysis in the context of programming languages, software data structures and programs' source code. For each of these areas we describe the research work done in the context of the Green Software Laboratory at the University of Minho: we describe energy-aware techniques, tools, libraries, and repositories.
