About

André Tse is a researcher at the Human-Centred Computing and Information Science centre (HUMANISE) at INESC TEC. His work has focused on the development of mobile applications, alongside involvement in DevOps. He holds a Master's degree in Network and Information Systems Engineering from the Faculty of Sciences of the University of Porto.

Topics of interest
Details

  • Name

    André Tse
  • Position

    Researcher
  • Since

    01 March 2021
Publications (6)

2023

Measuring Latency-Accuracy Trade-Offs in Convolutional Neural Networks

Authors
Tse, A; Oliveira, L; Vinagre, J;

Publication
PROGRESS IN ARTIFICIAL INTELLIGENCE, EPIA 2023, PT I

Abstract
Several systems that employ machine learning models are subject to strict latency requirements. Fraud detection systems, transportation control systems, network traffic analysis and footwear manufacturing processes are a few examples. These requirements are imposed at inference time, when the model is queried. However, it is not trivial to adjust model architecture and hyperparameters in order to obtain a good trade-off between predictive ability and inference time. This paper provides a contribution in this direction by presenting a study of how different architectural and hyperparameter choices affect the inference time of a Convolutional Neural Network for network traffic analysis. Our case study focuses on a model for traffic correlation attacks on the Tor network, which requires the correlation of a large volume of network flows in a short amount of time. Our findings suggest that hyperparameters related to convolution operations, such as stride and the number of filters, and the reduction of convolution and max-pooling layers can substantially reduce inference time, often with a relatively small cost in predictive performance.
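
To make the kind of measurement the abstract describes concrete, the following is a minimal sketch of how inference latency could be benchmarked while sweeping stride and filter count in PyTorch. The network architecture, input shape, and hyperparameter grid are illustrative assumptions, not the model or settings from the paper.

import time

import torch
import torch.nn as nn

def make_cnn(filters: int, stride: int) -> nn.Module:
    # Hypothetical 1D CNN over network-flow time series; layer sizes are
    # illustrative, not taken from the paper.
    return nn.Sequential(
        nn.Conv1d(1, filters, kernel_size=8, stride=stride),
        nn.ReLU(),
        nn.MaxPool1d(2),
        nn.Conv1d(filters, filters * 2, kernel_size=8, stride=stride),
        nn.ReLU(),
        nn.AdaptiveAvgPool1d(1),
        nn.Flatten(),
        nn.Linear(filters * 2, 1),
    )

@torch.no_grad()
def mean_latency_ms(model: nn.Module, x: torch.Tensor, runs: int = 30) -> float:
    model.eval()
    for _ in range(5):  # warm-up passes, excluded from timing
        model(x)
    start = time.perf_counter()
    for _ in range(runs):
        model(x)
    return (time.perf_counter() - start) / runs * 1e3

x = torch.randn(16, 1, 5000)  # batch of 16 flows, 5000 time steps each
for filters in (32, 64, 128):
    for stride in (1, 2, 4):
        model = make_cnn(filters, stride)
        print(f"filters={filters:3d} stride={stride} "
              f"latency={mean_latency_ms(model, x):.2f} ms")

On most hardware, larger strides and fewer filters shrink the intermediate feature maps and therefore the measured forward-pass time, which is the latency side of the trade-off the paper studies against predictive performance.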