About

João P. Vilela is a professor at the Department of Computer Science of the University of Porto and a centre coordinator at CRACS/INESC TEC. Previously, he was a professor at the Department of Informatics Engineering of the University of Coimbra, after receiving his PhD in Computer Science from the University of Porto in 2011, and a visiting researcher at Georgia Tech and MIT, in the USA. Dr. Vilela has been coordinator and team member of several funded national, bilateral, and European projects in the areas of security and privacy. His main research interests are the security and privacy of computer and communication systems, with applications in wireless networks and mobile devices. Specific research topics include physical-layer security of wireless networks, security of next-generation networks, privacy-preserving machine learning, location privacy, and automated privacy protection.


João Vilela was chair of the organising committee of ACM CODASPY 2024, track chair of the IEEE Vehicular Technology Conference 2023, and has been an Associate Editor of ACM Transactions on Privacy and Security since April 2025. He has served as an expert project evaluator for the CHIST-ERA ERA-NET programme, the European Union's Horizon Europe programme, the Luxembourg National Research Fund, and the Dutch Research Council. Dr. Vilela has presented his research at several leading international institutions, including PUC-Rio, the Federal University of Rio de Janeiro, Unicamp, and the University of São Paulo (Brazil), IMDEA Networks (Spain), INRIA (France), Harvard University and MIT (USA), and the University of Cambridge (UK), among others.

Topics of interest

Details

  • Name

    João Paulo Vilela
  • Position

    Centre Coordinator
  • Since

    01 March 2020
Publications

2025

Geo-Indistinguishability

Authors
Mendes, R; Vilela, P;

Publication
Encyclopedia of Cryptography, Security and Privacy, Third Edition

Abstract
[No abstract available]

2025

Computational complexity-constrained spectral efficiency analysis for 6G waveforms

Authors
Queiroz, S; Vilela, JP; Ng, BKK; Lam, C; Monteiro, E;

Publication
ITU Journal on Future and Evolving Technologies

Abstract
In this work, we present a tutorial on how to account for the computational time complexity overhead of signal processing in the Spectral Efficiency (SE) analysis of wireless waveforms. Our methodology is particularly relevant in scenarios where achieving higher SE entails a penalty in complexity, a common trade-off present in 6G candidate waveforms. We consider that SE derives from the bit rate, which is impacted by time-dependent overheads. Thus, neglecting the computational complexity overhead in the SE analysis grants an unfair advantage to more computationally complex waveforms, as they require larger computational resources to meet a signal processing runtime below the symbol period. We demonstrate our points with two case studies. In the first, we refer to IEEE 802.11a-compliant baseband processors from the literature to show that their runtime significantly impacts the SE perceived by upper layers. In the second case study, we show that waveforms considered less efficient in terms of SE can outperform their more computationally expensive counterparts, if provided with equivalent high-performance computational resources. Based on these cases, we believe our tutorial can address the comparative SE analysis of waveforms that operate under different computational resource constraints.
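The trade-off the abstract describes can be sketched numerically: if the baseband processor cannot finish within the symbol period, the effective symbol rate (and hence SE) drops. The penalty model and all numbers below are illustrative assumptions, not the paper's actual methodology.

```python
# Illustrative sketch: effective spectral efficiency (SE) when baseband
# processing runtime eats into the symbol budget. If processing takes
# longer than the symbol period, the symbol period is effectively
# stretched to the processing time.

def effective_se(bits_per_symbol: float, symbol_period_s: float,
                 processing_time_s: float, bandwidth_hz: float) -> float:
    """SE in bit/s/Hz, stretching the symbol period whenever the
    processor cannot keep up with the nominal symbol rate."""
    effective_period = max(symbol_period_s, processing_time_s)
    bit_rate = bits_per_symbol / effective_period
    return bit_rate / bandwidth_hz

# Illustrative OFDM-like numbers: 4 us symbols, 20 MHz of bandwidth.
nominal = effective_se(192, 4e-6, 3e-6, 20e6)  # processor keeps up
slowed = effective_se(192, 4e-6, 8e-6, 20e6)   # processor is the bottleneck
```

Here `slowed < nominal`: the more complex waveform only retains its nominal SE if granted enough computational resources to finish within the symbol period, which is the unfair advantage the tutorial argues against.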

2025

Compromising location privacy through Wi-Fi RSSI tracking

Authors
Cunha, M; Mendes, R; de Montjoye, YA; Vilela, JP;

Publication
SCIENTIFIC REPORTS

Abstract
The widespread availability of wireless networking, such as Wi-Fi, has led to the pervasiveness of always connected mobile devices. These devices are provided with several sensors that allow the collection of large amounts of data, which pose a threat to personal privacy. It is well known that Wi-Fi connectivity information (e.g. BSSID) can be used for inferring user locations. This has caused the imposition of limitations to the access to such data in mobile devices. However, other sources of information about wireless connectivity are available, such as the Received Signal Strength Indicator (RSSI). In this work, we show that RSSI can be used to infer the presence of a user at common locations throughout time. This information can be correlated with other features, such as the hour of the day, to further learn semantic context about such locations with a prediction performance above 90%. Our analysis shows the privacy implications of inferring user locations through Wi-Fi RSSI, but also emphasizes the fingerprinting risk that results from the lack of protection when accessing RSSI measurements.
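The kind of inference the abstract describes can be illustrated with a toy example: coarse features such as RSSI and hour of day can already separate semantic places. The classifier, the data, and the labels below are fabricated for illustration; this is not the paper's pipeline.

```python
# Toy sketch: a minimal 1-nearest-neighbour classifier over
# (RSSI in dBm, hour of day) pairs, labelling readings with a
# semantic place. Training data is fabricated for illustration.
from math import hypot

train = [((-45, 9), "work"), ((-50, 14), "work"),
         ((-60, 21), "home"), ((-55, 23), "home")]

def predict(rssi_dbm: float, hour: int) -> str:
    """Label a reading by its nearest training point."""
    return min(train, key=lambda t: hypot(t[0][0] - rssi_dbm,
                                          t[0][1] - hour))[1]

print(predict(-47, 10))  # falls near the "work" readings
print(predict(-58, 22))  # falls near the "home" readings
```

Even this crude sketch shows why unrestricted access to RSSI readings is a fingerprinting risk: repeated signal-strength measurements correlate with where, and therefore when, a user habitually is.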

2025

Active Attribute Inference Against Well-Generalized Models In Federated Learning

Authors
Gomes, C; Mendes, R; Vilela, JP;

Publication
2025 IEEE 10TH EUROPEAN SYMPOSIUM ON SECURITY AND PRIVACY, EUROS&P

Abstract
Federated Learning (FL), a distributed learning mechanism where data is decentralized across multiple devices and periodic gradient updates are shared, is an alternative to centralized training that aims to address privacy issues arising from raw data sharing. Despite the expected privacy benefits, prior research showcases the potential privacy leakage derived from overfitting, exploited by passive attacks. However, limited attention has been given to understanding and defending against active threats that increase model leakage by interfering with the training process, instead of relying on overfitting. This work addresses this gap by introducing Active Attribute Inference (AAI), a novel active attack that encodes sensitive attribute information by making any targeted training sample leave a distinguishable footprint on the gradient of maliciously modified neurons. Results, using two real-world datasets, show that it is possible to successfully encode sensitive information while incurring a small error in terms of neuron activation. More importantly, in a practical scenario, AAI can improve upon a state-of-the-art approach by achieving over 90% of restricted ROC AUC, thereby increasing model leakage. To defend against such active attacks, this work introduces several attack detection strategies tailored to different levels of the defender's knowledge, including the novel White-box Attack Detection Mechanism (WADM), which detects abnormal changes in the weights distribution, and two black-box strategies based on monitoring model performance. Results show that the detection rate can be 100% on both datasets. Remarkably, WADM reduces any attack to random guessing while preserving model utility, offering significant improvements over existing defenses, particularly when clients are non-IID. By proposing active attacks against well-generalized models and effective countermeasures, this research contributes to a better understanding of privacy in FL systems.
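The detection idea described above can be caricatured in a few lines: a defender compares the weight distribution across rounds and flags abnormal shifts. The statistic (standard deviation of layer weights) and the threshold are illustrative assumptions, not the WADM mechanism as specified in the paper.

```python
# Illustrative sketch: flag a federated-learning round whose weight
# distribution shifts abnormally, as an actively tampered neuron would
# cause. Statistic and threshold are illustrative choices.
from statistics import stdev

def round_is_suspicious(prev_weights, curr_weights, factor=3.0):
    """Flag the round if the weight spread grew by more than `factor`x."""
    return stdev(curr_weights) > factor * stdev(prev_weights)

benign = [0.10, -0.12, 0.08, -0.09]
tampered = [0.10, -0.12, 4.50, -0.09]  # one maliciously inflated neuron

assert not round_is_suspicious(benign, [w * 1.05 for w in benign])
assert round_is_suspicious(benign, tampered)
```

A real white-box defence would inspect per-neuron statistics over many rounds rather than a single spread ratio, but the principle is the same: active attacks that rewire neurons leave a measurable trace in the weights.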

2025

Popular Content Prediction Through Adversarial Autoencoder Using Anonymised Data

Authors
Maia, DVDA; Vilela, JP; Curado, M;

Publication
2025 IEEE WIRELESS COMMUNICATIONS AND NETWORKING CONFERENCE, WCNC

Abstract
The increasing number of connected and autonomous vehicles generates an even greater demand for efficient content delivery in vehicular networks. Estimating the popularity of content is an important task to proactively cache and distribute content throughout the networks to add value to users' experiences and reduce network congestion. This paper presents a novel approach for predicting popular content on vehicular networks based on a Federated Learning-Adversarial Autoencoder model and anonymised data. Unlike prior works that relied on users' raw features, our model protects user privacy through data anonymisation. This allows us to learn from the hidden patterns of content popularity and deliver popular content without compromising user privacy. Experiments showed that our approach exceeded traditional collaborative filtering and deep learning methods in terms of accuracy and robustness, even with sparse data.
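The data-handling side of the approach can be sketched as follows: raw user records are anonymised by generalisation before any popularity learning takes place, and popularity is then derived from the anonymised stream. The record schema, the generalisation rule, and the counting step are illustrative assumptions, not the paper's FL-Adversarial Autoencoder model.

```python
# Illustrative sketch: generalise user records (drop identifiers,
# coarsen age into bands) before ranking content by request counts.
from collections import Counter

def anonymise(record):
    """Drop the user identifier and coarsen age into a 10-year band."""
    return {"age_band": record["age"] // 10 * 10,
            "content": record["content"]}

requests = [
    {"user": "u1", "age": 23, "content": "map-tile-7"},
    {"user": "u2", "age": 27, "content": "map-tile-7"},
    {"user": "u3", "age": 41, "content": "traffic-feed"},
]

popularity = Counter(r["content"] for r in map(anonymise, requests))
print(popularity.most_common(1))  # "map-tile-7" is the most requested
```

The point of the paper is that a learned model can recover popularity patterns from such anonymised features alone, so caching decisions need not touch raw per-user data.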