2022
Authors
Ruas, R; Barbosa, B;
Publication
ICT as Innovator Between Tourism and Culture - Advances in Business Strategy and Competitive Advantage
Abstract
2022
Authors
Barbosa, B; Santos, CA; Katti, C; Filipe, S;
Publication
Handbook of Research on Smart Management for Digital Transformation - Advances in E-Business Research
Abstract
2022
Authors
Carvalho, CL; Barbosa, B; Santos, CA;
Publication
Handbook of Research on Digital Citizenship and Management During Crises - Advances in Human Services and Public Health
Abstract
2022
Authors
Parente, J; Alonso, AN; Coelho, F; Vinagre, J; Bastos, P;
Publication
2022 Fourth International Conference on Blockchain Computing and Applications (BCCA)
Abstract
As blockchains go beyond cryptocurrencies into applications in industries such as insurance, healthcare, and banking that handle personal or sensitive data, data access control becomes increasingly relevant. Access control mechanisms proposed so far are mostly based on requester identity, particularly for permissioned blockchain platforms, and are limited to binary, all-or-nothing access decisions. This is the case with Hyperledger Fabric's native access control mechanisms and, as permission updates require consensus, these fall short of the flexibility required to address GDPR-derived policies and client consent management. We propose SDAM, a novel access control mechanism for Fabric that enables fine-grained and dynamic control policies, using both contextual and resource attributes for decisions. Instead of binary results, decisions may also include mandatory data transformations so as to conform to the expressed policy, all without modifications to Fabric. Results show that SDAM's overhead with respect to baseline Fabric is acceptable. The scalability of the approach with respect to the number of concurrent clients is also evaluated and found to follow Fabric's.
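The mechanism summarized above can be illustrated with a simple attribute-based decision that returns not just allow or deny but, when required, a mandatory transformation to apply before releasing the data. The Python sketch below is only illustrative; the names (Decision, evaluate, mask_sensitive) and the policy logic are hypothetical and do not reflect the actual SDAM implementation or the Hyperledger Fabric API.

    # Hypothetical sketch of attribute-based access decisions that may require
    # data transformations, in the spirit of the SDAM abstract (not the actual
    # SDAM or Hyperledger Fabric API).

    from dataclasses import dataclass
    from typing import Callable, Optional

    @dataclass
    class Decision:
        allowed: bool
        # Optional mandatory transformation applied to the resource before release.
        transform: Optional[Callable[[dict], dict]] = None

    def mask_sensitive(record: dict) -> dict:
        # Example transformation: redact a field the requester may not see in full.
        redacted = dict(record)
        if "national_id" in redacted:
            redacted["national_id"] = "***"
        return redacted

    def evaluate(context: dict, resource: dict) -> Decision:
        # Decisions combine contextual attributes (requester purpose, consent)
        # with resource attributes (data category, sensitivity).
        if not resource.get("consent_given", False):
            return Decision(allowed=False)
        if context.get("purpose") == "claims_processing":
            return Decision(allowed=True)
        # Otherwise allow, but only a redacted view of the record.
        return Decision(allowed=True, transform=mask_sensitive)

    # Usage: apply the transformation, if any, before returning the data.
    record = {"consent_given": True, "national_id": "12345678"}
    decision = evaluate({"purpose": "analytics"}, record)
    if decision.allowed:
        print(decision.transform(record) if decision.transform else record)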
2022
Authors
Lopes, D; Medeiros, P; Dong, JD; Barradas, D; Portela, B; Vinagre, J; Ferreira, B; Christin, N; Santos, N;
Publication
Proceedings of the 2022 ACM SIGSAC Conference on Computer and Communications Security, CCS 2022, Los Angeles, CA, USA, November 7-11, 2022
Abstract
Tor is the most popular anonymity network in the world. It relies on advanced security and obfuscation techniques to ensure the privacy of its users and free access to the Internet. However, the investigation of traffic correlation attacks against Tor Onion Services (OSes) has been relatively overlooked in the literature. In particular, determining whether it is possible to emulate a global passive adversary capable of deanonymizing the IP addresses of both the Tor OSes and of the clients accessing them has remained, so far, an open question. In this paper, we present ongoing work toward addressing this question and reveal some preliminary results on a scalable traffic correlation attack that can potentially be used to deanonymize Tor OS sessions. Our attack is based on a distributed architecture involving a group of colluding ISPs from across the world. After collecting Tor traffic samples at multiple vantage points, ISPs can run them through a pipeline where several stages of traffic classifiers employ complementary techniques that result in the deanonymization of OS sessions with high confidence (i.e., low false positives). We have responsibly disclosed our early results with the Tor Project team and are currently working not only on improving the effectiveness of our attack but also on developing countermeasures to preserve Tor users' privacy.
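As a rough illustration of the kind of correlation stage such a pipeline might include, the Python sketch below bins packet sizes captured at two vantage points into volume time series and flags highly correlated client/onion-service pairs. The functions, features, and threshold are hypothetical and are not taken from the authors' attack.

    # Illustrative sketch (not the authors' pipeline): one simple correlation
    # stage comparing packet-volume time series from two vantage points.

    import numpy as np

    def volume_series(timestamps, sizes, window=1.0, duration=60.0):
        # Bin packet sizes into fixed-width time windows to obtain a volume trace.
        bins = np.arange(0.0, duration + window, window)
        series, _ = np.histogram(timestamps, bins=bins, weights=sizes)
        return series

    def correlate(series_a, series_b):
        # Pearson correlation of the two volume traces; values near 1 suggest
        # the two captures may belong to the same end-to-end session.
        a = (series_a - series_a.mean()) / (series_a.std() + 1e-9)
        b = (series_b - series_b.mean()) / (series_b.std() + 1e-9)
        return float(np.mean(a * b))

    def candidate_pairs(client_traces, service_traces, threshold=0.9):
        # Pairwise comparison; a real pipeline would add further classifier
        # stages to keep false positives low.
        matches = []
        for cid, (ct, cs) in client_traces.items():
            for sid, (st, ss) in service_traces.items():
                score = correlate(volume_series(ct, cs), volume_series(st, ss))
                if score >= threshold:
                    matches.append((cid, sid, score))
        return matches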
2022
Authors
Pereira, K; Vinagre, J; Alonso, AN; Coelho, F; Carvalho, M;
Publication
Machine Learning and Principles and Practice of Knowledge Discovery in Databases - International Workshops of ECML PKDD 2022, Grenoble, France, September 19-23, 2022, Proceedings, Part II
Abstract
The application of machine learning to insurance risk prediction requires learning from sensitive data. This raises multiple ethical and legal issues, one of the most relevant being privacy. However, privacy-preserving methods can hinder the predictive power of machine learning models. In this paper, we present preliminary experiments with life insurance data using two privacy-preserving techniques: discretization and encryption. Our objective with this work is to assess the impact of such privacy-preservation techniques on the accuracy of ML models. We instantiate the problem in three general but plausible use cases involving the prediction of insurance claims within a 1-year horizon. Our preliminary experiments suggest that discretization and encryption have a negligible impact on the accuracy of ML models.
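A minimal way to reproduce the spirit of such an experiment is to train the same model on raw, discretized, and deterministically pseudonymized features and compare accuracies. The Python sketch below does this on synthetic data; the dataset, model, and hashing stand-in for encryption are assumptions, not the authors' setup.

    # Minimal sketch (not the authors' experimental setup): compare a classifier's
    # accuracy on raw features, discretized features, and deterministically
    # "encrypted" (hashed) bin codes. Dataset and model are placeholders.

    import hashlib
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split
    from sklearn.preprocessing import KBinsDiscretizer

    X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

    def accuracy(train_X, test_X):
        model = RandomForestClassifier(random_state=0).fit(train_X, y_tr)
        return model.score(test_X, y_te)

    # Discretization: continuous values are replaced by quantile bin indices.
    disc = KBinsDiscretizer(n_bins=10, encode="ordinal", strategy="quantile").fit(X_tr)

    # Deterministic "encryption" stand-in: hash each bin index to an opaque code.
    # Equality is preserved, but ordering is not, so this mimics learning on
    # opaque pseudonymized codes rather than real ciphertexts.
    def pseudonymize(binned):
        digest = np.vectorize(
            lambda v: int(hashlib.sha256(str(int(v)).encode()).hexdigest()[:8], 16)
        )
        return digest(binned)

    print("raw:        ", accuracy(X_tr, X_te))
    print("discretized:", accuracy(disc.transform(X_tr), disc.transform(X_te)))
    print("hashed bins:", accuracy(pseudonymize(disc.transform(X_tr)),
                                   pseudonymize(disc.transform(X_te))))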