2025
Authors
Avraam, D; Wilson, RC; Chan, NA; Banerjee, S; Bishop, TRP; Butters, O; Cadman, T; Cederkvist, L; Duijts, L; Montagut, XE; Garner, H; Gonçalves, G; González, JR; Haakma, S; Hartlev, M; Hasenauer, J; Huth, M; Hyde, E; Jaddoe, VWV; Marcon, Y; Mayrhofer, MT; Molnar-Gabor, F; Morgan, AS; Murtagh, M; Nestor, M; Andersen, AMN; Parker, S; de Moira, AP; Schwarz, F; Strandberg-Larsen, K; Swertz, MA; Welten, M; Wheater, S; Burton, P;
Publication
BIOINFORMATICS ADVANCES
Abstract
Motivation: The validity of epidemiological findings can be increased using triangulation, i.e. the comparison of findings across contexts, and by having sufficiently large amounts of relevant data to analyse. However, access to data is often constrained by practical considerations and by ethico-legal and data-governance restrictions. Gaining access to such data can be time-consuming because of the governance requirements associated with data access requests to institutions in different jurisdictions.
Results: DataSHIELD is a software solution that enables remote analysis without the need for data transfer (federated analysis). DataSHIELD is a scientifically mature, open-source data access and analysis platform aligned with the 'Five Safes' framework, the international framework governing safe research access to data. It allows real-time analysis while mitigating disclosure risk through an active, multi-layer system of disclosure-preventing mechanisms. This combination of real-time remote statistical analysis, disclosure-prevention mechanisms, and federation capabilities makes DataSHIELD a solution for addressing many of the technical and regulatory challenges in performing large-scale statistical analysis of health and biomedical data. This paper describes the key components that comprise the disclosure protection system of DataSHIELD. These broadly fall into three classes: (i) system protection elements, (ii) analysis protection elements, and (iii) governance protection elements.
Availability and implementation: Information about the DataSHIELD software is available at https://datashield.org/ and https://github.com/datashield.
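The federated pattern this abstract describes can be illustrated in miniature: each site returns only non-disclosive aggregates, and a minimum-count filter blocks summaries computed over too few records. The sketch below is conceptual Python, not the DataSHIELD API (which is R-based), and the threshold and data values are arbitrary choices for illustration.

```python
# Conceptual sketch of federated analysis with a disclosure check:
# individual-level data never leave a site; only aggregates are pooled.

MIN_COUNT = 5  # hypothetical minimum cell-count threshold

def site_summary(values):
    # Each site computes a local aggregate; refuse if too few records,
    # mirroring an active disclosure-prevention mechanism.
    if len(values) < MIN_COUNT:
        raise PermissionError("disclosure risk: fewer than MIN_COUNT records")
    return {"n": len(values), "sum": sum(values)}

def pooled_mean(summaries):
    # The analyst combines per-site aggregates into one estimate.
    n = sum(s["n"] for s in summaries)
    return sum(s["sum"] for s in summaries) / n

site_a = site_summary([5.1, 4.9, 6.0, 5.5, 5.2])
site_b = site_summary([6.1, 5.8, 6.3, 6.0, 5.9, 6.2])
print(pooled_mean([site_a, site_b]))
```

The design choice to return only counts and sums (never raw records) is what lets the pooled estimate be exact while keeping individual values at their home institution.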
2025
Authors
Ferreira, J; Darabi, R; Sousa, A; Brueckner, F; Reis, LP; Reis, A; Tavares, RS; Sousa, J;
Publication
Journal of Intelligent Manufacturing
Abstract
This work introduces Gen-JEMA, a generative approach based on joint embedding with multimodal alignment (JEMA), to enhance feature extraction in the embedding space and improve the explainability of its predictions. Gen-JEMA addresses these challenges by leveraging multimodal data, including multi-view images and metadata such as process parameters, to learn transferable semantic representations. Gen-JEMA enables more explainable and enriched predictions by learning a decoder from the embedding. This novel co-learning framework, tailored for directed energy deposition (DED), integrates multiple data sources to learn a unified data representation and predict melt pool images from the primary sensor. The proposed approach enables real-time process monitoring using only the primary modality, simplifying hardware requirements and reducing computational overhead. The effectiveness of Gen-JEMA for DED process monitoring was evaluated, focusing on its generalization to downstream tasks such as melt pool geometry prediction and the generation of external melt pool representations using off-axis sensor data. To generate these external representations, autoencoder (AE) and variational autoencoder (VAE) architectures were optimized using Bayesian optimization. The AE outperformed the other approaches, achieving a 38% improvement in melt pool geometry prediction compared to the baseline and an 88% improvement in data generation compared with the VAE. The proposed framework establishes the foundation for integrating multisensor data with metadata through a generative approach, enabling various downstream tasks within the DED domain and yielding a compact embedding that allows efficient process control based on model predictions and embeddings. © The Author(s) 2025.
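The autoencoder component the abstract refers to can be sketched in its simplest form: a tied-weight linear autoencoder trained by gradient descent on a reconstruction loss. This is an illustrative stand-in for the far richer, Bayesian-optimized AE of the paper; all shapes, data, and hyperparameters below are arbitrary choices for the sketch.

```python
import numpy as np

# Tiny tied-weight linear autoencoder: encode with W, decode with W.T,
# minimize mean squared reconstruction error by gradient descent.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 8))            # stand-in "sensor" data (not real DED data)
W = rng.normal(scale=0.1, size=(8, 3))   # encoder weights; 3-dim embedding

def recon_error(X, W):
    return np.mean((X - (X @ W) @ W.T) ** 2)

err0 = recon_error(X, W)                 # reconstruction error before training
lr = 0.05
for _ in range(500):
    E = (X @ W) @ W.T - X                # reconstruction residual
    G = X.T @ (E @ W) + E.T @ (X @ W)    # gradient of squared error w.r.t. W
    W -= lr * G / len(X)

err = recon_error(X, W)                  # error after training (should drop)
```

The small embedding dimension is the point: downstream tasks (here, just reconstruction) operate on the 3-dimensional code rather than the full input, which is the property the abstract exploits for efficient monitoring and control.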
2025
Authors
Arriaga, A; Barbosa, M; Jarecki, S; Skrobot, M;
Publication
ADVANCES IN CRYPTOLOGY - ASIACRYPT 2024, PT V
Abstract
Driven by NIST's post-quantum standardization efforts and the selection of Kyber as a lattice-based Key-Encapsulation Mechanism (KEM), several Password-Authenticated Key Exchange (PAKE) protocols have recently been proposed that leverage a KEM to create an efficient, easy-to-implement and secure PAKE. In two recent works, Beguinet et al. (ACNS 2023) and Pan and Zeng (ASIACRYPT 2023) proposed generic compilers that transform a KEM into a PAKE, relying on an Ideal Cipher (IC) defined over a group. However, although an IC over a group is often used in cryptographic protocols, special care must be taken to instantiate such objects in practice, especially when a low-entropy key is used. To address this concern, Dos Santos et al. (EUROCRYPT 2023) proposed a relaxation of the IC model under the Universal Composability (UC) framework, called Half-Ideal Cipher (HIC). They demonstrate how to construct a UC-secure PAKE protocol, EKE-KEM, from a KEM and a modified 2-round Feistel construction called m2F. Remarkably, the m2F sidesteps the use of an IC over a group and instead employs an IC defined over a fixed-length bitstring domain, which is easier to instantiate. In this paper, we introduce a novel PAKE protocol called CHIC that improves the communication and computation efficiency of EKE-KEM by avoiding the HIC abstraction. Instead, we split the KEM public key in two parts and use the m2F directly, without further randomization. We provide a detailed proof of the security of CHIC and establish precise security requirements for the underlying KEM, including one-wayness and anonymity of ciphertexts, and uniformity of public keys. Our findings extend to general KEM-based EKE-style protocols and show that a passively secure KEM is not sufficient. In this respect, our results align with those of Pan and Zeng (ASIACRYPT 2023), but contradict the analyses of the KEM-to-PAKE compilers by Beguinet et al. (ACNS 2023) and Dos Santos et al. (EUROCRYPT 2023).
Finally, we provide an implementation of CHIC, highlighting its minimal overhead compared to the underlying KEM, Kyber. An interesting aspect of the implementation is that we reuse the rejection sampling procedure in the Kyber reference code to address the challenge of hashing onto the public-key space. As of now, to the best of our knowledge, CHIC stands as the most efficient PAKE protocol built from a black-box KEM that offers rigorously proven UC security.
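For readers unfamiliar with the construction family, the generic shape of a two-round Feistel network over fixed-length bitstrings can be sketched as follows. This is only the generic pattern, with hash functions standing in for idealized round functions; it is emphatically not the paper's m2F construction (which modifies the rounds to obtain its security properties), and all tags and lengths are illustrative.

```python
import hashlib

def H(tag, key, data, n):
    # Hash-based round function: an illustrative stand-in for an
    # idealized object, keyed by a domain-separation tag and a key.
    return hashlib.sha256(tag + key + data).digest()[:n]

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def feistel2_enc(pw, L, R):
    # Two unbalanced Feistel rounds over fixed-length bitstrings.
    R = xor(R, H(b"1", pw, L, len(R)))   # round 1 updates the right half
    L = xor(L, H(b"2", pw, R, len(L)))   # round 2 updates the left half
    return L, R

def feistel2_dec(pw, L, R):
    # Inversion: undo the rounds in reverse order.
    L = xor(L, H(b"2", pw, R, len(L)))
    R = xor(R, H(b"1", pw, L, len(R)))
    return L, R
```

The practical appeal noted in the abstract is visible even here: every primitive operates on fixed-length byte strings, so nothing needs to be instantiated over a group.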
2025
Authors
Barbosa, J; Florido, M; Costa, VS;
Publication
ELECTRONIC PROCEEDINGS IN THEORETICAL COMPUTER SCIENCE
Abstract
Here we define a new unification algorithm for terms interpreted in semantic domains denoted by a subclass of regular types that we call deterministic regular types. This reflects our intention not to treat the semantic universe as a homogeneous collection of values but, instead, to partition it in a way similar to data types in programming languages. We first define the new unification algorithm, which is based on constraint generation and constraint solving, and then prove its main properties: termination, soundness, and completeness with respect to the semantics. Finally, we discuss how to apply this algorithm to a dynamically typed version of Prolog.
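For contrast with the typed, constraint-based algorithm the paper defines, classical syntactic (Robinson-style) unification can be sketched in a few lines. The term encoding below is a hypothetical convention for the sketch: variables are capitalized strings, compound terms are (functor, argument-list) pairs; the occurs check is omitted for brevity.

```python
# Classical first-order syntactic unification (no types, no occurs check):
# the untyped baseline that type-aware unification algorithms refine.

def is_var(t):
    # Hypothetical convention: variables are strings starting uppercase.
    return isinstance(t, str) and t[:1].isupper()

def walk(t, subst):
    # Resolve a variable through the current substitution.
    while is_var(t) and t in subst:
        t = subst[t]
    return t

def unify(t1, t2, subst=None):
    subst = dict(subst or {})
    stack = [(t1, t2)]
    while stack:
        a, b = stack.pop()
        a, b = walk(a, subst), walk(b, subst)
        if a == b:
            continue
        if is_var(a):
            subst[a] = b
        elif is_var(b):
            subst[b] = a
        elif (isinstance(a, tuple) and isinstance(b, tuple)
              and a[0] == b[0] and len(a[1]) == len(b[1])):
            stack.extend(zip(a[1], b[1]))   # decompose: unify arguments pairwise
        else:
            return None                     # functor/arity clash: not unifiable
    return subst

# f(X, g(Y)) unifies with f(a, g(b)), binding X to a and Y to b.
```

The paper's algorithm differs precisely in that each equation additionally generates type constraints over deterministic regular types, which are then solved alongside the term equations.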
2025
Authors
Caetano, F; Carvalho, P; Mastralexi, C; Cardoso, JS;
Publication
IEEE ACCESS
Abstract
Anomaly detection has been a significant field in machine learning since it began gaining traction. In the context of computer vision, interest has grown markedly, as anomaly detection enables the development of video-processing models for different tasks without the cumbersome effort of annotating possible events, which may be underrepresented. Of the two predominant strategies, weakly and semi-supervised learning, the former has demonstrated the potential to achieve higher scores, in addition to its flexibility. This work shows that using temporal ranking constraints for Multiple Instance Learning can increase the performance of these models, allowing them to focus on the most informative instances. Moreover, the results suggest that altering the ranking process to include information about adjacent instances produces better-performing models.
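The kind of constraint this abstract builds on can be sketched as a Multiple Instance Learning ranking loss: the top-scored instance of an anomalous bag should outrank the top-scored instance of a normal bag, with a smoothness term over temporally adjacent instances. This is a simplified sketch of that general recipe, not the paper's exact formulation; the margin and weight below are arbitrary.

```python
import numpy as np

def mil_ranking_loss(anom_scores, norm_scores, lam_smooth=1e-3, margin=1.0):
    """Hinge ranking loss between the highest-scored instance of an
    anomalous bag and of a normal bag, plus a temporal smoothness
    penalty over adjacent instances of the anomalous bag.
    Illustrative sketch only; hyperparameters are arbitrary."""
    # Rank the most anomalous-looking instance above anything normal.
    ranking = max(0.0, margin - np.max(anom_scores) + np.max(norm_scores))
    # Encourage scores of temporally adjacent instances to vary smoothly.
    smooth = np.sum(np.diff(anom_scores) ** 2)
    return ranking + lam_smooth * smooth
```

The smoothness term is the hook for "information about adjacent instances": it couples each instance's score to its temporal neighbours instead of ranking instances in isolation.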
2025
Authors
Alves, GA; Tavares, R; Amorim, P; Camargo, VCB;
Publication
COMPUTERS & INDUSTRIAL ENGINEERING
Abstract
The textile industry is a complex and dynamic system in which structured decision-making processes are essential for efficient supply chain management. In this context, mathematical programming models offer a powerful tool for modeling and optimizing the textile supply chain. This systematic review explores the application of mathematical programming models, including linear programming, nonlinear programming, stochastic programming, robust optimization, fuzzy programming, and multi-objective programming, in optimizing the textile supply chain. The review categorizes and analyzes 163 studies across the textile manufacturing stages, from fiber production to integrated supply chains. Key results reveal the utility of these models in solving a wide range of decision-making problems, such as blending fibers, production planning, scheduling orders, cutting patterns, transportation optimization, network design, and supplier selection, in light of the challenges found in the textile sector. In analyzing these models, we find that sustainability considerations, such as environmental and social aspects, remain underexplored and present significant opportunities for future research. In addition, this study emphasizes the importance of incorporating multi-objective approaches and addressing uncertainties in decision-making to advance sustainable and efficient textile supply chain management.
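As a concrete illustration of the simplest model class the review covers, a fiber-blending decision can be posed as a linear program: choose blend fractions that minimize cost subject to quality requirements. The costs and quality coefficients below are made-up numbers for the sketch, not data from any reviewed study.

```python
from scipy.optimize import linprog

# Toy fiber-blending LP (illustrative numbers only): pick cotton (x1) and
# polyester (x2) fractions of a 1 kg blend, minimizing cost subject to
# hypothetical comfort and strength requirements.
c = [2.0, 1.0]                # cost per kg: cotton, polyester
A_ub = [[-0.9, -0.3],         # comfort:  0.9*x1 + 0.3*x2 >= 0.6
        [-0.4, -0.8]]         # strength: 0.4*x1 + 0.8*x2 >= 0.6
b_ub = [-0.6, -0.6]           # (>= constraints flipped to <= form)
A_eq = [[1.0, 1.0]]           # blend fractions sum to 1 kg
b_eq = [1.0]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=[(0, None), (0, None)])
# res.x holds the optimal blend fractions; res.fun the minimal cost.
```

The stochastic, robust, fuzzy, and multi-objective variants surveyed in the review extend exactly this structure, replacing fixed coefficients with uncertain or multiple competing objectives.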