2025
Authors
Finich, S; Elsaid, M; Inacio, SI; Salgado, HM; Pessoa, LM;
Publication
2025 19TH EUROPEAN CONFERENCE ON ANTENNAS AND PROPAGATION, EUCAP
Abstract
A comparative analysis of Ka- and D-band unit cells is presented using a Waveguide Simulator and infinite-array models with a Floquet port. Initially, a single unit-cell design with a tapered transition section is employed. Subsequently, a 1 x 2 unit cell is designed and integrated into standard rectangular waveguides WR-34 and WR-7. For the Ka-band, the results obtained from both models exhibit excellent agreement in terms of magnitude and phase. In the D-band, the 1 x 2 unit cell demonstrated low loss for both techniques, and the phase responses were reasonably accurate, with differences of less than 40 degrees. At such high frequencies (145-175 GHz), the Waveguide Simulator offers a viable solution for assessing the behavior of the unit cell without the need for a full array.
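The WR-34/WR-7 pairing with the Ka- and D-bands follows from the dominant-mode cutoff of a rectangular waveguide, f_c = c / (2a). A quick check using the standard inner broad-wall dimensions for these waveguides (an assumption; the dimensions are not stated in the abstract):

```python
# TE10 cutoff f_c = c / (2a) for a rectangular waveguide with broad-wall width a.
C = 299_792_458.0  # speed of light in vacuum, m/s

def te10_cutoff_ghz(a_mm: float) -> float:
    """Cutoff frequency of the dominant TE10 mode, in GHz."""
    return C / (2.0 * a_mm * 1e-3) / 1e9

wr34 = te10_cutoff_ghz(8.636)   # WR-34 broad wall: 8.636 mm (Ka-band hardware)
wr7 = te10_cutoff_ghz(1.651)    # WR-7 broad wall: 1.651 mm (D-band hardware)
print(f"WR-34 TE10 cutoff: {wr34:.2f} GHz")  # ~17.36 GHz
print(f"WR-7  TE10 cutoff: {wr7:.2f} GHz")   # ~90.79 GHz
```

Both cutoffs sit comfortably below the respective operating bands, consistent with single-mode operation in the 145-175 GHz range studied in the paper.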
2025
Authors
Ferreira, JS; Jesus, MT; Leal, LM; Spratley, JEF;
Publication
Journal of Voice
Abstract
This paper addresses two challenges that are intertwined and are key in informing signal processing methods restoring natural (voiced) speech from whispered speech. The first challenge involves characterizing and modeling the evolution of the harmonic phase/magnitude structure of a sequence of individual pitch periods in a voiced region of natural speech comprising sustained or co-articulated vowels. A novel algorithm segmenting individual pitch pulses is proposed, which is then used to obtain illustrative results highlighting important differences between sustained and co-articulated vowels, and suggesting practical synthetic voicing approaches. The second challenge involves model-based synthetic voicing restoration in real-time and on-the-fly. Three implementation alternatives are described that differ in their signal reconstruction approaches: frequency-domain, combined frequency- and time-domain, and physiologically inspired filtering of individually generated glottal excitation pulses. The three alternatives are compared objectively using illustrative examples, and subjectively using the results of listening tests involving synthetic voicing of sustained and co-articulated vowels in word context. © 2025 Elsevier B.V. All rights reserved.
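The paper proposes a novel pitch-pulse segmentation algorithm, whose details are not given in the abstract. As a point of reference only, a generic autocorrelation-based pitch-period estimate for a voiced frame (a common textbook baseline, not the paper's method) can be sketched as:

```python
import numpy as np

def estimate_pitch_period(x: np.ndarray, fs: int, fmin=60.0, fmax=400.0) -> int:
    """Estimate the pitch period (in samples) of a voiced frame via autocorrelation,
    searching only lags that correspond to plausible fundamental frequencies."""
    x = x - x.mean()
    r = np.correlate(x, x, mode="full")[len(x) - 1:]  # keep non-negative lags
    lo, hi = int(fs / fmax), int(fs / fmin)
    return lo + int(np.argmax(r[lo:hi]))

fs = 16000
t = np.arange(fs // 10) / fs                       # 100 ms synthetic voiced frame
f0 = 120.0                                          # assumed fundamental, Hz
x = np.sin(2 * np.pi * f0 * t) + 0.3 * np.sin(2 * np.pi * 2 * f0 * t)
period = estimate_pitch_period(x, fs)
print(period, fs / period)                          # ~133 samples, ~120 Hz
```

A per-period segmentation would then mark pulse boundaries every `period` samples from a detected glottal closure instant; the paper's algorithm refines this per individual pulse.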
2025
Authors
Yamamura, F; Scalassara, R; Oliveira, A; Ferreira, JS;
Publication
U.Porto Journal of Engineering
Abstract
Whispers are common and essential for secondary communication. Nonetheless, individuals with aphonia, including laryngectomees, rely on whispers as their primary means of communication. Due to the distinct features of whispered and regular speech, debates have emerged in the field of speech recognition, highlighting the challenge of effectively converting between them. This study investigates the characteristics of whispered speech and proposes a system for converting whispered vowels into normal ones. The system is developed using multilayer perceptron networks and two types of generative adversarial networks. Three metrics are analyzed to evaluate the performance of the system: mel-cepstral distortion, root mean square error of the fundamental frequency, and accuracy with F1-score of a vowel classifier. Overall, the perceptron networks demonstrated better results, with no significant differences observed between male and female voices or the presence/absence of speech silence, except for improved accuracy in estimating the fundamental frequency during the conversion process. © 2025, Universidade do Porto - Faculdade de Engenharia. All rights reserved.
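Two of the three evaluation metrics named above have standard closed forms. A minimal sketch, assuming the common MCD definition (10/ln 10) * sqrt(2 * sum of squared coefficient differences) with the 0th cepstral coefficient already excluded:

```python
import numpy as np

def mel_cepstral_distortion(c_ref: np.ndarray, c_est: np.ndarray) -> float:
    """Mean frame-wise mel-cepstral distortion in dB over (frames, coeffs) arrays."""
    diff = c_ref - c_est
    return float(np.mean((10.0 / np.log(10.0)) * np.sqrt(2.0 * np.sum(diff**2, axis=-1))))

def f0_rmse(f0_ref: np.ndarray, f0_est: np.ndarray) -> float:
    """Root mean square error of the fundamental frequency, in Hz."""
    return float(np.sqrt(np.mean((f0_ref - f0_est) ** 2)))

rng = np.random.default_rng(0)
c_ref = rng.normal(size=(100, 24))              # 100 frames x 24 mel-cepstral coeffs (toy data)
c_est = c_ref + 0.1 * rng.normal(size=c_ref.shape)
print(mel_cepstral_distortion(c_ref, c_est))
print(f0_rmse(np.array([120.0, 125.0]), np.array([118.0, 127.0])))  # 2.0
```

Lower is better for both metrics; the third metric (classifier accuracy/F1) depends on the vowel classifier used in the study.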
2025
Authors
da Silva, JMPP; Duarte Nunes, G; Ferreira, A;
Publication
Abstract
2025
Authors
Teixeira, FB; Ricardo, M; Coelho, A; Oliveira, HP; Viana, P; Paulino, N; Fontes, H; Marques, P; Campos, R; Pessoa, L;
Publication
EURASIP JOURNAL ON WIRELESS COMMUNICATIONS AND NETWORKING
Abstract
Telecommunications and computer vision solutions have evolved significantly in recent years, enabling a huge advance in the functionalities and applications offered. However, these two fields have developed as separate areas, without exploring the potential benefits of merging the innovations from each of them. In challenging environments, for example, combining radio sensing and computer vision can strongly contribute to solving problems such as those introduced by obstructions or limited lighting. Machine learning algorithms, able to fuse heterogeneous and multi-modal data, are also a key element for understanding and inferring additional knowledge from raw and low-level data, creating a new abstraction level that can significantly enhance many applications. This paper introduces the CONVERGE vision-radio concept, a new paradigm that explores the benefits of integrating the two fields of knowledge towards the vision of View-to-Communicate, Communicate-to-View. The main concepts behind this vision, including supporting use cases and the proposed architecture, are presented. CONVERGE introduces a set of tools integrating wireless communications and computer vision to create a novel experimental infrastructure that will provide the scientific community with open datasets of both experimental and simulated data, enabling new research addressing various 6G verticals, including telecommunications, automotive, manufacturing, media, and health.
2025
Authors
Duarte, P; Coelho, A; Ricardo, M;
Publication
2025 21ST INTERNATIONAL CONFERENCE ON WIRELESS AND MOBILE COMPUTING, NETWORKING AND COMMUNICATIONS, WIMOB
Abstract
The increasing complexity of wireless environments, driven by user mobility and dynamic obstructions, poses significant challenges to maintaining Line-of-Sight (LoS) connectivity. Mobile base stations (gNBs) offer a promising solution by physically relocating to restore or sustain LoS. This paper explores how reinforcement learning (RL) can be applied to gNB mobility control within vision-aided network systems. As part of the CONVERGE project, we present the CONVERGE Chamber Simulator (CC-SIM), a 3D environment for developing, training, and testing gNB mobility control algorithms. CC-SIM models user and obstacle mobility, visual occlusion, and Radio Frequency (RF) propagation while supporting both offline reinforcement learning and real-time validation through integration with OpenAirInterface (OAI). Leveraging CC-SIM, we trained a Deep Q-Network (DQN) agent that proactively repositions gNBs under dynamic conditions. Across three representative use cases, the agent reduced LoS blockage by up to 42% compared to static deployments, highlighting the potential of RL-driven mobility control and positioning CC-SIM as a practical platform for advancing adaptive, next-generation wireless networks.
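The paper trains a Deep Q-Network inside CC-SIM; as a heavily simplified, assumption-laden analogue of the same idea, a tabular Q-learner can reposition a gNB on a 1-D track so that it ends up in the same cell as the user (a toy stand-in for restoring LoS — none of the environment details below come from the paper):

```python
import numpy as np

rng = np.random.default_rng(42)
N = 5                                   # track cells
ACTIONS = (-1, 0, +1)                   # move left, stay, move right
Q = np.zeros((N, N, len(ACTIONS)))      # state = (gnb_pos, user_pos)
alpha, gamma, eps = 0.5, 0.9, 0.1       # learning rate, discount, exploration

def step(gnb: int, user: int, a: int):
    """Apply an action; reward co-location with the user, penalize time spent away."""
    gnb2 = min(max(gnb + ACTIONS[a], 0), N - 1)
    reward = 1.0 if gnb2 == user else -0.1
    return gnb2, reward

for _ in range(3000):                   # training episodes with random start states
    gnb, user = int(rng.integers(N)), int(rng.integers(N))
    for _ in range(10):
        a = int(rng.integers(3)) if rng.random() < eps else int(np.argmax(Q[gnb, user]))
        gnb2, r = step(gnb, user, a)
        Q[gnb, user, a] += alpha * (r + gamma * Q[gnb2, user].max() - Q[gnb, user, a])
        gnb = gnb2

# Greedy policy: with the gNB at cell 0 and the user at cell 4, move right (+1).
print(ACTIONS[int(np.argmax(Q[0, 4]))])
```

The actual system replaces the Q-table with a neural network and the toy reward with LoS/occlusion feedback from the simulated chamber, but the update rule being learned is the same Bellman target.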