About

Nuno Pereira (PhD, U. Minho, 2010; orcid.org/0000-0001-6370-9373) is a Professor at the School of Engineering of the Polytechnic of Porto (ISEP). He works on the infrastructure needed to create distributed Mixed Reality applications that seamlessly span the Cloud and the Edge, and is one of the main contributors to ARENA (Augmented Reality Edge Networking Architecture). Between 2019 and 2022, he was a Visiting Scholar at Carnegie Mellon University and the Executive Director of the CONIX Research Center. Nuno has made significant contributions to 12 research projects (European and National), published more than 60 technical papers, and served on the technical program committees of numerous scientific events.


Publications

2023

Scaling VR Video Conferencing

Authors
Dasari, M; Lu, E; Farb, MW; Pereira, N; Liang, I; Rowe, A;

Publication
2023 IEEE CONFERENCE VIRTUAL REALITY AND 3D USER INTERFACES, VR

Abstract
Virtual Reality (VR) telepresence platforms are being challenged to support live performances, sporting events, and conferences with thousands of users across seamless virtual worlds. Current systems have struggled to meet these demands, which has led to high-profile performance events with groups of users isolated in parallel sessions. The core difference in scaling VR environments compared to classic 2D video content delivery comes from the dynamic peer-to-peer spatial dependence of communication. Users have many pair-wise interactions that grow and shrink as they explore spaces. In this paper, we discuss the challenges of VR scaling and present an architecture that supports hundreds of users with spatial audio and video in a single virtual environment. We leverage the property of spatial locality with two key optimizations: (1) a Quality of Service (QoS) scheme to prioritize audio and video traffic based on users' locality, and (2) a resource manager that allocates client connections across multiple servers based on user proximity within the virtual world. Through real-world deployments and extensive evaluations under real and simulated environments, we demonstrate the scalability of our platform while showing improved QoS compared with existing approaches.
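The locality-based QoS idea described in the abstract can be illustrated with a minimal sketch (not the paper's implementation; the function name, tier labels, and cutoffs are hypothetical): rank peers by distance in the virtual world and degrade media quality for peers farther away.

```python
import math

def stream_priorities(me, peers, max_full_quality=4):
    """Assign a quality tier to each peer's audio/video stream based on
    distance in the virtual world: the nearest peers get full-rate media,
    farther ones are progressively degraded. Tier names and cutoffs are
    illustrative, not from the paper."""
    ranked = sorted(peers.items(), key=lambda kv: math.dist(me, kv[1]))
    tiers = {}
    for rank, (peer_id, _) in enumerate(ranked):
        if rank < max_full_quality:
            tiers[peer_id] = "full"          # nearby: full audio + video
        elif rank < 2 * max_full_quality:
            tiers[peer_id] = "reduced"       # mid-range: lower bitrate
        else:
            tiers[peer_id] = "audio-only"    # distant: drop video
    return tiers
```

A scheme like this keeps per-client bandwidth roughly constant as the session grows, since only a bounded number of peers ever receive full-rate streams.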

2023

From a Visual Scene to a Virtual Representation: A Cross-Domain Review

Authors
Pereira, A; Carvalho, P; Pereira, N; Viana, P; Corte-Real, L;

Publication
IEEE ACCESS

Abstract
The widespread use of smartphones and other low-cost equipment as recording devices, the massive growth in bandwidth, and the ever-growing demand for new applications with enhanced capabilities, made visual data a must in several scenarios, including surveillance, sports, retail, entertainment, and intelligent vehicles. Despite significant advances in analyzing and extracting data from images and video, there is a lack of solutions able to analyze and semantically describe the information in the visual scene so that it can be efficiently used and repurposed. Scientific contributions have focused on individual aspects or addressing specific problems and application areas, and no cross-domain solution is available to implement a complete system that enables information passing between cross-cutting algorithms. This paper analyses the problem from an end-to-end perspective, i.e., from the visual scene analysis to the representation of information in a virtual environment, including how the extracted data can be described and stored. A simple processing pipeline is introduced to set up a structure for discussing challenges and opportunities in different steps of the entire process, allowing to identify current gaps in the literature. The work reviews various technologies specifically from the perspective of their applicability to an end-to-end pipeline for scene analysis and synthesis, along with an extensive analysis of datasets for relevant tasks.
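The end-to-end pipeline the review discusses can be sketched as three composable stages; this is a schematic illustration under assumed stage names (analyze, describe, synthesize), not an architecture proposed by the paper:

```python
from dataclasses import dataclass, field

@dataclass
class SceneObject:
    """A detected object in the visual scene (illustrative schema)."""
    label: str
    position: tuple
    attributes: dict = field(default_factory=dict)

def analyze(frame):
    """Stage 1: detect and classify objects in a frame.
    A real system would run detectors/trackers here; this stub just
    wraps pre-extracted detections."""
    return [SceneObject(label=o["label"], position=o["pos"]) for o in frame]

def describe(objects):
    """Stage 2: map detections to a semantic, storable description
    that downstream algorithms can consume."""
    return [{"label": o.label, "position": o.position, **o.attributes}
            for o in objects]

def synthesize(description):
    """Stage 3: instantiate virtual counterparts from the description."""
    return [f"<entity {d['label']} at {d['position']}>" for d in description]

def pipeline(frame):
    return synthesize(describe(analyze(frame)))
```

The point of the intermediate description stage is exactly the gap the review identifies: a shared, semantic representation that lets cross-cutting algorithms pass information to one another.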

2022

Cappella: Establishing Multi-User Augmented Reality Sessions Using Inertial Estimates and Peer-to-Peer Ranging

Authors
Miller J.; Soltanaghai E.; Duvall R.; Chen J.; Bhat V.; Pereira N.; Rowe A.;

Publication
Proceedings - 21st ACM/IEEE International Conference on Information Processing in Sensor Networks, IPSN 2022

Abstract
Current collaborative augmented reality (AR) systems establish a common localization coordinate frame among users by exchanging and comparing maps comprised of feature points. However, relative positioning through map sharing struggles in dynamic or feature-sparse environments. It also requires that users exchange identical regions of the map, which may not be possible if they are separated by walls or facing different directions. In this paper, we present Cappella (like its musical inspiration, Cappella utilizes collaboration among agents to forgo the need for instrumentation), an infrastructure-free 6-degrees-of-freedom (6DOF) positioning system for multi-user AR applications that uses motion estimates and range measurements between users to establish an accurate relative coordinate system. Cappella uses visual-inertial odometry (VIO) in conjunction with ultra-wideband (UWB) ranging radios to estimate the relative position of each device in an ad hoc manner. The system leverages a collaborative particle filtering formulation that operates on sporadic messages exchanged between nearby users. Unlike visual landmark sharing approaches, this allows for collaborative AR sessions even if users do not share the same field of view, or if the environment is too dynamic for feature matching to be reliable. We show that not only is it possible to perform collaborative positioning without infrastructure or global coordinates, but that our approach provides nearly the same level of accuracy as fixed infrastructure approaches for AR teaming applications. Cappella consists of an open source UWB firmware and reference mobile phone application that can display the location of team members in real time using mobile AR. We evaluate Cappella across multiple buildings under a wide variety of conditions, including a contiguous 30,000 square foot region spanning multiple floors, and find that it achieves median geometric error in 3D of less than 1 meter.
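The particle-filter formulation the abstract describes can be sketched in simplified 2D form (Cappella's actual filter is 6DOF and collaborative across many users; the function name, noise parameters, and resampling choice here are assumptions for illustration): predict each position hypothesis with reported VIO motion, then reweight by agreement with a UWB range measurement.

```python
import math
import random

def collaborative_pf_step(particles, odometry, anchor_pos, measured_range,
                          motion_noise=0.1, range_noise=0.3):
    """One predict/update cycle of a simplified 2D particle filter.
    Each particle is a hypothesis of a peer's relative (x, y) position.
    `odometry` is the peer's reported VIO displacement; `measured_range`
    is a UWB distance to a user at `anchor_pos`."""
    dx, dy = odometry
    # Predict: apply the motion estimate, perturbed by motion noise.
    predicted = [(x + dx + random.gauss(0, motion_noise),
                  y + dy + random.gauss(0, motion_noise))
                 for x, y in particles]
    # Update: weight each hypothesis by how well it explains the range.
    weights = []
    for x, y in predicted:
        err = math.hypot(x - anchor_pos[0], y - anchor_pos[1]) - measured_range
        weights.append(math.exp(-err * err / (2 * range_noise ** 2)))
    total = sum(weights) or 1.0
    weights = [w / total for w in weights]
    # Resample so surviving particles concentrate near the measured range.
    return random.choices(predicted, weights=weights, k=len(particles))
```

Repeated cycles with ranges to different peers collapse the remaining ambiguity (a single range only constrains the position to a circle), which is why the collaborative, multi-user formulation matters.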

2021

Teaching Programming with a Limited Infrastructure

Authors
Ferreira, P; Nogueira, L; Pereira, N; Maia, C; Fernandes, M; Andrade, A; Faria, R; Goncalves, C;

Publication
2021 WORLD ENGINEERING EDUCATION FORUM/GLOBAL ENGINEERING DEANS COUNCIL (WEEF/GEDC)

Abstract
Programming courses are needed for an increasing number of students in today's Higher Education Institutions. Of all the programming languages covered in typical courses, C and Assembly are among the most critical. As they are very low-level languages, knowing them helps students understand the inner workings of a computer. At the same time, their differences from other programming languages demand a serious adjustment of the learner's mental model. As the programming tools and environments are also different, there is a need to support students in their learning with a minimum of infrastructure, due to financial restrictions, so that the maximum number of students can be served with the existing resources. A Virtual Machine based on a Live Linux distribution, together with an enhanced set of software tests, can provide students with an easy-to-install development platform that gives a good amount of feedback with very limited network usage. The methods described in this paper have been applied with good results, and can be used to support live or online classes.
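The automated feedback component described above could be realized by a small local test harness along these lines (a sketch, not the paper's tooling; the function name and result format are assumptions). It compiles a student's C source and runs it against input/output cases entirely on the student's machine, so no network is needed:

```python
import os
import subprocess
import tempfile

def run_tests(source_file, cases):
    """Compile a student's C source with gcc (assumed available on the
    Live Linux VM) and run it against (stdin, expected_stdout) cases,
    returning (case name, passed, feedback) tuples."""
    exe = os.path.join(tempfile.mkdtemp(), "student")
    build = subprocess.run(["gcc", source_file, "-o", exe],
                           capture_output=True, text=True)
    if build.returncode != 0:
        # Surface the compiler diagnostics as feedback.
        return [("compile", False, build.stderr)]
    results = []
    for i, (stdin_data, expected) in enumerate(cases):
        run = subprocess.run([exe], input=stdin_data,
                             capture_output=True, text=True, timeout=5)
        ok = run.stdout.strip() == expected.strip()
        results.append((f"case {i}", ok,
                        "" if ok else f"expected {expected!r}, got {run.stdout!r}"))
    return results
```

Running everything locally against pre-distributed test cases is what keeps network usage minimal while still giving students concrete, per-case feedback.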

2021

Hybrid Conference Experiences in the ARENA

Authors
Pereira N.; Rowe A.; Farb M.W.; Liang I.; Lu E.; Riebling E.;

Publication
Proceedings - 2021 IEEE International Symposium on Mixed and Augmented Reality Adjunct, ISMAR-Adjunct 2021

Abstract
We propose supporting hybrid conference experiences using the Augmented Reality Edge Network Architecture (ARENA). ARENA is a platform based on web technologies that simplifies the creation of collaborative mixed reality for standard Web Browsers (Chrome, Firefox) in VR, Headset AR/VR Browsers (Magic Leap, Hololens, Oculus Quest 2), and mobile AR (WebXR Viewer for iOS, Chrome with experimental flags for Android, and our own custom WebXR fork for iOS). We use a 3D scan of the conference venue as the backdrop environment for remote users and a model to stage various AR interactions for in-person users. Remote participants can use VR in a browser or a VR headset to navigate the scene. In-person participants can use AR headsets or mobile AR through WebXR browsers to see and hear remote users. ARENA can scale up to hundreds of users in the same scene and provides audio and video with spatial sound that can more closely capture real-world interactions.