2022
Authors
Miller J.; Soltanaghai E.; Duvall R.; Chen J.; Bhat V.; Pereira N.; Rowe A.;
Publication
Proceedings - 21st ACM/IEEE International Conference on Information Processing in Sensor Networks, IPSN 2022
Abstract
Current collaborative augmented reality (AR) systems establish a common localization coordinate frame among users by exchanging and comparing maps composed of feature points. However, relative positioning through map sharing struggles in dynamic or feature-sparse environments. It also requires that users exchange identical regions of the map, which may not be possible if they are separated by walls or facing different directions. In this paper, we present Cappella (like its musical inspiration, Cappella utilizes collaboration among agents to forgo the need for instrumentation), an infrastructure-free 6-degrees-of-freedom (6DOF) positioning system for multi-user AR applications that uses motion estimates and range measurements between users to establish an accurate relative coordinate system. Cappella uses visual-inertial odometry (VIO) in conjunction with ultra-wideband (UWB) ranging radios to estimate the relative position of each device in an ad hoc manner. The system leverages a collaborative particle filtering formulation that operates on sporadic messages exchanged between nearby users. Unlike visual landmark sharing approaches, this allows for collaborative AR sessions even if users do not share the same field of view, or if the environment is too dynamic for feature matching to be reliable. We show that not only is it possible to perform collaborative positioning without infrastructure or global coordinates, but that our approach provides nearly the same level of accuracy as fixed infrastructure approaches for AR teaming applications. Cappella consists of an open source UWB firmware and reference mobile phone application that can display the location of team members in real time using mobile AR. We evaluate Cappella across multiple buildings under a wide variety of conditions, including a contiguous 30,000 square foot region spanning multiple floors, and find that it achieves median 3D geometric error of less than 1 meter.
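The collaborative particle filtering formulation lends itself to a compact illustration. Below is a minimal, hypothetical sketch of one fusion step, assuming each peer message carries a VIO displacement and each UWB measurement gives a range to that peer; the particle count, noise scales, and function names are illustrative assumptions, not Cappella's actual implementation.

```python
import numpy as np

# Particles represent a peer's 3D position in our local frame.
# Propagate with the peer's broadcast VIO displacement, weight by UWB range.
# All parameters (noise scales, particle count) are illustrative assumptions.

N = 1000
particles = np.random.uniform(-10, 10, size=(N, 3))   # initial hypothesis cloud
weights = np.full(N, 1.0 / N)

def predict(particles, vio_delta, motion_noise=0.05):
    """Shift particles by the peer's reported VIO displacement plus noise."""
    noise = np.random.normal(0.0, motion_noise, particles.shape)
    return particles + vio_delta + noise

def update(particles, weights, own_position, uwb_range, range_noise=0.3):
    """Reweight particles by how well they explain the measured UWB range."""
    predicted = np.linalg.norm(particles - own_position, axis=1)
    likelihood = np.exp(-0.5 * ((predicted - uwb_range) / range_noise) ** 2)
    weights = weights * likelihood
    return weights / (weights.sum() + 1e-12)

def resample(particles, weights):
    """Multinomial resampling to avoid particle degeneracy."""
    idx = np.random.choice(len(particles), size=len(particles), p=weights)
    return particles[idx], np.full(len(particles), 1.0 / len(particles))

# One fusion step: a peer message carries its VIO delta; our radio gives a range.
particles = predict(particles, vio_delta=np.array([0.2, 0.0, 0.01]))
weights = update(particles, weights, own_position=np.zeros(3), uwb_range=4.1)
particles, weights = resample(particles, weights)
estimate = weights @ particles   # weighted mean = relative position estimate
```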
2022
Authors
Martins, IS; Pinheiro, MR; Silva, HF; Tuchin, VV; Oliveira, LM;
Publication
2022 International Conference Laser Optics, ICLO 2022 - Proceedings
Abstract
The evaluation of the diffusion properties of optical clearing agents in biological tissues, which are necessary to characterize the transparency mechanisms, has traditionally been performed using ex vivo tissues. With the objective of performing such an evaluation in vivo, this study evaluated and compared those properties for propylene glycol in skeletal muscle, as obtained from the collimated transmittance and diffuse reflectance kinetics. The diffusion time and the diffusion coefficient of propylene glycol in the muscle, calculated from both the transmittance and reflectance kinetics, differed by only 0.8%, a result that opens the possibility of using this method in vivo. © 2022 IEEE.
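As a rough illustration of how diffusion properties can be extracted from such kinetics, the sketch below fits a saturating-exponential rise to synthetic collimated transmittance data and converts the fitted diffusion time to a diffusion coefficient using the first-order slab approximation; the data arrays, sample thickness, and parameter values are placeholders, not the study's measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

def tt_kinetics(t, t0, dt, tau):
    """Collimated transmittance rise as the agent diffuses into the sample."""
    return t0 + dt * (1.0 - np.exp(-t / tau))

t = np.linspace(0, 1800, 120)                      # time since immersion, s
tt = tt_kinetics(t, 0.08, 0.25, 300.0)             # synthetic "measurement"
tt += np.random.normal(0, 0.005, t.size)           # synthetic noise

(t0, dt, tau), _ = curve_fit(tt_kinetics, t, tt, p0=(0.1, 0.2, 200.0))

d = 0.5e-3                                         # slab thickness, m (placeholder)
D = d ** 2 / (np.pi ** 2 * tau)                    # first-order slab solution
print(f"tau = {tau:.0f} s, D = {D:.2e} m^2/s")
```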
2022
Authors
Oliveira, LM; Goncalves, TM; Botelho, AR; Martins, IS; Silva, HF; Carneiro, I; Carvalho, S; Henrique, R; Tuchin, VV;
Publication
2022 International Conference Laser Optics, ICLO 2022 - Proceedings
Abstract
The direct calculation of the absorption coefficient spectra of various tissues from spectral measurements allowed the retrieval of the contents of melanin and lipofuscin. In the rabbit brain cortex, a 1.8-times-higher melanin content is explained by the neuron degeneration process. Similar melanin and lipofuscin contents were found in the rabbit pancreas as a result of the tissue aging process. The conversion of 83% of the melanin in the normal human kidney into lipofuscin in the cancerous kidney indicates that lipofuscin can be considered a kidney cancer marker in humans. © 2022 IEEE.
2022
Authors
Martins, IS; Silva, HF; Tuchin, VV; Oliveira, LM;
Publication
PHOTONICS
Abstract
The pancreas is a highly important organ, since it produces insulin and prevents the occurrence of diabetes. Although rare, pancreatic cancer is highly lethal, with a short life expectancy after diagnosis. The pancreas is one of the least studied organs in the field of biophotonics. With the objective of acquiring information that can be used in the development of future applications to diagnose and treat pancreas diseases, the spectral optical properties of the rabbit pancreas were evaluated in a broad spectral range, between 200 and 1000 nm. The method used to obtain such optical properties is simple, based almost entirely on direct calculations from spectral measurements. The optical properties obtained show wavelength dependencies similar to those obtained for other tissues, but a further analysis of the spectral absorption coefficient showed that the pancreas tissues contain pigments, namely melanin and lipofuscin. Using a simple calculation, it was possible to retrieve similar contents of these pigments from the absorption spectrum of the pancreas, which indicates that they accumulate in the same proportion as a result of the aging process. Such pigment accumulation was camouflaging the real contents of DNA, hemoglobin, and water, which were precisely evaluated after subtracting the pigment absorption.
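A hedged sketch of the kind of "simple calculation" described here: express the measured absorption spectrum as a non-negative combination of chromophore basis spectra and solve for the contents by least squares. The basis shapes below (power-law pigment curves, a flat water stand-in) are illustrative placeholders; the actual chromophore spectra would be taken from the literature, and none of the numbers are the paper's results.

```python
import numpy as np
from scipy.optimize import nnls

wl = np.arange(200, 1001, 2, dtype=float)          # wavelength grid, nm

melanin    = (wl / 500.0) ** -3.46                 # common melanin power-law model
lipofuscin = (wl / 500.0) ** -2.0                  # placeholder pigment shape
water      = np.full_like(wl, 0.01)                # crude stand-in spectrum

A = np.column_stack([melanin, lipofuscin, water])  # basis matrix (n_wl x n_chrom)

# mu_a would be the measured absorption spectrum; here, a synthetic mixture.
mu_a = A @ np.array([0.6, 0.55, 1.0])

contents, residual = nnls(A, mu_a)                 # non-negative contents
print(dict(zip(["melanin", "lipofuscin", "water"], contents.round(3))))
```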
2022
Authors
Nogueira, AFR; Oliveira, HS; Machado, JJM; Tavares, JMRS;
Publication
SENSORS
Abstract
Many relevant sound events occur in urban scenarios, and robust classification models are required to identify abnormal and relevant events correctly. These models need to identify such events both effectively and promptly, and it is also essential to determine for how long these events prevail. This article presents an extensive analysis developed to identify the best-performing model to successfully classify a broad set of sound events occurring in urban scenarios. Analysis and modelling of Transformer models were performed using available public datasets with different sets of sound classes. The Transformer models' performance was compared to that achieved by the baseline model and by end-to-end convolutional models. Furthermore, the benefits of using pre-training from the image and sound domains and data augmentation techniques were identified. Additionally, complementary methods that have been used to improve the models' performance and good practices to obtain robust sound classification models were investigated. After an extensive evaluation, it was found that the most promising results were obtained by employing a Transformer model using the Adam optimizer with weight decay (AdamW) and transfer learning from the audio domain by reusing the weights from AudioSet, which led to accuracy scores of 89.8% for the UrbanSound8K dataset, 95.8% for the ESC-50 dataset, and 99% for the ESC-10 dataset.
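As a rough sketch of the recipe singled out here, the snippet below pairs a mel-spectrogram frontend with decoupled weight decay (AdamW) for one training step. The tiny classifier is a placeholder standing in for the Transformer, which in the paper is initialized with AudioSet-pretrained weights; sample rate, learning rate, and shapes are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torchaudio

mel = torchaudio.transforms.MelSpectrogram(sample_rate=22050, n_mels=64)

model = nn.Sequential(                     # placeholder for a pretrained Transformer
    nn.Flatten(),
    nn.LazyLinear(128),
    nn.ReLU(),
    nn.Linear(128, 10),                    # UrbanSound8K has 10 classes
)

optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4, weight_decay=1e-2)
loss_fn = nn.CrossEntropyLoss()

waveform = torch.randn(8, 22050 * 4)       # dummy batch: 8 clips of 4 s audio
labels = torch.randint(0, 10, (8,))

logits = model(mel(waveform))              # mel features -> class scores
loss = loss_fn(logits, labels)
loss.backward()
optimizer.step()                           # decoupled weight decay applied here
```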
2022
Authors
Nogueira, AFR; Oliveira, HS; Machado, JJM; Tavares, JMRS;
Publication
SENSORS
Abstract
Audio recognition can be used in smart cities for security, surveillance, manufacturing, autonomous vehicles, and noise mitigation, to name a few applications. However, urban sounds are everyday audio events with unstructured characteristics, mixing different kinds of noise with sounds unrelated to the event under study, which makes their classification a challenging problem. Therefore, the main objective of this literature review is to summarize the most recent works on this subject in order to understand the current approaches and identify their limitations. The reviewed articles show that Deep Learning (DL) architectures, attention mechanisms, data augmentation techniques, and pre-training are the most crucial factors to consider while creating an efficient sound classification model. The best results found were obtained by Mushtaq and Su, in 2020, using a DenseNet-161 with pre-trained weights from ImageNet and NA-1 and NA-2 as augmentation techniques, reaching accuracies of 97.98%, 98.52%, and 99.22% on the UrbanSound8K, ESC-50, and ESC-10 datasets, respectively. Nonetheless, the use of these models in real-world scenarios has not been properly addressed, so their effectiveness in such situations is still questionable.
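For illustration, the sketch below applies SpecAugment-style frequency and time masking, one common family of spectrogram augmentations in the sound classification works this review covers; the mask widths and input shape are arbitrary, and the NA-1/NA-2 schemes used by Mushtaq and Su are distinct techniques not reproduced here.

```python
import torch
import torchaudio

# Zero out random frequency bands and time spans of a (mel) spectrogram so the
# model cannot rely on any single band or frame; widths below are illustrative.
freq_mask = torchaudio.transforms.FrequencyMasking(freq_mask_param=12)
time_mask = torchaudio.transforms.TimeMasking(time_mask_param=24)

spec = torch.rand(1, 64, 400)              # dummy spectrogram (mels x frames)
augmented = time_mask(freq_mask(spec))     # masked copy used for training
```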