2024
Authors
Marin, E; Chin, JCY; Cetre, S; Wizinowich, P; Ragland, S; Wetherell, E; Surendran, A; Bouchez, A; Delorme, JR; Lilley, S; Lyke, J; Service, M; Tsubota, K; Correia, C; van Dam, M; Biasi, R; Pataunar, C; Pescoller, D; Glazebrook, K; Jameson, A; Gauvin, W; Rigaut, F; Gratadour, D; Bernard, J;
Publication
ADAPTIVE OPTICS SYSTEMS IX
Abstract
The Real Time Controllers (RTCs) for the W. M. Keck Observatory Adaptive Optics (AO) systems have been upgraded from a Field Programmable Gate Array (FPGA) to a Graphics Processing Unit (GPU) based solution. The previous RTCs, operating since 2007, had reached their limits after upgrades to support new hardware, including an Infra-Red (IR) Tip/Tilt (TT) Wave Front Sensor (WFS) on Keck I and a Pyramid WFS on Keck II. The new RTC, fabricated by a Microgate-led consortium with SUT leading the computation engine development, provides a flexible platform that improves processing bandwidth and allows for easier integration with new hardware and control algorithms. Along with the new GPU-based RTC, the upgrade includes a new hardware Interface Module (IM), new OCAM2K EMCCD cameras, and a new Telemetry Recording Server (TRS). The first system upgrade to take advantage of the new RTC is the Keck I All-sky Precision Adaptive Optics (KAPA) Laser Tomography AO (LTAO) system, which uses the larger and more sensitive OCAM2K EMCCD camera, tomographic reconstruction from four Laser Guide Stars (LGS), and improvements to the IR TT WFS. On Keck II, the new RTC will enable a new higher-order Deformable Mirror (DM) as part of the HAKA (High order Advanced Keck Adaptive optics) project, which will also use an EMCCD camera. In the future, the new RTC will open the possibility of new developments such as the proposed ʻIWA (Infrared Wavefront sensor Adaptive optics) system. The new RTC saw first light in 2021. The Keck I system was released for science observations in late 2023, and the Keck II system followed in early 2024.
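The core real-time computation in most AO controllers of this kind is a matrix-vector multiply that maps wavefront-sensor slopes to deformable-mirror commands inside an integrator loop. A minimal NumPy sketch of that step follows; the dimensions, gain, and reconstructor values are purely illustrative placeholders, not taken from the Keck system.

```python
import numpy as np

def reconstruct_dm_commands(slopes, reconstructor, gain=0.4, prev_commands=None):
    """One integrator step of a basic AO control loop:
    new commands = previous commands + gain * (R @ slopes).
    Shapes and the loop gain are illustrative placeholders."""
    if prev_commands is None:
        prev_commands = np.zeros(reconstructor.shape[0])
    return prev_commands + gain * (reconstructor @ slopes)

# Toy example: 8 slope measurements driving 4 actuators.
rng = np.random.default_rng(0)
R = rng.standard_normal((4, 8)) * 0.1   # hypothetical reconstructor matrix
s = rng.standard_normal(8)              # hypothetical WFS slope vector
cmd = reconstruct_dm_commands(s, R)
```

A GPU implementation replaces the `@` with a batched GEMV on device memory, which is where the bandwidth gain over the FPGA pipeline comes from.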
2024
Authors
Ramalho, FR; Moreno, T; Soares, AL; Almeida, AH; Oliveira, M;
Publication
FLEXIBLE AUTOMATION AND INTELLIGENT MANUFACTURING: ESTABLISHING BRIDGES FOR MORE SUSTAINABLE MANUFACTURING SYSTEMS, FAIM 2023, VOL 2
Abstract
European industrial value chains and manufacturing companies have recently faced critical challenges imposed by disruptive events related to the pandemic and associated social/political problems. Many European manufacturing industries have already recognized the importance of digitalization for increasing manufacturing systems' autonomy and, consequently, becoming more resilient and able to adapt to new contexts and environments. Augmented reality (AR) is one of the emerging technologies associated with the European Industry 5.0 initiative, increasing human-machine interaction, promoting resilience through better decision-making, and providing the flexibility to deal with variability and unexpected events. However, the application and benefits of AR in increasing manufacturing resilience are still poorly understood by academia and by European manufacturing companies. Thus, the purpose of this paper is to contribute to the state of the art by relating the application of AR to current industrial processes towards manufacturing systems' resilience. To this end, the concepts of industrial resilience and the augmented human worker are first presented. Then, through an exploratory study involving different manufacturing companies, a list of relevant disruptive events is compiled, together with specific ideas and functionalities showing how AR can be applied to address them. In conclusion, this research highlights the importance of AR in coping mainly with disruptive events related to Human Workforce Management and Market/Sales Management. The AR application ideas share a common thread: delivering information to the worker at the right time, in the right place, and in the right format, acting on the standardization and flexibility of work to support manufacturing resilience.
2024
Authors
Furlan, M; Almada Lobo, B; Santos, M; Morabito, R;
Publication
COMPUTERS & INDUSTRIAL ENGINEERING
Abstract
Vertical pulp and paper production is challenging from a process point of view. Managers must deal with floating bottlenecks, intermediate storage levels, and by-product production to control the whole process while reducing unexpected downtimes. Thus, this paper addresses the integrated lot sizing and scheduling problem considering continuous digester production, multiple paper machines, and a chemical recovery line to treat by-products. The aim is to minimize the total production cost of meeting customer demands, considering all productive resources and encouraging steam production (which can be used in power generation). Production planning should define the sizes of production lots, the sequence of paper types produced on each machine, and the digester working speed throughout the planning horizon. Furthermore, it should indicate the rate of by-product treatment at each stage of the recovery line and respect the minimum and maximum storage limits. Due to the difficulty of exactly solving the mixed-integer programming model representing this problem for real-world instances, mainly with planning horizons of over two weeks, constructive and improvement heuristics are proposed in this work. Different heuristic combinations are tested on hundreds of instances generated from data collected from the industry. Comparisons are made with a commercial mixed-integer linear programming (MILP) solver and a hybrid metaheuristic. The results show that combining the greedy constructive heuristic with the new variation of a fix-and-optimize improvement method delivers the best performance in both solution quality and computational time and effectively solves realistic-size problems in practice. The proposed method achieved 69.41% of the best solutions for the generated set and 55.40% and 64.00% for the literature set for 1 and 2 machines, respectively, compared with the best solution method from the literature and a commercial solver.
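To give a flavor of what a constructive lot-sizing heuristic does, here is a deliberately simple sketch: plan production lot-for-lot and, when a period's demand exceeds capacity, shift the surplus backward into earlier periods as inventory. This is an illustrative toy, not the paper's greedy heuristic, and the demand/capacity numbers are invented.

```python
def lot_for_lot(demand, capacity):
    """Constructive heuristic sketch: start from lot-for-lot production
    and shift capacity overflows into earlier periods (carried as inventory).
    Raises ValueError if total early capacity cannot absorb the overflow."""
    plan = list(demand)
    for t in range(len(plan) - 1, -1, -1):
        if plan[t] > capacity:
            excess = plan[t] - capacity
            plan[t] = capacity
            if t == 0:
                raise ValueError("infeasible: not enough cumulative capacity")
            plan[t - 1] += excess  # produce earlier, hold as inventory
    return plan

# Toy instance: three periods, per-period capacity of 100 units.
plan = lot_for_lot([30, 80, 120], capacity=100)
```

A fix-and-optimize improvement step would then freeze most binary setup decisions of such a starting plan and re-optimize a small window of periods with a MILP solver, sliding the window across the horizon.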
2024
Authors
Santos, R; Baeza, R; Filipe, VM; Renna, F; Paredes, H; Pedrosa, J;
Publication
2024 IEEE 22ND MEDITERRANEAN ELECTROTECHNICAL CONFERENCE, MELECON 2024
Abstract
Coronary artery calcium is a good indicator of coronary artery disease and can be used for cardiovascular risk stratification. Over the years, different deep learning approaches have been proposed to automatically segment coronary calcifications in computed tomography scans and measure their extent through calcium scores. However, most methodologies have focused on 2D architectures, which neglect much of the information present in those scans. In this work, we use a 3D convolutional neural network capable of leveraging the 3D nature of computed tomography scans and including more context in the segmentation process. In addition, the selected network is lightweight, meaning we can use 3D convolutions while keeping memory requirements low. Our results show that the predictions of the model, trained on the COCA dataset, are close to the ground truth for the majority of the patients in the test set, achieving a Dice score of 0.90 +/- 0.16 and a Cohen's linearly weighted kappa of 0.88 for Agatston score risk categorization. In conclusion, our approach shows promise in the tasks of segmenting coronary artery calcifications and predicting calcium scores, with the objectives of optimizing clinical workflow and performing cardiovascular risk stratification.
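The risk categorization mentioned at the end maps a continuous Agatston score onto discrete bins. A minimal sketch using the widely cited 0 / 1-10 / 11-100 / 101-400 / >400 binning follows; note this is the common clinical convention, and the exact bins used by this paper or the COCA dataset may differ.

```python
def agatston_risk_category(score):
    """Map an Agatston calcium score to a commonly used risk category.
    Thresholds follow the widespread 0 / 1-10 / 11-100 / 101-400 / >400
    convention; individual studies may use different cut-offs."""
    if score == 0:
        return "none"
    if score <= 10:
        return "minimal"
    if score <= 100:
        return "mild"
    if score <= 400:
        return "moderate"
    return "severe"
```

A weighted kappa of 0.88 is then computed between the categories derived from predicted and ground-truth scores, which penalizes disagreements more the further apart the two bins are.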
2024
Authors
Monteiro, M; Correia, FF; Queiroz, PGG; Ramos, R; Trigo, D; Gonçalves, G;
Publication
Proceedings of the 29th European Conference on Pattern Languages of Programs, People, and Practices, EuroPLoP 2024, Irsee, Germany, July 3-7, 2024
Abstract
Over the years, sensitive data has been growing in software systems. To comply with ethical and legal requirements, the General Data Protection Regulation (GDPR) recommends using pseudonymization and anonymization techniques to ensure appropriate protection and privacy of personal data. Many anonymization techniques have been described in the literature, such as generalization or suppression, but deciding which methods to use in different contexts is not a straightforward task. Furthermore, anonymization poses two major challenges: choosing adequate techniques for a given context and achieving an optimal level of privacy while maintaining the utility of the data for the context within which it is meant to be used. To address these challenges, this paper describes four new design patterns: Generalization, Hierarchical Generalization, Suppress Outliers, and Relocate Outliers, building on existing literature to offer solutions for common anonymization challenges, including avoiding linkage attacks and managing the privacy-utility trade-off.
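Two of the named patterns are easy to illustrate in a few lines. The sketch below shows generalization (replacing an exact age with a coarser interval) and a simplistic outlier-suppression filter that drops records whose quasi-identifier value occurs fewer than k times; both are illustrative toys, not the paper's pattern implementations, and the field names are invented.

```python
from collections import Counter

def generalize_age(age, bin_width=10):
    """Generalization pattern: replace an exact age with a coarser interval."""
    lo = (age // bin_width) * bin_width
    return f"{lo}-{lo + bin_width - 1}"

def suppress_outliers(records, key, k=2):
    """Suppress Outliers pattern (sketch): drop records whose value for
    `key` appears fewer than k times, a k-anonymity-style filter that
    removes re-identifiable rare values at the cost of data utility."""
    counts = Counter(r[key] for r in records)
    return [r for r in records if counts[r[key]] >= k]

# Toy usage: generalize ages, then drop the lone outlier bucket.
rows = [{"age": generalize_age(a)} for a in (31, 34, 87)]
kept = suppress_outliers(rows, "age", k=2)
```

The privacy-utility trade-off is visible even here: widening `bin_width` or raising `k` strengthens privacy but discards more information.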
2024
Authors
de Jesus, G; Nunes, S;
Publication
2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation, LREC-COLING 2024 - Main Conference Proceedings
Abstract
This paper proposes Labadain Crawler, a data collection pipeline tailored to automate and optimize the process of constructing textual corpora from the web, with a specific focus on low-resource languages. The system is built on top of Nutch, an open-source web crawler and data extraction framework, and incorporates language processing components such as a tokenizer and a language identification model. The pipeline's efficacy is demonstrated through successful testing with Tetun, one of Timor-Leste's official languages, resulting in the construction of a high-quality Tetun text corpus comprising 321.7k sentences extracted from over 22k web pages. The contributions of this paper include the development of a Tetun tokenizer, a Tetun language identification model, and a Tetun text corpus, marking an important milestone in Tetun text information retrieval.
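The filtering stage of such a pipeline can be sketched generically: after crawling, each page's text is kept only if it is long enough and a language-identification model accepts it as the target language. The sketch below uses a plain predicate in place of a trained classifier; the function name, threshold, and predicate are all assumptions for illustration, not Labadain Crawler's actual API.

```python
def filter_corpus(pages, is_target_language, min_chars=50):
    """Keep only pages a language-identification predicate accepts,
    mimicking the filtering stage of a corpus-building pipeline.
    `is_target_language` stands in for a trained language-ID model."""
    kept = []
    for text in pages:
        text = text.strip()
        if len(text) >= min_chars and is_target_language(text):
            kept.append(text)
    return kept

# Toy usage with a stand-in predicate instead of a real classifier.
pages = ["too short", "x" * 60, "y " * 40]
tetun_like = lambda t: t.startswith("x")  # hypothetical stand-in model
corpus = filter_corpus(pages, tetun_like)
```

In the real pipeline the kept pages would then be segmented into sentences by the Tetun tokenizer before being added to the corpus.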