2018
Authors
Migueis, VL; Freitas, A; Garcia, PJV; Silva, A;
Publication
DECISION SUPPORT SYSTEMS
Abstract
The early classification of university students according to their potential academic performance can be a useful strategy to mitigate failure, to promote the achievement of better results and to better manage resources in higher education institutions. This paper proposes a two-stage model, supported by data mining techniques, that uses the information available at the end of the first year of students' academic path to predict their overall academic performance. Unlike most literature on educational data mining, academic success is inferred from both the average grade achieved and the time taken to conclude the degree. Furthermore, this study proposes to segment students based on the dichotomy between the evidence of failure or high performance at the beginning of the degree program and the performance levels predicted by the model. A data set of 2459 students, spanning the years 2003 to 2015, from a European engineering school of a public research university, is used to validate the proposed methodology. The empirical results demonstrate the ability of the proposed model to predict the students' performance level with an accuracy above 95% at an early stage of the students' academic path. It is found that random forests are superior to the other classification techniques considered (decision trees, support vector machines, naive Bayes, bagged trees and boosted trees). Together with the prediction model, the suggested segmentation framework represents a useful tool for delineating the optimal strategies to apply in order to promote higher performance levels and mitigate academic failure, overall increasing the quality of the academic experience provided by a higher education institution.
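The ensemble idea behind the best-performing classifier above (bootstrap aggregation with randomized splits, majority-voted) can be sketched with a toy model. Everything below — the features, thresholds, labels, and data — is invented for illustration and is not the paper's data set or implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for first-year records: average grade and credits
# completed, with a binary "high performance" label (all assumptions).
n = 600
grade = rng.uniform(10, 20, n)        # first-year average grade (0-20 scale)
credits = rng.uniform(0, 60, n)       # credits completed in year one
label = ((grade + credits / 6 + rng.normal(0, 1.5, n)) > 22).astype(int)
X = np.column_stack([grade, credits])

def stump_fit(X, y, feat):
    """Best single-feature threshold split (a one-level decision tree)."""
    best_t, best_err = 0.5, np.inf
    for t in np.quantile(X[:, feat], np.linspace(0.05, 0.95, 19)):
        err = np.mean((X[:, feat] > t).astype(int) != y)
        if err < best_err:
            best_t, best_err = t, err
    return best_t

def forest_predict(X_tr, y_tr, X_te, n_trees=25):
    """Bagged stumps with a random feature per tree: a toy random forest."""
    votes = np.zeros(len(X_te))
    for _ in range(n_trees):
        idx = rng.integers(0, len(X_tr), len(X_tr))   # bootstrap sample
        feat = rng.integers(0, X_tr.shape[1])         # random feature choice
        t = stump_fit(X_tr[idx], y_tr[idx], feat)
        votes += (X_te[:, feat] > t)
    return (votes > n_trees / 2).astype(int)          # majority vote

pred = forest_predict(X[:400], label[:400], X[400:])
acc = np.mean(pred == label[400:])
```

Real random forests grow full decision trees rather than stumps, but the bootstrap-plus-vote structure is the same.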
2018
Authors
Rodrigues, JC; Freitas, A; Garcia, P; Maia, C; Pierre Favre, M;
Publication
2018 3RD INTERNATIONAL CONFERENCE OF THE PORTUGUESE SOCIETY FOR ENGINEERING EDUCATION (CISPEE)
Abstract
Doctoral programmes are facing several challenges in modern societies. The societal role of the University, funded by the state, requires it to: a) increase the offer and admission of third-cycle students; b) meet industry/company expectations; c) ensure reasonable employability prospects for PhD candidates. With the current demography, most candidates can only find a job in industry/companies. Therefore, significant pressure is being put on doctoral programmes to include transferable skills in their curricula. This paper presents a course, "Fit for Industry?", aimed at filling this need. The course design methodology is presented in detail. It includes: a) the involvement of industry since its inception; b) the joint identification of a small number of key competencies to be addressed; c) the inclusion of assessment and feedback mechanisms in its design; d) an immersive and international dimension. It was found that the course had a profound impact on the candidates' perceptions of industry and was valued by industry participants. Other stakeholders, such as PhD supervisors, also had a positive perception. The paper concludes with recommendations for those willing to replicate the course locally.
2019
Authors
Andrade, PP; Garcia, PJV; Correia, CM; Kolb, J; Carvalho, MI;
Publication
MONTHLY NOTICES OF THE ROYAL ASTRONOMICAL SOCIETY
Abstract
The estimation of atmospheric turbulence parameters is of relevance for the following: (a) site evaluation and characterization; (b) prediction of the point spread function; (c) live assessment of error budgets and optimization of adaptive optics performance; (d) optimization of fringe trackers for long baseline optical interferometry. The ubiquitous deployment of Shack-Hartmann wavefront sensors in large telescopes makes them central for atmospheric turbulence parameter estimation via adaptive optics telemetry. Several methods for the estimation of the Fried parameter and outer scale have been developed, most of which are based on the fitting of Zernike polynomial coefficient variances reconstructed from the telemetry. The non-orthogonality of Zernike polynomial derivatives introduces modal cross coupling, which affects the variances. Furthermore, the finite resolution of the sensor introduces aliasing. In this article the impact of these effects on atmospheric turbulence parameter estimation is addressed with simulations. It is found that cross-coupling is the dominant bias. An iterative algorithm to overcome it is presented. Simulations are conducted for typical ranges of the outer scale (4-32 m), Fried parameter (10 cm) and noise in the variances (signal-to-noise ratio of 10 and above). It is found that, using the algorithm, both parameters are recovered with sub-per cent accuracy.
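The fitting step described above can be illustrated with the standard Kolmogorov scaling, in which each Zernike coefficient variance is proportional to (D/r0)^(5/3). The per-mode coefficients, diameter, and noise level below are illustrative stand-ins, not the article's simulation setup; the estimator relies only on the scaling law.

```python
import numpy as np

rng = np.random.default_rng(1)

# For Kolmogorov turbulence: var_j = c_j * (D / r0)**(5/3).
# The c_j here are made-up stand-ins for the true Noll coefficients.
D = 8.0                         # telescope diameter [m] (assumed)
r0_true = 0.10                  # Fried parameter [m]
c = np.array([0.023, 0.023, 0.0062, 0.0062, 0.0062])  # illustrative only

var_true = c * (D / r0_true) ** (5 / 3)
var_meas = var_true * (1 + rng.normal(0, 0.1, c.size))  # ~10% noise on variances

# Least-squares fit of the single scale factor s = (D/r0)**(5/3),
# then invert for the Fried parameter.
s_hat = np.dot(c, var_meas) / np.dot(c, c)
r0_hat = D / s_hat ** (3 / 5)
```

Estimating the outer scale as well requires replacing the pure power law with the von Kármán variances and fitting two parameters, which is where the cross-coupling and aliasing biases discussed above enter.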
2024
Authors
Ribeiro, FSF; Garcia, PJV; Silva, M; Cardoso, JS;
Publication
IEEE ACCESS
Abstract
Point source detection algorithms play a pivotal role across diverse applications, influencing fields such as astronomy, biomedical imaging, environmental monitoring, and beyond. This article reviews the algorithms used for space imaging applications from ground and space telescopes. The main difficulties in detection arise from the incomplete knowledge of the impulse function of the imaging system, which depends on the aperture, atmospheric turbulence (for ground-based telescopes), and other factors, some of which are time-dependent. Incomplete knowledge of the impulse function decreases the effectiveness of the algorithms. In recent years, deep learning techniques have been employed to mitigate this problem and have the potential to outperform more traditional approaches. The success of deep learning techniques in object detection has been observed in many fields, and recent developments can further improve the accuracy. However, deep learning methods are still in the early stages of adoption and are used less frequently than traditional approaches. In this review, we discuss the main challenges of point source detection, as well as the latest developments, covering both traditional and current deep learning methods. In addition, we present a comparison between the two approaches to better demonstrate the advantages of each methodology.
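A minimal sketch of the traditional approach discussed above, assuming the impulse (point spread) function is perfectly known: matched filtering by cross-correlation with the PSF, followed by a sigma threshold. All names and parameters are illustrative; in the applications reviewed, the PSF is only partially known, which is exactly what degrades this method.

```python
import numpy as np

rng = np.random.default_rng(2)

def gaussian_psf(size=9, sigma=1.5):
    """Toy Gaussian PSF, normalized to unit sum (an assumption)."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    psf = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
    return psf / psf.sum()

def detect(image, psf, nsigma=6.0):
    """FFT-based cross-correlation with the PSF, then a sigma threshold."""
    pad = np.zeros_like(image)
    k = psf.shape[0]
    pad[:k, :k] = psf
    pad = np.roll(pad, (-(k // 2), -(k // 2)), axis=(0, 1))  # centre at origin
    score = np.real(np.fft.ifft2(np.fft.fft2(image) * np.conj(np.fft.fft2(pad))))
    thresh = score.mean() + nsigma * score.std()
    return np.argwhere(score > thresh)          # (row, col) of detections

img = rng.normal(0.0, 1.0, (64, 64))            # background noise
psf = gaussian_psf()
img[26:35, 36:45] += 50 * psf                   # one injected source at (30, 40)
hits = detect(img, psf)
```

The matched filter maximizes the signal-to-noise ratio only when the correlation kernel equals the true PSF; deep learning detectors aim to stay robust when that assumption fails.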
2018
Authors
Anugu, N; Garcia, PJV; Correia, CM;
Publication
MONTHLY NOTICES OF THE ROYAL ASTRONOMICAL SOCIETY
Abstract
Shack-Hartmann wavefront sensing relies on accurate spot centre measurement. Several algorithms were developed with this aim, mostly focused on precision, i.e. minimizing random errors. In the solar and extended-scene community, the importance of the accuracy (bias error due to peak-locking, quantization, or sampling) of the centroid determination was identified and solutions were proposed. But these solutions only allow partial bias corrections. To date, no systematic study of the bias error has been conducted. This article bridges the gap by quantifying the bias error for different correlation peak-finding algorithms and types of sub-aperture images, and by proposing a practical solution to minimize its effects. Four classes of sub-aperture images (point source, elongated laser guide star, crowded field, and solar extended scene) together with five types of peak-finding algorithms (1D parabola, centre of gravity, Gaussian, 2D quadratic polynomial, and pyramid) are considered, in a variety of signal-to-noise conditions. The best-performing peak-finding algorithm depends on the sub-aperture image type, but none is satisfactory with respect to both bias and random errors. A practical solution is proposed that relies on the antisymmetric response of the bias to the sub-pixel position of the true centre. The solution decreases the bias by a factor of ~7, to values of ≲0.02 pix. The computational cost is typically twice that of current cross-correlation algorithms.
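The simplest of the peak-finding algorithms compared above, the centre of gravity, can be sketched on a synthetic sub-aperture spot. The window size and Gaussian spot model are assumptions for illustration, not the article's simulation parameters.

```python
import numpy as np

def centre_of_gravity(img):
    """Intensity-weighted mean position (y, x) in pixel coordinates."""
    total = img.sum()
    yy, xx = np.mgrid[:img.shape[0], :img.shape[1]]
    return (yy * img).sum() / total, (xx * img).sum() / total

def gaussian_spot(size, yc, xc, sigma=1.5):
    """Noise-free Gaussian spot centred at a sub-pixel position."""
    yy, xx = np.mgrid[:size, :size]
    return np.exp(-((yy - yc) ** 2 + (xx - xc) ** 2) / (2 * sigma ** 2))

# Spot at a sub-pixel position inside a 16x16 sub-aperture window.
spot = gaussian_spot(16, 7.3, 8.6)
yc, xc = centre_of_gravity(spot)
```

With noise, thresholding, or window truncation, the recovered centre acquires exactly the sub-pixel-dependent bias (peak-locking) that the article quantifies; on this clean, well-sampled spot the estimate is essentially unbiased.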
2020
Authors
Morris, T; Osborn, J; Reyes, M; Montilla, I; Rousset, G; Gendron, E; Fusco, T; Neichel, B; Esposito, S; Garcia, PJV; Kulcsar, C; Correia, C; Beuzit, JL; Bharmal, NA; Bardou, L; Staykov, L; Bonaccini Calia, D;
Publication
Proceedings of SPIE - The International Society for Optical Engineering
Abstract
On-sky testing of new instrumentation concepts is required before they can be incorporated within facility-class instrumentation with certainty that they will work as expected within a real telescope environment. Increasingly, many of these concepts are not designed to work in seeing-limited conditions and require an upstream adaptive optics system for testing. Access to on-sky AO systems for such tests is currently limited to a few research groups and observatories worldwide, leaving many concepts unable to be tested. A pilot program funded through the H2020 OPTICON program, offering up to 15 nights of on-sky time at the CANARY adaptive optics demonstrator, is currently running, but it ends in 2021. Pre-run and on-sky support is provided to visitor experiments by the CANARY team. We have supported 6 experiments over this period and plan one more run in early 2021. We have recently been awarded funding through the H2020 OPTICON-RADIO PILOT call to continue and extend this program until 2024, offering access to CANARY at the 4.2 m William Herschel Telescope and 3 additional instruments and telescopes suitable for instrumentation development. Time on these facilities will be open to researchers from across the European research community and will be awarded by answering a call for proposals assessed by an independent panel of instrumentation experts. Unlike standard observing proposals, we plan to award time up to 2 years in advance to allow time for the visitor instrument to be delivered. We hope to announce the first call in mid-2021. Here we describe the facilities offered and the support available for on-sky testing, and detail the eligibility and application process. © 2020 SPIE.