Publications

2020

Predicting students' performance using survey data

Authors
Félix, C; Sobral, SR;

Publication
2020 IEEE Global Engineering Education Conference, EDUCON 2020, Porto, Portugal, April 27-30, 2020

Abstract

The acquisition of competences for the development of computer programs is one of the main challenges faced by computer science students. When students fail to develop the abilities needed (for example, abstraction), they drop out of the subjects and sometimes even the course. There is a need to study the causes of student success (or failure) in introductory curricular units to check for behaviours or characteristics that may be determinant, and thus try to prevent and change said causes. The students of one programming curricular unit were invited to answer four surveys. We use machine learning techniques to try to predict the students' grades based on the answers obtained in the surveys. The results obtained enable us to plan the semester accordingly, by anticipating how many students might need extra support. We hope to increase the students' motivation and, with this, their interest in the subject. In this way we aim to accomplish our ultimate goal: reducing dropout and increasing the overall average student performance.
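The abstract above describes predicting grades from survey answers. As an illustrative sketch only (the paper does not specify its feature encoding or model; the data and the 1-nearest-neighbour classifier here are assumptions), the core idea can be shown like this:

```python
# Minimal sketch (hypothetical data): predict a student's outcome from
# encoded survey answers using a 1-nearest-neighbour classifier.
# Feature encoding and model choice are illustrative assumptions,
# not the authors' actual setup.

def squared_distance(a, b):
    """Squared Euclidean distance between two numeric answer vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def predict_outcome(answers, training_set):
    """Return the outcome label of the most similar past student."""
    _, label = min(training_set, key=lambda row: squared_distance(answers, row[0]))
    return label

# Past students: (encoded survey answers, final outcome).
history = [
    ([5, 4, 5, 3], "pass"),
    ([2, 1, 2, 2], "fail"),
    ([4, 5, 4, 4], "pass"),
    ([1, 2, 1, 3], "fail"),
]

print(predict_outcome([4, 4, 5, 3], history))  # nearest profile is a "pass"
```

A real setup would use a proper learner and cross-validation, but the mapping from survey responses to a predicted outcome is the same.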

2018

Using metalearning for parameter tuning in neural networks

Authors
Felix, C; Soares, C; Jorge, A; Ferreira, H;

Publication
Lecture Notes in Computational Vision and Biomechanics

Abstract
Neural networks have been applied as a machine learning tool in many different areas. Recently, they have gained increased attention with what is now called deep learning. Neural network algorithms have several parameters that need to be tuned in order to maximize performance. Defining these parameters can be a difficult, extensive and time-consuming task, even for expert users. One approach that has been successfully used for algorithm and parameter selection is metalearning. Metalearning consists of applying machine learning algorithms to (meta)data from machine learning experiments in order to map the characteristics of the data to the performance of the algorithms. In this paper we study how a metalearning approach can be used to obtain a good set of parameters for learning a neural network on a given new dataset. Our results indicate that with metalearning we can successfully learn, from past learning tasks, classifiers that are able to define appropriate parameters. © 2018, Springer International Publishing AG.
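The metalearning loop the abstract describes can be sketched as follows. This is a toy illustration under stated assumptions: the meta-features, the metabase entries and the parameter values are invented, and the recommendation rule (nearest past dataset) is only one simple instance of the approach:

```python
# Minimal sketch of metalearning for parameter recommendation:
# characterize a new dataset with simple meta-features, then recommend
# the neural-network parameters that worked best on the most similar
# previously seen dataset. All values here are illustrative.

def meta_features(dataset):
    """Very simple dataset characterization: number of rows and attributes."""
    return (len(dataset), len(dataset[0]))

def recommend_params(new_dataset, metabase):
    """Pick the parameters stored for the most similar past dataset."""
    target = meta_features(new_dataset)
    def distance(entry):
        feats, _ = entry
        return sum((a - b) ** 2 for a, b in zip(feats, target))
    _, params = min(metabase, key=distance)
    return params

# Metabase: (meta-features of a past dataset, best parameters found for it).
metabase = [
    ((100, 4), {"hidden_units": 8, "learning_rate": 0.1}),
    ((10000, 50), {"hidden_units": 64, "learning_rate": 0.01}),
]

small = [[0.0] * 4 for _ in range(120)]  # small, low-dimensional dataset
print(recommend_params(small, metabase))
```

In practice the meta-features are richer (statistical and information-theoretic measures) and the meta-model is itself a learned classifier or regressor, but the mapping from dataset characteristics to recommended parameters is the core idea.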

2016

Can Metalearning Be Applied to Transfer on Heterogeneous Datasets?

Authors
Felix, C; Soares, C; Jorge, A;

Publication
Hybrid Artificial Intelligent Systems

Abstract
Machine learning processes consist of collecting data, obtaining a model and applying it to a given task. Given a new task, the standard approach is to restart the learning process and obtain a new model. However, previous learning experience can be exploited to assist the new learning process. The two most studied approaches for this are metalearning and transfer learning. Metalearning can be used for selecting the predictive model to use on a new dataset. Transfer learning allows the reuse of knowledge from previous tasks. However, when multiple heterogeneous tasks are available as potential sources for transfer, the question is which one to use. One approach to addressing this problem is metalearning. In this paper we investigate the feasibility of this approach. We propose a method to transfer weights from a trained source neural network to initialize a network that models a potentially very different target dataset. Our experiments with 14 datasets indicate that this method enables faster convergence without a significant difference in accuracy, provided that the source task is adequately chosen. This means that there is potential for applying metalearning to support transfer between heterogeneous datasets.
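The weight-transfer step mentioned above can be sketched in miniature. This is an assumption-laden illustration, not the paper's method: it simply copies source-layer weights into a target layer of a different shape, falling back to a fresh random initialization where the source has no corresponding weight:

```python
# Minimal sketch of weight transfer between heterogeneous networks:
# initialize a target layer from a trained source layer, copying weights
# where positions overlap and randomly initializing the rest.
# The shapes and values here are illustrative assumptions.

import random

def transfer_weights(source, target_shape):
    """Build a target weight matrix seeded from a source weight matrix."""
    rows, cols = target_shape
    return [
        [
            source[i][j] if i < len(source) and j < len(source[0])
            else random.uniform(-0.1, 0.1)  # fresh init: no source weight here
            for j in range(cols)
        ]
        for i in range(rows)
    ]

source = [[0.5, -0.3], [0.2, 0.7]]          # trained 2x2 source layer
target = transfer_weights(source, (3, 2))   # larger 3x2 target layer
print(target[0][0])  # 0.5, copied from the source network
```

The interesting question the paper studies sits above this step: using metalearning to decide which source network's weights are worth transferring in the first place.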

2015

Metalearning for multiple-domain transfer learning

Authors
Félix, C; Soares, C; Jorge, A;

Publication
CEUR Workshop Proceedings

Abstract
Machine learning processes consist of collecting data, obtaining a model and applying it to a given task. Given a new task, the standard approach is to restart the learning process and obtain a new model. However, previous learning experience can be exploited to assist the new learning process. The two most studied approaches for this are metalearning and transfer learning. Metalearning can be used for selecting the predictive model to use on a given dataset. Transfer learning allows the reuse of knowledge from previous tasks. Our aim is to use metalearning to support transfer learning and reduce the computational cost without loss of performance, as well as the user effort needed for algorithm selection. In this paper we propose methods for mapping the transfer of weights between neural networks in order to improve the performance of the target network, and describe experiments performed to test our hypothesis.