Details

  • Name

    Salisu Mamman Abdulrahman
  • Cluster

    Computer Science
  • Role

    External Research Collaborator
  • Since

    1 March 2014
Publications

2018

Speeding up algorithm selection using average ranking and active testing by introducing runtime

Authors
Abdulrahman, SM; Brazdil, P; van Rijn, JN; Vanschoren, J;

Publication
Machine Learning

2018

Impact of Feature Selection on Average Ranking Method via Metalearning

Authors
Abdulrahman, SM; Cachada, MV; Brazdil, P;

Publication
VIPIMAGE 2017

Abstract
Selecting appropriate classification algorithms for a given dataset is important in practice but also challenging. In order to maximize performance, users of machine learning algorithms need methods that can help them identify the most relevant features in datasets, select algorithms and determine their appropriate hyperparameter settings. In this paper, a method of recommending classification algorithms is proposed. It is oriented towards the average ranking method, which combines algorithm rankings observed on prior datasets to identify the best algorithms for a new dataset. Our method uses a special case of data mining workflow in which algorithm selection is preceded by a feature selection method (CFS).
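The average ranking method the abstract refers to can be sketched in a few lines: rank the candidate algorithms by performance on each prior dataset, then average those ranks to obtain a single recommendation order for a new dataset. The function and data below are illustrative, not code from the paper.

```python
from collections import defaultdict

def average_ranking(perf):
    """perf: dict mapping dataset -> {algorithm: accuracy}.

    Returns algorithms ordered by average rank (best first).
    """
    rank_sums = defaultdict(float)
    counts = defaultdict(int)
    for scores in perf.values():
        # rank 1 = best accuracy on this dataset
        ordered = sorted(scores, key=scores.get, reverse=True)
        for rank, algo in enumerate(ordered, start=1):
            rank_sums[algo] += rank
            counts[algo] += 1
    avg = {a: rank_sums[a] / counts[a] for a in rank_sums}
    return sorted(avg, key=avg.get)  # lowest average rank first

# Hypothetical meta-data from two prior datasets:
perf = {
    "d1": {"rf": 0.90, "svm": 0.85, "knn": 0.80},
    "d2": {"rf": 0.78, "svm": 0.75, "knn": 0.65},
}
print(average_ranking(perf))  # → ['rf', 'svm', 'knn']
```

On a new dataset, the top-ranked algorithms from this ordering would be tried first; the papers above extend this baseline with feature selection and runtime-aware metrics.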

2017

Combining Feature and Algorithm Hyperparameter Selection using some Metalearning Methods

Authors
Cachada, M; Abdulrahman, SM; Brazdil, P;

Publication
Proceedings of the International Workshop on Automatic Selection, Configuration and Composition of Machine Learning Algorithms co-located with the European Conference on Machine Learning & Principles and Practice of Knowledge Discovery in Databases, AutoML@PKDD/ECML 2017, Skopje, Macedonia, September 22, 2017.

Abstract
Machine learning users need methods that can help them identify algorithms or even workflows (combinations of algorithms with preprocessing tasks, possibly with hyperparameter configurations that differ from the defaults) that achieve the best possible performance. Our study was oriented towards average ranking (AR), an algorithm selection method that exploits meta-data obtained on prior datasets. We focused on extending the use of a variant of AR* that takes A3R as the relevant metric (combining accuracy and run time). The extension is made at the level of diversity of the portfolio of workflows that is made available to AR. Our aim was to establish whether feature selection and different hyperparameter configurations improve the process of identifying a good solution. To evaluate our proposal we carried out extensive experiments in a leave-one-out mode. The results show that AR* was able to select workflows that are likely to lead to good results, especially when the portfolio is diverse. We additionally performed a comparison of AR* with Auto-WEKA, running with different time budgets. Our proposed method shows some advantage over Auto-WEKA, particularly when the time budgets are small.
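The A3R measure mentioned in the abstract trades off accuracy (success rate) against run time relative to a reference algorithm, with an exponent P controlling how much run time matters. The sketch below follows the published definition as I understand it; the function name, arguments, and default P are illustrative.

```python
def a3r(sr_j, sr_ref, t_j, t_ref, p=1/64):
    """A3R of algorithm j relative to a reference algorithm on one dataset.

    sr_j, sr_ref: success rates (e.g. accuracies) of j and the reference.
    t_j, t_ref:   run times of j and the reference.
    p:            exponent damping the influence of the runtime ratio.
    """
    return (sr_j / sr_ref) / (t_j / t_ref) ** p

# With equal run times, A3R reduces to the ratio of success rates:
print(a3r(0.9, 0.8, 10.0, 10.0, p=1.0))  # → 1.125
```

A small P (such as 1/64) makes a large runtime penalty count only mildly, which is what lets AR* favour fast algorithms without sacrificing much accuracy.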

2016

Effect of Incomplete Meta-dataset on Average Ranking Method

Authors
Abdulrahman, SM; Brazdil, P;

Publication
CoRR

2016

Effect of Incomplete Meta-dataset on Average Ranking Method

Authors
Abdulrahman, SM; Brazdil, P;

Publication
Proceedings of the 2016 Workshop on Automatic Machine Learning, AutoML 2016, co-located with 33rd International Conference on Machine Learning (ICML 2016), New York City, NY, USA, June 24, 2016
