Publications by HumanISE

2014

Dynamic cluster scheduling for cluster-tree WSNs

Authors
Severino, R; Pereira, N; Tovar, E;

Publication
SPRINGERPLUS

Abstract
While cluster-tree network topologies look promising for WSN applications with timeliness and energy-efficiency requirements, we have yet to witness their adoption in commercial and academic solutions. One of the arguments that hinders the use of these topologies is their lack of flexibility in adapting to changes in the network, such as in traffic flows. This paper presents a solution that enables these networks to self-adapt their clusters' duty cycle and scheduling, providing increased quality of service to multiple traffic flows. Importantly, our approach enables a network to change its cluster scheduling without requiring long inaccessibility times or the re-association of nodes. We show how to apply our methodology to IEEE 802.15.4/ZigBee cluster-tree WSNs without significant changes to the protocol. Finally, we analyze and demonstrate the validity of our methodology through comprehensive simulation and experimental validation, using commercially available technology in a Structural Health Monitoring application scenario.

2014

Social and solidarity economy web information systems: State of the art and an interoperability framework

Authors
Malta, MC; Baptista, AA; Parente, C;

Publication
Journal of Electronic Commerce in Organizations

Abstract
This paper presents the state of the art of interoperability developments for the social and solidarity economy (SSE) community's web-based information systems (WIS); it also presents an interoperability framework for the SSE WIS and the developments made in a research-in-progress PhD project over the last three years. A search of the bibliographic databases showed that there are so far no papers on interoperability initiatives in the SSE, so it was necessary to draw on other sources of information: a preliminary analysis of the WIS that support SSE activities, and interviews with representatives of some of the world's most important SSE organisations. The study showed that the WIS are not yet interoperable. In order to become interoperable, a group within the SSE community has been developing a Dublin Core Application Profile to be used by the SSE community as a reference and binding to describe their resources. This paper also describes this ongoing process. Copyright © 2014, IGI Global.

2014

A panoramic view on metadata application profiles of the last decade

Authors
Malta, MC; Baptista, AA;

Publication
International Journal of Metadata, Semantics and Ontologies

Abstract
This paper describes a study developed with the goal of understanding the panorama of metadata Application Profiles (AP): (i) what AP have been developed so far; (ii) what type of institutions have developed these AP; (iii) what are the application domains of these AP; (iv) what are the Metadata Schemes (MS) used by these AP; (v) what application domains have been producing MS; (vi) what are the Syntax Encoding Schemes (SES) and the Vocabulary Encoding Schemes (VES) used by these AP; and finally (vii) whether these AP have followed the Singapore Framework (SF). We found that: (i) there are 74 AP; (ii) the AP are mostly developed by the scientific community; (iii) the 'Learning Objects' domain is the most intensive producer; (iv) Dublin Core metadata vocabularies are the most used, being used in all application domains, while IEEE LOM is the second most used but only inside the 'Learning Objects' application domain; (v) the most intensive producer of MS is the 'Libraries and Repositories' domain; (vi) 13 distinct SES and 90 distinct VES were used; and (vii) five of the 74 AP found follow the SF. Copyright © 2014 Inderscience Enterprises Ltd.

2014

Contributo metodológico para o desenvolvimento de perfis de aplicação no contexto da Web Semântica

Authors
Malta, Mariana Curado;

Publication

Abstract
The Semantic Web (SW) is a Web paradigm that emerged with the aim of linking data, enabling content sharing beyond the borders of Web applications and Web sites. In this context, a metadata application profile (AP) is a generic construct for designing metadata records that satisfy the specific needs of applications, enabling semantic interoperability with other applications. This metadata record design is based on vocabularies and models defined globally by the metadata community. The Dublin Core Metadata Initiative (DCMI), probably the best-known and most important global initiative with regard to metadata, defined an abstract model (the Dublin Core Abstract Model), one of whose constructs is the Dublin Core Application Profile (DCAP). DCMI states that the use of a DCAP is essential to implement interoperability in the context of the SW. The conception, development and implementation of models are complex processes that need methodological support; a DCAP is no exception. Therefore, in order to understand the metadata community's practices in the development of AP, we studied the state of the art of methods for the development of AP. This study revealed that, to date, there is no method for the development of an AP or DCAP. The goal of this PhD project was to provide a first contribution towards such a method. The design of the Method for the development of DCAP (Me4DCAP) was supported by: (i) the first stages (up to data modelling) of software development methods; (ii) the results of interviews conducted with DCAP developers; and (iii) the practices identified in the above-mentioned state of the art of methods for AP development. Me4DCAP is based on the Singapore Framework for DCAP and takes as its starting point the Rational Unified Process, one of the best-known and most widely used software development processes. We used the Design Science Research (DSR) methodology, within the "three cycles" Information Systems framework defined by Hevner (2007). This framework allows the use of a real situation, which Hevner (2007) calls the "experimental situation", for the execution of construction-evaluation cycles, in which the artifact under development (in our case, Me4DCAP) is evaluated in the experimental situation and, with the feedback from this evaluation, redefined in an iterative process. The experimental situation we used was the development of a DCAP for the Web-based information systems of the world Social and Solidarity Economy (SSE) community (DCAP-SSE). The DCAP-SSE development was undertaken jointly with a group from within the world SSE community. This work, identified as an opportunity, resulted in DCAP-SSE V1.0. Me4DCAP was validated through a discussion group integrated in an international metadata conference, and through a Focus Group with seven world metadata experts. The results of this PhD are Me4DCAP V1.0 and DCAP-SSE V1.0.
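To make the notion of a metadata application profile more concrete, the following Python sketch builds a minimal Dublin Core record with the rdflib library; the resource URI, the property selection, and the values are purely illustrative assumptions and do not reproduce the actual DCAP-SSE profile.

```python
# Minimal sketch of a Dublin Core metadata record, using rdflib.
# The resource URI, properties, and values are illustrative only;
# they do not reproduce the actual DCAP-SSE profile.
from rdflib import Graph, Literal, URIRef
from rdflib.namespace import DCTERMS

g = Graph()
resource = URIRef("http://example.org/sse/initiative/42")  # hypothetical resource

# An application profile typically fixes which properties a record uses
# and which vocabularies supply their values; here we use Dublin Core terms.
g.add((resource, DCTERMS.title, Literal("Community seed bank", lang="en")))
g.add((resource, DCTERMS.creator, Literal("Example SSE organisation")))
g.add((resource, DCTERMS.subject, Literal("solidarity economy")))
g.add((resource, DCTERMS.issued, Literal("2014")))

print(g.serialize(format="turtle"))
```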

2014

Resampling Approaches to Improve News Importance Prediction

Authors
Moniz, N; Torgo, L; Rodrigues, F;

Publication
ADVANCES IN INTELLIGENT DATA ANALYSIS XIII

Abstract
The methods used by recommender systems to produce news rankings are not public, and it is unclear whether they reflect the real importance assigned by readers. We address the task of forecasting the number of times a news item will be tweeted, as a proxy for the importance assigned by its readers. We focus on methods for accurately forecasting which news will have a high number of tweets, as these are the key to accurate recommendations. This type of news is rare, which creates difficulties for standard prediction methods. Recent research has shown that most models will fail on tasks where the goal is accuracy on a small subset of rare values of the target variable. To overcome this, resampling approaches with several methods for handling imbalanced regression tasks were tested in our domain. This paper describes and discusses the results of these experimental comparisons.
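As a rough illustration of the general idea behind resampling for imbalanced regression (not the specific strategies evaluated in the paper), the Python sketch below over-samples training cases whose target, here a tweet count, falls in a rare high-valued region before fitting a standard regressor; the data, the rarity threshold, and the over-sampling factor are invented for the example.

```python
# Sketch: over-sample rare, high-valued targets so a standard regressor
# pays more attention to them. Threshold and factor are assumptions.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def oversample_rare(X, y, threshold, factor=5, seed=0):
    """Replicate examples whose target exceeds `threshold` `factor` times in total."""
    rng = np.random.default_rng(seed)
    rare = np.where(y > threshold)[0]
    if len(rare) == 0:
        return X, y
    extra = rng.choice(rare, size=len(rare) * (factor - 1), replace=True)
    idx = np.concatenate([np.arange(len(y)), extra])
    return X[idx], y[idx]

# Hypothetical data: features of news items and the number of times each was tweeted.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 10))
y = rng.exponential(scale=5.0, size=1000)            # most items get few tweets
y[rng.choice(1000, size=30, replace=False)] += 200   # a few items are highly tweeted

X_res, y_res = oversample_rare(X, y, threshold=100)
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X_res, y_res)
```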

2014

A system for formative assessment and monitoring of students' progress

Authors
Rodrigues, F; Oliveira, P;

Publication
COMPUTERS & EDUCATION

Abstract
Assessment plays a central role in any educational process as a way of evaluating students' knowledge of the concepts associated with learning objectives. The assessment of free-text answers is a process that, besides being very costly in terms of the time spent by teachers, may lead to inequities due to the difficulty of applying the same evaluation criteria to all answers. This paper describes a system composed of several modules whose main goal is to work as a formative assessment tool for students and to help teachers create and assess exams, as well as monitor students' progress. The system automatically creates training exams for students to practice, based on questions from previous exams, and assists teachers in the creation of evaluation exams with various kinds of information about students' performance. The system automatically assesses training exams, giving feedback to students. The correction of free-text answers is based on the syntactic and semantic similarity between the student's answers and various reference answers, thus going beyond simple lexical matching. For this, several pre-processing tasks are performed in order to reduce each answer to a more manageable canonical form. Besides the syntactic and semantic similarity between answers, the way the teacher evaluates the answers is also acquired. To accomplish that, the assessment is done using sub-scores defined by the teacher for parts of the answer or its subgoals. The system has been trained and tested on exams manually graded by History teachers, and there is a good correlation between the evaluation of the instructors and the evaluation performed by our system.
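As a simplified illustration of grading by similarity to reference answers, the following Python sketch uses plain TF-IDF cosine similarity as a stand-in for the richer syntactic and semantic matching described in the abstract; the answers and the scoring scale are invented for the example.

```python
# Sketch: grade a free-text answer by its best similarity to any reference answer.
# TF-IDF cosine similarity is a simplified stand-in for the paper's matching.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def score_answer(student_answer, reference_answers, max_points=1.0):
    """Return a grade proportional to the best match against any reference answer."""
    vectorizer = TfidfVectorizer(lowercase=True, stop_words="english")
    matrix = vectorizer.fit_transform([student_answer] + reference_answers)
    similarities = cosine_similarity(matrix[0:1], matrix[1:])[0]
    return max_points * float(similarities.max())

# Invented example answers, not taken from the History exams used in the paper.
references = [
    "The treaty ended the war and redrew the borders of the region.",
    "It concluded hostilities and established new territorial boundaries.",
]
print(score_answer("The treaty stopped the war and changed the borders.", references))
```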
