Publications

2017

Digital Governance for Sustainable Development

Authors
Barbosa, LS;

Publication
Digital Nations - Smart Cities, Innovation, and Sustainability - 16th IFIP WG 6.11 Conference on e-Business, e-Services, and e-Society, I3E 2017, Delhi, India, November 21-23, 2017, Proceedings

Abstract
This lecture discusses the impact of digital transformation of governance mechanisms as a tool to promote sustainable development and more inclusive societies, in the spirit of the United Nations 2030 Agenda. Three main challenges are addressed: the pursuit of inclusiveness, trustworthiness of software infrastructures, and the mechanisms to enforce more transparent and accountable public institutions. © IFIP International Federation for Information Processing 2017.

2017

Predictive model based architecture for energy biomass supply chains tactical decisions

Authors
Pinho, TM; Coelho, JP; Veiga, G; Moreira, AP; Oliveira, PM; Boaventura Cunha, J;

Publication
IFAC PAPERSONLINE

Abstract
Renewable sources of energy play a decisive role in the current energy paradigm to mitigate climate change associated with greenhouse gas emissions and problems of energy security. Biomass energy, and in particular forest wood biomass supply chains, has the potential to drive these changes thanks to several benefits, such as the ability to produce both bioenergy and bioproducts and to generate energy on demand. However, this energy source has some drawbacks, mainly associated with the costs involved. In this work, a Model Predictive Control approach is proposed to plan, monitor and control the wood-biomass supply chain for energy production at a tactical level. With this methodology, the biomass supply chain becomes more efficient, ensuring service quality in a more competitive way. To test and validate the proposed approach, different simulation scenarios were considered, which demonstrated the efficiency of the proposed tool in defining and controlling decisions.
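The receding-horizon idea behind Model Predictive Control can be illustrated with a toy inventory example (all dynamics, costs, and numbers here are hypothetical and are not taken from the paper): at each step the controller simulates every admissible delivery sequence over a short horizon and applies only the first move of the best one.

```python
import numpy as np
from itertools import product

# Toy receding-horizon (MPC) sketch: keep a biomass stock near a target by
# choosing deliveries u over a short horizon, given a demand forecast d.
# TARGET, horizon H, and delivery sizes are assumed for illustration.
TARGET, H = 10.0, 3            # stock target and prediction horizon
U_CHOICES = (0.0, 2.0, 4.0)    # admissible delivery sizes (assumed)

def mpc_step(stock, demand_forecast):
    """Pick the first move of the lowest-cost control sequence over the horizon."""
    best_u0, best_cost = 0.0, float("inf")
    for seq in product(U_CHOICES, repeat=H):
        s, cost = stock, 0.0
        for u, d in zip(seq, demand_forecast):
            s = s + u - d                        # inventory dynamics
            cost += (s - TARGET) ** 2 + 0.1 * u  # tracking + delivery cost
        if cost < best_cost:
            best_u0, best_cost = seq[0], cost
    return best_u0

# Closed loop: re-plan at every step, apply only the first decision.
stock, demands = 8.0, [3.0, 3.0, 3.0, 3.0, 3.0]
for k in range(3):
    u = mpc_step(stock, demands[k:k + H])
    stock = stock + u - demands[k]
```

Re-planning at every step is what lets the controller absorb forecast errors, which is the appeal of MPC for supply chains with uncertain demand.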

2017

Spatial Enhancement by Dehazing for Detection of Microcalcifications with Convolutional Nets

Authors
Bria, A; Marrocco, C; Galdran, A; Campilho, A; Marchesi, A; Mordang, JJ; Karssemeijer, N; Molinara, M; Tortorella, F;

Publication
IMAGE ANALYSIS AND PROCESSING (ICIAP 2017), PT II

Abstract
Microcalcifications are early indicators of breast cancer that appear on mammograms as small bright regions within the breast tissue. To assist screening radiologists in reading mammograms, supervised learning techniques have been found successful in detecting microcalcifications automatically. Among them, Convolutional Neural Networks (CNNs) can automatically learn and extract low-level features that capture contrast and spatial information, and use these features to build robust classifiers. Therefore, spatial enhancement that improves local contrast based on spatial context is expected to positively influence the learning task of the CNN and, as a result, its classification performance. In this work, we propose a novel spatial enhancement technique for microcalcifications based on the removal of haze, an apparently unrelated phenomenon that causes image degradation due to atmospheric absorption and scattering. We tested the influence of dehazing of digital mammograms on the microcalcification detection performance of two CNNs inspired by the popular AlexNet and VGGnet. Experiments were performed on 1,066 mammograms acquired with GE Senographe systems. Statistically significantly better microcalcification detection performance was obtained when dehazing was used as preprocessing. Dehazing results were also superior to those obtained with Contrast Limited Adaptive Histogram Equalization (CLAHE).
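To illustrate the haze model that dehazing inverts (a minimal single-channel sketch under the standard model I = J·t + A·(1 − t); the paper's actual enhancement method and parameter choices are not reproduced here), one can estimate the airlight A, derive a transmission map t, and solve for the haze-free image J:

```python
import numpy as np

def dehaze(img, omega=0.9, t_min=0.1):
    """Simple haze-removal sketch for a grayscale image with values in [0, 1].

    Assumes the standard haze model I = J*t + A*(1 - t):
    - airlight A is estimated as the brightest intensity,
    - transmission t decreases where the image is close to A,
    - the model is inverted to recover the enhanced image J.
    omega and t_min are illustrative parameters, not the paper's values.
    """
    A = img.max()                                     # airlight estimate
    t = np.clip(1.0 - omega * (img / A), t_min, 1.0)  # transmission map
    return np.clip((img - A) / t + A, 0.0, 1.0)       # invert the model
```

Inverting the model stretches local contrast most where the "haze" (low-contrast background) dominates, which is why such preprocessing can make small bright structures more salient to a CNN.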

2017

Robot Localization System in a Hard Outdoor Environment

Authors
Conceição, T; dos Santos, FN; Costa, PG; Moreira, AP;

Publication
ROBOT 2017: Third Iberian Robotics Conference - Volume 1, Seville, Spain, November 22-24, 2017

Abstract
Localization and mapping of autonomous robots in a hard and unstable environment (steep-slope vineyards) is a challenging research topic. The commonly used dead-reckoning systems can fail due to the harsh conditions of the terrain, and Global Positioning System (GPS) accuracy can be considerably noisy or not always available. One solution is to use wireless sensors in a network as landmarks. This paper evaluates an ultra-wideband time-of-flight-based technology (Pozyx), which can be used as a cost-effective solution for agricultural robots that work in harsh environments. Moreover, this paper implements a localization Extended Kalman Filter (EKF) that fuses odometry with the Pozyx range measurements to improve on the accuracy of the default Pozyx algorithm. © Springer International Publishing AG 2018.
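The odometry/range fusion described above can be sketched as a standard EKF for a unicycle robot with a single fixed anchor (a hypothetical minimal example; the paper's state model, noise parameters, and anchor configuration are not reproduced here):

```python
import numpy as np

ANCHOR = np.array([5.0, 2.0])  # known anchor position (assumed)

def predict(x, P, v, w, dt, Q):
    """Propagate state [x, y, theta] with odometry (v = linear, w = angular)."""
    th = x[2]
    x_new = x + np.array([v * dt * np.cos(th), v * dt * np.sin(th), w * dt])
    F = np.array([[1, 0, -v * dt * np.sin(th)],   # Jacobian of the motion model
                  [0, 1,  v * dt * np.cos(th)],
                  [0, 0,  1]])
    return x_new, F @ P @ F.T + Q

def update_range(x, P, z, R):
    """Correct with a range measurement z = ||pos - anchor|| + noise."""
    d = x[:2] - ANCHOR
    r = np.linalg.norm(d)
    H = np.array([[d[0] / r, d[1] / r, 0.0]])  # Jacobian of the range model
    y = z - r                                  # innovation
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)             # Kalman gain
    x_new = x + (K @ np.array([y])).ravel()
    P_new = (np.eye(3) - K @ H) @ P
    return x_new, P_new

# One predict/update cycle with illustrative noise levels.
x = np.array([0.0, 0.0, 0.0])
P = np.eye(3) * 0.1
Q = np.diag([0.01, 0.01, 0.005])
R = np.array([[0.05]])

x, P = predict(x, P, v=1.0, w=0.0, dt=1.0, Q=Q)
x, P = update_range(x, P, z=np.linalg.norm(x[:2] - ANCHOR), R=R)
```

Each range update shrinks the covariance along the direction of the anchor, which is how the beacon measurements bound the drift that pure odometry accumulates.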

2017

Evaluation and selection of 3PL provider using fuzzy AHP and grey TOPSIS in group decision making

Authors
Garside, AK; Saputro, TE;

Publication

Abstract

2017

Building a Semi-Supervised Dataset to Train Journalistic Relevance Detection Models

Authors
Guimaraes, N; Figueira, A;

Publication
2017 IEEE 15TH INTL CONF ON DEPENDABLE, AUTONOMIC AND SECURE COMPUTING, 15TH INTL CONF ON PERVASIVE INTELLIGENCE AND COMPUTING, 3RD INTL CONF ON BIG DATA INTELLIGENCE AND COMPUTING AND CYBER SCIENCE AND TECHNOLOGY CONGRESS(DASC/PICOM/DATACOM/CYBERSCI

Abstract
Annotated data is one of the most important components of supervised learning tasks. To ensure the reliability of the models, this data is usually labeled by several human annotators, either through volunteering or using crowdsourcing platforms. However, such approaches are unfeasible (regarding time and cost) for datasets with an enormous number of entries, which, in the specific case of journalistic relevance detection in social media posts, is necessary due to the wide scope of topics that can be considered relevant. Therefore, with the goal of building a relevance detection model, we propose an architecture to build a large-scale annotated dataset regarding the journalistic relevance of Twitter posts (i.e. tweets). This methodology is based on the predictability of the content in Twitter accounts. Next, we used the retrieved dataset to build relevance detection models, combining text, entity, and sentiment features. Finally, we validated the best model on a smaller manually annotated dataset with posts from Facebook and Twitter. The F1-measure achieved on the validation dataset was 63%, which is still far from excellent. However, given the characteristics of the validation data, these results are encouraging, since 1) our model is not affected by content from other social networks and 2) our validation dataset was restricted to a specific time interval and specific keywords (which can affect the performance of the model). © 2017 IEEE.
