2019
Authors
Carneiro, G; Tavares, JMRS; Bradley, AP; Papa, JP; Nascimento, JC; Cardoso, JS; Lu, Z; Belagiannis, V;
Publication
Computer Methods in Biomechanics and Biomedical Engineering: Imaging & Visualization
Abstract
2019
Authors
Pernes, D; Cardoso, JS;
Publication
International Joint Conference on Neural Networks, IJCNN 2019, Budapest, Hungary, July 14-19, 2019
Abstract
2019
Authors
Araújo, RJ; Fernandes, K; Cardoso, JS;
Publication
IEEE Transactions on Image Processing
Abstract
2019
Authors
Ferreira, PM; Sequeira, AF; Pernes, D; Rebelo, A; Cardoso, JS;
Publication
2019 International Conference of the Biometrics Special Interest Group, BIOSIG 2019 - Proceedings
Abstract
Despite the high performance of current presentation attack detection (PAD) methods, robustness to unseen attacks is still an under-addressed challenge. This work approaches the problem by enforcing the learning of the bona fide presentations while making the model less dependent on the presentation attack instrument species (PAIS). The proposed model comprises an encoder, mapping from input features to latent representations, and two classifiers operating on these underlying representations: (i) the task-classifier, for predicting the class labels (bona fide or attack); and (ii) the species-classifier, for predicting the PAIS. In the learning stage, the encoder is trained to help the task-classifier while trying to fool the species-classifier. In addition, a training objective enforcing the similarity of the latent distributions of different species is added, leading to a 'PAI-species'-independent model. The experimental results demonstrated that the proposed regularisation strategies equipped the neural network with increased PAD robustness. The adversarial model obtained better loss and accuracy, as well as improved error rates in the detection of attack and bona fide presentations. © 2019 Gesellschaft fuer Informatik.
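A minimal sketch of the adversarial training scheme described in this abstract is given below, assuming a PyTorch implementation. The gradient-reversal layer, the module and function names (PADModel, grad_reverse, training_step), the layer sizes, and the training loop are illustrative assumptions, not the authors' code; the additional latent-distribution similarity objective mentioned in the abstract is not shown.

import torch
import torch.nn as nn
import torch.nn.functional as F

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; reverses (and scales) gradients in the backward pass."""
    @staticmethod
    def forward(ctx, x, lamb):
        ctx.lamb = lamb
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        # Reversed gradients make the encoder try to *confuse* the species-classifier,
        # while the species-classifier itself is trained normally.
        return -ctx.lamb * grad_output, None

def grad_reverse(x, lamb=1.0):
    return GradReverse.apply(x, lamb)

class PADModel(nn.Module):
    def __init__(self, in_dim, n_species, latent_dim=128):
        super().__init__()
        # Encoder: input features -> latent representation.
        self.encoder = nn.Sequential(nn.Linear(in_dim, 256), nn.ReLU(),
                                     nn.Linear(256, latent_dim), nn.ReLU())
        self.task_head = nn.Linear(latent_dim, 2)             # bona fide vs. attack
        self.species_head = nn.Linear(latent_dim, n_species)  # PAI species

    def forward(self, x, lamb=1.0):
        z = self.encoder(x)
        return self.task_head(z), self.species_head(grad_reverse(z, lamb)), z

def training_step(model, optimizer, x, y_task, y_species, lamb=1.0):
    # The encoder helps the task-classifier and, via gradient reversal, fools the species-classifier.
    task_logits, species_logits, _ = model(x, lamb)
    loss = F.cross_entropy(task_logits, y_task) + F.cross_entropy(species_logits, y_species)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()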
2019
Authors
Carneiro, G; Tavares, JMRS; Bradley, AP; Papa, JP; Nascimento, JC; Cardoso, JS; Lu, Z; Belagiannis, V;
Publication
Computer Methods in Biomechanics and Biomedical Engineering: Imaging & Visualization
Abstract
2019
Authors
Gomes, DF; Luo, S; Teixeira, LF;
Publication
Towards Autonomous Robotic Systems - 20th Annual Conference, TAROS 2019, London, UK, July 3-5, 2019, Proceedings, Part II
Abstract
Developing autonomous assistants to help with domestic tasks is a vital topic in robotics research. Among these tasks, garment folding is one that is still far from being achieved, mainly due to the large number of possible configurations that a crumpled piece of clothing may exhibit. Prior research has addressed either estimating the pose of the garment as a whole or detecting the landmarks for grasping, but separately. However, such works constrain the robots' capability to perceive the state of the garment by limiting the representations to a single task. In this paper, we propose a novel end-to-end deep learning model named GarmNet that is able to simultaneously localize the garment and detect landmarks for grasping. The localization of the garment provides the global information for recognising the category of the garment, whereas the detection of landmarks facilitates subsequent grasping actions. We train and evaluate the proposed GarmNet model using the CloPeMa Garment dataset, which contains 3,330 images of different garment types in different poses. The experiments show that the inclusion of landmark detection (GarmNet-B) largely improves garment localization, with an error rate 24.7% lower. Solutions such as ours are important for robotics applications, as they scale to many classes and are memory- and processing-efficient.
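A minimal multi-task sketch of the joint localization-and-landmark idea described in this abstract, again assuming a PyTorch implementation. The backbone, head shapes, loss weighting, and the names GarmNetSketch and multitask_loss are illustrative assumptions and do not reproduce the published GarmNet/GarmNet-B architecture.

import torch
import torch.nn as nn
import torch.nn.functional as F

class GarmNetSketch(nn.Module):
    def __init__(self, n_classes, n_landmarks):
        super().__init__()
        # Shared convolutional backbone over the input image.
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.pool = nn.AdaptiveAvgPool2d(1)
        # Garment localization head: category logits and a bounding box (x, y, w, h).
        self.cls_head = nn.Linear(128, n_classes)
        self.box_head = nn.Linear(128, 4)
        # Landmark detection head: one heatmap per grasping landmark.
        self.landmark_head = nn.Conv2d(128, n_landmarks, kernel_size=1)

    def forward(self, x):
        feats = self.backbone(x)
        pooled = self.pool(feats).flatten(1)
        return self.cls_head(pooled), self.box_head(pooled), self.landmark_head(feats)

def multitask_loss(cls_logits, box_pred, heatmaps, y_cls, y_box, y_heat, w_lm=1.0):
    # Joint objective: garment classification + box regression + landmark heatmaps,
    # so both tasks share (and regularise) the same backbone features.
    return (F.cross_entropy(cls_logits, y_cls)
            + F.smooth_l1_loss(box_pred, y_box)
            + w_lm * F.mse_loss(heatmaps, y_heat))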