About

Tatiana M. Pinho received her PhD in Electrical and Computer Engineering in 2018 from the University of Trás-os-Montes e Alto Douro (UTAD), Portugal, and INESC TEC, under a grant from the Fundação para a Ciência e a Tecnologia (FCT). She obtained a bachelor's degree in Energy Engineering from UTAD in 2011 and a master's degree in Energy Engineering from the same university in 2013. She is currently a researcher at INESC TEC, in particular at TRIBE - Laboratory of Robotics and IoT for Smart Precision Agriculture and Forestry.

Details

  • Name

    Tatiana Martins Pinho
  • Position

    Assistant Researcher
  • Since

    01 September 2013
34 Publications

2023

Nano Aerial Vehicles for Tree Pollination

Authors
Pinheiro, I; Aguiar, A; Figueiredo, A; Pinho, T; Valente, A; Santos, F;

Publication
APPLIED SCIENCES-BASEL

Abstract
Currently, Unmanned Aerial Vehicles (UAVs) are considered in the development of various applications in agriculture, which has led to the expansion of the agricultural UAV market. However, Nano Aerial Vehicles (NAVs) are still underutilised in agriculture. NAVs are characterised by a maximum wing length of 15 centimetres and a weight of less than 50 g. Due to their physical characteristics, NAVs have the advantage of being able to approach and perform tasks with more precision than conventional UAVs, making them suitable for precision agriculture. This work aims to contribute to an open-source solution known as Nano Aerial Bee (NAB) to enable further research and development on the use of NAVs in an agricultural context. The purpose of NAB is to mimic and assist bees in the context of pollination. We designed this open-source solution by taking into account the existing state-of-the-art solution and the requirements of pollination activities. This paper presents the relevant background and work carried out in this area by analysing papers on the topic of NAVs. The development of this prototype is rather complex given the interactions between the different hardware components and the need to achieve autonomous flight capable of pollination. We adequately describe and discuss these challenges in this work. Besides the open-source NAB solution, we train three different versions of YOLO (YOLOv5, YOLOv7, and YOLOR) on an original dataset (Flower Detection Dataset) containing 206 images of a group of eight flowers and a public dataset (TensorFlow Flower Dataset), which must be annotated (TensorFlow Flower Detection Dataset). The results of the models trained on the Flower Detection Dataset are shown to be satisfactory, with YOLOv7 and YOLOR achieving the best performance, with 98% precision, 99% recall, and 98% F1 score. The performance of these models is evaluated using the TensorFlow Flower Detection Dataset to test their robustness. The three YOLO models are also trained on the TensorFlow Flower Detection Dataset to better understand the results. In this case, YOLOR is shown to obtain the most promising results, with 84% precision, 80% recall, and 82% F1 score. The results obtained using the Flower Detection Dataset are used for NAB guidance: the detected flower's relative position in the image defines the command the NAB executes.
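
The last step, turning a detected flower's relative position in the image into a flight command, can be illustrated with a short sketch. This is an assumed simplification, not the authors' code; the function name guidance_command, the thresholds, and the example bounding box are purely illustrative.

# Minimal sketch (assumed logic): map a detector's bounding box to a coarse
# relative-position command for the NAB. Thresholds are illustrative.
def guidance_command(box, image_w, image_h, tol=0.1):
    """Convert a detected flower's box (x1, y1, x2, y2) into horizontal and
    vertical move commands based on its offset from the image centre."""
    x1, y1, x2, y2 = box
    cx = (x1 + x2) / 2 / image_w - 0.5   # horizontal offset in [-0.5, 0.5]
    cy = (y1 + y2) / 2 / image_h - 0.5   # vertical offset in [-0.5, 0.5]
    horizontal = "right" if cx > tol else "left" if cx < -tol else "hold"
    vertical = "down" if cy > tol else "up" if cy < -tol else "hold"
    return horizontal, vertical

# Example: a box slightly right of centre in a 640x480 frame
print(guidance_command((400, 200, 480, 280), 640, 480))  # ('right', 'hold')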

2023

Toward Grapevine Digital Ampelometry Through Vision Deep Learning Models

Authors
Magalhaes, SC; Castro, L; Rodrigues, L; Padilha, TC; de Carvalho, F; dos Santos, FN; Pinho, T; Moreira, G; Cunha, J; Cunha, M; Silva, P; Moreira, AP;

Publication
IEEE SENSORS JOURNAL

Abstract
Several thousand grapevine varieties exist, with even more naming identifiers. Adequate specialized labor is not available for proper classification or identification of grapevines, making the value of commercial vines uncertain. Traditional methods, such as genetic analysis or ampelometry, are time-consuming, expensive, and often require expert skills that are even rarer. New vision-based systems benefit from advanced and innovative technology and can be used by nonexperts in ampelometry. To this end, deep learning (DL) and machine learning (ML) approaches have been successfully applied for classification purposes. This work extends the state of the art by applying digital ampelometry techniques to a larger set of grapevine varieties. We benchmarked MobileNet v2, ResNet-34, and VGG-11-BN DL classifiers to assess their ability for digital ampelography. In our experiment, all the models could identify the vines' varieties through the leaf with a weighted F1 score higher than 92%.
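
The metric used to compare the classifiers above, the weighted F1 score, can be computed with scikit-learn as in the sketch below. The variety labels are made up for illustration, and the snippet covers only the metric, not the authors' training pipeline.

# Minimal sketch (illustrative labels, not the paper's data): weighted F1 score
# as reported for the leaf-based variety classifiers.
from sklearn.metrics import f1_score

y_true = ["Touriga", "Alvarinho", "Touriga", "Baga", "Alvarinho", "Baga"]
y_pred = ["Touriga", "Alvarinho", "Baga",    "Baga", "Alvarinho", "Baga"]

# 'weighted' averages per-variety F1 scores by their support, so more frequent
# varieties weigh more in the final figure.
print(f1_score(y_true, y_pred, average="weighted"))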

2023

Reagent-less spectroscopy towards NPK sensing for hydroponics nutrient solutions

Authors
Silva, FM; Queirós, C; Pinho, T; Boaventura, J; Santos, F; Barroso, TG; Pereira, MR; Cunha, M; Martins, RC;

Publication
SENSORS AND ACTUATORS B-CHEMICAL

Abstract
Nutrient quantification in hydroponic systems is essential. Reagent-less spectral quantification of nitrogen, phosphate and potassium faces challenges in accessing information-rich spectral signals and unscrambling interference from each constituent. Herein, we introduce information equivalence between spectra and sample composition, enabling extraction of consistent covariance to isolate nutrient-specific spectral information (N, P or K) in Hoagland nutrient solutions using orthogonal covariance modes. Chemometrics methods quantify nitrogen and potassium, but not phosphate. Orthogonal covariance modes, however, enable quantification of all three nutrients: nitrogen (N) with R = 0.9926 and standard error of 17.22 ppm, phosphate (P) with R = 0.9196 and standard error of 63.62 ppm, and potassium (K) with R = 0.9975 and standard error of 9.51 ppm. Including pH information significantly improves phosphate quantification (R = 0.9638, standard error: 43.16 ppm). Results demonstrate a direct relationship between spectra and Hoagland nutrient solution information, preserving NPK orthogonality and supporting orthogonal covariance modes. These modes enhance detection sensitivity by maximizing information of the constituent being quantified, while minimizing interferences from others. Orthogonal covariance modes predicted nitrogen (R = 0.9474, standard error: 29.95 ppm) accurately. Phosphate and potassium showed strong interference from contaminants, but most extrapolation samples were correctly diagnosed above the reference interval (83.26%). Despite potassium features outside the knowledge base, a significant correlation was obtained (R = 0.6751). Orthogonal covariance modes use unique N, P or K information for quantification, not spurious correlations due to fertilizer composition. This approach minimizes interferences during extrapolation to complex samples, a crucial step towards resilient nutrient management in hydroponics using spectroscopy.
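
The figures of merit quoted above, correlation R and a standard error in ppm, can be reproduced for any calibration with a few lines of NumPy. This is a generic sketch that assumes an RMSE-style standard error; the paper's exact convention may differ, and the concentration values are placeholders.

# Minimal sketch (assumed formulas): Pearson R and an RMSE-style standard error
# between reference and predicted nutrient concentrations (ppm).
import numpy as np

def figures_of_merit(y_ref, y_pred):
    y_ref, y_pred = np.asarray(y_ref, float), np.asarray(y_pred, float)
    r = np.corrcoef(y_ref, y_pred)[0, 1]            # Pearson correlation R
    se = np.sqrt(np.mean((y_ref - y_pred) ** 2))    # standard error of prediction
    return r, se

# Placeholder nitrogen concentrations (ppm)
print(figures_of_merit([100, 150, 200, 250], [105, 148, 190, 260]))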

2022

Benchmark of Deep Learning and a Proposed HSV Colour Space Models for the Detection and Classification of Greenhouse Tomato

Authors
Moreira, G; Magalhaes, SA; Pinho, T; dos Santos, FN; Cunha, M;

Publication
AGRONOMY-BASEL

Abstract
The harvesting operation is a recurring task in the production of any crop, thus making it an excellent candidate for automation. In protected horticulture, one of the crops with high added value is tomatoes. However, their robotic harvesting is still far from maturity. That said, the development of an accurate fruit detection system is a crucial step towards achieving fully automated robotic harvesting. Deep Learning (DL) and detection frameworks like Single Shot MultiBox Detector (SSD) or You Only Look Once (YOLO) are more robust and accurate alternatives with better response to highly complex scenarios. DL can easily be used to detect tomatoes, but when their classification is intended, the task becomes harder, demanding a huge amount of data. Therefore, this paper proposes the use of DL models (SSD MobileNet v2 and YOLOv4) to efficiently detect the tomatoes and compares those systems with a proposed histogram-based HSV colour space model to classify each tomato and determine its ripening stage, using two acquired image datasets. Regarding detection, both models obtained promising results, with the YOLOv4 model standing out with an F1-Score of 85.81%. For the classification task, YOLOv4 was again the best model, with a Macro F1-Score of 74.16%. The HSV colour space model outperformed the SSD MobileNet v2 model, obtaining results similar to the YOLOv4 model, with a Balanced Accuracy of 68.10%.
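
A histogram-based HSV rule of the kind benchmarked above can be sketched with OpenCV as follows. The hue thresholds and the three-class mapping are assumptions for illustration, not the published model.

# Minimal sketch (assumed thresholds): classify a cropped tomato image by its
# dominant hue in HSV space.
import cv2
import numpy as np

def ripeness_from_hue(bgr_crop):
    hsv = cv2.cvtColor(bgr_crop, cv2.COLOR_BGR2HSV)
    hist = cv2.calcHist([hsv], [0], None, [180], [0, 180]).ravel()
    dominant_hue = int(np.argmax(hist))          # OpenCV hue range is 0-179
    if dominant_hue < 15 or dominant_hue > 165:  # reds wrap around the hue circle
        return "ripe"
    if dominant_hue < 35:                        # orange/yellow tones
        return "turning"
    return "unripe"                              # green tones

# Usage: ripeness_from_hue(cv2.imread("tomato_crop.jpg"))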

2021

Prototyping IoT-Based Virtual Environments: An Approach toward the Sustainable Remote Management of Distributed Mulsemedia Setups

Authors
Adao, T; Pinho, T; Padua, L; Magalhaes, LG; Sousa, JJ; Peres, E;

Publication
APPLIED SCIENCES-BASEL

Abstract
Business models built upon multimedia/multisensory setups delivering user experiences within disparate contexts (entertainment, tourism, cultural heritage, etc.) usually comprise the installation and in-situ management of both equipment and digital contents. Considering each setup as unique in its purpose, location, layout, equipment and digital contents, monitoring and control operations may add up to a hefty cost over time. Software and hardware agnosticism may be of value to lessen complexity and provide more sustainable management processes and tools. Distributed computing under the Internet of Things (IoT) paradigm may enable management processes capable of providing both remote control and monitoring of multimedia/multisensory experiences made available in different venues. A prototyping software to perform IoT multimedia/multisensory simulations is presented in this paper. It is fully based on virtual environments that enable the remote design, layout, and configuration of each experience in a transparent way, regardless of the underlying software and hardware. Furthermore, pipelines to deliver contents may be defined, managed, and updated in a context-aware environment. This software was tested in the laboratory and proved to be a sustainable approach to manage multimedia/multisensory projects. It is currently being field-tested by an international multimedia company for further validation.