Publications

Publications by CRIIS

2023

Almond cultivar identification using machine learning classifiers applied to UAV-based multispectral data

Authors
Guimaraes, N; Padua, L; Sousa, JJ; Bento, A; Couto, P;

Publication
INTERNATIONAL JOURNAL OF REMOTE SENSING

Abstract
In Portugal, almonds are a very important crop, due to their nutritional properties. In the northeastern part of the country, the almond sector has endured over time, with strong cultural traditions and key economic significance. In these areas, several cultivars are used. Indeed, the presence of various almond cultivars implies differentiated management in irrigation, disease control, pruning systems, and harvest planning. Therefore, cultivar classification is essential over large agricultural areas. Over the last decades, remote-sensing data have led to important breakthroughs in the classification of different cultivars for several crops. Nonetheless, for almonds, such studies are still incipient. Thus, this study aims to fill this knowledge gap and explore the classification of almond cultivars in an almond orchard. High-resolution multispectral data were acquired by an unmanned aerial vehicle (UAV). Vegetation indices (VIs) and tree structural parameters were subsequently estimated. To obtain an accurate cultivar identification, four machine learning classifiers, namely K-nearest neighbour (kNN), support vector machine (SVM), random forest (RF), and extreme gradient boosting (XGBoost), were applied and optimized through fine-tuning. The accuracy of the machine learning classifiers was analysed. SVM and RF performed best, with overall accuracies (OAs) of 76% and 74% using VIs and spectral bands (GREEN, GRVI, GN, REN, ClRE). Adding the canopy height model (CHM) improved performance, with RF and XGBoost reaching OAs of 88% and 84%. kNN performed worst, with an OA of 73% using only VIs and spectral bands, 80% with VIs, spectral bands, and CHM, and 93% with VIs, CHM, and tree crown area (TCA). The best performance was achieved by RF and XGBoost, with OAs of 99% using VIs, CHM, and TCA. These results demonstrate the importance of the feature selection process. Moreover, this study reveals the feasibility of remote-sensing data and machine learning classifiers for the classification of almond cultivars.
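The evaluation described in the abstract (several classifiers scored by overall accuracy on per-tree features) can be sketched as follows. This is a minimal illustration, not the authors' pipeline: the data is synthetic, the feature columns only stand in for the VIs, CHM, and TCA features named above, and XGBoost is omitted to keep the sketch within scikit-learn.

```python
# Sketch: comparing classifiers by cross-validated overall accuracy (OA),
# as in the cultivar study. All data below is synthetic placeholder data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

def overall_accuracy(clf, X, y, cv=5):
    """Mean cross-validated overall accuracy for one classifier."""
    return cross_val_score(clf, X, y, cv=cv, scoring="accuracy").mean()

rng = np.random.default_rng(42)
n_trees = 200
# Synthetic stand-ins for per-tree features: vegetation indices (VIs),
# canopy height model (CHM) and tree crown area (TCA), one row per tree
X = rng.normal(size=(n_trees, 7))
y = (X[:, 0] + 0.5 * X[:, 5] > 0).astype(int)  # two mock cultivar labels

classifiers = {
    "kNN": KNeighborsClassifier(n_neighbors=5),
    "SVM": SVC(kernel="rbf", C=1.0),
    "RF": RandomForestClassifier(n_estimators=100, random_state=0),
}
for name, clf in classifiers.items():
    print(f"{name}: OA = {overall_accuracy(clf, X, y):.2f}")
```

In practice, each classifier would also be tuned (e.g. via grid search) before comparison, as the abstract's fine-tuning step implies.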

2023

Comparison of 3D Sensors for Automating Bolt-Tightening Operations in the Automotive Industry

Authors
Dias, J; Simoes, P; Soares, N; Costa, CM; Petry, MR; Veiga, G; Rocha, LF;

Publication
SENSORS

Abstract
Machine vision systems are widely used in assembly lines for providing sensing abilities to robots to allow them to handle dynamic environments. This paper presents a comparison of 3D sensors for evaluating which one is best suited for usage in a machine vision system for robotic fastening operations within an automotive assembly line. The perception system is necessary for taking into account the position uncertainty that arises from the vehicles being transported on an aerial conveyor. Three sensors with different working principles were compared, namely laser triangulation (SICK TriSpector1030), structured light with sequential stripe patterns (Photoneo PhoXi S), and structured light with infrared speckle pattern (Asus Xtion Pro Live). The accuracy of the sensors was measured by computing the root mean square error (RMSE) of the point cloud registrations between their scans and two types of reference point clouds, namely CAD files and 3D sensor scans. Overall, the RMSE was lower when using sensor scans, with the SICK TriSpector1030 achieving the best results (0.25 mm ± 0.03 mm), the Photoneo PhoXi S showing intermediate performance (0.49 mm ± 0.14 mm), and the Asus Xtion Pro Live obtaining the highest RMSE (1.01 mm ± 0.11 mm). Considering the use case requirements, the final machine vision system relied on the SICK TriSpector1030 sensor and was integrated with a collaborative robot, which was successfully deployed in a vehicle assembly line, achieving 94% success in 53,400 screwing operations.
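The RMSE metric used to compare the sensors can be illustrated with a small sketch. This is not the paper's implementation: it assumes the two clouds are already registered in a common frame and uses brute-force nearest-neighbour search, whereas real pipelines would use a KD-tree and an ICP-style registration step first.

```python
# Sketch: RMSE between a sensor scan and a reference point cloud,
# computed over nearest-neighbour correspondences (brute force).
import numpy as np

def registration_rmse(scan, reference):
    """RMSE of each scan point to its nearest reference point.

    Both clouds are (N, 3) arrays assumed to be in the same frame,
    i.e. already registered; units follow the input (e.g. mm).
    """
    # Pairwise distances, then the nearest reference point per scan point
    d = np.linalg.norm(scan[:, None, :] - reference[None, :, :], axis=2)
    nearest = d.min(axis=1)
    return float(np.sqrt(np.mean(nearest ** 2)))

reference = np.array([[0.0, 0.0, 0.0], [10.0, 0.0, 0.0]])
shifted = reference + np.array([1.0, 0.0, 0.0])
print(registration_rmse(shifted, reference))  # 1.0 mm residual offset
```

With well-separated reference points, a rigid 1 mm offset of the scan yields an RMSE of exactly 1 mm, which is the kind of sub-millimetre residual the abstract reports for the SICK TriSpector1030.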

2023

Toward grapevine digital ampelometry through vision deep learning models

Authors
Magalhaes, SC; Castro, L; Rodrigues, L; Padilha, TC; De Carvalho, F; Dos Santos, FN; Pinho, T; Moreira, G; Cunha, J; Cunha, M; Silva, P; Moreira, AP;

Publication
IEEE Sensors Journal

Abstract

2023

Tree Trunks Cross-Platform Detection Using Deep Learning Strategies for Forestry Operations

Authors
da Silva, DQ; dos Santos, FN; Filipe, V; Sousa, AJ;

Publication
ROBOT2022: FIFTH IBERIAN ROBOTICS CONFERENCE: ADVANCES IN ROBOTICS, VOL 1

Abstract

2023

Using deep learning for automatic detection of insects in traps

Authors
Teixeira, AC; Morais, R; Sousa, JJ; Peres, E; Cunha, A;

Publication
Procedia Computer Science

Abstract

2023

Acacia dealbata classification from aerial imagery acquired using unmanned aerial vehicles

Authors
Pinto, J; Sousa, A; Sousa, JJ; Peres, E; Pádua, L;

Publication
Procedia Computer Science

Abstract
