Navigation system of autonomous multitask robotic rover for agricultural activities on peach orchards based on computer vision through tree trunk detection
Type
article
Publisher
Identifier
SIMÕES, J.P. [et al.] (2022) - Navigation system of autonomous multitask robotic rover for agricultural activities on peach orchards based on computer vision through tree trunk detection. Acta Horticulturae. 1352, p. 593-600. DOI: 10.17660/ActaHortic.2022.1352.80
10.17660/ActaHortic.2022.1352.80
Title
Navigation system of autonomous multitask robotic rover for agricultural activities on peach orchards based on computer vision through tree trunk detection
Subject
Precision agriculture
Object detection
Orchard
Navigation
Terrestrial rover
Robotic vision
CNN
TensorFlow
Raspberry Pi 4
SSD MobileNet
Quantization
Relation
PDR2020-101-031358
Date
2023-01-16T15:06:30Z
2022
Description
Introducing robotics in agriculture can raise productivity and reduce costs and waste. A robot's capabilities can be enhanced to or above the human level, enabling it to work as a human does, but with higher precision and repeatability, and with little to no effort. This paper develops a detection algorithm for peach tree trunks in orchard rows, serving as an autonomous navigation and anti-bump auxiliary system for a terrestrial robotic rover in agricultural applications. The approach relies on computer vision, specifically an object detection model based on Convolutional Neural Networks. The algorithm is built on the TensorFlow framework for deployment on a Raspberry Pi 4. The model's core is the SSD MobileNet 640×640 detection system with transfer learning from the COCO 2017 dataset. A total of 89 pictures were captured for the model's dataset, of which 90% were used for training and the remaining 10% for testing. The model was converted for mobile applications with full integer quantization, from float32 to uint8, and compiled for Edge TPU support. The orientation strategy consists of two conditions: a double detection forms a linear function, represented by an imaginary line, which is updated whenever two trunks are detected simultaneously; and, through the slope of this function and the horizontal deviation of a single detected bounding box from that line, the algorithm commands the robot to adjust its orientation or keep moving forward. The evaluation of the model shows a precision and a recall of 94.4%. After quantization, these metrics drop to 92.3% and 66.7%, respectively. These simulation results indicate that, statistically, the model can perform the navigation task.
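As an illustration of the conversion step described in the abstract, the following is a minimal sketch of post-training full integer quantization with the TensorFlow Lite converter (float32 to uint8, ready for the Edge TPU compiler). The SavedModel path is a hypothetical placeholder and random arrays stand in for real calibration images; this sketches the general technique, not the authors' actual conversion script.

import numpy as np
import tensorflow as tf

# Hypothetical path to the trained SSD MobileNet SavedModel.
converter = tf.lite.TFLiteConverter.from_saved_model("ssd_mobilenet_640")

def representative_dataset():
    # Calibration samples let the converter estimate activation ranges.
    # In practice these would be preprocessed orchard images; random
    # data is used here only to keep the sketch self-contained.
    for _ in range(100):
        yield [np.random.uniform(0.0, 255.0, (1, 640, 640, 3)).astype(np.float32)]

converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_dataset
# Restrict the model to int8 ops and force uint8 input/output, as
# required for Edge TPU compilation.
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.uint8
converter.inference_output_type = tf.uint8

with open("model_quant.tflite", "wb") as f:
    f.write(converter.convert())

# The quantized model can then be compiled for the accelerator with:
#   edgetpu_compiler model_quant.tflite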
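The two-condition orientation strategy also lends itself to a short sketch. Below is a minimal, hypothetical Python rendering, assuming normalized (x_min, y_min, x_max, y_max) bounding boxes; the threshold values and function name are illustrative assumptions, not parameters reported in the paper.

def steering_command(boxes, line, slope_threshold=0.1, deviation_threshold=0.05):
    """Decide a steering action from detected trunk bounding boxes.

    boxes: list of (x_min, y_min, x_max, y_max) in normalized coordinates.
    line:  (m, b) of the last imaginary row line, or None if not yet set.
    Returns (command, line), where command is "forward", "turn_left",
    or "turn_right". Thresholds are illustrative assumptions.
    """
    centers = [((x0 + x1) / 2.0, (y0 + y1) / 2.0) for x0, y0, x1, y1 in boxes]

    if len(centers) >= 2:
        # Condition 1: two simultaneous detections update the imaginary
        # line through the trunk centers. Solving x = m*y + b keeps the
        # slope finite for a near-vertical row in the image.
        (xa, ya), (xb, yb) = centers[:2]
        m = (xb - xa) / (yb - ya) if yb != ya else 0.0
        b = xa - m * ya
        line = (m, b)
        if abs(m) > slope_threshold:
            return ("turn_left" if m < 0 else "turn_right"), line
        return "forward", line

    if len(centers) == 1 and line is not None:
        # Condition 2: a single detection is compared against the stored
        # line; steer according to its horizontal deviation.
        m, b = line
        x, y = centers[0]
        deviation = x - (m * y + b)
        if abs(deviation) > deviation_threshold:
            return ("turn_left" if deviation < 0 else "turn_right"), line
        return "forward", line

    return "forward", line  # no usable detection: keep moving

A caller would run this once per frame, feeding the returned line back in so that a lone detection is judged against the most recent row estimate.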
info:eu-repo/semantics/publishedVersion
Access restrictions
restrictedAccess
Language
eng
Comments