
Dialnet


Optical sensors for crop monitoring - from 2D to 3D reconstruction

  • Author: Hugo Moreno Párrizas
  • Thesis supervisors: Dionisio Andújar Sánchez (supervisor), Constantino Valero Ubierna (co-supervisor)
  • Defended at the Universidad Politécnica de Madrid (Spain) in 2021
  • Language: Spanish
  • Thesis examination committee: José Blasco Ivars (chair), Natalia Hernández Sánchez (secretary), José Manuel Peña Barragán (member), Gerassimos Peteinatos (member), Manuel Pérez Ruiz (member)
  • Doctoral programme: Programa de Doctorado en Agroingeniería, Universidad Politécnica de Madrid
  • Abstract
    • The global population is expected to rise from 7.6 billion to 10.5 billion people in 50 years, while arable land per capita will decrease by 25% (Britt et al., 2018). The World Summit on Food Security declared that in 2050 “Income growth in low and middle income countries would hasten a dietary transition towards higher consumption of meat, fruits and vegetables, relative to that of cereals, requiring commensurate shifts in output and adding pressure on natural resources” (Calicioglu et al., 2019).

      In this regard, given the widespread availability of Precision Agriculture (PA) technologies, sensors and actuators have been incorporated into agricultural machinery to help farmers and scientists monitor and execute agricultural practices, in order to overcome the prospective challenge of feeding the world effectively and sustainably. Furthermore, PA, whose growth is driven by extraordinary technological advances, has merged the physical, digital and biological worlds. The fourth industrial revolution is also reflected in proximal sensing, with particular projection in PA thanks to new image acquisition tools and the growing processing capacity of the computers used to analyse their output.

      On this basis, this Doctoral Thesis has focused on analysing different optical sensors for crop reconstruction, since crop 3D modeling enables site-specific management at different crop stages. At the earliest stage, a two-dimensional (2D) approach was carried out for crop discrimination in a maize field infested with various weeds. The accuracy and performance of a light detection and ranging (LiDAR) sensor were evaluated using distance and reflection measurements, aiming to detect and discriminate maize plants and weeds from the soil surface. A terrestrial LiDAR sensor was mounted on a tripod pointing at the inter-row area, with its horizontal axis and field of view directed vertically downwards, scanning a vertical plane with the potential presence of vegetation. Immediately after the LiDAR data acquisition (distance and reflection measurements), the actual heights of the plants were estimated; for that purpose, digital images were taken of each sampled area. The data showed a high correlation between LiDAR-measured heights and actual plant heights (R2 = 0.75). Binary logistic regression between weed presence/absence and the sensor readings (LiDAR height and reflection values) was used to validate the accuracy of the sensor, which permitted the discrimination of vegetation from the ground with an accuracy of up to 95%. In addition, a Canonical Discriminant Analysis (CDA) was able to discriminate mostly between soil and vegetation and, to a far lesser extent, between crop and weeds. The studied methodology emerges as a good system for weed detection which, in combination with other principles such as vision-based technologies, could improve the efficiency and accuracy of herbicide spraying.
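As an illustration only, the discrimination step described above can be sketched as a binary logistic regression on two LiDAR-derived features, height and reflection. The data below are synthetic and the library choice (scikit-learn) is an assumption; this is not the thesis pipeline itself:

```python
# Sketch: soil vs. vegetation discrimination from LiDAR height and
# reflection readings with binary logistic regression.
# All numbers are synthetic placeholders, not the field measurements.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)

# Synthetic readings: columns are [height_cm, reflection]
soil = np.column_stack([rng.normal(0.5, 0.5, 200),   # near-zero height
                        rng.normal(30.0, 5.0, 200)]) # low reflectance
veg = np.column_stack([rng.normal(15.0, 5.0, 200),   # plant heights
                       rng.normal(60.0, 8.0, 200)])  # higher reflectance
X = np.vstack([soil, veg])
y = np.array([0] * 200 + [1] * 200)                  # 0 = soil, 1 = vegetation

model = LogisticRegression().fit(X, y)
accuracy = model.score(X, y)  # training accuracy on the synthetic sample
```

On well-separated synthetic classes like these the accuracy is close to 1; the thesis reports up to 95% on real field data.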

      In view of the promising results obtained with 2D geometry, in a second study a LiDAR sensor was installed on board a mobile platform equipped with an RTK-GNSS receiver for crop scanning in a vineyard. Its accuracy and performance were assessed for vineyard crop characterization using distance measurements, aiming to obtain a 3D reconstruction. The system consisted of a 2D time-of-flight LiDAR sensor, a gimbal connecting the device to the structure, and an RTK-GPS receiver recording the position of the sensor data. The LiDAR sensor faced sideways on the mobile electric platform and scanned planes perpendicular to the travel direction. The distance measurements between the LiDAR and the vines had a high spatial resolution, providing high-density 3D point clouds containing every point where the laser beam impacted. Fusing the LiDAR impacts with their associated RTK-GPS positions allowed the 3D architecture to be created. Although the point clouds were filtered, i.e., points outside the study area were discarded, branch volume cannot be calculated directly from the points alone; the cloud must first be turned into a 3D solid that encloses a volume. To obtain the 3D object surface, and therefore to be able to calculate the volume enclosed by it, a suitable alpha shape was generated as an outline enveloping the outer points of the point cloud. The 3D scenes were obtained during the winter season, when the vines were fully defoliated and only branches were present. The models were used to extract information on height and branch volume, and might be used for automatic pruning or for relating these parameters to the expected yield at each location. The 3D map was correlated with the ground truth, determined manually by weighing the pruned wood.
The number of LiDAR scans influenced the relationship with the actual biomass measurements and had a significant effect across treatments, whereas the influence of individual treatments showed low significance. A positive linear fit was obtained when comparing actual dry biomass with LiDAR-derived volume. The results showed strong correlations between LiDAR volume and actual biomass (R2 = 0.75), and when comparing LiDAR scans with pruning weight, R2 rose to 0.85. These values show that this LiDAR technique is also valid for branch reconstruction, with clear advantages over other types of non-contact ranging sensors in terms of sampling resolution and rate. Even narrow branches were properly detected, which demonstrates the accuracy of the system in difficult scenarios such as defoliated crops.
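The volume-from-point-cloud step above can be sketched as follows. The thesis encloses the filtered cloud in an alpha shape; as a simplified stand-in, this sketch uses a convex hull (the limiting case of an alpha shape as alpha grows) via SciPy, on a synthetic cloud rather than vineyard data:

```python
# Sketch: enclose a filtered 3D point cloud in an outer surface and
# read off the enclosed volume. A convex hull stands in for the
# alpha shape used in the thesis; the cloud is a synthetic 1 m cube.
import numpy as np
from scipy.spatial import ConvexHull

rng = np.random.default_rng(0)
cloud = rng.uniform(0.0, 1.0, size=(5000, 3))  # points filling a unit cube

hull = ConvexHull(cloud)   # triangulated outer surface of the cloud
volume_m3 = hull.volume    # volume enclosed by that surface (~1 m^3 here)
```

A true alpha shape (with finite alpha) would hug concavities of the branch cluster more tightly than the hull, which is why the thesis uses it for narrow, irregular branches.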

      In the same experimental field, a non-destructive measuring technique was applied to test major vine geometric traits using measurements collected by a contactless sensor. Three-dimensional optical sensors have evolved over the past decade, and these advances may be useful for improving phenomics technologies for other crops, such as woody perennials. RGB-D cameras, notably the Microsoft Kinect, have had a significant influence on recent computer vision and robotics research. In this experiment, an adaptable mobile platform was used to acquire depth images for the non-destructive assessment of branch volume (pruning weight) and its relation to grape yield in vineyard crops. Vineyard yield prediction provides the winegrower with useful insights about the anticipated yield, guiding strategic decisions to achieve optimum quantity and efficiency and supporting decision-making. A Kinect v2 system on board an on-ground electric vehicle was capable of producing precise 3D point clouds of vine rows under six different management cropping systems. The generated models demonstrated strong consistency between the 3D images and the actual physical parameters of the vine structures when average values were calculated. Correlations of Kinect branch volume with pruning weight (dry biomass) resulted in high coefficients of determination (R2 = 0.80). In the study of vineyard yield correlations, the measured volume was found to follow a good power-law relationship (R2 = 0.87). However, owing to the limited capability of most depth cameras to reconstruct the 3D shape of small details, the results for each treatment, when calculated separately, were not consistent. Nonetheless, the Kinect v2 has tremendous potential as a 3D sensor for proximal sensing operations in agriculture, benefiting from its high frame rate, its low price in comparison with other depth cameras, and its high robustness.
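The power-law yield relationship mentioned above can be illustrated by fitting yield = a · volume^b as a straight line in log-log space. The coefficients and data below are synthetic placeholders, not the vineyard results:

```python
# Sketch: fit a power law yield = a * volume**b by linear regression
# on log-transformed variables. Synthetic data only.
import numpy as np

rng = np.random.default_rng(1)
volume = rng.uniform(0.5, 3.0, 50)       # canopy volume per vine, m^3 (synthetic)
true_a, true_b = 2.0, 1.3                # assumed ground-truth coefficients
yield_kg = true_a * volume**true_b * rng.normal(1.0, 0.05, 50)  # 5% noise

# log(yield) = log(a) + b * log(volume): a degree-1 polynomial fit
b, log_a = np.polyfit(np.log(volume), np.log(yield_kg), 1)
a = np.exp(log_a)
```

The fitted `a` and `b` recover the assumed coefficients to within the noise level, which is the same reasoning behind reporting an R2 for the power-law relationship in the thesis.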

      Finally, a fourth study was conducted to test aerial and on-ground vegetation characterization, i.e., crop geometry. An experiment was carried out with three different measurement systems based on optical sensors in a vineyard, in order to test the economic feasibility of site-specific fertilizer application based on different mapping systems. The capabilities of UAV missions and of on-ground systems using depth cameras and LiDAR were compared for providing the vineyard volume maps required for site-specific operations such as fertilization. Aerial imagery was obtained with a UAV equipped with a high-resolution RGB camera, and a digital surface model was reconstructed using photogrammetry procedures. On-ground crop reconstruction was performed using LiDAR measurements georeferenced with an RTK-GNSS receiver along the crop rows. Furthermore, a Kinect v2 sensor was used as a low-cost depth camera. All systems were tested in a commercial field under natural sunlight. Each technique provided a dense 3D point cloud from which volume was calculated.
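One common way to derive a comparable volume figure from each sensor's dense point cloud is voxel-occupancy counting. This routine is an illustrative assumption; the thesis does not state that this exact method was used to compare the UAV, LiDAR and Kinect clouds:

```python
# Sketch: approximate the volume occupied by a 3D point cloud as
# (number of distinct occupied voxels) * (volume of one voxel).
# Cloud below is a synthetic 1.0 x 0.4 x 1.5 m "canopy" slab.
import numpy as np

def voxel_volume(points, voxel_size=0.05):
    """Volume estimate in m^3 from occupied voxels of a fixed grid."""
    idx = np.floor(points / voxel_size).astype(np.int64)  # voxel indices
    occupied = np.unique(idx, axis=0)                     # distinct voxels
    return len(occupied) * voxel_size**3

rng = np.random.default_rng(7)
cloud = rng.uniform([0.0, 0.0, 0.0], [1.0, 0.4, 1.5], size=(50000, 3))
vol = voxel_volume(cloud)  # approaches 1.0 * 0.4 * 1.5 = 0.6 m^3
```

Because every sensor's cloud is reduced to the same grid, voxel counting gives figures that are comparable across systems even when point densities differ, which matches the spirit of the cross-sensor comparison described above.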

      The results showed that the volume values were consistent and similar across the studied systems. The on-ground techniques provided the best detail of the plants, but their acquisition cost was always higher than that of aerial imagery. Concerning fertilizer application, the changes in plant shape and size observed within the vineyard indicate that continuous adjustment of the applied dose would be required to optimize the application. With site-specific spraying based on the created maps, the dose was reduced by up to 80% of the total dosage used in a conventional application. A detailed analysis of the savings indicates differences between the systems: aerial imagery techniques resulted in positive net returns, whereas the on-ground technologies would need faster acquisition times to become profitable. Regarding efficacy, no significant differences were found between applications based on the constructed maps. This important reduction in fertilizer application could be followed by an equivalent reduction in plant protection products (e.g., fungicides). Thus, some 3D characterization technologies have already proven profitable at the current stage of development, while also reducing inputs and the environmental impact of agricultural tasks.

