
Balancing accuracy and cost in precision agriculture: a few-shot learning approach for efficient weed–crop segmentation

Catalano, Nico; Chiatti, Agnese; Matteucci, Matteo
2026-01-01

Abstract

Autonomous weeding, a task requiring expertise at the intersection of Computer Vision and Agronomy, depends on accurate segmentation of crops and weeds from robot-collected images. Traditional segmentation models (e.g., YOLO) require large, densely annotated datasets, whose creation is costly and labor-intensive. In contrast, Few-Shot Learning (FSL) methods can learn from minimal annotated examples and significantly reduce the cost of dataset creation. This study evaluates the ability of an FSL architecture, HDMNet, to perform crop and weed segmentation using only a single annotated support image. When detecting bean and corn plants, it retains 73–80% of the accuracy of widely used, annotation-intensive detectors designed for large datasets, such as YOLOv5 and YOLOv8. Because reliable estimates of annotation effort are lacking in agriculture, we provide a quantitative assessment of the labor required to produce pixel-level labels. Preparing the 2,069-image ‘Early’ dataset required approximately 181 h, while the 102-image ‘Refined’ dataset still required approximately 186 h; labeling accounted for approximately 25 and 30 h, respectively. These findings show that increasing annotation granularity sharply raises effort without proportional accuracy gains, making dataset scale more beneficial than mask detail for YOLO-based models. In contrast, few-shot methods achieve competitive performance while eliminating most annotation labor. The study is further supported by the release of a new dataset from the 2023 ACRE field competition, including the ‘Early’ and ‘Refined’ versions. Overall, the findings offer practical guidance for designing efficient datasets for agricultural image analysis and demonstrate that FSL can substantially reduce the deployment costs of autonomous weeding systems.
Artificial Intelligence
Computer Vision
Few Shot Segmentation
Precision Agriculture
Weed Control
Files in this record:

1-s2.0-S0168169926001195-main.pdf

Open access

Description: full manuscript
Type: Publisher’s version
Size: 23.26 MB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11311/1308435
Citations
  • Scopus: 0