Segmentation
Computer science
Artificial intelligence
Machine learning
Image segmentation
Generalization
Precision agriculture
Ratio
Pattern recognition (psychology)
Deep learning
Market segmentation
Task (project management)
Remote sensing
Foundation (evidence)
Data mining
Data modeling
Crop
Upsampling
Crop yield
Key (lock)
Authors
Jiafei Zhang,Shuyu Cao,Binghui Xu,Yanan Li,Weiwei Jia,Tingting Wu,H. H. Lu,Weijuan Hu,Zhiguo Han
Source
Journal: IEEE Journal of Selected Topics in Signal Processing
[Institute of Electrical and Electronics Engineers]
Date: 2026-01-01
Volume/Issue: 1-13
Identifiers
DOI:10.1109/jstsp.2026.3654362
Abstract
DepthCropSeg++: a foundation model for crop segmentation, capable of segmenting different crop species in open in-field environments. Crop segmentation is a fundamental task for modern agriculture, closely related to many downstream tasks such as plant phenotyping, density estimation, and weed control. In the era of foundation models, a number of generic large language and vision models have been developed. These models have demonstrated remarkable real-world generalization thanks to significant model capacity and large-scale datasets. However, current crop segmentation models mostly learn from limited data due to the expensive cost of pixel-level labelling, and often perform well only on specific crop types or in controlled environments. In this work, we follow the vein of our previous work DepthCropSeg, an almost unsupervised approach to crop segmentation, to scale up a cross-species and cross-scene crop segmentation dataset, with 28,406 images across 30+ species and 15 environmental conditions. We also build upon the state-of-the-art ViT-Adapter semantic segmentation architecture, enhance it with dynamic upsampling for improved detail awareness, and train the model with a two-stage self-training pipeline. To systematically validate model performance, we conduct comprehensive experiments to justify its effectiveness and generalization capabilities across multiple crop datasets. Results demonstrate that DepthCropSeg++ achieves 93.11% mIoU on a comprehensive testing set, outperforming both supervised baselines and general-purpose vision foundation models such as the Segment Anything Model (SAM) by significant margins (+0.36% and +48.57% respectively). The model particularly excels in challenging scenarios including night-time environments (86.90% mIoU), high-density canopies (90.09% mIoU), and unseen crop varieties (90.09% mIoU), indicating a new state of the art for crop segmentation.
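The abstract reports results in mIoU (mean Intersection-over-Union), the standard semantic segmentation metric. As a reference for readers, below is a minimal sketch of how mIoU is computed for a two-class (crop vs. background) label map; the function name and the toy arrays are illustrative, not part of the paper.

```python
import numpy as np

def mean_iou(pred, gt, num_classes=2):
    """Mean Intersection-over-Union across classes (e.g. crop vs. background).

    pred, gt: integer label maps of the same shape, values in [0, num_classes).
    """
    ious = []
    for c in range(num_classes):
        inter = np.logical_and(pred == c, gt == c).sum()
        union = np.logical_or(pred == c, gt == c).sum()
        if union > 0:  # skip classes absent from both prediction and ground truth
            ious.append(inter / union)
    return float(np.mean(ious))

# Toy 2x2 example: the prediction has one false-positive crop pixel.
pred = np.array([[1, 1],
                 [0, 0]])
gt   = np.array([[1, 0],
                 [0, 0]])
# Class 0: IoU = 2/3; class 1: IoU = 1/2; mean = 7/12 ≈ 0.583
```

Per-class IoU is averaged with equal weight, so a rare class (here, crop pixels in a sparse canopy) affects the score as much as the dominant background class.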