DepthCropSeg++: Scaling a Crop Segmentation Foundation Model With Depth-Labeled Data

Segmentation, Computer Science, Artificial Intelligence, Machine Learning, Image Segmentation, Generalization, Precision Agriculture, Scale (ratio), Pattern Recognition (psychology), Deep Learning, Market Segmentation, Task (project management), Remote Sensing, Foundation (evidence), Data Mining, Data Modeling, Crop, Upsampling, Crop Yield, Key (lock)
Authors
Jiafei Zhang, Shuyu Cao, Binghui Xu, Yanan Li, Weiwei Jia, Tingting Wu, H. H. Lu, Weijuan Hu, Zhiguo Han
Source
Journal: IEEE Journal of Selected Topics in Signal Processing [Institute of Electrical and Electronics Engineers]
Volume/Issue: 1-13
Identifier
DOI: 10.1109/jstsp.2026.3654362
Abstract

We present DepthCropSeg++, a foundation model for crop segmentation capable of segmenting different crop species in open in-field environments. Crop segmentation is a fundamental task in modern agriculture, closely related to many downstream tasks such as plant phenotyping, density estimation, and weed control. In the era of foundation models, a number of generic large language and vision models have been developed; these models demonstrate remarkable real-world generalization owing to their large capacity and large-scale training data. Current crop segmentation models, however, mostly learn from limited data because pixel-level labelling is expensive, and therefore often perform well only on specific crop types or in controlled environments. In this work, we follow the vein of our previous work DepthCropSeg, an almost-unsupervised approach to crop segmentation, to scale up a cross-species and cross-scene crop segmentation dataset comprising 28,406 images across 30+ species and 15 environmental conditions. We also build upon ViT-Adapter, a state-of-the-art semantic segmentation architecture, enhance it with dynamic upsampling for improved detail awareness, and train the model with a two-stage self-training pipeline. To systematically validate model performance, we conduct comprehensive experiments that assess effectiveness and generalization across multiple crop datasets. Results demonstrate that DepthCropSeg++ achieves 93.11% mIoU on a comprehensive testing set, outperforming both supervised baselines and general-purpose vision foundation models such as the Segment Anything Model (SAM) by significant margins (+0.36% and +48.57%, respectively). The model particularly excels in challenging scenarios, including night-time environments (86.90% mIoU), high-density canopies (90.09% mIoU), and unseen crop varieties (90.09% mIoU), setting a new state of the art for crop segmentation.
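
The abstract does not say which dynamic upsampler is used. As a reading aid, here is a minimal sketch of one common "dynamic upsampling" design for detail-aware segmentation heads: a DySample-style point resampler that predicts per-pixel sampling offsets and resamples the feature map with grid_sample instead of fixed bilinear interpolation. The class name, the 2x scale, and the offset range are illustrative assumptions, not the paper's implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DynamicUpsample(nn.Module):
    """Content-aware 2x upsampler (DySample-style sketch): predicts
    per-pixel sampling offsets, then resamples the feature map with
    grid_sample rather than using fixed bilinear interpolation."""

    def __init__(self, channels: int, scale: int = 2):
        super().__init__()
        self.scale = scale
        # one (dx, dy) offset pair for each of the scale**2 sub-pixels
        self.offset = nn.Conv2d(channels, 2 * scale * scale, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, _, h, w = x.shape
        s = self.scale
        # small learned offsets in low-resolution pixel units
        off = self.offset(x).sigmoid() * 0.5 - 0.25   # (b, 2*s*s, h, w)
        off = F.pixel_shuffle(off, s)                 # (b, 2, h*s, w*s)
        off = off.permute(0, 2, 3, 1)                 # (b, h*s, w*s, 2)
        # base sampling grid in normalized [-1, 1] coordinates
        ys = torch.linspace(-1.0, 1.0, h * s, device=x.device)
        xs = torch.linspace(-1.0, 1.0, w * s, device=x.device)
        gy, gx = torch.meshgrid(ys, xs, indexing="ij")
        grid = torch.stack((gx, gy), dim=-1).unsqueeze(0).expand(b, -1, -1, -1)
        # convert pixel offsets to normalized units and resample
        norm = torch.tensor([2.0 / w, 2.0 / h], device=x.device)
        return F.grid_sample(x, grid + off * norm, align_corners=True)
```

For a 256-channel, 32x32 feature map, `DynamicUpsample(256)(x)` returns a 64x64 map; because the sampling points move with the content, thin structures such as leaf edges are preserved better than with uniform bilinear upsampling.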
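The two-stage self-training pipeline is likewise described only at a high level. Below is a plausible skeleton, assuming stage 1 fits pseudo labels derived from depth maps (the "depth-labeled data" of the title) and stage 2 refits on the stage-1 model's own confident predictions with uncertain pixels ignored; `depth_label_fn`, the 0.9 confidence threshold, and the optimizer choice are hypothetical.

```python
import torch
import torch.nn.functional as F

def confident_pseudo_labels(model, images, thresh=0.9, ignore_index=255):
    """Turn the current model's predictions into training targets,
    masking low-confidence pixels with ignore_index."""
    model.eval()
    with torch.no_grad():
        probs = model(images).softmax(dim=1)   # (b, num_classes, h, w)
        conf, labels = probs.max(dim=1)        # per-pixel confidence / class
        labels[conf < thresh] = ignore_index
    return labels

def train_stage(model, batches, label_fn, epochs=10, lr=1e-4):
    """One self-training stage: fit the model to whatever labels
    label_fn produces for each batch of images."""
    opt = torch.optim.AdamW(model.parameters(), lr=lr)
    for _ in range(epochs):
        for images in batches:
            targets = label_fn(images)
            model.train()
            loss = F.cross_entropy(model(images), targets, ignore_index=255)
            opt.zero_grad()
            loss.backward()
            opt.step()

# Stage 1: labels derived from depth (hypothetical depth_label_fn);
# Stage 2: the stage-1 model's own confident predictions.
# train_stage(model, batches, depth_label_fn)
# train_stage(model, batches, lambda x: confident_pseudo_labels(model, x))
```

The key idea of such a pipeline is that no human pixel labels are needed: depth bootstraps the first model, and confidence filtering keeps the second stage from amplifying its own mistakes.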
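The reported numbers are mean intersection-over-union (mIoU), e.g. the headline 93.11%. For reference, a minimal mIoU computation over integer label maps; the two-class crop/background setup is an assumption.

```python
import torch

def mean_iou(pred, target, num_classes=2, ignore_index=255):
    """Mean intersection-over-union across classes for integer
    label maps of equal shape; ignored pixels are excluded."""
    valid = target != ignore_index
    ious = []
    for c in range(num_classes):
        p = (pred == c) & valid
        t = (target == c) & valid
        union = (p | t).sum().item()
        if union:                      # skip classes absent from both maps
            ious.append((p & t).sum().item() / union)
    return sum(ious) / len(ious)
```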