Toward Deeper Understanding of Camouflaged Object Detection

Keywords: Camouflage, Task (project management), Segmentation, Computer science, Ranking (information retrieval), Artificial intelligence, Computer vision, Binary number, Set (abstract data type), Object detection, Object (grammar), Machine learning, Pattern recognition (psychology), Mathematics, Programming language, Management, Economics, Arithmetic
Authors
Yunqiu Lv,Jing Zhang,Yuchao Dai,Aixuan Li,Nick Barnes,Deng-Ping Fan
Source
Journal: IEEE Transactions on Circuits and Systems for Video Technology [Institute of Electrical and Electronics Engineers]
Volume/Issue: 33 (7): 3462-3476; Citations: 87
Identifier
DOI:10.1109/tcsvt.2023.3234578
Abstract

Prey in the wild evolve to be camouflaged to avoid being recognized by predators. In this way, camouflage acts as a key defence mechanism across species that is critical to survival. To detect and segment the whole scope of a camouflaged object, camouflaged object detection (COD) is introduced as a binary segmentation task, with the binary ground truth camouflage map indicating the exact regions of the camouflaged objects. In this paper, we revisit this task and argue that the binary segmentation setting fails to fully capture the concept of camouflage. We find that explicitly modeling the conspicuousness of camouflaged objects against their particular backgrounds not only leads to a better understanding of camouflage, but also provides guidance for designing more sophisticated camouflage techniques. Furthermore, we observe that specific parts of camouflaged objects are what make them detectable to predators. With the above understanding of camouflaged objects, we present the first triple-task learning framework to simultaneously localize, segment, and rank camouflaged objects, indicating the conspicuousness level of camouflage. As no corresponding datasets exist for either the localization model or the ranking model, we generate localization maps with an eye tracker, which are then processed according to the instance-level labels to generate our ranking-based training and testing dataset. We also contribute the largest COD testing set to comprehensively analyse the performance of COD models. Experimental results show that our triple-task learning framework achieves new state-of-the-art performance, leading to a more explainable COD network. Our code, data, and results are available at: https://github.com/JingZhang617/COD-Rank-Localize-and-Segment .
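The abstract describes a triple-task framework that jointly localizes, segments, and ranks camouflaged objects from a shared input image. As a rough illustration only, and not the authors' architecture (their released code builds on a pretrained backbone with more elaborate decoders), the sketch below shows how a shared encoder can feed three prediction heads: a localization (fixation) map, a binary segmentation map, and a per-pixel conspicuousness-ranking map. All class names, channel sizes, and the number of rank levels are illustrative assumptions.

# Minimal sketch of a triple-task COD model: shared encoder, three heads.
# NOT the authors' implementation; names and sizes are assumptions.
import torch
import torch.nn as nn

class TripleTaskCOD(nn.Module):
    def __init__(self, num_ranks: int = 4):
        super().__init__()
        # Shared convolutional encoder (placeholder for a pretrained backbone).
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(inplace=True),
        )
        self.loc_head = nn.Conv2d(64, 1, 1)           # fixation / localization logits
        self.seg_head = nn.Conv2d(64, 1, 1)           # binary camouflage-map logits
        self.rank_head = nn.Conv2d(64, num_ranks, 1)  # per-pixel conspicuousness level
        self.up = nn.Upsample(scale_factor=4, mode="bilinear", align_corners=False)

    def forward(self, x):
        feat = self.encoder(x)
        return self.up(self.loc_head(feat)), self.up(self.seg_head(feat)), self.up(self.rank_head(feat))

if __name__ == "__main__":
    model = TripleTaskCOD()
    loc, seg, rank = model(torch.randn(2, 3, 128, 128))
    # Joint training would combine a pixel-wise BCE loss on loc/seg with a
    # cross-entropy loss on rank against the eye-tracker-derived rank labels.
    print(loc.shape, seg.shape, rank.shape)

In a setup like this, the three outputs would be supervised jointly, which is the sense in which the paper's framework "simultaneously" handles localization, segmentation, and ranking; the actual loss weighting and decoder design follow the released repository rather than this sketch.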