CVTStego-Net: A convolutional vision transformer architecture for spatial image steganalysis

Steganalysis, Steganography, Convolutional neural network, Artificial intelligence, Computer science, Pattern recognition (psychology), Preprocessor, Feature extraction, Computer vision, Embedding
Authors
Mario Alejandro Bravo-Ortíz, Esteban Mercado-Ruiz, Juan Pablo Villa-Pulgarín, Carlos Angel Hormaza-Cardona, Sebastian Quiñones-Arredondo, Harold Brayan Arteaga-Arteaga, Simón Orozco-Arias, Oscar Cardona-Morales, Reinel Tabares-Soto
Source
Journal: Journal of Information Security and Applications [Elsevier]
Volume/Issue: 81, Article 103695
Identifier
DOI:10.1016/j.jisa.2023.103695
Abstract

The principal investigations in image steganalysis in the spatial domain have concentrated on convolutional neural network (CNN) designs. However, existing CNNs increase the local receptive field of steganographic noise without considering global steganographic noise. This study introduces CVTStego-Net, a convolutional vision transformer for spatial-domain image steganalysis that merges the strengths of convolutions with the advantages of attention mechanisms to capture both local and global dependencies. CVTStego-Net is composed of three stages: a preprocessing stage, a noise extraction and analysis stage, and a classification stage. The preprocessing stage involves a bifurcation with trainable and untrainable 30 SRM (Spatial Rich Model) filters to enhance steganographic noise. The noise extraction and analysis stage combines the SE-Block (Squeeze-and-Excitation) with residual operations to increase sensitivity to steganographic noise and suppress the influence of redundant information, and the classification stage combines the SE-Block with a convolutional vision transformer to connect the local and global spatial relationships of the steganographic noise. This work improved classification accuracies for steganographic algorithms compared to YEDROUDJ-Net, SR-Net, ZHU-Net, GBRAS-Net, and SNMC-Net. Specifically, the accuracy of CVTStego-Net for WOW was 86.58% at 0.2 bpp and 93.80% at 0.4 bpp. Moreover, for S-UNIWARD at 0.2 and 0.4 bpp, the accuracies were 80.70% and 90.45%, respectively. For MiPOD at 0.2 and 0.4 bpp, the accuracies were 74.70% and 81.48%, respectively. For HILL at 0.2 and 0.4 bpp, the accuracies were 76.70% and 85.80%, respectively, and for HUGO at 0.2 and 0.4 bpp, the accuracies were 78.20% and 86.98%, respectively, using test data from BOSSbase 1.01. The results demonstrate that convolutional vision transformers can classify steganographic images in the spatial domain.
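
The abstract outlines a three-stage layout: SRM-based preprocessing split into trainable and frozen filter banks, SE-Block refinement with residual connections, and a convolutional vision transformer for classification. The sketch below is a minimal, hypothetical PyTorch rendering of that layout, not the authors' implementation; all channel counts, kernel sizes, and transformer settings are illustrative assumptions, and the frozen branch is simply a conv layer with gradients disabled rather than the actual 30 SRM high-pass kernels.

```python
# Minimal sketch of the CVTStego-Net stages described in the abstract (assumed hyperparameters).
import torch
import torch.nn as nn


class SEBlock(nn.Module):
    """Squeeze-and-Excitation: re-weights channels to emphasize stego noise."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels), nn.Sigmoid(),
        )

    def forward(self, x):
        w = self.fc(x.mean(dim=(2, 3)))            # squeeze: global average pool -> (B, C)
        return x * w.view(x.size(0), -1, 1, 1)     # excite: per-channel re-weighting


class CVTStegoNetSketch(nn.Module):
    def __init__(self, num_srm_filters=30):
        super().__init__()
        # Stage 1: preprocessing -- two 5x5 conv branches standing in for the
        # trainable / untrainable SRM filter banks (the real model would load
        # the 30 SRM high-pass kernels as initial weights).
        self.srm_trainable = nn.Conv2d(1, num_srm_filters, 5, padding=2)
        self.srm_fixed = nn.Conv2d(1, num_srm_filters, 5, padding=2)
        self.srm_fixed.weight.requires_grad_(False)

        # Stage 2: noise extraction and analysis -- residual conv block plus SE-Block.
        self.res_conv = nn.Sequential(
            nn.Conv2d(2 * num_srm_filters, 60, 3, padding=1),
            nn.BatchNorm2d(60), nn.ReLU(inplace=True),
        )
        self.se = SEBlock(60)

        # Stage 3: classification -- convolutional token embedding followed by a
        # transformer encoder, a stand-in for the convolutional vision transformer.
        self.token_embed = nn.Conv2d(60, 64, kernel_size=4, stride=4)
        encoder_layer = nn.TransformerEncoderLayer(d_model=64, nhead=4, batch_first=True)
        self.transformer = nn.TransformerEncoder(encoder_layer, num_layers=2)
        self.head = nn.Linear(64, 2)               # cover vs. stego logits

    def forward(self, x):                          # x: (B, 1, H, W) grayscale image
        noise = torch.cat([self.srm_trainable(x), self.srm_fixed(x)], dim=1)
        feats = self.res_conv(noise)
        feats = self.se(feats) + feats             # residual SE refinement
        tokens = self.token_embed(feats).flatten(2).transpose(1, 2)  # (B, N, 64)
        tokens = self.transformer(tokens)          # global attention over noise tokens
        return self.head(tokens.mean(dim=1))       # pool tokens, then classify


if __name__ == "__main__":
    model = CVTStegoNetSketch()
    logits = model(torch.randn(2, 1, 256, 256))    # toy batch of two single-channel images
    print(logits.shape)                            # torch.Size([2, 2])
```

Run as a script, the example prints torch.Size([2, 2]): one cover/stego logit pair per image. The transformer over convolutional tokens is what lets the classifier relate steganographic noise at distant image locations, which the abstract identifies as the gap in purely convolutional designs.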