EG-Net: Appearance-based eye gaze estimation using an efficient gaze network with attention mechanism

Keywords: computer science, gaze, artificial intelligence, convolutional neural network, computer vision, face, eye movement, feature, head pose, pattern recognition
Authors
Xinmei Wu, Lin Li, Haihong Zhu, Gang Zhou, Linfeng Li, Fei Su, Shen He, Yang‐Gang Wang, Xue Long
Source
Journal: Expert Systems With Applications [Elsevier]
Volume 238, article 122363. Cited by: 7
Identifier
DOI: 10.1016/j.eswa.2023.122363
Abstract

Gaze estimation, which has a wide range of applications in many scenarios, is a challenging task due to various unconstrained conditions. As information from both full-face and eye images is instrumental in improving gaze estimation, many multiregion gaze estimation models have been proposed in recent studies. However, most of them simply apply the same regression method to both eye and face images, overlooking that the eye region may contribute more fine-grained features than the full-face region, and that variation between an individual's left and right eyes caused by head pose, illumination, and partial eye occlusion may lead to inconsistent estimations. To address these issues, we propose an appearance-based end-to-end learning network architecture with an attention mechanism, named efficient gaze network (EG-Net), which employs a two-branch network for gaze estimation. Specifically, a base CNN is utilized for full-face images, while an efficient eye network (EE-Net), which is scaled up from the base CNN, is used for left- and right-eye images. EE-Net uniformly scales up the depth, width and resolution of the base CNN with a set of constant coefficients for eye feature extraction and adaptively weights the left- and right-eye images via an attention network according to their "image quality". Finally, features from the full-face image, the two individual eye images and head pose vectors are fused to regress the eye gaze vectors. We evaluate our approach on three public datasets, where the proposed EG-Net models achieve much better performance. In particular, our EG-Net-v4 model outperforms state-of-the-art approaches on the MPIIFaceGaze dataset, with prediction errors of 2.41 cm and 2.76 degrees in 2D and 3D gaze estimation, respectively. It also improves performance to 1.58 cm on GazeCapture and 4.55 degrees on EyeDIAP, a 23.4% and 14.2% improvement over prior art on the two datasets, respectively.
The code related to this project is open-source and available at https://github.com/wuxinmei/EE_Net.git.
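The abstract describes two mechanisms worth making concrete: EE-Net scales up the depth, width and resolution of the base CNN with a set of constant coefficients (in the style of EfficientNet's compound scaling), and an attention network weights the left- and right-eye features according to their "image quality" before fusion. The sketch below illustrates both ideas in plain Python; the function names, the softmax-based weighting, and the specific coefficient values are illustrative assumptions, not the paper's exact implementation (see the linked repository for that).

```python
import math

def compound_scale(phi, alpha=1.2, beta=1.1, gamma=1.15):
    """Compound scaling: for a scaling exponent phi, multiply the base
    network's depth by alpha**phi, width by beta**phi, and input
    resolution by gamma**phi. The coefficient values here are
    placeholders, not the paper's tuned constants."""
    return alpha ** phi, beta ** phi, gamma ** phi

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def fuse_eye_features(left_feat, right_feat, left_score, right_score):
    """Weight the left- and right-eye feature vectors by attention
    weights derived from per-eye quality scores, then concatenate.
    A low-quality (e.g. occluded) eye thus contributes less to the
    fused representation used for gaze regression."""
    w_left, w_right = softmax([left_score, right_score])
    return [w_left * f for f in left_feat] + [w_right * f for f in right_feat]

# Example: equal quality scores give each eye equal weight (0.5 each).
fused = fuse_eye_features([1.0, 2.0], [3.0, 4.0], 0.0, 0.0)
```

In the full model, the fused eye features would be concatenated with the full-face CNN features and the head pose vector before the final gaze regression layers.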