Computer science
Graphic design
Typography
Human-computer interaction
Convolutional neural network
Visual language
Feeling
Facial expression
Artificial intelligence
Multimedia
Natural language processing
Psychology
Linguistics
Social psychology
Philosophy
Art
Visual arts
Authors
Zhenzhen Pan, Hong Pan, Junzhan Zhang
Source
Journal: Heliyon
[Elsevier BV]
Date: 2024-04-26
Volume/Issue: 10 (9): e30180
Citations: 1
Identifier
DOI: 10.1016/j.heliyon.2024.e30180
Abstract
Emotion recognition is the experience of attitude in graphic language expression and composition. People communicate their emotions through both verbal and non-verbal behaviour. Visual communication and graphic design continually evolve to meet the demands of an increasingly affluent and culturally conscious population. When producing graphic design work, designers should consider their own views on the work from the audience's or customer's standpoint so that the emotion between designer and viewer can resonate. Hence, this study proposes a personalized emotion recognition framework based on convolutional neural networks (PERF-CNN) to create visual content for graphic design. Graphic designers prioritize the logic of presenting objects in interactive designs and use visual hierarchy and page-layout approaches to respond to users' demands through typography and imagery, ensuring that the user experience is maximized. This research identifies three tiers of emotional thinking that affect graphic design: expressive signal, emotional experience, and emotional infiltration. The article explores graphic design language and its modes of emotional recognition, as well as the relationship between graphic images, shapes, and feelings. A CNN first extracts expressive features from the user's facial images and the poster's visual information; a clustering process then categorizes the poster or advertisement images into positive, negative, and neutral classes. The experimental results support research and applications of graphic design language, demonstrating that the proposed method outperforms conventional classification approaches on the dataset. Compared with other popular models, the proposed PERF-CNN model achieves a classification accuracy of 97.4 %, an interaction ratio of 95.6 %, an emotion recognition ratio of 98.9 %, a rate of influence of pattern and colour features of 94.4 %, and a prediction error rate of 6.5 %.
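The abstract does not disclose the architecture of PERF-CNN, but the pipeline it describes (a CNN extracting visual features from face and poster images, followed by assignment to positive, negative, or neutral classes) can be illustrated with a minimal sketch. The PyTorch model below is a generic three-class image classifier; the layer sizes, input resolution (64×64 RGB), class ordering, and all names are assumptions for illustration, not the authors' actual model.

```python
# Minimal sketch of a three-class (positive / negative / neutral) image-emotion
# classifier, illustrating the kind of CNN pipeline the abstract describes.
# Architecture, input size (64x64 RGB), and hyperparameters are assumptions,
# not the published PERF-CNN.
import torch
import torch.nn as nn


class EmotionCNN(nn.Module):
    def __init__(self, num_classes: int = 3):
        super().__init__()
        # Two convolutional blocks extract visual features (edges, colour, texture).
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),  # 64x64 -> 32x32
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),  # 32x32 -> 16x16
        )
        # A fully connected head maps the pooled features to the three emotion classes.
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 16 * 16, 128),
            nn.ReLU(),
            nn.Linear(128, num_classes),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x))


if __name__ == "__main__":
    model = EmotionCNN()
    batch = torch.randn(4, 3, 64, 64)   # hypothetical batch of face/poster crops
    logits = model(batch)               # shape: (4, 3)
    preds = logits.argmax(dim=1)        # assumed labels: 0=positive, 1=negative, 2=neutral
    print(preds)
```

In the paper's setting, such a network would be trained on labelled face and poster images; the clustering step mentioned in the abstract could either replace or refine this supervised head, but the abstract does not specify how the two stages are combined.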