Keywords
Computer science
Sentiment analysis
Convolutional neural network
Classifier (UML)
Artificial intelligence
Deep learning
Social media
Benchmark (surveying)
Training set
Machine learning
Context (archaeology)
Natural language processing
World Wide Web
Paleontology
Geodesy
Geography
Biology
Authors
Lucia Vadicamo, Fabio Carrara, Andrea Cimino, Stefano Cresci, Felice Dell’Orletta, Fabrizio Falchi, Maurizio Tesconi
Identifier
DOI: 10.1109/iccvw.2017.45
Abstract
Much progress has been made in the field of sentiment analysis in recent years. Researchers have traditionally relied on textual data for this task, and only recently have they started investigating approaches to predict sentiment from multimedia content. With the increasing amount of data shared on social media, there is also a rapidly growing interest in approaches that work "in the wild", i.e., that are able to deal with uncontrolled conditions. In this work, we faced the challenge of training a visual sentiment classifier starting from a large set of user-generated and unlabeled content. In particular, we collected more than 3 million tweets containing both text and images, and we leveraged the sentiment polarity of the textual content to train a visual sentiment classifier. To the best of our knowledge, this is the first time that a cross-media learning approach has been proposed and tested in this context. We assessed the validity of our model by conducting comparative studies and evaluations on a benchmark for visual sentiment analysis. Our empirical study shows that although the text associated with each image is often noisy and only weakly correlated with the image content, it can be profitably exploited to train a deep Convolutional Neural Network that effectively predicts the sentiment polarity of previously unseen images.
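The abstract describes a cross-media weak-supervision setup: the sentiment polarity predicted from each tweet's text is used as a noisy label for the attached image, and a deep CNN is then trained on those image/label pairs. The sketch below is a minimal illustration of that idea under stated assumptions; the `text_sentiment` placeholder, the `(image_path, text)` tweet format, and the ResNet-50 backbone are illustrative choices, not the authors' actual tools or pipeline.

```python
# Minimal sketch of cross-media weak labeling: derive a polarity label from
# the tweet text, then fine-tune an image CNN on the (image, label) pairs.
# All names and formats here are hypothetical placeholders.
import torch
import torch.nn as nn
from torch.utils.data import Dataset, DataLoader
from torchvision import models, transforms
from PIL import Image


def text_sentiment(text: str) -> int:
    """Placeholder text-polarity classifier: 0 = negative, 1 = positive.
    In the setting described by the abstract, this role is played by an NLP
    sentiment tool applied to the tweet text."""
    positive_words = {"love", "great", "happy", "beautiful"}
    return int(any(w in text.lower() for w in positive_words))


class WeaklyLabeledTweets(Dataset):
    """Pairs each tweet image with the polarity predicted from its text."""

    def __init__(self, tweets):
        # `tweets` is a list of (image_path, text) pairs (hypothetical format).
        self.items = [(path, text_sentiment(text)) for path, text in tweets]
        self.tf = transforms.Compose([
            transforms.Resize(256),
            transforms.CenterCrop(224),
            transforms.ToTensor(),
            transforms.Normalize(mean=[0.485, 0.456, 0.406],
                                 std=[0.229, 0.224, 0.225]),
        ])

    def __len__(self):
        return len(self.items)

    def __getitem__(self, idx):
        path, label = self.items[idx]
        image = self.tf(Image.open(path).convert("RGB"))
        return image, label


def train_visual_classifier(tweets, epochs=1):
    # Fine-tune a standard pretrained backbone for binary sentiment polarity.
    model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
    model.fc = nn.Linear(model.fc.in_features, 2)
    loader = DataLoader(WeaklyLabeledTweets(tweets), batch_size=32, shuffle=True)
    opt = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)
    loss_fn = nn.CrossEntropyLoss()
    model.train()
    for _ in range(epochs):
        for images, labels in loader:
            opt.zero_grad()
            loss = loss_fn(model(images), labels)
            loss.backward()
            opt.step()
    return model
```

In the setting the abstract describes, the per-image labels obtained this way are noisy and only weakly correlated with the image content, but training on millions of such pairs still yields a CNN that usefully predicts the polarity of unseen images.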