Keywords: preference, preference learning, granularity, artificial intelligence, Transformer, machine learning, information retrieval
Authors
Chaoqun Wang, Qingxuan Chen, Peiyan Zhang
Identifier
DOI: 10.1109/cecit58139.2022.00009
Abstract
Next point-of-interest (POI) recommendation has become an important and challenging problem due to the complexity of check-in information and the variety of user behavior patterns. Most prior studies used RNN-based methods to model users' preferences for various POIs. Recently, researchers have integrated long- and short-term interests with some success; however, they fail to capture the influence of long-term preference on short-term preference, and the granularity of their preference modeling is too coarse. To address these limitations, we propose an end-to-end framework named Long- and Short-term Preference Learning with Transformer (LST), which models a user's preference for various places at both the long-term and the short-term level. Specifically, the multi-head self-attention mechanism of the Transformer is used to extract long-term preference. To learn a user's short-term preference, we use the spatial and temporal information of POIs to model two different behavior patterns. In addition, our model incorporates long-term preference as background information into short-term preference modeling to enhance its expressive power. Results from extensive experiments on two Foursquare check-in datasets show that our model outperforms state-of-the-art baselines.
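The long-term branch described above applies multi-head self-attention over a user's check-in embedding sequence. The paper's parameters, masking scheme, and pooling are not given here, so the following is only a minimal NumPy sketch of that mechanism with randomly initialized weights standing in for learned ones; the function name and the mean-pooling step are assumptions, not the authors' code.

```python
import numpy as np

def multi_head_self_attention(X, num_heads, rng=None):
    """Minimal multi-head self-attention over a check-in embedding
    sequence X of shape (seq_len, d_model). Hypothetical sketch: the
    projection weights are random stand-ins for learned parameters."""
    seq_len, d_model = X.shape
    assert d_model % num_heads == 0
    d_head = d_model // num_heads
    rng = rng or np.random.default_rng(0)
    # Query/key/value/output projections (random stand-ins).
    Wq, Wk, Wv, Wo = (rng.standard_normal((d_model, d_model)) / np.sqrt(d_model)
                      for _ in range(4))
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    heads = []
    for h in range(num_heads):
        s = slice(h * d_head, (h + 1) * d_head)
        # Scaled dot-product attention for this head.
        scores = Q[:, s] @ K[:, s].T / np.sqrt(d_head)
        weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
        weights /= weights.sum(axis=-1, keepdims=True)
        heads.append(weights @ V[:, s])
    # Concatenate heads and project back to d_model.
    return np.concatenate(heads, axis=-1) @ Wo

# Example: 10 check-ins embedded in 16 dimensions; a long-term
# preference vector could then be obtained by pooling (assumption).
X = np.random.default_rng(1).standard_normal((10, 16))
attended = multi_head_self_attention(X, num_heads=4)   # shape (10, 16)
long_term_pref = attended.mean(axis=0)                 # shape (16,)
```

Each head attends over the full check-in history, so distant visits can still influence the representation of every position, which is what lets this branch capture long-term preference.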