Reinforcement learning of route choice considering traveler’s preference

Authors
Xueqin Long, Jianxu Mao, Zhongbao Qiao, Peng Li, Wei He
Source
Journal: Transportation Letters: The International Journal of Transportation Research [Taylor & Francis]
Pages: 1-14 | Cited by: 3
Identifier
DOI: 10.1080/19427867.2023.2231689
Abstract

Travelers exhibit preferences during the route-choice decision-making process. These preferences affect decision outcomes and can be refined through continuous learning. To understand the influence of individual preference on travel behavior, two types of individual preference are considered in this paper: indifference preference and compulsive preference. Two updating mechanisms for compulsive preference are proposed to obtain the choice probabilities of all alternatives. Reinforcement learning models are established that integrate gain stimuli and loss stimuli based on expected utility. The Nguyen-Dupuis network is adopted for numerical simulation to study the updating process. Simulation results show that, compared with the traditional stochastic user equilibrium model, the equilibrium state reached under the preference learning mechanism is considerably more efficient and greatly reduces total travel time, which can be applied to urban traffic management. Personalized traffic guidance is an effective solution to traffic congestion in the future.

Keywords: route choice; reinforcement learning; generalized travel time; indifference threshold; compulsive preference

Acknowledgments
This work was supported by the National Key Research and Development Program (2019YFB1600500) and the Science Program of Shaanxi Province (2021JQ-276).

Disclosure statement
No potential conflict of interest was reported by the authors.

Data availability statement
No data, models, or code were generated or used during the study.

Funding
The work was supported by the Science Program of Shaanxi Province [2021JQ-276].
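The core mechanism the abstract describes, a traveler who reinforces perceived route utilities from experienced travel times and only switches when an alternative beats the current route by more than an indifference threshold, can be sketched as follows. This is an illustrative reconstruction, not the paper's actual model: the three-route choice set, mean travel times, learning rate, and threshold value are all hypothetical.

```python
import random

def choose_route(utilities, current, threshold):
    """Pick the route with the highest perceived utility, but keep the
    current route when no alternative beats it by more than the
    indifference threshold (small differences are ignored)."""
    best = max(range(len(utilities)), key=lambda r: utilities[r])
    if utilities[best] - utilities[current] <= threshold:
        return current
    return best

def update_utility(utilities, route, observed_time, alpha=0.2):
    """Reinforcement update: move the perceived utility of the traveled
    route toward the negative of the observed travel time."""
    utilities[route] += alpha * (-observed_time - utilities[route])

# Tiny day-to-day simulation on a hypothetical 3-route choice set.
random.seed(0)
mean_times = [30.0, 25.0, 40.0]   # true mean travel time of each route
utilities = [0.0, 0.0, 0.0]       # perceived utilities (optimistic start)
current = 0
for _ in range(200):
    current = choose_route(utilities, current, threshold=2.0)
    observed = random.gauss(mean_times[current], 2.0)  # noisy experience
    update_utility(utilities, current, observed)

print("settled on route", current, "with utilities", utilities)
```

The optimistic zero initialization makes the traveler sample each route before settling, and the indifference threshold suppresses switching over small day-to-day fluctuations; in this sketch the traveler ends up on the fastest route (index 1) rather than chasing noise.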