Keywords
Reinforcement learning, Markov decision process, Markov process, Group decision-making, Decision problem, Harmony degree, Scalability, Machine learning, Artificial intelligence, Algorithm, Computer science, Mathematics, Statistics, Psychology, Social psychology
Authors
Hossein Hassani, Roozbeh Razavi-Far, Mehrdad Saif, Enrique Herrera-Viedma
Identifier
DOI: 10.1109/TSMC.2022.3214221
Abstract
The number of discussion rounds and the harmony degree of the decision makers are two crucial efficiency measures in the design of the consensus-reaching process for group decision-making problems. Adjusting the feedback parameter and the importance weights of the decision makers in the recommendation mechanism strongly affects these measures. This work proposes novel and efficient reinforcement learning-based adjustment mechanisms to address the tradeoff between the aforementioned measures. To employ these adjustment mechanisms, we extract the dynamics of state transition from consensus models based on distributed trust functions and $Z$-numbers, converting the decision environment into a Markov decision process. Two independent reinforcement learning agents are then trained via a deep deterministic policy gradient algorithm to adjust the feedback parameter and the importance weights of the decision makers. The first agent is trained to reduce the number of discussion rounds while ensuring the highest possible harmony degree among the decision makers. The second agent merely speeds up the consensus-reaching process by adjusting the importance weights of the decision makers. Various experiments verify the applicability and scalability of the proposed feedback- and weight-adjustment mechanisms in different decision environments.
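The abstract's core construction is casting the consensus-reaching process as a Markov decision process: the state encodes how far each decision maker's opinion sits from the collective opinion, the action is the adjustment (feedback parameter or importance weights), and the reward trades off discussion rounds against harmony degree. The sketch below is a minimal, hypothetical Python rendering of that idea only; the state encoding, update rule, and reward shaping here are illustrative assumptions, not the paper's exact formulation, which relies on distributed trust functions and $Z$-numbers and trains DDPG agents rather than using a fixed policy.

```python
import numpy as np

class ConsensusEnv:
    """Minimal consensus-reaching MDP sketch (hypothetical, not the paper's model).

    State:  each expert's deviation from the collective opinion.
    Action: the feedback parameter in [0, 1] that moves dissenting
            opinions toward the collective opinion.
    Reward: penalizes extra discussion rounds and large forced opinion
            shifts (a crude proxy for lost harmony degree).
    """

    def __init__(self, n_experts=5, threshold=0.05, seed=0):
        self.n = n_experts
        self.threshold = threshold                      # consensus threshold
        self.rng = np.random.default_rng(seed)

    def reset(self):
        self.opinions = self.rng.uniform(0, 1, self.n)  # initial expert opinions
        self.weights = np.full(self.n, 1.0 / self.n)    # importance weights
        return self._state()

    def _state(self):
        collective = self.weights @ self.opinions
        return self.opinions - collective               # deviations as the state

    def step(self, feedback):
        """One discussion round with a given feedback parameter in [0, 1]."""
        collective = self.weights @ self.opinions
        shift = feedback * (collective - self.opinions) # recommended adjustment
        self.opinions += shift
        done = np.max(np.abs(self._state())) < self.threshold
        # -1 per round pushes toward fewer rounds; the shift penalty stands in
        # for harmony degree (experts dislike large forced opinion changes).
        reward = -1.0 - np.sum(np.abs(shift))
        return self._state(), reward, done

# Placeholder policy: in the paper, a trained DDPG actor maps the state to the
# feedback parameter; a constant value is used here only to exercise the MDP.
env = ConsensusEnv()
state, done, rounds = env.reset(), False, 0
while not done and rounds < 50:
    state, reward, done = env.step(feedback=0.3)
    rounds += 1
print(f"consensus reached in {rounds} rounds")
```

The second agent described in the abstract would act analogously on `self.weights` (renormalizing them each round) instead of the feedback parameter, which is why the two adjustment mechanisms can be trained as independent agents over the same environment dynamics.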