Journal: Advances in Computational Intelligence and Robotics Book Series. Date: 2025-07-25. Volume/Issue: 149-188
Identifier
DOI:10.4018/979-8-3373-1419-8.ch006
Abstract
The increasing integration of AI into economic, social, and political processes is creating new challenges for the governance of autonomous AI agents, and traditional governance models are reaching their limits. With AI agents, motivations based on monetary incentives or personal interests no longer apply; instead, new forms of information asymmetry arise from the technological functioning of AI. This gives rise to key challenges for AI governance: balancing autonomy and control, ensuring transparency despite AI decisions that are difficult to interpret, resolving ambiguities in accountability and liability, and addressing ethical and societal risks such as algorithmic discrimination and manipulation. To address these challenges comprehensively, the paper proposes a hybrid governance model for AI agents. This model includes technical transparency mechanisms for the traceability of decision-making processes, regulatory frameworks for the disclosure of decision-making parameters, a risk-based liability model, and dynamic adaptation of rules through continuous evaluation of governance mechanisms.