Transparency (behavior)
Perspective (graphics)
Management science
Engineering ethics
Computer science
Design theory
Knowledge management
Design elements and principles
Fragmentation (computing)
Design science
Grounded theory
Applications of artificial intelligence
Risk analysis (engineering)
Epistemology
Philosophy of technology
Artificial intelligence
Research design
Authors
Andrea Rivera,Kaveh Abhari,Bo Xiao
Abstract
Rapid advancements in artificial intelligence (AI) have heightened the need for ethical AI design principles, positioning responsible AI at the forefront across academia, industry, and policy spheres. Despite the plethora of guidelines, responsible AI faces challenges due to fragmentation and the lack of a cohesive explanatory theory guiding research and practice. Existing AI literature frequently fixates on responsible AI attributes within usage contexts, operating under the misapprehension that responsibility can be achieved solely through specific system attributes, responsible algorithms, or minimization of harm. This narrow focus neglects the mechanisms that interlace design decisions with the realization of responsible AI, thereby undervaluing their profound significance. Similarly, information systems literature predominantly emphasizes the operation and usage of these systems, often bypassing the opportunity to weave ethical principles into AI design from its inception. In response, this study adopted a grounded theory approach to theorize responsible AI design from the perspective of AI designers. The authenticity, control, transparency (ACT) theory of responsible AI design emerged as a result. This theory posits that authenticity, control, and transparency are pivotal mechanisms in responsible AI design. These mechanisms ensure that ethical design decisions across three domains—architecture, algorithms, and affordances—translate into responsible AI. The ACT theory offers a parsimonious yet practical foundation for guiding research and practice, aligning ethical AI design with technological advancements and fostering accountability, including algorithmic accountability.