For AI-driven companies, growing interest from diverse stakeholders has made awareness of the urgency of applying AI responsibly essential. Responsible artificial intelligence (RAI) emerged as a practice that guides the design, development, deployment, and use of AI systems to ensure they benefit users and those affected by the systems’ outcomes. This benefit is achieved through trustworthy models and strategies that embed ethical principles, ensuring compliance with regulations and standards and fostering long-term trust. However, RAI suffers from a lack of standardization regarding which principles to adopt, what they mean, and how they can be operationalized.
This survey aims to bridge the gap between principles and practice by studying the different approaches taken in the literature and proposing a foundational framework.