Rapid advancements in artificial intelligence (AI) have heightened the need for ethical AI design principles, positioning responsible AI at the forefront of academia, industry, and policy. Despite a plethora of guidelines, responsible AI remains hampered by fragmentation and the lack of a cohesive explanatory theory to guide research and practice. The existing AI literature frequently fixates on responsible AI attributes within usage contexts, operating under the misapprehension that responsibility can be achieved solely through specific system attributes, responsible algorithms, or the minimization of harm. This narrow focus overlooks the mechanisms that link design decisions to the realization of responsible AI, thereby understating the significance of those decisions. Similarly, the information systems literature predominantly emphasizes the operation and use of these systems, often bypassing the opportunity to embed ethical principles in AI design from its inception. In response, this study adopted a grounded theory approach to theorize responsible AI design from the perspective of AI designers. The resulting authenticity, control, transparency (ACT) theory of responsible AI design posits that authenticity, control, and transparency are the pivotal mechanisms through which ethical design decisions across three domains (architecture, algorithms, and affordances) translate into responsible AI. The ACT theory offers a parsimonious yet practical foundation for guiding research and practice, aligning ethical AI design with technological advancement and fostering accountability, including algorithmic accountability.