This paper investigates how trust in artificial intelligence (AI) influences its adoption in organizational settings, emphasizing the dynamic nature of attitudes towards AI. Using qualitative data from 29 interviews with AI developers, managers, and users, the study identifies three attitudinal positions: positive, negative, and instrumental. The findings reveal that attitudes towards AI are dynamic, often shifting from negative or instrumental to positive as individuals gain knowledge of and experience with AI technologies. For example, we found evidence that instrumental attitudes, which require evidence before trust is established, become more positive as people grow more familiar with AI. Negative attitudes, rooted in perceived threats such as job displacement or privacy concerns, tend to shift when people begin to realize AI's benefits. Building on theories of organizational trust and trust in AI, this paper extends the understanding of how AI developers, managers, and users differ in the way they develop trust in AI.

• Attitudes towards AI shift in a positive direction as experience with AI increases.
• Trust in AI varies across roles: developers, managers, and users approach trust differently.
• Over time, people with positive attitudes toward AI develop a calibrated trust, grounded in a realistic view of AI's limitations.
• Organizational support and visible benefits are key to fostering trust and AI adoption.
• Trust in AI is crucial for adoption, especially in high-risk sectors like healthcare and finance.