Understanding how trust in artificial intelligence evolves is crucial for predicting human behavior in AI-enabled environments. While existing research focuses on the factors driving initial acceptance, the temporal dynamics of AI trust remain poorly understood. This study develops a temporal trust dynamics framework proposing three phases: trust formation through accuracy cues, single-error shock, and post-error repair through explanations. Two experiments in financial advisory contexts tested this framework. Study 1 (N = 189) compared trust in human versus algorithmic advisors, while Study 2 (N = 294) traced trust trajectories across three advisory rounds, manipulating advisor accuracy and post-error explanations. Results demonstrate three temporal patterns. First, participants initially favored algorithmic advisors, supporting “algorithmic appreciation.” Second, a single advisory error produced a substantial trust decline (η² = 0.141), demonstrating acute sensitivity to performance failures. Third, post-error explanations significantly facilitated trust recovery, with evidence that repaired trust exceeded pre-error baseline levels. Financial literacy moderated these patterns: higher-expertise users showed a sharper decline after errors and stronger recovery following explanations. These findings reveal that AI trust follows predictable temporal patterns distinct from interpersonal trust, exhibiting heightened error sensitivity yet remaining amenable to repair through well-designed explanatory interventions. The findings theoretically integrate algorithmic appreciation and aversion phenomena and offer practical guidance for designing inclusive AI systems.