As automated vehicles (AVs) become more prevalent on complex and dynamic roads, they may encounter unexpected challenges and fail to meet user expectations. For example, AVs may misidentify road elements because of technical limitations in perception and understanding. Such mistakes can damage the trust relationship between AVs and their users. To address this, AVs are expected to employ social strategies to repair these damaged relationships. This study examines the effectiveness of apologies—a common social strategy in human relationships—in repairing trust after an AV makes a false-alarm error. In our pre-registered experiment, Chinese drivers (N = 291) watched two online videos: one demonstrating the AV’s reliable operation and the other showing the AV incorrectly identifying a non-existent pedestrian, leading to an unnecessary stop. Participants were randomly assigned to one of three groups: no repair (the mistake was ignored), an apology delivered in a male Siri voice, or an apology delivered in a female Siri voice. Surprisingly, the AV’s apology, regardless of the voice’s gender, negatively influenced trust repair. The apology damaged perceptions of the AV’s ability but did not affect perceptions of its integrity or benevolence. We suggest that apologizing may heighten drivers’ concerns about AVs’ competence. Our findings highlight a potential side effect of using apologies in human–automation interaction.