Misinformation spreads rapidly on social media, while traditional countermeasures struggle to balance effectiveness, scalability, and free expression. Many platforms are now experimenting with crowdsourced fact-checking—systems that rely on users’ collective judgment to identify and annotate misleading content. This paper investigates the efficacy of such systems in curbing misinformation, focusing on Community Notes, a pioneering crowdsourced fact-checking program on Twitter/X. Using a regression discontinuity design, we find that publicly displaying community notes significantly increases and accelerates the voluntary retraction of misleading tweets, demonstrating the viability of crowd-based fact-checking as an alternative to professional fact-checking and forcible content removal. The effect is driven primarily by authors’ reputational concerns and the social pressure that arises when corrections are publicly visible. Our findings carry meaningful implications for practice and policy. Individuals can play an active role by contributing to crowdchecking, strengthening collective information integrity. Platforms should adopt transparent, community-based systems such as Community Notes as scalable, less controversial alternatives to forcible content removal. Policymakers can support these initiatives through regulatory guidance that promotes transparency and accountability.