Background: We conducted an online survey of Canadian healthcare providers through REDCap to assess their level of recognition and support of caregivers under the age of 25. The survey was distributed through various channels (listservs, newsletters, social media). Distribution through social media resulted in an immediate uptick of responses in rapid succession, despite the use of an institutional REDCap-administered CAPTCHA, raising concerns about their validity. This project outlines an evidence-informed approach to filtering out fraudulent survey responses, aimed at researchers and anyone interested in conducting surveys where recruitment may involve social media. The presentation will provide practical strategies and tools to help researchers identify and eliminate fraudulent responses and improve the reliability of their findings.

Approach: We conducted a literature review to determine the best course of action after detecting potentially fraudulent responses. Our filter strategy involved several steps: 1) identifying postal codes that did not match the listed province/territory, or an improbable age at the start of clinical practice; 2) flagging clusters of survey responses completed within two minutes of each other; 3) a 'speed bump' question to detect inattentive or careless respondents; 4) identifying inconsistent responses to closed-ended questions (e.g., respondents indicated that they do not encounter young caregivers, yet reported supporting them in clinical practice); and 5) manual review of open-ended responses for AI-like structure, such as "noun: description," or similar/duplicate answers.

Results: The current number of completed survey responses is 656, with our algorithm identifying more than 283 (77.5%) of responses as fraudulent. A balanced approach between automated and manual processes was needed to address concerns about artificial intelligence-generated responses.
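The automated portion of the filter steps above, postal-code/province mismatches (step 1) and two-minute response clusters (step 2), could be sketched as follows. This is a minimal illustration, not the exact rules used in the study: the prefix map and the clustering thresholds are assumptions for demonstration.

```python
from datetime import datetime, timedelta

# First letter of a Canadian postal code (the forward sortation area)
# maps to a province/territory; X covers both NT and NU.
POSTAL_PREFIX = {
    "A": {"NL"}, "B": {"NS"}, "C": {"PE"}, "E": {"NB"},
    "G": {"QC"}, "H": {"QC"}, "J": {"QC"},
    "K": {"ON"}, "L": {"ON"}, "M": {"ON"}, "N": {"ON"}, "P": {"ON"},
    "R": {"MB"}, "S": {"SK"}, "T": {"AB"}, "V": {"BC"},
    "X": {"NT", "NU"}, "Y": {"YT"},
}

def postal_mismatch(postal_code: str, province: str) -> bool:
    """Flag a response whose postal code cannot belong to the listed province."""
    valid = POSTAL_PREFIX.get(postal_code.strip().upper()[:1], set())
    return province not in valid

def timestamp_clusters(timestamps, window_minutes=2, min_cluster=3):
    """Flag any run of >= min_cluster submissions completed within the window."""
    ts = sorted(timestamps)
    window = timedelta(minutes=window_minutes)
    flagged = set()
    for i in range(len(ts) - min_cluster + 1):
        if ts[i + min_cluster - 1] - ts[i] <= window:
            flagged.update(ts[i:i + min_cluster])
    return flagged
```

Flags like these are best treated as triggers for manual review rather than automatic exclusion, consistent with the balanced automated/manual approach described above.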
As a result, we significantly narrowed the pool of survey responses, and the remaining data were reliable and valid for analysis. This algorithm builds on the work of several recent articles from research teams similarly navigating a rapid rise in fraudulent responses. Our work indicates that survey respondents may be using AI to complete open-ended questions, raising alarms for those considering online survey tools.

Implication: The key learning from this project is the importance of an evolving strategy to filter out bots. A multi-faceted approach, combining automated filters and manual reviews, is essential for identifying and eliminating potentially fraudulent responses. Online survey research is an important avenue for reaching a wide audience of respondents; however, researchers and leaders interested in online recruitment should consider incorporating these strategies into their questionnaires. Moving forward, we plan to publish the survey data, providing valuable insights into the recognition and support of young caregivers in Canada. Knowledge sharing and continuing collaboration with researchers across Canada will support the ongoing refinement of a bot detection strategy to maintain the integrity of research data. Researchers may also consider collaborating with their academic institutions to identify the steps needed to prevent fraudsters from completing surveys hosted on institutional survey platforms (e.g., REDCap). Survey platforms hosted within institutions may be able to further verify respondents' validity, such as by implementing complex CAPTCHA features or tracking anonymized IP address duplicates.
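Anonymized IP duplicate tracking, as suggested above, could work along these lines: each address is salted and hashed one-way, so repeat submissions remain detectable without the research team ever storing raw IPs. The function names, salt handling, and threshold here are illustrative assumptions, not a description of any specific platform's implementation.

```python
import hashlib
from collections import Counter

def anonymize_ip(ip: str, salt: str) -> str:
    """One-way salted hash: duplicates stay detectable, raw IPs are never stored."""
    return hashlib.sha256((salt + ip).encode("utf-8")).hexdigest()

def duplicate_hashes(hashed_ips, threshold=2):
    """Return the set of hashes that appear at least `threshold` times."""
    counts = Counter(hashed_ips)
    return {h for h, n in counts.items() if n >= threshold}
```

A per-study secret salt prevents the hashes from being reversed by hashing candidate addresses, while still allowing within-study duplicate detection.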