Generative artificial intelligence (GenAI) tools based on large language models are quickly reshaping how researchers conduct surveys and experiments. From reviewing the literature and designing instruments to administering studies, coding data, and interpreting results, these tools offer substantial opportunities to improve research productivity and advance methodology. Yet with this potential comes a critical challenge: Researchers often use these systems without fully understanding how they work. This article aims to provide a practical guide for effective and responsible GenAI use in primary research. The authors begin by explaining how GenAI systems operate, highlighting the gap between their intuitive interfaces and the underlying model architectures. They then examine use cases throughout the research process, noting both the opportunities and the associated risks at each stage. Throughout the review, the authors offer flexible best-practice tips and rules for effective and responsible GenAI use, particularly for ensuring the validity of GenAI coding of unstructured data (i.e., open-ended responses). The hope is that these guidelines will help researchers integrate GenAI into their workflows in a transparent, rigorous, and ethically sound manner. An accompanying website (https://questionableresearch.ai) provides supporting materials, including reproducible coding templates in R and SPSS and sample preregistrations.
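As a minimal illustration of the kind of validity check referred to above, GenAI-assigned codes for open-ended responses can be compared against a human coder on a validation subsample using agreement statistics such as Cohen's kappa. The sketch below uses base R only; the response categories and data are hypothetical and are not taken from the article or its templates.

```r
# Hypothetical validation subsample: human vs. GenAI codes for open-ended responses
human <- c("positive", "negative", "neutral", "positive", "negative",
           "neutral", "positive", "positive", "negative", "neutral")
genai <- c("positive", "negative", "neutral", "positive", "neutral",
           "neutral", "positive", "negative", "negative", "neutral")

# Align both coders on the same set of categories
levels_all <- c("negative", "neutral", "positive")
human <- factor(human, levels = levels_all)
genai <- factor(genai, levels = levels_all)

# Percent agreement
agreement <- mean(human == genai)

# Cohen's kappa computed from the confusion table
tab <- table(human, genai)
n   <- sum(tab)
po  <- sum(diag(tab)) / n                      # observed agreement
pe  <- sum(rowSums(tab) * colSums(tab)) / n^2  # agreement expected by chance
kappa <- (po - pe) / (1 - pe)

cat(sprintf("Agreement: %.2f  Cohen's kappa: %.2f\n", agreement, kappa))
```

With the illustrative data above, the script reports 80% raw agreement and a kappa of about 0.70; in practice, researchers would set and preregister their own agreement thresholds before relying on GenAI-generated codes.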