The rapid adoption of large language models (LLMs) in healthcare has created opportunities for innovation but has also raised critical concerns about scientific rigor. This article provides a toolbox for clinicians, researchers, and reviewers involved in LLM studies, highlighting the importance of methodologic transparency, reproducibility, and ethical considerations. It addresses foundational aspects of how LLMs function, including their training data, inherent biases, and black-box nature. Prompt engineering strategies for understanding and optimizing model interaction are reviewed, with emphasis on the need to evaluate these methods systematically. Key challenges in interpreting model outputs are discussed, with a focus on explainability and fairness. The article also stresses clear reporting of computational resources and environmental impacts, as well as the risk that rapid model iteration will render study findings obsolete. Because LLMs evolve faster than traditional peer-review practices can accommodate, new guidelines and rigorous qualitative assessments are needed to ensure validity, fairness, and clinical utility. Recommendations to strengthen reporting and reproducibility standards are provided.