MedQA-USMLE is a challenging biomedical question answering (BQA) task, as its questions typically require multi-hop reasoning. To solve it, BQA systems must possess substantial professional medical knowledge and strong medical reasoning capabilities. While state-of-the-art large language models, such as Med-PaLM 2, have overcome this challenge, smaller language models (SLMs) still struggle with it. To bridge this gap, we introduce a multi-hop medical knowledge infusion (MHMKI) procedure that endows SLMs with medical reasoning capabilities. Specifically, we categorize MedQA-USMLE questions into distinct reasoning types, then create pre-training instances tailored to each question type using the semi-structured information and hyperlinks of Wikipedia articles. To enable SLMs to efficiently capture the multi-hop knowledge embedded in these instances, we design a reasoning-chain masked language model for further pre-training of BERT models. Moreover, we transform these pre-training instances into a combined question answering dataset for intermediate fine-tuning of GPT models. We evaluate MHMKI with six SLMs (three BERT models and three GPT models) on five datasets spanning three BQA tasks. Results show that MHMKI benefits SLMs on nearly all tasks, especially those requiring multi-hop reasoning; for instance, MedQA-USMLE accuracy increases by 5.3% on average.
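To make the reasoning-chain masking idea concrete, a minimal sketch follows. It assumes (as an illustration only, not the paper's actual implementation) that entities lying on a multi-hop reasoning chain have already been located as token spans; unlike standard MLM, which masks random tokens, only chain-entity tokens are masked, so the model must recover them from the surrounding context. The function name, the example sentence, and the spans are hypothetical.

```python
MASK = "[MASK]"

def chain_mask(tokens, chain_spans):
    """Mask only tokens inside the given (start, end) entity spans.

    tokens:      list of word-piece strings
    chain_spans: half-open spans covering entities on the reasoning chain
    Returns (masked_tokens, labels), where labels[i] holds the original
    token at masked positions and None elsewhere (positions the MLM
    loss would ignore).
    """
    masked = list(tokens)
    labels = [None] * len(tokens)
    for start, end in chain_spans:
        for i in range(start, end):
            labels[i] = tokens[i]   # prediction target
            masked[i] = MASK        # hide the chain entity
    return masked, labels

# Hypothetical two-hop chain: aspirin -> COX-1 -> thromboxane A2.
tokens = ["aspirin", "inhibits", "cox", "-", "1", ",", "reducing",
          "thromboxane", "a2", "synthesis"]
spans = [(0, 1), (2, 5), (7, 9)]
masked, labels = chain_mask(tokens, spans)
```

Here `masked` keeps non-chain tokens such as "inhibits" and "synthesis" visible, forcing the model to infer the masked chain entities from them.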