The rapid development of large language models (LLMs) has accelerated research into applying artificial intelligence (AI) to domains such as medical question answering and clinical decision support. However, LLMs face substantial limitations in medical contexts: difficulty with specialized terminology and complex contextual information, hallucination (i.e., generating factually incorrect responses), and the black-box nature of their reasoning. To address these issues, methods such as retrieval-augmented generation (RAG) and its graph-based variant, GraphRAG, have been proposed to incorporate external knowledge into LLMs. Nonetheless, these approaches often rely heavily on external resources and increase system complexity. In this study, we introduce MedSumGraph, a medical question-answering system that enhances GraphRAG by integrating structured medical knowledge summaries and optimized prompt designs. Our method enables LLMs to better interpret domain-specific knowledge without additional training, and it improves the reliability and interpretability of responses by embedding factual evidence and graph-based reasoning directly into the generation process. MedSumGraph achieves competitive performance on two of eight multiple-choice medical QA benchmarks, including MedQA (USMLE), outperforming closed-source LLMs and domain-specific foundation models. Moreover, it generalizes effectively to open-domain QA tasks, yielding significant gains in commonsense reasoning and in assessing the truthfulness of answers. These findings demonstrate the potential of structured summarization and graph-based reasoning for enhancing the trustworthiness and versatility of LLM-driven medical AI systems.