Medical report generation, which extracts pathological information from medical images and autonomously produces diagnostic text, aims to alleviate the workload of medical experts and offer auxiliary support in diagnosis. Although preliminary progress has been made, several limitations persist, including a lack of specificity in the extracted visual features, insufficient consideration of cross-modal alignment, and the extensive preparatory work required to build prior knowledge. To address these issues, in this paper we propose a novel deep label-guided graph convolutional network for medical report generation, which utilizes disease labels to guide the extraction of pathological information from medical images. Specifically, we first construct a graph convolutional network that guides the model to extract specific visual features based on disease labels, allowing us to selectively extract disease-specific information residing in medical images. Then, we develop a cross-modal alignment module that aligns medical images, diagnostic reports, and disease labels, enabling more accurate generation with more precise descriptions. In addition, we build a pre-constructed relational matrix that guides the report generation model to learn the relationship between visual features and disease types with minimal additional workload. Extensive experiments on three benchmark datasets, i.e., IU X-ray, MIMIC-CXR, and COV-CTR, demonstrate that the proposed method outperforms recent state-of-the-art medical report generation methods. Our method achieves a 9.2% improvement in BLEU-4 on the IU X-ray dataset, and both BLEU-4 and CIDEr improve by 6.31% on the MIMIC-CXR dataset. Additionally, the results show that the method can be easily applied and extended to report generation for medical images of different modalities.
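To illustrate the label-guided feature extraction described above, the following minimal PyTorch sketch shows one way a graph convolution over disease-label nodes, combined with a pre-constructed relational (adjacency) matrix, can attend to image regions and produce disease-specific visual features. The class name, dimensions, attention scheme, and the toy adjacency matrix are illustrative assumptions, not the authors' exact implementation.

```python
# Illustrative sketch (not the authors' code): a disease-label-guided graph
# convolution that refines visual features using a pre-constructed relational
# matrix. Names, dimensions, and the adjacency construction are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class LabelGuidedGCN(nn.Module):
    def __init__(self, num_labels: int, feat_dim: int, adj: torch.Tensor):
        super().__init__()
        # Pre-constructed relational matrix over disease labels (num_labels x num_labels),
        # e.g. derived from label co-occurrence statistics; fixed, not learned.
        self.register_buffer("adj", adj)
        # Learnable embedding per disease label, used as graph node features.
        self.label_embed = nn.Parameter(torch.randn(num_labels, feat_dim) * 0.02)
        self.gcn_fc = nn.Linear(feat_dim, feat_dim)

    def forward(self, visual_feats: torch.Tensor) -> torch.Tensor:
        # visual_feats: (batch, num_regions, feat_dim) region features from a visual backbone.
        # 1) One graph-convolution step over the label graph: A @ X @ W.
        node_feats = F.relu(self.gcn_fc(self.adj @ self.label_embed))  # (num_labels, feat_dim)
        # 2) Cross-attention: each label node attends to image regions, so the
        #    pooled visual features become disease-specific.
        attn = torch.softmax(
            torch.einsum("ld,brd->blr", node_feats, visual_feats) / visual_feats.size(-1) ** 0.5,
            dim=-1,
        )  # (batch, num_labels, num_regions)
        label_guided_feats = torch.einsum("blr,brd->bld", attn, visual_feats)
        return label_guided_feats  # (batch, num_labels, feat_dim), to be fed to the report decoder


if __name__ == "__main__":
    num_labels, feat_dim = 14, 512
    # Toy normalized adjacency standing in for the pre-constructed relational matrix.
    adj = torch.eye(num_labels) * 0.5 + torch.full((num_labels, num_labels), 0.5 / num_labels)
    module = LabelGuidedGCN(num_labels, feat_dim, adj)
    feats = torch.randn(2, 49, feat_dim)  # e.g. 7x7 grid features for 2 images
    print(module(feats).shape)  # torch.Size([2, 14, 512])
```

The design choice of keeping the relational matrix fixed (rather than learned) reflects the stated goal of injecting prior knowledge with minimal additional workload; the label-conditioned attention output can then serve as the visual input to a standard encoder-decoder report generator.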