In fully heterogeneous federated learning, clients differ significantly in both model architecture and local data distribution (Non-IID), and the limited content that can be exchanged in such scenarios hinders joint learning across client models. In this setting, the global knowledge that the server builds by naively aggregating client logits is essentially a blurred representation containing substantial noise and information loss, and it struggles to effectively guide client model updates. To address these problems, this paper proposes FedMkd, a heterogeneous federated learning framework based on multi-knowledge-distillation fusion, designed to cope with the multiple challenges of heterogeneous environments. FedMkd adopts a class-grained logits interaction architecture (CLIA) and introduces an efficient knowledge-sharing mechanism. It innovatively integrates two knowledge distillation methods: 1) Temperature-Adaptive Knowledge Distillation (TAKD), which adaptively adjusts the distillation temperature to give the teacher and student models differentiated temperatures, maximizing knowledge transfer between them; and 2) Class-Related Knowledge Distillation (CRKD), which introduces a batch-level sample-correlation loss to reduce over-reliance on specific samples or classes and improve the model's grasp of overall data characteristics. Extensive experiments on four public datasets show that, across a variety of data- and model-heterogeneous scenarios, FedMkd outperforms the comparison methods while reducing communication overhead by more than an order of magnitude.
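The abstract does not give the exact formulas for TAKD or CRKD, but the two losses can be sketched under plausible assumptions: here the adaptive temperature rule (`adaptive_temperature`) and the cosine-similarity formulation of the batch-level correlation loss are illustrative choices, not the paper's definitions.

```python
import numpy as np

def softmax(logits, T):
    """Temperature-scaled softmax along the class axis."""
    z = logits / T
    z = z - z.max(axis=1, keepdims=True)  # subtract row max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def adaptive_temperature(logits, base_T=4.0):
    """Hypothetical adaptive rule (not from the paper): scale a base
    temperature by the logit spread, so each model gets its own T."""
    spread = logits.std()
    return base_T * spread / (spread + 1.0) + 1.0

def takd_loss(teacher_logits, student_logits):
    """TAKD sketch: KL divergence between teacher and student soft
    targets, each softened with its own adaptively chosen temperature."""
    Tt = adaptive_temperature(teacher_logits)
    Ts = adaptive_temperature(student_logits)
    p = softmax(teacher_logits, Tt)
    q = softmax(student_logits, Ts)
    return float(np.mean(np.sum(p * (np.log(p + 1e-12) - np.log(q + 1e-12)), axis=1)))

def crkd_loss(teacher_logits, student_logits):
    """CRKD sketch: match pairwise cosine similarities among the samples
    in a batch, so the student learns inter-sample structure instead of
    over-fitting individual samples or classes."""
    def cosine_matrix(x):
        x = x / (np.linalg.norm(x, axis=1, keepdims=True) + 1e-12)
        return x @ x.T
    diff = cosine_matrix(teacher_logits) - cosine_matrix(student_logits)
    return float(np.mean(diff ** 2))

# Example: a batch of 8 samples over 10 classes with random logits.
rng = np.random.default_rng(0)
teacher = rng.normal(size=(8, 10))
student = rng.normal(size=(8, 10))
total = takd_loss(teacher, student) + 0.5 * crkd_loss(teacher, student)
```

In this sketch the client would minimize `total` (the 0.5 weight is arbitrary) against the class-grained logits received through CLIA; the actual weighting and aggregation are described in the paper body.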