Architecture
Computer Science
Scheduling (production processes)
Computer architecture
Systems engineering
Engineering
Operations management
Art
Visual arts
Identification
DOI:10.2478/amns-2025-0156
Abstract Current library digitization management suffers from a mismatch between resource utilization and service performance. In this paper, we design a decomposition-based ARIMA-LSTM resource prediction model. The model dynamically adjusts the scheduling threshold by predicting the overall load of the cluster and the migration failure rate of pods, and uses the utilization of each resource metric on high-load nodes as the weight of each pod's contribution. Target nodes are then selected according to the resource type under the heaviest load, and a queue of low-load nodes is maintained and kept up to date for each resource metric type. This optimizes the scheduling of library resources. Experiments show that Kubernetes's default resource scheduling strategy produces uneven overall CPU and memory utilization across nodes Node1~Node4. The baseline model IGAACO schedules resources slightly better than the Kubernetes default strategy, but severe local load imbalance remains. In contrast, the resource scheduling model proposed in this paper, which uses a neural network algorithm, balances the load of every node in the cluster and improves its load capacity. After reallocating accesses, the dynamic scheduling model reduces the cluster's overall latency to some extent, improving efficiency and achieving better load balancing.
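The pod-migration step sketched in the abstract (weighting pods by the utilization of each resource metric on the high-load node, then choosing a target from a low-load queue maintained per resource type) can be illustrated in a minimal Python sketch. All names, metrics, and values here are hypothetical assumptions, not the paper's actual implementation:

```python
# Hypothetical sketch of the scheduling step described above: pods on a
# high-load node are weighted by each resource metric's utilization, and
# a target node is drawn from a per-resource-type low-load queue.

def pod_weight(pod_usage, node_utilization):
    """Weight a pod's migration priority: the utilization of each resource
    metric on the (high-load) source node scales that pod's usage of it."""
    return sum(node_utilization[r] * pod_usage.get(r, 0.0)
               for r in node_utilization)

def pick_target(dominant_resource, low_load_queues):
    """Choose a node from the low-load queue kept for the resource type
    that is most stressed on the source node (queue sorted ascending)."""
    queue = low_load_queues.get(dominant_resource, [])
    return queue[0] if queue else None

# Illustrative cluster state (values are made up).
node_util = {"cpu": 0.92, "memory": 0.61}          # high-load source node
pods = {
    "pod-a": {"cpu": 0.30, "memory": 0.10},
    "pod-b": {"cpu": 0.05, "memory": 0.40},
}
low_load_queues = {"cpu": ["node3", "node2"], "memory": ["node2"]}

# Migrate the pod contributing most to the overloaded metrics first.
victim = max(pods, key=lambda p: pod_weight(pods[p], node_util))
dominant = max(node_util, key=node_util.get)        # most stressed metric
target = pick_target(dominant, low_load_queues)
```

With these sample numbers, `pod-a` dominates the CPU-weighted score, CPU is the dominant metric, so the pod would migrate to the head of the CPU low-load queue. The real model additionally drives the threshold from the ARIMA-LSTM load forecast, which this sketch omits.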