Federated learning (FL) has emerged as an important distributed machine learning paradigm: it enables clients to collaboratively train a global model without sharing their raw data. Traditional FL typically assumes that each client's data is fixed and static. In real-world scenarios, however, data usually arrives incrementally, so the data domain expands dynamically over time. In this work, we study catastrophic forgetting in Federated Incremental Learning (FIL) with a focus on limited training resources: edge clients may lack the storage to retain all past data or the computational budget to run complex algorithms designed for server-based environments. We propose Re-Fed+, a general and low-cost framework for FIL that helps clients cache important samples for replay. Specifically, when a new task arrives, each client first caches selected samples from previous tasks according to their global and local significance, and then trains its local model on both the cached samples and the samples of the new task. Theoretically, we analyze how effectively Re-Fed+ identifies significant samples for replay to mitigate catastrophic forgetting. Empirically, we show that Re-Fed+ achieves competitive performance compared with state-of-the-art methods.
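
To make the replay mechanism concrete, the following is a minimal, framework-agnostic sketch of the caching-and-training pattern described above. The scoring function `significance`, the helper names, and the toy training routine are illustrative assumptions rather than the actual Re-Fed+ criteria; only the overall pattern (score previously seen samples, keep the most significant ones within a storage budget, and train on the union of cached and new-task samples) follows the description above.

```python
import heapq
import random
from typing import Callable, List, Tuple

# A sample is a (features, label) pair; placeholder types for illustration only.
Sample = Tuple[List[float], int]


def select_replay_cache(
    previous_samples: List[Sample],
    significance: Callable[[Sample], float],
    budget: int,
) -> List[Sample]:
    """Keep the `budget` most significant previous samples (hypothetical scoring)."""
    return heapq.nlargest(budget, previous_samples, key=significance)


def local_training_round(
    new_task_samples: List[Sample],
    previous_samples: List[Sample],
    significance: Callable[[Sample], float],
    budget: int,
    train_step: Callable[[List[Sample]], None],
) -> List[Sample]:
    # 1) Cache the most significant samples from earlier tasks within the storage budget.
    cache = select_replay_cache(previous_samples, significance, budget)
    # 2) Train the local model on the cached samples together with the new task's samples.
    train_step(cache + new_task_samples)
    # The cache becomes the client's stored history for the next incremental task.
    return cache


if __name__ == "__main__":
    # Toy data plus dummy scoring/training routines, purely for illustration.
    old = [([random.random()], i % 3) for i in range(100)]
    new = [([random.random()], i % 3) for i in range(20)]
    dummy_score = lambda s: s[0][0]  # stands in for a combined global/local significance score
    dummy_train = lambda batch: print(f"training on {len(batch)} samples")
    local_training_round(new, old, dummy_score, budget=30, train_step=dummy_train)
```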