Authors
Kunlun Xu,Z. Liu,Xu Zou,Yuxin Peng,Jiahuan Zhou
Identifier
DOI:10.1109/tpami.2025.3572468
Abstract
Lifelong person re-identification (LReID) aims to learn step by step from streaming data sources, and suffers from the catastrophic forgetting problem. In this paper, we investigate the exemplar-free LReID setting, where no exemplars from previous steps are available during training on a new step. Existing exemplar-free LReID methods primarily adopt knowledge distillation to transfer knowledge from the old model to the new one without selection, inevitably introducing erroneous and detrimental information that hinders the learning of new knowledge. Furthermore, in the absence of old data, not all critical knowledge can be transferred, so undistilled knowledge is permanently lost. To address these limitations, we propose a novel exemplar-free LReID method named Long Short-Term Knowledge Decomposition and Consolidation (LSTKC++). Specifically, an old knowledge rectification mechanism is developed to rectify the old model's predictions based on the annotations of the new data, ensuring that only correct knowledge is transferred. In addition, a long-term knowledge consolidation strategy is designed, which first estimates the degree of old-knowledge forgetting from the output difference between the old and new models, and then applies a knowledge-guided parameter fusion strategy to balance new and old knowledge, improving long-term knowledge retention. However, since LReID models tend to be biased toward the most recently seen domains, the fusion weights generated by this process often lead to sub-optimal knowledge balancing. To address this, we further propose to decompose the single old model into two parts: a long-term model containing multi-domain knowledge and a short-term model focusing on the latest old knowledge. The incoming new data are then exploited as an unbiased reference to adjust the fusion weight of the old models, achieving backward optimization. Furthermore, an extended complementary knowledge rectification mechanism is developed to mine and retain the correct knowledge in the decomposed models. Extensive experimental results demonstrate that LSTKC++ outperforms state-of-the-art methods by large margins.
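Two of the mechanisms sketched in the abstract can be illustrated in plain Python. The function names, the agreement-based rectification rule, and the single scalar fusion weight are expository assumptions, not the paper's actual formulations, which may compute the forgetting degree and apply fusion weights differently (e.g. per layer):

```python
def rectify_old_predictions(old_logits, labels):
    """Old knowledge rectification (illustrative rule): keep an old-model
    prediction as a distillation target only when its argmax agrees with
    the new data's ground-truth label; discard it otherwise."""
    kept = []
    for logits, label in zip(old_logits, labels):
        pred = max(range(len(logits)), key=logits.__getitem__)
        if pred == label:
            kept.append(logits)
    return kept

def estimate_forgetting(old_outputs, new_outputs):
    """Estimate the degree of forgetting as the mean absolute output
    discrepancy between the old and new models, mapped into [0, 1)."""
    diff = sum(abs(a - b) for a, b in zip(old_outputs, new_outputs)) / len(old_outputs)
    return diff / (1.0 + diff)

def fuse_parameters(old_params, new_params, forgetting_weight):
    """Knowledge-guided parameter fusion: interpolate between old and new
    parameters, weighting the old model more where forgetting is severe."""
    return {
        name: forgetting_weight * old_params[name]
              + (1.0 - forgetting_weight) * new_params[name]
        for name in old_params
    }
```

For example, identical old and new outputs give a forgetting degree of 0, so the fused parameters equal the new model's; a weight of 0.5 yields a plain average of the two parameter sets.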