Skeleton-based Action Recognition (SAR) is widely valued for its robustness and efficiency in human action analysis, but its cross-dataset performance remains limited by domain shifts between datasets. To address this challenge, current methods typically cast cross-dataset SAR as an Unsupervised Domain Adaptation (UDA) task and tackle it with domain adaptation or self-supervised learning strategies. In this paper, we propose a Dual-Domain Triple Contrast (D2TC) framework for cross-dataset SAR under the UDA setting. Unlike existing UDA methods that either rely on a single strategy or combine strategies only superficially, our D2TC leverages contrastive learning to integrate both strategies into a unified framework. It performs three types of contrastive learning across both source and target domains: Self-Supervised Contrastive Learning (SS-CL), Supervised Contrastive Learning (Sup-CL), and Unsupervised Domain Adaptation with Contrastive Learning (UDA-CL). The triple contrasts go beyond a mere summation of losses, effectively bridging the domain gap and enhancing the model's representational capacity. Additionally, we introduce multi-modal ensemble contrast and extreme skeleton augmentation methods to further strengthen skeleton-based representation learning. Extensive experiments on six cross-dataset settings validate the superiority of our D2TC framework over state-of-the-art methods, demonstrating its effectiveness in reducing domain discrepancies and improving cross-dataset SAR performance. The code is available at https://github.com/KennCoder7/DualDomainTripleContrast .
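
To make the triple-contrast objective concrete, the sketch below shows one plausible PyTorch instantiation. It is an illustrative assumption, not the paper's actual formulation: the abstract does not specify the losses, so SS-CL is rendered as a standard InfoNCE loss between two augmented views, Sup-CL as a SupCon-style loss over labeled source samples, and UDA-CL as a pseudo-label-guided contrast between target and source features. The function names, loss weighting, and pseudo-labeling scheme are all hypothetical.

import torch
import torch.nn.functional as F

def info_nce(anchor, positive, temperature=0.1):
    # SS-CL placeholder: instance-level InfoNCE between two augmented views.
    # anchor, positive: (N, D) L2-normalized embeddings of the same samples
    # under two different skeleton augmentations (assumption).
    logits = anchor @ positive.t() / temperature          # (N, N) similarities
    labels = torch.arange(anchor.size(0), device=anchor.device)
    return F.cross_entropy(logits, labels)                # diagonal = positives

def sup_con(features, labels, temperature=0.1):
    # Sup-CL placeholder: SupCon-style loss that pulls together source
    # samples sharing an action label (assumption).
    sim = features @ features.t() / temperature
    mask = labels.unsqueeze(0).eq(labels.unsqueeze(1)).float()
    mask.fill_diagonal_(0)                                # exclude self-pairs
    logits = sim - sim.max(dim=1, keepdim=True).values.detach()  # stability
    exp = torch.exp(logits) * (1 - torch.eye(len(labels), device=features.device))
    log_prob = logits - torch.log(exp.sum(dim=1, keepdim=True))
    pos_per_anchor = mask.sum(dim=1).clamp(min=1)         # avoid divide-by-zero
    return -((mask * log_prob).sum(dim=1) / pos_per_anchor).mean()

def uda_con(target_feats, source_feats, source_labels, pseudo_labels, t=0.1):
    # UDA-CL placeholder: contrast each target feature against source
    # features sharing its pseudo-label (one plausible reading; unconfirmed).
    sim = target_feats @ source_feats.t() / t
    mask = pseudo_labels.unsqueeze(1).eq(source_labels.unsqueeze(0)).float()
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    pos = mask.sum(dim=1).clamp(min=1)
    return -((mask * log_prob).sum(dim=1) / pos).mean()

def d2tc_loss(z_src_v1, z_src_v2, y_src, z_tgt_v1, z_tgt_v2, y_tgt_pseudo):
    # Combined dual-domain objective; equal weights are illustrative only.
    loss_ss = info_nce(z_src_v1, z_src_v2) + info_nce(z_tgt_v1, z_tgt_v2)
    loss_sup = sup_con(z_src_v1, y_src)
    loss_uda = uda_con(z_tgt_v1, z_src_v1, y_src, y_tgt_pseudo)
    return loss_ss + loss_sup + loss_uda

In practice, all three losses would operate on features from a shared skeleton encoder, with target pseudo-labels refreshed during training; those components, along with the multi-modal ensemble contrast and extreme augmentations, are omitted from this sketch.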