The performance of machine learning models in real-world applications is often challenged by data drift, in which the statistical properties of the data evolve over time. This phenomenon is particularly acute in dynamic domains such as financial fraud detection, where adversarial behaviors constantly change. Graph Neural Networks (GNNs), valued for their ability to model complex relational data, are increasingly applied in this domain; however, their inherent robustness to temporal data shifts remains underexplored. This study systematically investigates the impact of natural data drift on the robustness of three foundational GNN architectures: Graph Convolutional Networks (GCNs), Graph Attention Networks (GATs), and Graph Sample and Aggregate (GraphSAGE). Using the large-scale, highly imbalanced IEEE-CIS dataset as a challenging case study, we constructed homogeneous transaction graphs and employed a strict temporal data splitting methodology to simulate a realistic deployment scenario. Model performance was evaluated over 50 sequential, non-overlapping monitoring windows, using AUC-PR and F2-score as primary metrics because of their sensitivity to minority-class performance in imbalanced settings. Our findings confirm that all evaluated GNNs suffer significant performance degradation over time, with GCN and GAT showing pronounced declines. A comparative analysis revealed that GraphSAGE was substantially more robust, maintaining more stable and resilient performance. These results highlight that architectural choice is a critical factor for GNNs deployed in dynamic environments and underscore the need to develop adaptive strategies.
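The evaluation protocol described above (a strict temporal split followed by 50 sequential, non-overlapping monitoring windows scored with AUC-PR and F2-score) can be sketched as follows. This is a minimal illustration with synthetic data and a placeholder classifier: the paper itself uses the IEEE-CIS transaction graphs and GNN models, and the feature dimensions, class prevalence, split ratio, and threshold here are all illustrative assumptions, not the study's actual configuration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import average_precision_score, fbeta_score

rng = np.random.default_rng(0)

# Synthetic, time-ordered, imbalanced "transactions" (stand-in for IEEE-CIS).
n = 60_000
X = rng.normal(size=(n, 8))
y = (rng.random(n) < 0.03).astype(int)  # ~3% positive (fraud) class

# Strict temporal split: train only on the earliest slice, monitor the rest,
# so the model never sees data from the future it is evaluated on.
split = int(0.4 * n)
X_train, y_train = X[:split], y[:split]
X_test, y_test = X[split:], y[split:]

# Placeholder model; the paper evaluates GCN, GAT, and GraphSAGE instead.
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Score 50 sequential, non-overlapping windows over the monitoring period.
windows = np.array_split(np.arange(len(y_test)), 50)
for i, idx in enumerate(windows):
    if y_test[idx].sum() == 0:
        continue  # AUC-PR is undefined without positives in the window
    proba = clf.predict_proba(X_test[idx])[:, 1]
    auc_pr = average_precision_score(y_test[idx], proba)
    f2 = fbeta_score(y_test[idx], (proba >= 0.5).astype(int),
                     beta=2, zero_division=0)
    print(f"window {i:02d}: AUC-PR={auc_pr:.3f}  F2={f2:.3f}")
```

Tracking both metrics per window, rather than a single aggregate score, is what makes gradual degradation under drift visible: a model can hold a respectable overall AUC-PR while its later windows decay sharply.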