This study explores the progression of artificial intelligence (AI) systems through the lens of complexity theory, challenging conventional linear projections of advancement toward artificial general intelligence (AGI). We posit the existence of critical points, akin to phase transitions, at which increasing system complexity may lead not to greater capability but to performance plateaus or instability. To investigate this hypothesis, we used agent-based modelling (ABM) to simulate the evolution of AI systems, with evaluation benchmark performance serving as a proxy for complexity. Our simulations modeled the characteristics that systems could exhibit when crossing a critical threshold, transitioning from predictable improvement to a regime of erratic, volatile behavior. On the practical side, we introduced and validated a methodology for detecting these simulated critical transitions algorithmically: we proposed a heuristic approach based on stochastic gradient descent and compared it with established CUmulative SUM (CUSUM) and Lyapunov exponent techniques, showing that distinct signatures of instability, from abrupt shifts to gradual volatility ramps, can be identified. We contextualized these findings with real-world phenomena, arguing that the empirically observed "Jagged Capability Frontier" in large language models (LLMs) illustrates the kind of nonlinear performance boundaries that could be sharply accentuated by the onset of criticality. This research contributes not only a novel theoretical framework for understanding potential limits to AI scaling but also a practical, validated methodology for monitoring the systemic stability of AI systems, offering a new dimension to AGI evaluation and safety.
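
The abstract names CUSUM as one of the change-detection techniques but gives no implementation details; as an illustration of the general idea, the following is a minimal sketch of a two-sided CUSUM detector applied to a synthetic benchmark-score stream that shifts from a stable regime to an erratic one. All parameter values, distributions, and the score stream itself are invented for illustration and are not taken from the paper.

```python
import random

def cusum_detect(series, target_mean, slack=1.0, threshold=5.0):
    """Two-sided CUSUM change detector.

    Accumulates deviations from target_mean that exceed a slack band
    and returns the first index at which either cumulative sum crosses
    the threshold, or None if no change is flagged.
    """
    s_hi = s_lo = 0.0
    for i, x in enumerate(series):
        s_hi = max(0.0, s_hi + (x - target_mean) - slack)  # upward shifts
        s_lo = max(0.0, s_lo + (target_mean - x) - slack)  # downward shifts
        if s_hi > threshold or s_lo > threshold:
            return i
    return None

random.seed(0)
# Synthetic stream: 100 steps of stable scores near 70 (predictable regime),
# then 50 steps of erratic, lower scores (post-transition regime).
stable = [70 + random.gauss(0, 0.5) for _ in range(100)]
erratic = [65 + random.gauss(0, 5) for _ in range(50)]
change_point = cusum_detect(stable + erratic, target_mean=70.0)
print(change_point)  # flagged shortly after index 100
```

In this sketch the slack band absorbs the small fluctuations of the stable regime, so the cumulative sums stay near zero until the mean drops and the downward statistic crosses the threshold within a few steps of the transition.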